Given a double, need to find how many digits in total - c

I have a double which is usually, but not necessarily, positive. It can look like 0.xxxx000 or X.xxxx00000 or XX.00000 or 0.xxx0xxx00000, where everything to the right of the last nonzero digit is eventually 0. I need to keep track of how many digits there are. I've been having trouble with this; any help? This is C.

A double has 52 mantissa bits plus an implicit "1" bit, so you should be able to get the raw bits of the double into a 64-bit integer (memcpy is the safe way to type-pun; casting a double pointer runs afoul of strict aliasing), &= this with (1ULL<<52)-1, and |= the result with 1ULL<<52 (the ULL suffix keeps the shifts from overflowing a 32-bit int).
The log10 of that would be the number of decimal digits.
Though, I'm almost inclined to say "go with jonsca's solution" because it is so ingeniously simple (it deserves a +1 in any case for being KISS).
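For reference, here is what that looks like in code, as a hedged sketch rather than a finished answer: the function name is mine, memcpy stands in for the pointer cast (which would break strict aliasing), and the shifts need the ULL suffix so they happen in 64 bits. Note the catch: for any normal double the reassembled significand lies in [2^52, 2^53), so the log10 always comes out as 16, which is one more reason to lean toward the sprintf answer below.

#include <stdio.h>
#include <stdint.h>
#include <string.h>
#include <math.h>

/* Sketch of the suggestion above. memcpy avoids the strict-aliasing
   problem of casting a double* to an integer pointer, and 1ULL keeps
   the shifts in 64 bits. The function name is illustrative. */
int significand_digits(double d)
{
    uint64_t bits;
    memcpy(&bits, &d, sizeof bits);   /* raw IEEE-754 bits */
    bits &= (1ULL << 52) - 1;         /* keep the 52 stored mantissa bits */
    bits |= 1ULL << 52;               /* restore the implicit leading 1 */
    return (int)log10((double)bits) + 1;
}

int main(void)
{
    printf("%d\n", significand_digits(0.128));   /* 16, for any normal double */
    return 0;
}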

Use sprintf to turn it into a string and do whatever counting/testing you need to do on the digits.
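A minimal sketch of that approach, assuming a fixed %f precision is good enough (the precision, buffer size, and function name are all illustrative choices):

#include <stdio.h>
#include <ctype.h>

/* Print the value, strip trailing zeros, count the remaining digits. */
int count_digits(double d)
{
    char buf[64];
    int len = sprintf(buf, "%.10f", d);   /* fixed precision; adjust to taste */
    while (len > 0 && buf[len - 1] == '0')
        buf[--len] = '\0';                /* drop the trailing zeros */
    int count = 0;
    for (int i = 0; i < len; i++)
        if (isdigit((unsigned char)buf[i]))
            count++;
    return count;
}

int main(void)
{
    printf("%d\n", count_digits(12.3400));   /* prints 4: the digits 1, 2, 3, 4 */
    return 0;
}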

The representation of the double is not decimal - it is binary (like all the other numbers in a computer). The problem you defined makes little sense, really. Consider the example: the number 1.2 = 1 + 1/5 is converted to binary as 1.(0011), with the group 0011 repeating forever. If you cut it to the 52 fraction bits of a double, you get 1.0011001100110011001100110011001100110011001100110011 in binary, which equals 1 + (1 - 1/2^52)/5. If you represent this number in decimal form precisely, you will get 52 decimals before the trailing zeros begin, which is far more than the roughly 16 significant decimal digits a double actually carries (all those digits of the representation from 17 to 52 are just meaningless).
Anyway, if you have a purely abstract problem (like in school):

#include <math.h>

int f(double x)
{
    int n = 0;
    x = fabs(x);
    x -= floor(x);
    while (x != floor(x))
    {
        x *= 2;
        ++n;
    }
    return n;
}
The function returns the number of binary digits before the trailing zeros, which is also the number of decimal digits before the trailing zeros (the last decimal digit is always 5 if the returned value is > 0).
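A quick sanity check of the function, assuming it is compiled together with f() above:

#include <stdio.h>

int f(double x);   /* the function above */

int main(void)
{
    printf("%d\n", f(0.125));   /* 3: 0.125 = 0.001 in binary, three decimals */
    printf("%d\n", f(2.5));     /* 1: the fractional part is 0.5 */
    printf("%d\n", f(3.0));     /* 0: no fractional digits at all */
    return 0;
}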


Determining the number of decimal digits in a floating-point number

I am trying to write a program that outputs the number of the digits in the decimal portion of a given number (0.128).
I made the following program:
#include <stdio.h>
#include <math.h>

int main(){
    float result = 0;
    int count = 0;
    int exp = 0;
    for(exp = 0; (int)(1+result) % 10 != 0; exp++)
    {
        result = 0.128 * pow(10, exp);
        count++;
    }
    printf("%d \n", count);
    printf("%f \n", result);
    return 0;
}
What I had in mind was that exp keeps being incremented until (int)(1+result) % 10 outputs 0. So for example when result = 0.128 * pow(10,4) = 1280, result mod 10 ((int)(1+result) % 10) will output 0 and the loop will stop.
I know that on a bigger scale this method is still inefficient since if result was a given input like 1.1208 the program would basically stop at one digit short of the desired value; however, I am trying to first find out the reason why I'm facing the current issue.
My Issue: The loop won't just stop at 1280; it keeps looping until its value reaches 128000000.000000.
Here is the output when I run the program:
10
128000000.000000
Apologies if my description is vague, any given help is very much appreciated.
I am trying to write a program that outputs the number of the digits in the decimal portion of a given number (0.128).
This task is basically impossible, because on a conventional (binary) machine the goal is not meaningful.
If I write
float f = 0.128;
printf("%f\n", f);
I see
0.128000
and I might conclude that 0.128 has three digits. (Never mind about the three 0's.)
But if I then write
printf("%.15f\n", f);
I see
0.128000006079674
Wait a minute! What's going on? Now how many digits does it have?
It's customary to say that floating-point numbers are "not accurate" or that they suffer from "roundoff error". But in fact, floating-point numbers are, in their own way, perfectly accurate — it's just that they're accurate in base two, not the base 10 we're used to thinking about.
The surprising fact is that most decimal (base 10) fractions do not exist as finite binary fractions. This is similar to the way that the number 1/3 does not even exist as a finite decimal fraction. You can approximate 1/3 as 0.333 or 0.3333333333 or 0.33333333333333333333, but without an infinite number of 3's it's only an approximation. Similarly, you can approximate 1/10 in base 2 as 0b0.00011 or 0b0.000110011 or 0b0.000110011001100110011001100110011, but without an infinite number of 0011's it, too, is only an approximation. (That last rendition, with 33 bits past the binary point, works out to about 0.0999999999767.)
And it's the same with most decimal fractions you can think of, including 0.128. So when I wrote
float f = 0.128;
what I actually got in f was the binary number 0b0.00100000110001001001101111, which in decimal is exactly 0.12800000607967376708984375.
Once a number has been stored as a float (or a double, for that matter) it is what it is: there is no way to rediscover that it was initially initialized from a "nice, round" decimal fraction like 0.128. And if you try to "count the number of decimal digits", and if your code does a really precise job, you're liable to get an answer of 26 (that is, corresponding to the digits "12800000607967376708984375"), not 3.
P.S. If you were working with computer hardware that implemented decimal floating point, this problem's goal would be meaningful, possible, and tractable. And implementations of decimal floating point do exist. But the ordinary float and double values any of us is likely to use on any of today's common, mass-market computers are invariably going to be binary (specifically, conforming to IEEE-754).
P.P.S. Above I wrote, "what I actually got in f was the binary number 0b0.00100000110001001001101111". And if you count the number of significant bits there — 100000110001001001101111 — you get 24, which is no coincidence at all. You can read at single precision floating-point format that the significand portion of a float has 24 bits (with 23 explicitly stored), and here, you're seeing that in action.
float vs. code
A binary float cannot encode 0.128 exactly as it is not a dyadic rational.
Instead, it takes on a nearby value: 0.12800000607967376708984375. 26 digits.
Rounding errors
OP's approach incurs rounding errors in result = 0.128 * pow(10, exp);.
Extended math needed
The goal is difficult. Example: FLT_TRUE_MIN takes about 149 digits.
We could use double or long double to get us somewhat there.
Simply multiply the fraction by 10.0 in each step.
d *= 10.0; still incurs rounding errors, but less so than OP's approach.
#include <stdio.h>
#include <math.h>

int main(){
    int count = 0;
    float f = 0.128f;
    double d = f - trunc(f);
    printf("%.30f\n", d);
    while (d) {
        d *= 10.0;
        double ipart = trunc(d);
        printf("%.0f", ipart);
        d -= ipart;
        count++;
    }
    printf("\n");
    printf("%d \n", count);
    return 0;
}
Output
0.128000006079673767089843750000
12800000607967376708984375
26
Usefulness
Typically, past FLT_DECIMAL_DIG (9) or so significant decimal places, OP's goal is not that useful.
As others have said, the number of decimal digits is meaningless when using binary floating-point.
But you also have a flawed termination condition. The loop test is (int)(1+result) % 10 != 0 meaning that it will stop whenever we reach an integer whose last digit is 9.
That means that 0.9, 0.99 and 0.9999 all give a result of 2.
We also lose precision by truncating the double value we start with by storing into a float.
The most useful thing we could do is terminate when the remaining fractional part is less than the precision of the type used.
Suggested working code:
#include <math.h>
#include <float.h>
#include <stdio.h>

int main(void)
{
    double val = 0.128;
    double prec = DBL_EPSILON;
    double result;
    int count = 0;
    while (fabs(modf(val, &result)) > prec) {
        ++count;
        val *= 10;
        prec *= 10;
    }
    printf("%d digit(s): %0*.0f\n", count, count, result);
}
Results:
3 digit(s): 128

How does the int to float cast work for large numbers?

If we cast an integer to a float it needs to be rounded or truncated when it gets too large to be represented exactly by a floating-point number.
Here is a small test program to take a look at this rounding.
#include <stdio.h>

#define INT2FLOAT(num) printf(" %d: %.0f\n", (num), (float)(num))

int main(void)
{
    INT2FLOAT((1<<24) + 1);
    INT2FLOAT((1<<24) + 2);
    INT2FLOAT((1<<24) + 3);
    INT2FLOAT((1<<24) + 4);
    INT2FLOAT((1<<24) + 5);
    INT2FLOAT((1<<24) + 6);
    INT2FLOAT((1<<24) + 7);
    INT2FLOAT((1<<24) + 8);
    INT2FLOAT((1<<24) + 9);
    INT2FLOAT((1<<24) + 10);
    return 0;
}
The output is:
16777217: 16777216
16777218: 16777218
16777219: 16777220
16777220: 16777220
16777221: 16777220
16777222: 16777222
16777223: 16777224
16777224: 16777224
16777225: 16777224
16777226: 16777226
Values exactly in the middle between two representable integers sometimes get rounded up and sometimes rounded down. It seems like some sort of round-to-even is applied.
How does this work exactly?
Where can I find the code that is doing this conversion?
The behaviour of this implicit conversion is implementation-defined: (C11 6.3.1.4/2):
If the value being converted is in the range of values that can be represented but cannot be represented exactly, the result is either the nearest higher or nearest lower representable value, chosen in an implementation-defined manner.
This means your compiler should document how it works, but you may not be able to control it.
There are various functions and macros for controlling the rounding direction when rounding a floating-point source to an integer, but I'm not aware of any for the case of converting an integer to floating point.
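That said, it can be worth experimenting: on many IEEE-754 implementations the dynamic rounding mode from <fenv.h> does govern integer-to-floating conversion as well, though the C standard does not promise this and the FE_* macros are not required to exist on every platform. A sketch (the volatile keeps the compiler from doing the conversion at compile time; treat the output as platform-specific):

#include <stdio.h>
#include <fenv.h>

#pragma STDC FENV_ACCESS ON

int main(void)
{
    volatile int n = (1 << 24) + 1;   /* 16777217, not representable in float */

    fesetround(FE_TONEAREST);
    printf("to nearest: %.0f\n", (float)n);   /* typically 16777216 */

    fesetround(FE_UPWARD);
    printf("upward:     %.0f\n", (float)n);   /* typically 16777218 */

    fesetround(FE_DOWNWARD);
    printf("downward:   %.0f\n", (float)n);   /* typically 16777216 */

    fesetround(FE_TONEAREST);                 /* restore the default */
    return 0;
}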
In addition to what has been said in other answers: Intel floating-point units internally use a full 80-bit floating-point representation, with an excess of bits relative to a float. So when the unit rounds a number to the nearest float, with its 23-bit stored mantissa (as I assume from your output), it can afford to be very precise and consider all the bits in an int.
IEEE-754 specifies a 32-bit float as a number with 23 bits dedicated to storing the significand, which means that, for a normalized number, in which the most significant bit is implicit (not stored, as it is always a 1 bit), you actually have 24 bits of significand of the form 1xxxxxxx_xxxxxxxx_xxxxxxxx. This means the number 2^24-1 is the last odd number you'll be able to represent exactly (11111111_11111111_11111111, in fact). After it, you can represent all the even numbers, but not the odd ones, as you lack the least significant bit to represent them. This should mean you are able to represent:
                                                v binary point
16777210 == 2^24-6    11111111_11111111_11111010.
16777211 == 2^24-5    11111111_11111111_11111011.
16777212 == 2^24-4    11111111_11111111_11111100.
16777213 == 2^24-3    11111111_11111111_11111101.
16777214 == 2^24-2    11111111_11111111_11111110.
16777215 == 2^24-1    11111111_11111111_11111111.
16777216 == 2^24      10000000_00000000_00000000_.   <-- here the leap becomes 2, as there are no more than 23 bits to play with
16777217 == 2^24+1    10000000_00000000_00000000_.   (there should be a 1 bit after the last 0, but it does not fit)
16777218 == 2^24+2    10000000_00000000_00000001_.
...
33554430 == 2^25-2    11111111_11111111_11111111_.
33554432 == 2^25      10000000_00000000_00000000__.  <-- here the leap becomes 4, as there's another exponent shift
33554436 == 2^25+4    10000000_00000000_00000001__.
...
If you imagine the problem in base 10, assume we have floating-point numbers with just three decimal digits of significand and a power-of-ten exponent. When we begin counting from 0, we get this:
1 => 1.00E0
...
8 => 8.00E0
9 => 9.00E0
10 => 1.00E1 <<< see what happened here... this is the same number as the first but with the ten's exponent incremented, meaning a one digit shift of every digit to the left.
11 => 1.10E1
...
98 => 9.80E1
99 => 9.90E1
100 => 1.00E2 <<< and here.
101 => 1.01E2
...
996 => 9.96E2
997 => 9.97E2
998 => 9.98E2
999 => 9.99E2
1000 => 1.00E3 <<< exact, but here you no longer have a fourth digit to represent units.
1001 => 1.00E3 (this number cannot be represented exactly)
...
1004 => 1.00E3 (this number cannot be represented exactly)
1005 => 1.01E3 (this number cannot be represented exactly) <<< here rounding is applied, but the implementation is free to do whatever it wants.
...
1009 => 1.01E3 (this number cannot be represented exactly)
1010 => 1.01E3 <<< this is the next number that can be represented exactly with three floating-point digits. So we switched from an increment of one by one to an increment of ten by ten.
...
Note
The case you show is one of the rounding modes specified for Intel processors: round to nearest, with ties going to the neighbor whose least significant significand bit is even ("round half to even"). This avoids always rounding up on ties, a bias that can be so important in accounting (banks never use floating point, because they don't have precise control over the rounding).

Why does my computer arbitrarily change the decimal digits at the end of a float number when that number is the actual parameter of a function?

I was trying to solve this exercise: Write a function print_dig_float(float f) which prints the value of each digit of a floating-point number f. For example, if f is 2345.1234 then print_dig_float(f) will print the integer values of the digits 2, 3, 4, 5, 1, 2, 3, and 4 in succession.
What I did is: given a number with some decimals, I move the digits to the left (e.g., 3.45 -> 345) by repeatedly multiplying by 10. After that, I store each digit in an array by taking the remainder mod 10 and putting it in an element. Then, I print them out.
So my program looks like this:
#include <stdio.h>

void print_dig_float(float f);

int main(int argc, char const *argv[]) {
    print_dig_float(23432.214122);
    return 0;
}

void print_dig_float(float f) {
    printf("%f\n", f);
    int i = 0, arr[50], conv;
    //move digits to the left
    do {
        f = f * 10;
        conv = f;
        printf("%i\n", conv);
    } while (conv % 10 != 0);
    conv = f / 10;
    //store digits in an array
    while (conv > 1) {
        arr[i] = conv % 10;
        conv = conv / 10;
        i++;
    }
    for (int j = i - 1; j >= 0; j--) {
        printf("%i ", arr[j]);
    }
    printf("\n");
}
When I tested it with the number: 23432.214122, this is what I get (according to Linux terminal):
23432.214844
234322
2343221
23432216
234322160
2 3 4 3 2 2 1 6
The problem is that, as you can see above, the computer arbitrarily changes the decimal digits at the end of the number even before I do anything with it. I don't know if this is my fault or the computer's fault for this problem.
Per C 2018 5.2.4.2.2, a floating-point number is represented with a sign, a fixed base to some exponent, and a numeral formed of digits in that base. Most commonly, two is used as the base.
When the base is two, 23432.214122 cannot be represented in floating-point, because every representable number is necessarily some integer multiple of a power of the base (possibly a negative power). 23432.214122 is not a multiple of ½, ¼, ⅛, 1/2^4, 1/2^5, or any other power of two.
When 23432.214122 is used in source code, it is converted to a value that is representable. In good C implementations, the nearest representable value is used, but the C standard permits either the representable value that is the nearest larger or nearest smaller value to be used. Other than this, the digits that appear are not arbitrary; they are a consequence of the mathematics.
When IEEE-754 binary32 is used for float, the representable value nearest to 23432.214122 is exactly 23432.21484375.
Because, when a C implementation uses base two for floating-point numbers, floating-point numbers have binary digits and do not have decimal digits. It is generally not meaningful to attempt to extract decimal digits from a thing that does not have decimal digits. It is possible to determine the decimal digits that were in an original numeral up to some limit affected by the floating-point format. However, “23432.214122” has too many digits to do this with a 32-bit floating-point type. With a 64-bit type, as is commonly used for double, you could recover the original digits providing you knew how many decimal digits there were to start with. It is not generally possible to recover the original numeral without that information—as you have seen the trailing digits will be different, and there is no indication in the floating-point number itself of where the differences start.
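To make that last point concrete, a small demo (the digits in the middle line depend on the platform's binary approximation; the other two match the values discussed above):

#include <stdio.h>

int main(void)
{
    double d = 23432.214122;
    printf("%.6f\n", d);    /* 23432.214122: the six source decimals survive in a double */
    printf("%.15f\n", d);   /* ask for more, and the binary approximation shows through */

    float f = 23432.214122f;
    printf("%.6f\n", f);    /* 23432.214844: a float has too few bits to recover them */
    return 0;
}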

How does a float get converted to scientific notation for storage?

http://www.cs.yale.edu/homes/aspnes/pinewiki/C(2f)FloatingPoint.html
I was looking into why there are sometimes rounding issues when storing a float. I read the above link, and see that floats are converted to scientific notation.
https://babbage.cs.qc.cuny.edu/IEEE-754/index.xhtml
Base is always 2. So, 8 is stored as 1 * 2^3. 9 is stored as 1.001 * 2^3.
What is the math algorithm to determine the mantissa/significand and exponent?
Here is C++ code to convert a decimal string to a binary floating-point value. Although the question is tagged C, I presume the question is more about the algorithm and calculations than the programming language.
The DecimalToFloat class is constructed with a string that contains solely decimal digits and a decimal point (a period, at most one). In its constructor, it shows how to use elementary-school multiplication and long division to convert the number from decimal to binary. This demonstrates the fundamental concepts using elementary arithmetic. Real implementations of decimal-to-floating-point conversion in commercial software use algorithms that are faster and more complicated. They involve prepared tables, analysis, and proofs and are the subjects of academic papers. A significant problem for quality implementations of decimal-to-binary-floating-point conversion is getting the rounding correct. The mismatch between powers of ten and powers of two (both positive and negative powers) makes it tricky to determine correctly when some values are above or below a point where the rounding changes. Normally, when we are parsing something like 123e300, we want to figure out the binary floating-point result without actually calculating 10^300. That is a much more extensive subject.
The GetValue routine finishes the preparation of the number, taking the information prepared by the constructor and rounding it to the final floating-point form.
Negative numbers and exponential (scientific) notation are not handled. Handling negative numbers is of course easy. Exponential notation could be accommodated by shifting the input—moving the decimal point right for positive exponents or left for negative exponents. Again, this is not the fastest way to perform the conversion, but it demonstrates fundamental ideas.
/* This code demonstrates conversion of decimal numerals to binary
floating-point values using the round-to-nearest-ties-to-even rule.
Infinities and subnormal values are supported and assumed.
The basic idea is to convert the decimal numeral to binary using methods
taught in elementary school. The integer digits are repeatedly divided by
two to extract a string of bits in low-to-high position-value order. Then
sub-integer digits are repeatedly multiplied by two to continue extracting
a string of bits in high-to-low position-value order. Once we have enough
bits to determine the rounding direction or the processing exhausts the
input, the final value is computed.
This code is not (and will not be) designed to be efficient. It
demonstrates the fundamental mathematics and rounding decisions.
*/
#include <algorithm>
#include <limits>
#include <cmath>
#include <cstring>

template<typename Float> class DecimalToFloat
{
private:

    static_assert(std::numeric_limits<Float>::radix == 2,
        "This code requires the floating-point radix to be two.");

    // Abbreviations for parameters describing the floating-point format.
    static const int Digits          = std::numeric_limits<Float>::digits;
    static const int MaximumExponent = std::numeric_limits<Float>::max_exponent;
    static const int MinimumExponent = std::numeric_limits<Float>::min_exponent;

    /* For any rounding rule supported by IEEE 754 for binary floating-point,
       the direction in which a floating-point result should be rounded is
       completely determined by the bit in the position of the least
       significant bit (LSB) of the significand and whether the value of the
       trailing bits is zero, between zero and 1/2 the value of the LSB,
       exactly 1/2 the LSB, or between 1/2 the LSB and 1.

       In particular, for round-to-nearest, ties-to-even, the decision is:

           LSB    Trailing Bits    Direction
           0      0                Down
           0      In (0, 1/2)      Down
           0      1/2              Down
           0      In (1/2, 1)      Up
           1      0                Down
           1      In (0, 1/2)      Down
           1      1/2              Up
           1      In (1/2, 1)      Up

       To determine whether the value of the trailing bits is 0, in (0, 1/2),
       1/2, or in (1/2, 1), it suffices to know the first of the trailing bits
       and whether the remaining bits are zeros or not:

           First    Remaining        Value of Trailing Bits
           0        All zeros        0
           0        Not all zeros    In (0, 1/2)
           1        All zeros        1/2
           1        Not all zeros    In (1/2, 1)

       To capture that information, we maintain two bits in addition to the
       bits in the significand.  The first is called the Round bit.  It is the
       first bit after the position of the least significant bit in the
       significand.  The second is called the Sticky bit.  It is set if any
       trailing bit after the first is set.

       The bits for the significand are kept in an array along with the Round
       bit and the Sticky bit.  The constants below provide array indices for
       locating the LSB, the Round bit, and the Sticky bit in that array.
    */
    static const int LowBit = Digits-1; // Array index for LSB in significand.
    static const int Round  = Digits;   // Array index for rounding bit.
    static const int Sticky = Digits+1; // Array index for sticky bit.

    char *Decimal;       // Work space for the incoming decimal numeral.
    int  N;              // Number of bits incorporated so far.
    char Bits[Digits+2]; // Bits for significand plus two for rounding.
    int  Exponent;       // Exponent adjustment needed.
    /* PushBitHigh inserts a new bit into the high end of the bits we are
       accumulating for the significand of a floating-point number.

       First, the Round bit is shifted down by incorporating it into the
       Sticky bit, using an OR so that the Sticky bit is set iff any bit
       pushed below the Round bit is set.

       Then all bits from the significand are shifted down one position,
       which moves the least significant bit into the Round position and
       frees up the most significant bit.

       Then the new bit is put into the most significant bit.
    */
    void PushBitHigh(char Bit)
    {
        Bits[Sticky] |= Bits[Round];
        std::memmove(Bits+1, Bits, Digits * sizeof *Bits);
        Bits[0] = Bit;

        ++N;        // Count the number of bits we have put in the significand.
        ++Exponent; // Track the absolute position of the leading bit.
    }

    /* PushBitLow inserts a new bit into the low end of the bits we are
       accumulating for the significand of a floating-point number.

       If we have no previous bits and the new bit is zero, we are just
       processing leading zeros in a number less than 1.  These zeros are not
       significant.  They tell us the magnitude of the number.  We use them
       only to track the exponent that records the position of the leading
       significant bit.  (However, the exponent is only allowed to get as
       small as MinimumExponent, after which we must put further bits into
       the significand, forming a subnormal value.)

       If the bit is significant, we record it.  If we have not yet filled the
       regular significand and the Round bit, the new bit is recorded in the
       next space.  Otherwise, the new bit is incorporated into the Sticky bit
       using an OR so that the Sticky bit is set iff any bit below the Round
       bit is set.
    */
    void PushBitLow(char Bit)
    {
        if (N == 0 && Bit == 0 && MinimumExponent < Exponent)
            --Exponent;
        else
            if (N < Sticky)
                Bits[N++] = Bit;
            else
                Bits[Sticky] |= Bit;
    }
    /* Determined tells us whether the final value to be produced can be
       determined without any more low bits.  This is true if and only if:

           we have all the bits to fill the significand, and

           we have at least one more bit to help determine the rounding, and

           either we know we will round down because the Round bit is 0 or we
           know we will round up because the Round bit is 1 and at least one
           further bit is 1 or the least significant bit is 1.
    */
    bool Determined() const
    {
        if (Digits < N)
            if (Bits[Round])
                return Bits[LowBit] || Bits[Sticky];
            else
                return 1;
        else
            return 0;
    }

    // Get the floating-point value that was parsed from the source numeral.
    Float GetValue() const
    {
        // Decide whether to round up or not.
        bool RoundUp = Bits[Round] && (Bits[LowBit] || Bits[Sticky]);

        /* Now we prepare a floating-point number that contains a significand
           with the bits we received plus, if we are rounding up, one added to
           the least significant bit.
        */

        // Start with the adjustment to the LSB for rounding.
        Float x = RoundUp;

        // Add the significand bits we received.
        for (int i = Digits-1; 0 <= i; --i)
            x = (x + Bits[i]) / 2;

        /* If we rounded up, the addition may have carried out of the
           initial significand.  In this case, adjust the scale.
        */
        int e = Exponent;
        if (1 <= x)
        {
            x /= 2;
            ++e;
        }

        // Apply the exponent and return the value.
        return MaximumExponent < e ? INFINITY : std::scalbn(x, e);
    }
public:

    /* Constructor.

       Note that this constructor allocates work space.  It is bad form to
       allocate in a constructor, but this code is just to demonstrate the
       mathematics, not to provide a conversion for use in production
       software.
    */
    DecimalToFloat(const char *Source) : N(), Bits(), Exponent()
    {
        // Skip leading zeros.
        while (*Source == '0')
            ++Source;

        size_t s = std::strlen(Source);

        /* Count the number of integer digits (digits before the decimal
           point if it is present or before the end of the string otherwise)
           and calculate the number of digits after the decimal point, if any.
        */
        size_t DigitsBefore = 0;
        while (Source[DigitsBefore] != '.' && Source[DigitsBefore] != 0)
            ++DigitsBefore;

        size_t DigitsAfter = Source[DigitsBefore] == '.' ? s-DigitsBefore-1 : 0;

        /* Allocate space for the integer digits or the sub-integer digits,
           whichever is more numerous.
        */
        Decimal = new char[std::max(DigitsBefore, DigitsAfter)];

        /* Copy the integer digits into our work space, converting them from
           digit characters ('0' to '9') to numbers (0 to 9).
        */
        for (size_t i = 0; i < DigitsBefore; ++i)
            Decimal[i] = Source[i] - '0';

        /* Convert the integer portion of the numeral to binary by repeatedly
           dividing it by two.  The remainders form a bit string representing
           a binary numeral for the integer part of the number.  They arrive
           in order from low position value to high position value.

           This conversion continues until the numeral is exhausted (High <
           Low is false) or we see it is so large the result overflows
           (Exponent <= MaximumExponent is false).

           Note that Exponent may exceed MaximumExponent while we have only
           produced 0 bits during the conversion.  However, because we skipped
           leading zeros above, we know there is a 1 bit coming.  That,
           combined with the excessive Exponent, guarantees the result will
           overflow.
        */
        for (char *High = Decimal, *Low = Decimal + DigitsBefore;
             High < Low && Exponent <= MaximumExponent;)
        {
            // Divide by two.
            char Remainder = 0;
            for (char *p = High; p < Low; ++p)
            {
                /* This is elementary school division:  We bring in the
                   remainder from the higher digit position and divide by the
                   divisor.  The remainder is kept for the next position, and
                   the quotient becomes the new digit in this position.
                */
                char n = *p + 10*Remainder;
                Remainder = n % 2;
                n /= 2;

                /* As the number becomes smaller, we discard leading zeros:
                   If the new digit is zero and is in the highest position,
                   we discard it and shorten the number we are working with.
                   Otherwise, we record the new digit.
                */
                if (n == 0 && p == High)
                    ++High;
                else
                    *p = n;
            }

            // Push remainder into high end of the bits we are accumulating.
            PushBitHigh(Remainder);
        }
        /* Copy the sub-integer digits into our work space, converting them
           from digit characters ('0' to '9') to numbers (0 to 9).

           Then convert the sub-integer portion of the numeral to binary by
           repeatedly multiplying it by two.  The carry-outs continue the bit
           string.  They arrive in order from high position value to low
           position value.
        */
        for (size_t i = 0; i < DigitsAfter; ++i)
            Decimal[i] = Source[DigitsBefore + 1 + i] - '0';

        for (char *High = Decimal, *Low = Decimal + DigitsAfter;
             High < Low && !Determined();)
        {
            // Multiply by two.
            char Carry = 0;
            for (char *p = Low; High < p--;)
            {
                /* This is elementary school multiplication:  We multiply
                   the digit by the multiplicand and add the carry.  The
                   result is separated into a single digit (n % 10) and a
                   carry (n / 10).
                */
                char n = *p * 2 + Carry;
                Carry = n / 10;
                n %= 10;

                /* Here we discard trailing zeros:  If the new digit is zero
                   and is in the lowest position, we discard it and shorten
                   the numeral we are working with.  Otherwise, we record the
                   new digit.
                */
                if (n == 0 && p == Low-1)
                    --Low;
                else
                    *p = n;
            }

            // Push carry into low end of the bits we are accumulating.
            PushBitLow(Carry);
        }

        delete [] Decimal;
    }

    // Conversion operator.  Returns a Float converted from this object.
    operator Float() const { return GetValue(); }
};
#include <iostream>
#include <cstdio>
#include <cstdlib>

static void Test(const char *Source)
{
    std::cout << "Testing " << Source << ":\n";

    DecimalToFloat<float> x(Source);

    char *end;
    float e = std::strtof(Source, &end);
    float o = x;

    /* Note:  The C printf is used here for the %a conversion, which shows the
       bits of floating-point values clearly.  If your C++ implementation does
       not support this, it may be replaced by any display of floating-point
       values you desire, such as printing them with all the decimal digits
       needed to distinguish the values.
    */
    std::printf("\t%a, %a.\n", e, o);

    if (e != o)
    {
        std::cout << "\tError, results do not match.\n";
        std::exit(EXIT_FAILURE);
    }
}
int main(void)
{
    Test("0");
    Test("1");
    Test("2");
    Test("3");
    Test(".25");
    Test(".0625");
    Test(".1");
    Test(".2");
    Test(".3");
    Test("3.14");
    Test(".00000001");
    Test("9841234012398123");
    Test("340282346638528859811704183484516925440");
    Test("340282356779733661637539395458142568447");
    Test("340282356779733661637539395458142568448");
    Test(".00000000000000000000000000000000000000000000140129846432481707092372958328991613128026194187651577175706828388979108268586060148663818836212158203125");

    // This should round to the minimum positive (subnormal), as it is just above mid-way.
    Test(".000000000000000000000000000000000000000000000700649232162408535461864791644958065640130970938257885878534141944895541342930300743319094181060791015626");

    // This should round to zero, as it is mid-way, and the even rule applies.
    Test(".000000000000000000000000000000000000000000000700649232162408535461864791644958065640130970938257885878534141944895541342930300743319094181060791015625");

    // This should round to zero, as it is just below mid-way.
    Test(".000000000000000000000000000000000000000000000700649232162408535461864791644958065640130970938257885878534141944895541342930300743319094181060791015624");
}
One of the surprising things about a real, practical computer -- surprising to beginning programmers who have been tasked with writing artificial little binary-to-decimal conversion programs, anyway -- is how thoroughly ingrained the binary number system is in an actual computer, and how few and how diffuse any actual binary/decimal conversion routines actually are. In the C world, for example (and if we confine our attention to integers for the moment), there is basically one binary-to-decimal conversion routine, and it's buried inside printf, where the %d directive is processed. There are perhaps three decimal-to-binary converters: atof(), strtol(), and the %d conversion inside scanf. (There might be another one inside the C compiler, where it converts your decimal constants into binary, although the compiler might just call strtol() directly for those, too.)
I bring this all up for background. The question of "what's the actual algorithm for constructing floating-point numbers internally?" is a fair one, and I'd like to think I know the answer, but as I mentioned in the comments, I'm chagrined to discover that I don't, really: I can't describe a clear, crisp "algorithm". I can and will show you some code that gets the job done, but you'll probably find it unsatisfying, as if I'm cheating somehow -- because a number of the interesting details happen more or less automatically, as we'll see.
Basically, I'm going to write a version of the standard library function atof(). Here are my ground rules:
I'm going to assume that the input is a string of characters. (This isn't really an assumption at all; it's a restatement of the original problem, which is to write a version of atof.)
I'm going to assume that we can construct the floating-point number "0.0". (In IEEE 754 and most other formats, it's all-bits-0, so that's not too hard.)
I'm going to assume that we can convert the integers 0-9 to their corresponding floating-point equivalents.
I'm going to assume that we can add and multiply any floating-point numbers we want to. (This is the biggie, although I'll describe those algorithms later.) But on any modern computer, there's almost certainly a floating-point unit, that has built-in instructions for the basic floating-point operations like addition and multiplication, so this isn't an unreasonable assumption, either. (But it does end up hiding some of the interesting aspects of the algorithm, passing the buck to the hardware designer to have implemented the instructions correctly.)
I'm going to initially assume that we have access to the standard library functions atoi and pow. This is a pretty big assumption, but again, I'll describe later how we could write those from scratch if we wanted to. I'm also going to assume the existence of the character classification functions in <ctype.h>, especially isdigit().
But that's about it. With those prerequisites, it turns out we can write a fully-functional version of atof() all by ourselves. It might not be fast, and it almost certainly won't have all the right rounding behaviors out at the edges, but it will work pretty well. (I'm even going to handle negative numbers, and exponents.) Here's how it works:
skip leading whitespace
look for '-'
scan digit characters, converting each one to the corresponding digit by subtracting '0' (aka ASCII 48)
accumulate a floating-point number (with no fractional part yet) representing the integer implied by the digits -- the significand -- and this is the real math, multiplying the running accumulation by 10 and adding the next digit
if we see a decimal point, count the number of digits after it
when we're done scanning digits, see if there's an e/E and some more digits indicating an exponent
if necessary, multiply or divide our accumulated number by a power of 10, to take care of digits past the decimal, and/or the explicit exponent.
Here's the code:
#include <ctype.h>
#include <stdlib.h>    /* just for atoi() */
#include <math.h>      /* just for pow() */

#define TRUE 1
#define FALSE 0

double my_atof(const char *str)
{
    const char *p;
    double ret;
    int negflag = FALSE;
    int exp;
    int expflag;

    p = str;
    while(isspace(*p))
        p++;

    if(*p == '-')
    {
        negflag = TRUE;
        p++;
    }

    ret = 0.0;          /* assumption 2 */
    exp = 0;
    expflag = FALSE;

    while(TRUE)
    {
        if(*p == '.')
            expflag = TRUE;
        else if(isdigit(*p))
        {
            int idig = *p - '0';      /* assumption 1 */
            double fdig = idig;       /* assumption 3 */
            ret = 10. * ret + fdig;   /* assumption 4 */
            if(expflag)
                exp--;
        }
        else break;
        p++;
    }

    if(*p == 'e' || *p == 'E')
        exp += atoi(p+1);             /* assumption 5a */

    if(exp != 0)
        ret *= pow(10., exp);         /* assumption 5b */

    if(negflag)
        ret = -ret;

    return ret;
}
Before we go further, I encourage you to copy-and-paste this code into a nearby C compiler, and compile it, to convince yourself that I haven't cheated too badly. Here's a little main() to invoke it with:
#include <stdio.h>

int main(int argc, char *argv[])
{
    double d = my_atof(argv[1]);
    printf("%s -> %g\n", argv[1], d);
}
(If you or your IDE aren't comfortable with command-line invocations, you can use fgets or scanf to read the string to hand to my_atof, instead.)
But, I know, your question was "How does 9 get converted to 1.001 * 2^3 ?", and I still haven't really answered that, have I? So let's see if we can find where that happens.
First of all, that bit pattern 1001₂ for 9 came from... nowhere, or everywhere, or it was there all along, or something. The character 9 came in, probably with a bit pattern of 111001₂ (in ASCII). We subtracted 48 = 110000₂, and out popped 1001₂. (Even before doing the subtraction, you can see it hiding there at the end of 111001.)
But then what turned 1001 into 1.001E3? That was basically my "assumption 3", as embodied in the line
double fdig = idig;
It's easy to write that line in C, so we don't really have to know how it's done, and the compiler probably turns it into a 'convert integer to float' instruction, so the compiler writer doesn't have to know how to do it, either.
But, if we did have to implement that ourselves, at the lowest level, we could. We know we have a single-digit (decimal) number, occupying at most 4 bits. We could stuff those bits into the significand field of our floating-point format, with a fixed exponent (perhaps -3). We might have to deal with the peculiarities of an "implicit 1" bit, and if we didn't want to inadvertently create a denormalized number, we might have to do some more tinkering, but it would be straightforward enough, and relatively easy to get right, because there are only 10 cases to test. (Heck, if we found writing code to do the bit manipulations troublesome, we could even use a 10-entry lookup table.)
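If you're curious, here's that by-hand construction as a hedged sketch for the IEEE-754 single-precision format. The function name is mine, and digit 0 is left out because it's simply the all-bits-zero pattern:

#include <stdio.h>
#include <stdint.h>
#include <string.h>

/* Build the IEEE-754 single-precision bit pattern for a digit 1..9:
   find the leading 1 bit, drop it (it is implicit), left-align the
   remaining bits in the 23-bit fraction field, and bias the exponent
   by 127. */
float digit_to_float(int d)            /* d in 1..9 */
{
    int e = 0;
    while ((d >> (e + 1)) != 0)        /* position of the leading 1 bit */
        e++;
    uint32_t frac = ((uint32_t)d & ((1u << e) - 1)) << (23 - e);
    uint32_t bits = (uint32_t)(127 + e) << 23 | frac;
    float f;
    memcpy(&f, &bits, sizeof f);       /* type-pun safely */
    return f;
}

int main(void)
{
    for (int d = 1; d <= 9; d++)
        printf("%d -> %g\n", d, digit_to_float(d));
    return 0;
}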
Since 9 is a single-digit number, we're done. But for a multiple-digit number, our next concern is the arithmetic we have to do: multiplying the running sum by 10, and adding in the next digit. How does that work, exactly?
Again, if we're writing a C (or even an assembly language) program, we don't really need to know, because our machine's floating-point 'add' and 'multiply' instructions will do everything for us. But, also again, if we had to do it ourselves, we could. (This answer's getting way too long, so I'm not going to discuss floating-point addition and multiplication algorithms just yet. Maybe farther down.)
Finally, the code as presented so far "cheated" by calling the library functions atoi and pow. I won't have any trouble convincing you that we could have implemented atoi ourselves if we wanted/had to: it's basically just the same digit-accumulation code we already wrote. And pow isn't too hard, either, because in our case we don't need to implement it in full generality: we're always raising to integer powers, so it's straightforward repeated multiplication, and we've already assumed we know how to do multiplication.
(With that said, computing a large power of 10 as part of our decimal-to-binary algorithm is problematic. As Eric Postpischil noted in his answer, "Normally we want to figure out the binary floating-point result without actually calculating 10^N." Me, since I don't know any better, I'll compute it anyway, but if I wrote my own pow() I'd use the binary exponentiation algorithm, since it's super easy to implement and quite nicely efficient.)
I said I'd discuss floating-point addition and multiplication routines. Suppose you want to add two floating-point numbers. If they happen to have the same exponent, it's easy: add the two significands (and keep the exponent the same), and that's your answer. (How do you add the significands? Well, I assume you have a way to add integers.) If the exponents are different, but relatively close to each other, you can pick the smaller one and add N to it to make it the same as the larger one, while simultaneously shifting the significand to the right by N bits. (You've just created a denormalized number.) Once the exponents are the same, you can add the significands, as before. After the addition, it may be important to renormalize the numbers, that is, to detect if one or more leading bits ended up as 0 and, if so, shift the significand left and decrement the exponent. Finally, if the exponents are too different, such that shifting one significand to the right by N bits would shift it all away, this means that one number is so much smaller than the other that all of it gets lost in the roundoff when adding them.
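Here is that addition recipe as a toy in C, with a made-up format: an integer significand in [2^23, 2^24) and a separate binary exponent. There's no rounding and no signs, just the exponent matching, the add, and the carry-out renormalization described above; all the names are invented for the illustration.

#include <stdio.h>

struct toyfloat { unsigned sig; int exp; };   /* value = sig * 2^exp */

struct toyfloat toy_add(struct toyfloat a, struct toyfloat b)
{
    if (a.exp < b.exp) { struct toyfloat t = a; a = b; b = t; }  /* a has the larger exponent */
    int shift = a.exp - b.exp;
    unsigned bsig = shift < 24 ? b.sig >> shift : 0;  /* b vanishes entirely if too small */
    struct toyfloat r = { a.sig + bsig, a.exp };
    while (r.sig >= 1u << 24) {       /* carry out: shift right, bump exponent */
        r.sig >>= 1;
        r.exp++;
    }
    return r;
}

int main(void)
{
    /* 3.0 = 0xC00000 * 2^-22, 0.5 = 0x800000 * 2^-24 */
    struct toyfloat three = { 0xC00000, -22 }, half = { 0x800000, -24 };
    struct toyfloat sum = toy_add(three, half);
    printf("%g\n", sum.sig * 1.0 / (1u << -sum.exp));  /* 3.5 (exp is negative here) */
    return 0;
}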
Multiplication: Floating-point multiplication is actually somewhat easier than addition. You don't have to worry about matching up the exponents: the final product is basically a new number whose significand is the product of the two significands, and whose exponent is the sum of the two exponents. The only trick is that the product of the two M-bit significands is nominally 2M bits, and you may not have a multiplier that can do that. If the only multiplier you have available maxes out at an M-bit product, you can take your two M-bit significands and literally split them in half by bits:
signif1 = a × 2^(M/2) + b
signif2 = c × 2^(M/2) + d
So by ordinary algebra we have
signif1 × signif2 = ac × 2^M + ad × 2^(M/2) + bc × 2^(M/2) + bd
Each of those partial products ac, ad, etc. is an M-bit product. Multiplying by 2^(M/2) or 2^M is easy, because it's just a left shift. And adding the terms up is something we already know how to do. We actually only care about the upper M bits of the product, so since we're going to throw away the rest, I imagine we could cheat and skip the bd term, since it contributes nothing (although it might end up slightly influencing a properly-rounded result).
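Here's the splitting trick in runnable form, with M = 32 and 16-bit halves: the high half of a 32x32-bit product computed using only 32-bit multiplies. Unlike the shortcut just suggested, this version keeps the bd term, so the result is exact and can be checked against a 64-bit multiply.

#include <stdio.h>
#include <stdint.h>

/* High 32 bits of x*y, using only 32-bit multiplications. */
uint32_t mul_high(uint32_t x, uint32_t y)
{
    uint32_t a = x >> 16, b = x & 0xFFFF;   /* x = a*2^16 + b */
    uint32_t c = y >> 16, d = y & 0xFFFF;   /* y = c*2^16 + d */
    uint32_t ac = a * c, ad = a * d, bc = b * c, bd = b * d;
    /* Sum everything that lands at bit position 16, including the
       carry out of the low half of the product. */
    uint32_t mid = (bd >> 16) + (ad & 0xFFFF) + (bc & 0xFFFF);
    return ac + (ad >> 16) + (bc >> 16) + (mid >> 16);
}

int main(void)
{
    uint32_t x = 0xDEADBEEF, y = 0xCAFEBABE;
    printf("%08X\n", mul_high(x, y));                        /* via splitting */
    printf("%08X\n", (uint32_t)(((uint64_t)x * y) >> 32));   /* check */
    return 0;
}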
But anyway, the details of the addition and multiplication algorithms, and the knowledge they contain about the floating-point representation we're using, end up forming the other half of the answer to the question of the decimal-to-binary "algorithm" you're looking for. If you convert, say, the number 5.703125 using the code I've shown, out will pop the binary floating-point number 1.01101101₂ × 2^2, but nowhere did we explicitly compute that significand 1.01101101 or that exponent 2 -- they both just fell out of all the digitwise multiplications and additions we did.
Finally, if you're still with me, here's a quick and easy integer-power-only pow function using binary exponentiation:
double my_pow(double a, unsigned int b)
{
    double ret = 1;
    double fac = a;
    while(1) {
        if(b & 1) ret *= fac;
        b >>= 1;
        if(b == 0) break;
        fac *= fac;
    }
    return ret;
}
This is a nifty little algorithm. If we ask it to compute, say, 10^21, it does not multiply 10 by itself 21 times. Instead, it repeatedly squares 10, leading to the exponential sequence 10^1, 10^2, 10^4, 10^8, ..., or rather, 10, 100, 10000, 100000000... Then it looks at the binary representation of 21, namely 10101₂, and selects only the intermediate results 10^1, 10^4, and 10^16 to multiply into its final return value, yielding 10^(1+4+16), or 10^21, as desired. It therefore runs in time O(log2(N)), not O(N).
And, tune in tomorrow for our next exciting episode when we'll go in the opposite direction, writing a binary-to-decimal converter which will require us to do... (ominous chord)
floating point long division!
Here's a completely different answer, that tries to focus on the "algorithm" part of the question. I'll start with the example you asked about, converting the decimal integer 9 to the binary scientific notation number 1.001₂ × 2^3. The algorithm is in two parts: (1) convert the decimal integer 9 to the binary integer 1001₂, and (2) convert that binary integer into binary scientific notation.
Step 1. Convert a decimal integer to a binary integer. (You can skip over this part if you already know it. Also, although this part of the algorithm is going to look perfectly fine, it turns out it's not the sort of thing that's actually used anywhere on a practical binary computer.)
The algorithm is built around a number we're working on, n, and a binary number we're building up, b.
Set n initially to the number we're converting, 9.
Set b to 0.
Compute the remainder when dividing n by 2. In our example, the remainder of 9 ÷ 2 is 1.
The remainder is one bit of our binary number. Tack it on to b. In our example, b is now 1. Also, here we're going to be tacking bits on to b on the left.
Divide n by 2 (discarding the remainder). In our example, n is now 4.
If n is now 0, we're done.
Go back to step 3.
At the end of the first trip through the algorithm, n is 4 and b is 1.
The next trip through the loop will extract the bit 0 (because 4 divided by 2 is 2, remainder 0). So b goes to 01, and n goes to 2.
The next trip through the loop will extract the bit 0 (because 2 divided by 2 is 1, remainder 0). So b goes to 001, and n goes to 1.
The next trip through the loop will extract the bit 1 (because 1 divided by 2 is 0, remainder 1). So b goes to 1001, and n goes to 0.
And since n is now 0, we're done. Meanwhile, we've built up the binary number 1001 in b, as desired.
Here's that example again, in tabular form. At each step, we compute n divided by two (or in C, n/2), and the remainder when dividing n by 2, which in C is n%2. At the next step, n gets replaced by n/2, and the next bit (which is n%2) gets tacked on at the left of b.
step    n    b       n/2    n%2
0       9    0       4      1
1       4    1       2      0
2       2    01      1      0
3       1    001     0      1
4       0    1001
Let's run through that again, for the number 25:
step    n     b        n/2    n%2
0       25    0        12     1
1       12    1        6      0
2       6     01       3      0
3       3     001      1      1
4       1     1001     0      1
5       0     11001
You can clearly see that the n column is driven by the n/2 column, because in step 5 of the algorithm as stated we divided n by 2. (In C this would be n = n / 2, or n /= 2.) You can clearly see the binary result appearing (in right-to-left order) in the n%2 column.
So that's one way to convert decimal integers to binary. (As I mentioned, though, it's likely not the way your computer does it. Among other things, the act of tacking a bit on to the left end of b turns out to be rather unorthodox.)
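For concreteness, here is step 1 as a little C function. It collects the remainders low-to-high and reverses them at the end, which amounts to the same thing as tacking each new bit onto the left of b:

#include <stdio.h>

/* Repeatedly divide by 2, recording each remainder as one bit.
   The caller's buffer b needs room for up to 33 characters. */
void to_binary(unsigned n, char *b)
{
    char tmp[33];
    int len = 0;
    if (n == 0)
        tmp[len++] = '0';
    while (n != 0) {
        tmp[len++] = (char)('0' + n % 2);  /* next bit, low to high */
        n /= 2;
    }
    for (int i = 0; i < len; i++)          /* reverse: tmp is low-to-high, */
        b[i] = tmp[len - 1 - i];           /* b reads high-to-low */
    b[len] = '\0';
}

int main(void)
{
    char b[33];
    to_binary(9, b);  printf("9  -> %s\n", b);   /* 1001 */
    to_binary(25, b); printf("25 -> %s\n", b);   /* 11001 */
    return 0;
}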
Step 2. Convert a binary integer to a binary number in scientific notation.
Before we begin with this half of the algorithm, it's important to realize that scientific (or "exponential") representations are typically not unique. Returning to decimal for a moment, let's think about the number "one thousand". Most often we'll represent that as 1 × 10^3. But we could also represent it as 10 × 10^2, or 100 × 10^1, or even crazier representations like 10000 × 10^-1, or 0.01 × 10^5.
So, in practice, when we're working in scientific notation, we'll usually set up an additional rule or guideline, stating that we'll try to keep the mantissa (also called the "significand") within a certain range. For base 10, usually the goal is either to keep it in the range 1 ≤ mantissa < 10, or 0.1 ≤ mantissa < 1. That is, we like numbers like 1 × 10^3 or 0.1 × 10^4, but we don't like numbers like 100 × 10^1 or 0.01 × 10^5.
How do we keep our representations in the range we like? What if we've got a number (perhaps the intermediate result of a calculation) that's in a form we don't like? The answer is simple, and it depends on a pattern you've probably already noticed: If you multiply the mantissa by 10, and if you simultaneously subtract 1 from the exponent, you haven't changed the value of the number. Similarly, you can divide the mantissa by 10 and increment the exponent, again without changing anything.
When we convert a scientific-notation number into the form we like, we say we're normalizing the number.
One more thing: since 10^0 is 1, we can preliminarily convert any integer to scientific notation by simply multiplying it by 10^0. That is, 9 is 9 × 10^0, and 25 is 25 × 10^0. If we do it that way we'll usually get a number that's in a form we "don't like" (that is, "nonnormalized"), but now we have an idea of how to fix that.
So let's return to base 2, and the rest of this second half of our algorithm. Everything we've said so far about decimal scientific notation is also true about binary scientific notation, as long as we make the obvious changes of "10" to "2".
To convert the binary integer 1001₂ to binary scientific notation, we first multiply it by 2^0, resulting in: 1001₂ × 2^0. So actually we're almost done, except that this number is nonnormalized.
What's our definition of a normalized base-two scientific notation number? We haven't said, but the requirement is usually that the mantissa is between 0 and 10₂ (that is, between 0 and 2 in decimal), or stated another way, that the high-order bit of the mantissa is always 1 (unless the whole number is 0). That is, these mantissas are normalized: 1.001₂, 1.1₂, 1.0₂, 0.0₂. These mantissas are nonnormalized: 10.01₂, 0.001₂.
So to normalize a number, we may need to multiply or divide the mantissa by 2, while incrementing or decrementing the exponent.
Putting this all together in step-by-step form: to convert a binary integer to a binary scientific number:
Multiply the integer by 20: set the mantissa to the number we're converting, and the exponent to 0.
If the number is normalized (if the mantissa is 0, or if its leading bit is 1), we're done.
If the mantissa has more than one bit to the left of the decimal point (really the "radix point" or "binary point"), divide the mantissa by 2, and increment the exponent by 1. Return to step 2.
(This step will never be necessary if the number we started with was an integer.) If the mantissa is nonzero but the bit to the left of the radix point is 0, multiply the mantissa by 2, and decrement the exponent by 1. Return to step 2.
Running this algorithm in tabular form for our number 9, we have:
step    mantissa    exponent
0       1001.       0
1       100.1       1
2       10.01       2
3       1.001       3
So, if you're still with me, that's how we can convert the decimal integer 9 to the binary scientific notation (or floating-point) number 1.001₂ × 2^3.
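And here is that normalization loop as a few lines of C, a sketch for integer inputs only (the standard library's frexp(), and the %a printf format, do this same bookkeeping for you):

#include <stdio.h>

int main(void)
{
    double mantissa = 9;
    int exponent = 0;
    while (mantissa >= 2) {   /* more than one bit left of the binary point */
        mantissa /= 2;
        exponent++;
    }
    /* mantissa is now 1.125 = 1.001 in binary, exponent is 3 */
    printf("%g x 2^%d\n", mantissa, exponent);   /* 1.125 x 2^3 */
    printf("%a\n", 9.0);                         /* 0x1.2p+3, the same thing */
    return 0;
}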
And, with all of that said, the algorithm as stated so far only works for decimal integers. What if we wanted to convert, say, the decimal number 1.25 to the binary number 1.01₂ × 2^0, or 34.125 to 1.00010001₂ × 2^5? That's a discussion that will have to wait for another day (or for this other answer), I guess.

Why is float more precise than it ought to be?

#include <stdio.h>
#include <float.h>

int main(int argc, char** argv)
{
    long double pival = 3.14159265358979323846264338327950288419716939937510582097494459230781640628620899L;
    float pival_float = pival;
    printf("%1.80f\n", pival_float);
    return 0;
}
The output I got on gcc is:
3.14159274101257324218750000000000000000000000000000000000000000000000000000000000
The float uses a 23-bit mantissa, so the maximum fraction that can be represented is 2^23 = 8388608, i.e., about 7 decimal digits of precision.
But the above output shows 23 decimal digits of precision (3.14159274101257324218750). I expected it to print 3.1415927000000000000...
What have I misunderstood?
You only got 7 digits of precision. Pi is
3.1415926535897932384626433832795028841971693993751058209...
But the output you got from printing your float approximation to Pi was
3.14159274101257324218750000...
As you can see the values diverge starting from the 7th digit after the decimal point.
If you ask printf() for 80 digits after the decimal place, it will print out that many digits of the decimal representation of the binary value stored in the float, even if that many digits is far more than the precision allowed by the float representation.
A binary floating-point value can't represent 3.1415927 exactly (since that's not an exact binary fraction). The nearest value that it can represent is 3.1415927410125732421875, so that's the actual value of your pival_float. When you print pival_float with eighty digits, you see its exact value, plus a bunch of zeroes for good measure.
The closest float value to pi has binary encoding...
0 10000000 10010010000111111011011
...in which I've inserted spaces between the sign, exponent and mantissa. The exponent is biased, so the bits above encode a multiplier of 2^1 == 2, and the mantissa encodes a fraction above 1, with the first bit being worth a half, and each bit thereafter being worth half as much as the bit before.
Therefore, the mantissa bits above are worth:
1 x 0.5
0 x 0.25
0 x 0.125
1 x 0.0625
0 x 0.03125
0 x 0.015625
1 x 0.0078125
0 x 0.00390625
0 x 0.001953125
0 x 0.0009765625
0 x 0.00048828125
1 x 0.000244140625
1 x 0.0001220703125
1 x 0.00006103515625
1 x 0.000030517578125
1 x 0.0000152587890625
1 x 0.00000762939453125
0 x 0.000003814697265625
1 x 0.0000019073486328125
1 x 0.00000095367431640625
0 x 0.000000476837158203125
1 x 0.0000002384185791015625
1 x 0.00000011920928955078125
So, the least significant bit after multiplying by the exponent-encoded value "2" is worth...
0.000 000 238 418 579 101 562 5
I added spaces to make it easier to count that the last non-0 digit is in the 22nd decimal place.
The value the question says printf() displayed appears below alongside the contribution of the least significant bit in the mantissa:
3.14159274101257324218750000000000000000000000000000000000000000000000000000000000
0.0000002384185791015625
Clearly the least significant digits line up properly. If you added up all the mantissa contributions above, added the implicit 1, then multiplied by 2, you'd get the exact value printf displayed. That explains how the float value is precisely (in the mathematical sense of zero randomness) the value shown by printf, but the comparison below against pi shows only the first 6 decimal places are accurate given the particular value we want it to store.
3.14159274101257324218750000000000000000000000000000000000000000000000000000000000
3.14159265358979323846264338327950288419716939937510582097494459230781640628620899
^
In computing, it's common to refer to the precision of floating point types when we're actually interested in the accuracy we can rely on. I suppose you could argue that while taken in isolation the precision of floats and doubles is infinite, the rounding necessary when using them to approximate numbers that they can't encode perfectly is for most practical purposes random, and in that sense they offer finite significant digits of precision at encoding such numbers.
So, printf isn't wrong to display so many digits; some application might be using a float to encode that exact number (almost certainly because the nature of the app's calculations involve sums of 1/2^n values), but that'd be the exception rather than the rule.
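If you want to redo Tony's tally mechanically rather than by hand, here is a sketch that pulls the float apart and sums the per-bit contributions. The long double sum is exact here because only 24 significant bits are involved, and the final scaling assumes the exponent is nonnegative, as it is for this value:

#include <stdio.h>
#include <stdint.h>
#include <string.h>

int main(void)
{
    float f = 3.14159265358979323846f;
    uint32_t bits;
    memcpy(&bits, &f, sizeof bits);            /* raw IEEE-754 bits */

    int exponent = (int)((bits >> 23) & 0xFF) - 127;   /* unbias */
    long double sum = 1.0L;                    /* the implicit leading 1 */
    for (int i = 1; i <= 23; i++)
        if (bits >> (23 - i) & 1)
            sum += 1.0L / (1u << i);           /* mantissa bit i is worth 2^-i */

    printf("reconstructed: %.25Lf\n", sum * (1 << exponent));
    printf("printf's view: %.25f\n", f);       /* the two lines should match */
    return 0;
}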
Carrying on from Tony's answer, one way to prove this limitation on decimal precision to yourself in a practical way is simply to declare pi to as many decimal places as you like while assigning the value to a float, then look at how it is stored in memory.
What you find is that no matter how many decimal places you give it, the 32-bit value in memory will always be the equivalent of the unsigned value 1078530011, or 01000000010010010000111111011011 in binary. That is due, as others have explained, to the IEEE-754 single-precision floating-point format. Below is a simple bit of code that will allow you to prove to yourself that this limitation means pi, as a float, is limited to six decimal digits of precision:
#include <stdio.h>
#include <stdlib.h>

#if defined (__LP64__) || defined (_LP64)
# define BUILD_64 1
#endif

#ifdef BUILD_64
# define BITS_PER_LONG 64
#else
# define BITS_PER_LONG 32
#endif

char *binpad (unsigned long n, size_t sz);

int main (void) {

    float fPi = 3.1415926535897932384626433;

    printf ("\n fPi : %f, in memory : %s unsigned : %u\n\n",
            fPi, binpad (*(unsigned*)&fPi, 32), *(unsigned*)&fPi);

    return 0;
}

char *binpad (unsigned long n, size_t sz)
{
    static char s[BITS_PER_LONG + 1] = {0};
    char *p = s + BITS_PER_LONG;
    register size_t i;

    for (i = 0; i < sz; i++)
        *(--p) = (n>>i & 1) ? '1' : '0';

    return p;
}
Output
$ ./bin/ieee754_pi
fPi : 3.141593, in memory : 01000000010010010000111111011011 unsigned : 1078530011
