e format in printf() and precision modifiers - c

Could you explain to me why
printf("%2.2e", 1201.0);
gives the result 1.20e+03 and not just 12.01e2?
My way of thinking: the number is 1201.0, and the specifier tells us there should be 2 digits after the decimal point.
What is wrong?

According to Wikipedia:
In normalized scientific notation, the exponent b is chosen so that the absolute value of a remains at least one but less than ten (1 ≤ |a| < 10). Thus 350 is written as 3.5×10^2. This form allows easy comparison of numbers, as the exponent b gives the number's order of magnitude. In normalized notation, the exponent b is negative for a number with absolute value between 0 and 1 (e.g. 0.5 is written as 5×10^−1). The 10 and exponent are often omitted when the exponent is 0.
Normalized scientific form is the typical form of expression of large numbers in many fields, unless an unnormalised form, such as engineering notation, is desired. Normalized scientific notation is often called exponential notation—although the latter term is more general and also applies when a is not restricted to the range 1 to 10 (as in engineering notation for instance) and to bases other than 10 (as in 3.15×2^20).

The first 2 in "%2.2e" is the minimum character width to print. 1.20e+03 is 8 characters which is more than 2.
e directs that the number is printed: (sign), 1 digit, '.', followed by some digits and an exponent.
The 2nd 2 in "%2.2e" is the number of digits after the decimal point to print. 6 is used if this 2nd value is not provided.

The %e format uses scientific notation, i.e. one digit before the decimal separator and an exponent for scaling. You can't set the digits before the decimal separator using this format.

This is just how scientific notation is defined. The result you expect is a very unusual notation; I don't think you can get it with printf.
The number before the dot in the format specifier defines the minimum width of the resulting sub-string. Try %20.2e to see what that means.
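A small, self-contained sketch of those rules in action (the expected outputs are noted in the comments):

#include <stdio.h>

int main(void)
{
    printf("[%2.2e]\n", 1201.0);   /* prints [1.20e+03]: width 2 is less than the 8 characters needed, so it has no effect */
    printf("[%20.2e]\n", 1201.0);  /* prints 1.20e+03 right-justified in a field 20 characters wide */
    printf("[%.5e]\n", 1201.0);    /* prints [1.20100e+03]: 5 digits after the decimal point */
    printf("[%e]\n", 1201.0);      /* prints [1.201000e+03]: the default precision is 6 */
    return 0;
}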

Related

Book says the C Standard provides floating-point accuracy to six significant figures, but this isn't true?

I am reading C Primer Plus by Stephen Prata, and one of the first ways it introduces floats is talking about how they are accurate to a certain point. It says specifically "The C standard provides that a float has to be able to represent at least six significant figures...A float has to represent accurately the first six numbers, for example, 33.333333"
This is odd to me, because it makes it sound like a float is accurate up to six digits, but that is not true. 1.4 is stored as 1.39999... and so on. You still have errors.
So what exactly is being provided? Is there a cutoff for how accurate a number is supposed to be?
In C, you can't store more than six significant figures in a float without getting a compiler warning, but why? If you were to do more than six figures it seems to go just as accurately.
This is made even more confusing by the section on underflow and subnormal numbers. When you have a number that is the smallest a float can be, and divide it by 10, the errors you get don't seem to be subnormal? They seem to just be the regular rounding errors mentioned above.
So why is the book saying floats are accurate to six digits and how is subnormal different from regular rounding errors?
Suppose you have a decimal numeral with q significant digits:
d_(q−1) . d_(q−2) d_(q−3) … d_0,
and let’s also make it a floating-point decimal numeral, meaning we scale it by a power of ten:
d_(q−1) . d_(q−2) d_(q−3) … d_0 • 10^e.
Next, we convert this number to float. Many such numbers cannot be exactly represented in float, so we round the result to the nearest representable value. (If there is a tie, we round to make the low digit even.) The result (if we did not overflow or underflow) is some floating-point number x. By the definition of floating-point numbers (in C 2018 5.2.4.2.2 3), it is represented by some number of digits in some base scaled by that base to a power. Supposing it is base two, x is:
b_(p−1) . b_(p−2) b_(p−3) … b_0 • 2^e.
Next, we convert this float x back to decimal with q significant digits. Similarly, the float value x might not be exactly representable as a decimal numeral with q digits, so we get some possibly new number:
n_(q−1) . n_(q−2) n_(q−3) … n_0 • 10^m.
It turns out that, for any float format, there is some number q such that, if the decimal numeral we started with is limited to q digits, then the result of this round-trip conversion will equal the original number. Each decimal numeral of q digits, when rounded to float and then back to q decimal digits, results in the starting number.
The 2018 C standard, clause 5.2.4.2.2, paragraph 12, tells us this number q must be at least 6 (a C implementation may support larger values), and the C implementation should define a preprocessor symbol for it (in float.h) called FLT_DIG.
So considering your example number, 1.4, when we convert it to float in the IEEE-754 basic 32-bit binary format, we get exactly 1.39999997615814208984375 (that is its mathematical value, shown in decimal for convenience; the actual bits in the object represent it in binary). When we convert that to decimal with full precision, we get “1.39999997615814208984375”. But if we convert it to decimal rounded to six significant digits, we get “1.40000”. So 1.4 survives the round trip.
In other words, it is not true in general that six decimal digits can be represented in float without change, but it is true that float carries enough information that you can recover six decimal digits from it.
Of course, once you start doing arithmetic, errors will generally compound, and you can no longer rely on six decimal digits.
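As a quick sketch of that round trip (FLT_DIG comes from <float.h>; the long digit string in the first comment is what an IEEE-754 float typically stores, and may differ on exotic hardware):

#include <float.h>
#include <stdio.h>

int main(void)
{
    float x = 1.4f;

    /* Printed with far more digits than float carries: shows the stored value, e.g. 1.39999997615814208984375... */
    printf("%.25f\n", x);

    /* Printed with FLT_DIG (at least 6) significant digits: the round trip recovers "1.4". */
    printf("%.*g\n", FLT_DIG, x);
    return 0;
}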
Thanks to Govind Parmar for citing an on-line example of C11 (or, for that matter C99).
The "6" you're referring to is "FLT_DECIMAL_DIG".
http://c0x.coding-guidelines.com/5.2.4.2.2.html
number of decimal digits, n, such that any floating-point number with p radix b digits can be rounded to a floating-point number with n decimal digits and back again without change to the value,

    p × log10(b)           if b is a power of 10
    ⌈1 + p × log10(b)⌉     otherwise

FLT_DECIMAL_DIG   6
DBL_DECIMAL_DIG   10
LDBL_DECIMAL_DIG  10
"Subnormal" means:
What is a subnormal floating point number?
A number is subnormal when the exponent bits are zero and the mantissa is non-zero. They're numbers between zero and the smallest normal number. They don't have an implicit leading 1 in the mantissa.
STRONG SUGGESTION:
If you're unfamiliar with "floating point arithmetic" (or, frankly, even if you are), this is an excellent article to read (or review):
What Every Programmer Should Know About Floating-Point Arithmetic
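For reference, the limits discussed above live in <float.h>, and a subnormal is easy to produce by going below FLT_MIN. A minimal sketch (FLT_DECIMAL_DIG and FLT_TRUE_MIN assume a C11-or-later compiler):

#include <float.h>
#include <stdio.h>

int main(void)
{
    printf("FLT_DIG         = %d\n", FLT_DIG);          /* >= 6 by the standard */
    printf("FLT_DECIMAL_DIG = %d\n", FLT_DECIMAL_DIG);  /* >= 6 by the standard; 9 for IEEE-754 single precision */
    printf("FLT_MIN         = %g (smallest normal float)\n", FLT_MIN);
    printf("FLT_TRUE_MIN    = %g (smallest subnormal float)\n", FLT_TRUE_MIN);
    printf("FLT_MIN / 2     = %g (already subnormal)\n", FLT_MIN / 2);
    return 0;
}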

Understanding casts from integer to float

Could someone explain this weird looking output on a 32 bit machine?
#include <stdio.h>
int main() {
printf("16777217 as float is %.1f\n",(float)16777217);
printf("16777219 as float is %.1f\n",(float)16777219);
return 0;
}
Output
16777217 as float is 16777216.0
16777219 as float is 16777220.0
The weird thing is that 16777217 casts to a lower value and 16777219 casts to a higher value...
In the IEEE-754 basic 32-bit binary floating-point format, all integers from −16,777,216 to +16,777,216 are representable. From 16,777,216 to 33,554,432, only even integers are representable. Then, from 33,554,432 to 67,108,864, only multiples of four are representable. (Since the question does not necessitate discussion of which numbers are representable, I will omit explanation and just take this for granted.)
The most common default rounding mode is to round the exact mathematical result to the nearest representable value and, in case of a tie, to round to the representable value which has zero in the low bit of its significand.
16,777,217 is equidistant between the two representable values 16,777,216 and 16,777,218. These values are represented as 100000000000000000000000₂ • 2^1 and 100000000000000000000001₂ • 2^1. The former has 0 in the low bit of its significand, so it is chosen as the result.
16,777,219 is equidistant between the two representable values 16,777,218 and 16,777,220. These values are represented as 100000000000000000000001₂ • 2^1 and 100000000000000000000010₂ • 2^1. The latter has 0 in the low bit of its significand, so it is chosen as the result.
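A short sketch that reproduces this on an IEEE-754 machine (nextafterf is from <math.h>; you may need to link with -lm):

#include <math.h>
#include <stdio.h>

int main(void)
{
    float a = 16777217;  /* tie between 16777216 and 16777218: rounds to the even significand, 16777216 */
    float b = 16777219;  /* tie between 16777218 and 16777220: rounds to the even significand, 16777220 */

    printf("%.1f %.1f\n", a, b);
    printf("next representable float above 16777216 is %.1f\n",
           nextafterf(16777216.0f, INFINITY));  /* 16777218.0 */
    return 0;
}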
You may have heard of the concept of "precision", as in "this fractional representation has 3 digits of precision".
This is very easy to think about in a fixed-point representation. If I have, say, three digits of precision past the decimal, then I can exactly represent 1/2 = 0.5, and I can exactly represent 1/4 = 0.25, and I can exactly represent 1/8 = 0.125, but if I try to represent 1/16, I can not get 0.0625; I will either have to settle for 0.062 or 0.063.
But that's for fixed-point. The computer you're using uses floating-point, which is a lot like scientific notation. You get a certain number of significant digits total, not just digits to the right of the decimal point. For example, if you have 3 decimal digits worth of precision in a floating-point format, you can represent 0.123 but not 0.1234, and you can represent 0.0123 and 0.00123, but not 0.01234 or 0.001234. And if you have digits to the left of the decimal point, those take away from the number of digits you can use to the right of the decimal point. You can use 1.23 but not 1.234, and 12.3 but not 12.34, and 123.0 but not 123.4 or 123.anythingelse.
And -- you can probably see the pattern by now -- if you're using a floating-point format with only three significant digits, you can't represent all numbers greater than 999 perfectly accurately at all, even though they don't have a fractional part. You can represent 1230 but not 1234, and 12300 but not 12340.
So that's decimal floating-point formats. Your computer, on the other hand, uses a binary floating-point format, which ends up being somewhat trickier to think about. We don't have an exact number of decimal digits' worth of precision, and the numbers that can't be exactly represented don't end up being nice even multiples of 10 or 100.
In particular, type float on most machines has 24 binary bits worth of precision, which works out to 6-7 decimal digits' worth of precision. That's obviously not enough for numbers like 16777217.
So where did the numbers 16777216 and 16777220 come from? As Eric Postpischil has already explained, it ends up being because they're multiples of 2. If we look at the binary representations of nearby numbers, the pattern becomes clear:
16777208 111111111111111111111000
16777209 111111111111111111111001
16777210 111111111111111111111010
16777211 111111111111111111111011
16777212 111111111111111111111100
16777213 111111111111111111111101
16777214 111111111111111111111110
16777215 111111111111111111111111
16777216 1000000000000000000000000
16777218 1000000000000000000000010
16777220 1000000000000000000000100
16777215 is the biggest number that can be represented exactly in 24 bits. After that, you can represent only even numbers, because the low-order bit is the 25th, and essentially has to be 0.
Type float cannot hold that much significance. The significand can hold only 24 bits. Of those, 23 are stored; the 24th is an implicit leading 1 that is not stored, because the significand is normalised.
Please read this which says "Integers in [ − 16777216 , 16777216 ] can be exactly represented", but yours are out of that range.
Floating-point representation follows a method similar to one we use in everyday life and call exponential representation. A value is written with a number of digits that we decide is sufficient to realistically represent it, called the mantissa (or significand), multiplied by a base (or radix) raised to a power called the exponent. In plain words:
num*base^exp
We generally use 10 as the base, because we have 10 fingers on our hands, so we are used to numbers like 1e2, which is 100 = 1*10^2.
Of course we wouldn't bother with exponential representation for such small numbers, but we prefer to use it when acting on very large numbers, or, better, when our number has more digits than we consider necessary to represent the quantity we are measuring.
The right number of digits could be how many we can handle mentally, or how many are required for an engineering application. Once we have decided how many digits we need, we no longer care how close the numeric representation we handle is to the real value. E.g. for a number like 123456.789e5 it is understood that adding 99 units makes no visible difference in the rounded representation, and we consider that acceptable anyway; if not, we should change the representation and use a different one with an appropriate number of digits, as in 12345678900.
On a computer, when you have to handle very large numbers that couldn't fit in a standard integer, or when you have to represent a real number (with a fractional part), the right choice is a float or double floating-point representation. It uses the same layout we discussed above, but the base is 2 instead of 10, because a computer can have only 2 "fingers", the states 0 and 1. So the formula we used before, to represent 100, becomes:
1100100 * 2^0     (1100100 is 100 written in binary)
That still isn't the real floating-point representation, but it gives the idea. Now consider that in a computer the floating-point format is standardized: a standard float, as per IEEE-754, uses as memory layout 23 bits for the mantissa (we will see later why one more bit is assumed for the mantissa), 1 bit for the sign and 8 bits for the exponent biased by 127 (which simply means that the exponent ranges between -126 and +127 without the need for a sign bit, with the values 0x00 and 0xff reserved for special meanings).
Now consider using 0 as the exponent: the value 2^exponent = 2^0 = 1 multiplied by the mantissa gives the same behaviour as a plain binary integer. This implies that incrementing a counter as in:
#include <stdio.h>
int main(void) {
    float f = 0;
    while (1) {
        f += 1;
        printf("%f\n", f);
        if (f == f + 1)   /* stop once adding 1 no longer changes f (at 2^24) */
            break;
    }
    return 0;
}
You will see that the printed value increases by exactly one until the significand saturates (at 2^24 = 16,777,216, counting the implicit bit described below), after which the exponent starts to grow and the steps become larger.
If the base, or radix, of our floating-point number had been 10, once the digits were exhausted we would see an increase only every 10 loops for the next decade of values, then only every 100 for the following one, and so on. You can see that this corresponds to the truncation we have to make because of the limited number of available digits.
The same phenomenon is observed when using base two; the only difference is that the changes happen on power-of-2 intervals.
What we discussed up to now is called the denormalized form of a floating point; what is normally used is its normalized counterpart. The latter simply means that there is a 24th bit, not stored, that is always 1. In plain words, we don't keep an exponent of 0 for numbers less than 2^24: we shift the mantissa left (multiply by 2) until its most significant 1 reaches the 24th position, and the exponent is adjusted to a correspondingly smaller value so that the conversion back shifts the number to its original value.
Remember the reserved exponent values we mentioned above? An exponent of 0x00 means that we have a denormalized number, while an exponent of 0xff indicates a NaN (not-a-number), or +/-infinity if the mantissa is 0.
It should be clear now that when the number we express exceeds the 24 bits of the significand (mantissa), we should expect an approximation of the real value, with an error that depends on how far we are beyond 2^24.
Now the numbers you are using are just at the edge of 2^24 = 16,777,216:
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|0|1|0|0|1|0|1|1|0|1|1|1|1|1|1|1|1|1|1|1|1|1|1|1|1|1|1|1|1|1|1|1| = 16,777,215
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
s\__ exponent __/\_________________ mantissa __________________/
Now increasing by 1 we have:
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|0|1|0|0|1|0|1|1|1|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0| = 16,777,216
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
s\__ exponent __/\_________________ mantissa __________________/
Note that we have carried into the 24th bit; from now on each further representable value comes in steps of 2^1 = 2, i.e. we simply advance by 2 and can represent only even numbers (multiples of 2^1 = 2). Setting the least significant bit of the mantissa to 1 we have:
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|0|1|0|0|1|0|1|1|1|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|1| = 16,777,218
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
s\__ exponent __/\_________________ mantissa __________________/
Increasing again:
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|0|1|0|0|1|0|1|1|1|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|1|0| = 16,777,220
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
s\__ exponent __/\_________________ mantissa __________________/
As you can see we cannot exactly represent 16,777,219. In your code:
// This will print 16777216, because an increment of 1 is not enough
// to change the stored value: at this magnitude the significand can
// only express intervals of 2^1
printf("16777217 as float is %.1f\n",(float)16777217);
// This will print 16777220, because 16777219 = 2^24 + 3 lies exactly
// between the representable values 16777218 and 16777220, and the tie
// is rounded to the value with the even significand
printf("16777219 as float is %.1f\n",(float)16777219);
As said above, the choice of numeric format must be appropriate for the usage: a floating point is only an approximate representation of a real number, and it is definitely our duty to carefully use the right type.
If we need more precision we could use a double, or an integer type such as long long int.
Just for the sake of completeness, I would add a few words on the approximate representation of numbers whose fractional part cannot be written as a finite sum of powers of 1/2 (i.e. whose reduced denominator is not a power of two). Their representation in float format will never be exact, and it needs to be rounded to the intended value when converted back to decimal representation.
For more details see:
https://en.wikipedia.org/wiki/IEEE_754
https://en.wikipedia.org/wiki/Single-precision_floating-point_format
Online demo applets:
https://babbage.cs.qc.cuny.edu/IEEE-754/
https://evanw.github.io/float-toy/
https://www.h-schmidt.net/FloatConverter/IEEE754.html

Why can C represent some floating points but not others with same amount of decimals [duplicate]

There have been several questions posted to SO about floating-point representation. For example, the decimal number 0.1 doesn't have an exact binary representation, so it's dangerous to use the == operator to compare it to another floating-point number. I understand the principles behind floating-point representation.
What I don't understand is why, from a mathematical perspective, are the numbers to the right of the decimal point any more "special" than the ones to the left?
For example, the number 61.0 has an exact binary representation because the integral portion of any number is always exact. But the number 6.10 is not exact. All I did was move the decimal one place and suddenly I've gone from Exactopia to Inexactville. Mathematically, there should be no intrinsic difference between the two numbers -- they're just numbers.
By contrast, if I move the decimal one place in the other direction to produce the number 610, I'm still in Exactopia. I can keep going in that direction (6100, 610000000, 610000000000000) and they're still exact, exact, exact. But as soon as the decimal crosses some threshold, the numbers are no longer exact.
What's going on?
Edit: to clarify, I want to stay away from discussion about industry-standard representations, such as IEEE, and stick with what I believe is the mathematically "pure" way. In base 10, the positional values are:
... 1000 100 10 1 1/10 1/100 ...
In binary, they would be:
... 8 4 2 1 1/2 1/4 1/8 ...
There are also no arbitrary limits placed on these numbers. The positions increase indefinitely to the left and to the right.
Decimal numbers can be represented exactly, if you have enough space - just not by floating binary point numbers. If you use a floating decimal point type (e.g. System.Decimal in .NET) then plenty of values which can't be represented exactly in binary floating point can be exactly represented.
Let's look at it another way - in base 10 which you're likely to be comfortable with, you can't express 1/3 exactly. It's 0.3333333... (recurring). The reason you can't represent 0.1 as a binary floating point number is for exactly the same reason. You can represent 3, and 9, and 27 exactly - but not 1/3, 1/9 or 1/27.
The problem is that 3 is a prime number which isn't a factor of 10. That's not an issue when you want to multiply a number by 3: you can always multiply by an integer without running into problems. But when you divide by a number which is prime and isn't a factor of your base, you can run into trouble (and will do so if you try to divide 1 by that number).
Although 0.1 is usually used as the simplest example of an exact decimal number which can't be represented exactly in binary floating point, arguably 0.2 is a simpler example as it's 1/5 - and 5 is the prime that causes problems between decimal and binary.
Side note to deal with the problem of finite representations:
Some floating decimal point types have a fixed size like System.Decimal others like java.math.BigDecimal are "arbitrarily large" - but they'll hit a limit at some point, whether it's system memory or the theoretical maximum size of an array. This is an entirely separate point to the main one of this answer, however. Even if you had a genuinely arbitrarily large number of bits to play with, you still couldn't represent decimal 0.1 exactly in a floating binary point representation. Compare that with the other way round: given an arbitrary number of decimal digits, you can exactly represent any number which is exactly representable as a floating binary point.
For example, the number 61.0 has an exact binary representation because the integral portion of any number is always exact. But the number 6.10 is not exact. All I did was move the decimal one place and suddenly I've gone from Exactopia to Inexactville. Mathematically, there should be no intrinsic difference between the two numbers -- they're just numbers.
Let's step away for a moment from the particulars of bases 10 and 2. Let's ask - in base b, what numbers have terminating representations, and what numbers don't? A moment's thought tells us that a number x has a terminating b-representation if and only if there exists an integer n such that x b^n is an integer.
So, for example, x = 11/500 has a terminating 10-representation, because we can pick n = 3 and then x b^n = 22, an integer. However x = 1/3 does not, because whatever n we pick we will not be able to get rid of the 3.
This second example prompts us to think about factors, and we can see that for any rational x = p/q (assumed to be in lowest terms), we can answer the question by comparing the prime factorisations of b and q. If q has any prime factors not in the prime factorisation of b, we will never be able to find a suitable n to get rid of these factors.
Thus for base 10, any p/q where q has prime factors other than 2 or 5 will not have a terminating representation.
So now going back to bases 10 and 2, we see that any rational with a terminating 10-representation will be of the form p/q exactly when q has only 2s and 5s in its prime factorisation; and that same number will have a terminating 2-representation exactly when q has only 2s in its prime factorisation.
But one of these cases is a subset of the other! Whenever
q has only 2s in its prime factorisation
it obviously is also true that
q has only 2s and 5s in its prime factorisation
or, put another way, whenever p/q has a terminating 2-representation, p/q has a terminating 10-representation. The converse however does not hold - whenever q has a 5 in its prime factorisation, it will have a terminating 10-representation, but not a terminating 2-representation. This is the 0.1 example mentioned by other answers.
So there we have the answer to your question - because the prime factors of 2 are a subset of the prime factors of 10, all 2-terminating numbers are 10-terminating numbers, but not vice versa. It's not about 61 versus 6.1 - it's about 10 versus 2.
As a closing note, if by some quirk people used (say) base 17 but our computers used base 5, your intuition would never have been led astray by this - there would be no (non-zero, non-integer) numbers which terminated in both cases!
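That rule is mechanical enough to put into a few lines of code. A sketch (the helper names are mine) that reduces p/q to lowest terms and then strips from q every factor it shares with the base; the representation terminates exactly when nothing is left:

#include <stdio.h>

/* Greatest common divisor, used to reduce p/q to lowest terms. */
static unsigned gcd(unsigned a, unsigned b)
{
    while (b != 0) { unsigned t = a % b; a = b; b = t; }
    return a;
}

/* Does p/q have a terminating representation in the given base? */
static int terminates(unsigned p, unsigned q, unsigned base)
{
    q /= gcd(p, q);                  /* lowest terms: only q matters now */
    for (;;) {
        unsigned g = gcd(q, base);
        if (g == 1)
            break;                   /* no common factor with the base is left */
        while (q % g == 0)
            q /= g;                  /* strip the shared factor */
    }
    return q == 1;                   /* q was built only from prime factors of the base */
}

int main(void)
{
    printf("1/10 in base 10: %d\n", terminates(1, 10, 10));  /* 1: terminates (0.1) */
    printf("1/10 in base  2: %d\n", terminates(1, 10, 2));   /* 0: repeats forever  */
    printf("1/3  in base 10: %d\n", terminates(1, 3, 10));   /* 0: repeats forever  */
    return 0;
}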
The root (mathematical) reason is that when you are dealing with integers, they are countably infinite.
Which means, even though there are an infinite amount of them, we could "count out" all of the items in the sequence, without skipping any. That means if we want to get the item in the 610000000000000th position in the list, we can figure it out via a formula.
However, real numbers are uncountably infinite. You can't say "give me the real number at position 610000000000000" and get back an answer. The reason is that, even between 0 and 1, there is an infinite number of values. The same holds true between any two floating-point numbers.
More info:
http://en.wikipedia.org/wiki/Countable_set
http://en.wikipedia.org/wiki/Uncountable_set
Update:
My apologies, I appear to have misinterpreted the question. My response is about why we cannot represent every real value, I hadn't realized that floating point was automatically classified as rational.
To repeat what I said in my comment to Mr. Skeet: we can represent 1/3, 1/9, 1/27, or any rational in decimal notation. We do it by adding an extra symbol. For example, a line over the digits that repeat in the decimal expansion of the number. What we need to represent decimal numbers as a sequence of binary numbers are 1) a sequence of binary numbers, 2) a radix point, and 3) some other symbol to indicate the repeating part of the sequence.
Hehner's quote notation is a way of doing this. He uses a quote symbol to represent the repeating part of the sequence. The article: http://www.cs.toronto.edu/~hehner/ratno.pdf and the Wikipedia entry: http://en.wikipedia.org/wiki/Quote_notation.
There's nothing that says we can't add a symbol to our representation system, so we can represent decimal rationals exactly using binary quote notation, and vice versa.
BCD - Binary-coded Decimal - representations are exact. They are not very space-efficient, but that's a trade-off you have to make for accuracy in this case.
This is a good question.
Your whole question is based on "how do we represent a number?"
ALL numbers can be represented with a decimal representation or with a binary (2's complement) representation. All of them!!
BUT some (most of them) require an infinite number of digits ("0" or "1" for each binary position, or "0" through "9" for each decimal position).
Like 1/3 in decimal representation (1/3 = 0.3333333... <- with an infinite number of "3")
Like 0.1 in binary ( 0.1 = 0.00011001100110011.... <- with an infinite number of "0011")
Everything follows from that concept. Since your computer can only hold a finite set of digits (decimal or binary), only some numbers can be exactly represented in your computer...
And as Jon said, 3 is a prime number which isn't a factor of 10, so 1/3 cannot be represented with a finite number of digits in base 10.
Even with arbitrary-precision arithmetic, the positional numbering system in base 2 is not able to fully describe 6.1, although it can represent 61.
For 6.1, we must use another representation (like decimal representation, or IEEE 854 that allows base 2 or base 10 for the representation of floating-point values)
If you make a big enough number with floating point (as it can do exponents), then you'll end up with inexactness in front of the decimal point, too. So I don't think your question is entirely valid, because the premise is wrong; it's not the case that shifting the point by powers of 10 will always keep the number exact, because at some point the floating-point number will have to use exponents to represent the largeness of the number and will lose some precision that way as well.
It's the same reason you cannot represent 1/3 exactly in base 10; you need to say 0.33333(3). In binary it is the same type of problem, it just occurs for a different set of numbers.
(Note: I'll append 'b' to indicate binary numbers here. All other numbers are given in decimal)
One way to think about things is in terms of something like scientific notation. We're used to seeing numbers expressed in scientific notation like, 6.022141 * 10^23. Floating point numbers are stored internally using a similar format - mantissa and exponent, but using powers of two instead of ten.
Your 61.0 could be rewritten as 1.90625 * 2^5, or 1.11101b * 2^101b with the mantissa and exponents. To multiply that by ten and (move the decimal point), we can do:
(1.90625 * 2^5) * (1.25 * 2^3) = (2.3828125 * 2^8) = (1.19140625 * 2^9)
or, with the mantissa and exponents in binary:
(1.11101b * 2^101b) * (1.01b * 2^11b) = (10.0110001b * 2^1000b) = (1.00110001b * 2^1001b)
Note what we did there to multiply the numbers. We multiplied the mantissas and added the exponents. Then, since the mantissa ended up greater than two, we normalized the result by bumping the exponent. It's just like when we adjust the exponent after doing an operation on numbers in decimal scientific notation. In each case, the values that we worked with had a finite representation in binary, and so the values output by the basic multiplication and addition operations also produced values with a finite representation.
Now, consider how we'd divide 61 by 10. We'd start by dividing the mantissas, 1.90625 and 1.25. In decimal, this gives 1.525, a nice short number. But what is this if we convert it to binary? We'll do it the usual way -- subtracting out the largest power of two whenever possible, just like converting integer decimals to binary, but we'll use negative powers of two:
1.525 - 1*2^0 --> 1
0.525 - 1*2^-1 --> 1
0.025 - 0*2^-2 --> 0
0.025 - 0*2^-3 --> 0
0.025 - 0*2^-4 --> 0
0.025 - 0*2^-5 --> 0
0.025 - 1*2^-6 --> 1
0.009375 - 1*2^-7 --> 1
0.0015625 - 0*2^-8 --> 0
0.0015625 - 0*2^-9 --> 0
0.0015625 - 1*2^-10 --> 1
0.0005859375 - 1*2^-11 --> 1
0.00009765625...
Uh oh. Now we're in trouble. It turns out that 1.90625 / 1.25 = 1.525 is a repeating fraction when expressed in binary: 1.11101b / 1.01b = 1.10000110011...b. Our machines only have so many bits to hold that mantissa and so they'll just round the fraction and assume zeroes beyond a certain point. The error you see when you divide 61 by 10 is the difference between:
1.100001100110011001100110011001100110011...b * 2^10b
and, say:
1.100001100110011001100110b * 2^10b
It's this rounding of the mantissa that leads to the loss of precision that we associate with floating point values. Even when the mantissa can be expressed exactly (e.g., when just adding two numbers), we can still get numeric loss if the mantissa needs too many digits to fit after normalizing the exponent.
We actually do this sort of thing all the time when we round decimal numbers to a manageable size and just give the first few digits of it. Because we express the result in decimal it feels natural. But if we rounded a decimal and then converted it to a different base, it'd look just as ugly as the decimals we get due to floating point rounding.
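You can watch that lost tail directly. A small sketch (the exact digits depend on the platform's float and double formats, so treat the comments as indicative):

#include <stdio.h>

int main(void)
{
    float  f = 61.0f / 10.0f;
    double d = 61.0  / 10.0;

    printf("float : %.20f\n", f);   /* roughly 6.09999990463256835938 */
    printf("double: %.20f\n", d);   /* roughly 6.09999999999999964473 */
    printf("float result == 6.1 ?  %s\n", f == 6.1 ? "yes" : "no");   /* no */
    return 0;
}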
I'm surprised no one has stated this yet: use continued fractions. Any rational number can be represented finitely in binary this way.
Some examples:
1/3 (0.3333...)
0; 3
5/9 (0.5555...)
0; 1, 1, 4
10/43 (0.232558139534883720930...)
0; 4, 3, 3
9093/18478 (0.49209871198181621387596060179673...)
0; 2, 31, 7, 8, 5
From here, there are a variety of known ways to store a sequence of integers in memory.
In addition to storing your number with perfect accuracy, continued fractions also have some other benefits, such as best rational approximation. If you decide to terminate the sequence of numbers in a continued fraction early, the remaining digits (when recombined to a fraction) will give you the best possible fraction. This is how approximations to pi are found:
Pi's continued fraction:
3; 7, 15, 1, 292 ...
Terminating the sequence at 1, this gives the fraction:
355/113
which is an excellent rational approximation.
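For what it's worth, generating those terms takes only a few lines; a sketch (the function name is mine) that prints the continued-fraction terms of p/q using the Euclidean algorithm:

#include <stdio.h>

/* Print the continued-fraction terms of p/q, e.g. 10/43 -> 0; 4, 3, 3 */
static void cf_terms(long p, long q)
{
    int n = 0;
    while (q != 0) {
        long term = p / q;           /* next term is the integer part */
        long rest = p % q;           /* remainder becomes the next denominator */
        if (n == 0)      printf("%ld;", term);
        else if (n == 1) printf(" %ld", term);
        else             printf(", %ld", term);
        p = q;
        q = rest;
        n++;
    }
    printf("\n");
}

int main(void)
{
    cf_terms(1, 3);      /* 0; 3 */
    cf_terms(5, 9);      /* 0; 1, 1, 4 */
    cf_terms(10, 43);    /* 0; 4, 3, 3 */
    cf_terms(355, 113);  /* 3; 7, 16 */
    return 0;
}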
In the equation
2^x = y ;
x = log(y) / log(2)
Hence, I was just wondering if we could have a logarithmic base system for binary like,
2^1, 2^0, 2^(log(1/2) / log(2)), 2^(log(1/4) / log(2)), 2^(log(1/8) / log(2)),2^(log(1/16) / log(2)) ........
That might be able to solve the problem, so if you wanted to write something like 32.41 in binary, that would be
2^5 + 2^(log(0.4) / log(2)) + 2^(log(0.01) / log(2))
Or
2^5 + 2^(log(0.41) / log(2))
The problem is that you do not really know whether the number actually is exactly 61.0. Consider this:
float a = 60;
float b = 0.1;
float c = a + b * 10;
What is the value of c? It is not exactly 61, because b is not really .1 because .1 does not have an exact binary representation.
The number 61.0 does indeed have an exact floating-point representation—but that's not true for all integers. If you wrote a loop that added one to both a double-precision floating point number and a 64-bit integer, eventually you'd reach a point where the 64-bit integer perfectly represents a number, but the floating point doesn't—because there aren't enough significant bits.
It's just much easier to reach the point of approximation on the right side of the decimal point. If you started writing out all the numbers in binary floating point, it'd make more sense.
Another way of thinking about it is that when you note that 61.0 is perfectly representable in base 10, and shifting the decimal point around doesn't change that, you're performing multiplication by powers of ten (10^1, 10^-1). In floating point, multiplying by powers of two does not affect the precision of the number. Try taking 61.0 and dividing it by three repeatedly for an illustration of how a perfectly precise number can lose its precise representation.
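A quick sketch of that last suggestion (the printed digits are platform dependent; the point is simply that each division by 3 forces a rounded result):

#include <stdio.h>

int main(void)
{
    double x = 61.0;                /* exactly representable */
    for (int i = 0; i < 4; i++) {
        x /= 3.0;                   /* result must be rounded to the nearest double */
        printf("%.20f\n", x);
    }
    return 0;
}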
There's a threshold because the meaning of the digit has gone from integer to non-integer. To represent 61, you have 6*10^1 + 1*10^0; 10^1 and 10^0 are both integers. 6.1 is 6*10^0 + 1*10^-1, but 10^-1 is 1/10, which is definitely not an integer. That's how you end up in Inexactville.
A parallel can be made with fractions and whole numbers. Some fractions, e.g. 1/7, cannot be represented in decimal form without infinitely many decimal digits. Because floating point is binary-based the special cases change, but the same sort of accuracy problems present themselves.
There are an infinite number of rational numbers, and a finite number of bits with which to represent them. See http://en.wikipedia.org/wiki/Floating_point#Accuracy_problems.
You know integer numbers, right? Each bit represents 2^n:
2^4=16
2^3=8
2^2=4
2^1=2
2^0=1
Well, it's the same for floating point (with some distinctions), but the fraction bits represent 2^-n:
2^-1=1/2=0.5
2^-2=1/(2*2)=0.25
2^-3=0.125
2^-4=0.0625
Floating point binary representation:
sign | exponent | fraction (an implicit leading 1 precedes the stored fraction)
B11 B10 B9 B8 B7 B6 B5 B4 B3 B2 B1 B0
The high scoring answer above nailed it.
First you were mixing base 2 and base 10 in your question; then, when you put a number on the right side that is not divisible into the base, you get problems. Like 1/3 in decimal, because 3 doesn't go into a power of 10, or 1/5 in binary, which doesn't go into a power of 2.
Another comment though: NEVER use equality comparison with floating-point numbers, period. Even if it is an exact representation, there are some numbers in some floating-point systems that can be accurately represented in more than one way (IEEE is bad about this; it is a horrible floating-point spec to start with, so expect headaches). No different here: 1/3 is not EQUAL to the number on your calculator, 0.3333333, no matter how many 3's there are to the right of the decimal point. It is, or can be, close enough, but is not equal. So you would expect something like 2*(1/3) to not equal 2/3, depending on the rounding. Never use equality with floating point.
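To make that concrete, here is a small sketch using the classic 0.1 + 0.2 case; the tolerance comparison at the end is a common workaround rather than something from the answer above, and choosing a sensible tolerance is application specific:

#include <math.h>
#include <stdio.h>

int main(void)
{
    double a = 0.1 + 0.2;

    printf("0.1 + 0.2 == 0.3 ?  %s\n", a == 0.3 ? "yes" : "no");  /* no */
    printf("difference: %.20f\n", a - 0.3);                       /* tiny, but not zero */

    double tol = 1e-9;                                            /* compare within a tolerance instead of with == */
    printf("close enough?        %s\n", fabs(a - 0.3) < tol ? "yes" : "no");
    return 0;
}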
As we have been discussing, in floating point arithmetic, the decimal 0.1 cannot be perfectly represented in binary.
Floating point and integer representations provide grids or lattices for the numbers represented. As arithmetic is done, the results fall off the grid and have to be put back onto the grid by rounding. Example is 1/10 on a binary grid.
If we use binary coded decimal representation as one gentleman suggested, would we be able to keep numbers on the grid?
For a simple answer: the computer doesn't have infinite memory to store the fraction (after representing the decimal number in the form of scientific notation). According to the IEEE 754 standard for double-precision floating-point numbers, we only have a limit of 53 bits to store the fraction.
For more info: http://mathcenter.oxford.emory.edu/site/cs170/ieee754/
I will not bother to repeat what the other 20 answers have already summarized, so I will just answer briefly:
The answer in your content:
Why can't base two numbers represent certain ratios exactly?
For the same reason that decimals are insufficient to represent certain ratios, namely, irreducible fractions whose denominators contain prime factors other than two or five, which will always have an indefinitely repeating string of digits in their decimal expansion.
Why can't decimal numbers be represented exactly in binary?
This question at face value is based on a misconception regarding values themselves. No number system is sufficient to represent any quantity or ratio in a manner that the thing itself tells you that it is both a quantity, and at the same time also gives the interpretation in and of itself about the intrinsic value of the representation. As such, all quantitative representations, and models in general, are symbolic and can only be understood a posteriori, namely, after one has been taught how to read and interpret these numbers.
Since models are subjective things that are true insofar as they reflect reality, we do not strictly need to interpret a binary string as sums of negative and positive powers of two. Instead, one may observe that we can create an arbitrary set of symbols that use base two or any other base to represent any number or ratio exactly. Just consider that we can refer to all of infinity using a single word and even a single symbol without "showing infinity" itself.
As an example, I am designing a binary encoding for mixed numbers so that I can have more precision and accuracy than an IEEE 754 float. At the time of writing this, the idea is to have a sign bit, a reciprocal bit, a certain number of bits for a scalar to determine how much to "magnify" the fractional portion, and then the remaining bits are divided evenly between the integer portion of a mixed number, and the latter a fixed-point number which, if the reciprocal bit is set, should be interpreted as one divided by that number. This has the benefit of allowing me to represent numbers with infinite decimal expansions by using their reciprocals which do have terminating decimal expansions, or alternatively, as a fraction directly, potentially as an approximation, depending on my needs.
You can't represent 0.1 exactly in binary for the same reason you can't measure 0.1 inch using a conventional English ruler.
English rulers, like binary fractions, are all about halves. You can measure half an inch, or a quarter of an inch (which is of course half of a half), or an eighth, or a sixteenth, etc.
If you want to measure a tenth of an inch, though, you're out of luck. It's less than an eighth of an inch, but more than a sixteenth. If you try to get more exact, you find that it's a little more than 3/32, but a little less than 7/64. I've never seen an actual ruler that had gradations finer than 64ths, but if you do the math, you'll find that 1/10 is less than 13/128, and it's more than 25/256, and it's more than 51/512. You can keep going finer and finer, to 1024ths and 2048ths and 4096ths and 8192nds, but you will never find an exact marking, even on an infinitely-fine base-2 ruler, that exactly corresponds to 1/10, or 0.1.
You will find something interesting, though. Let's look at all the approximations I've listed, and for each one, record explicitly whether 0.1 is less or greater:
fraction    decimal          0.1 is...   as 0/1
1/2         0.5              less        0
1/4         0.25             less        0
1/8         0.125            less        0
1/16        0.0625           greater     1
3/32        0.09375          greater     1
7/64        0.109375         less        0
13/128      0.1015625        less        0
25/256      0.09765625       greater     1
51/512      0.099609375      greater     1
103/1024    0.1005859375     less        0
205/2048    0.10009765625    less        0
409/4096    0.099853515625   greater     1
819/8192    0.0999755859375  greater     1
Now, if you read down the last column, you get 0001100110011. It's no coincidence that the infinitely-repeating binary fraction for 1/10 is 0.0001100110011...
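The last column can also be generated mechanically. A sketch that produces the binary digits of 1/10 using exact integer arithmetic (the digit count of 20 is arbitrary):

#include <stdio.h>

int main(void)
{
    int num = 1, den = 10;          /* the fraction 1/10 */

    printf("0.");
    for (int i = 0; i < 20; i++) {
        num *= 2;                   /* shift the binary point one place to the right */
        printf("%d", num / den);    /* the next binary digit */
        num %= den;                 /* keep only the fractional part */
    }
    printf("...\n");                /* prints 0.00011001100110011001... */
    return 0;
}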

Floating Point Number storage in c [duplicate]

There have been several questions posted to SO about floating-point representation. For example, the decimal number 0.1 doesn't have an exact binary representation, so it's dangerous to use the == operator to compare it to another floating-point number. I understand the principles behind floating-point representation.
What I don't understand is why, from a mathematical perspective, are the numbers to the right of the decimal point any more "special" that the ones to the left?
For example, the number 61.0 has an exact binary representation because the integral portion of any number is always exact. But the number 6.10 is not exact. All I did was move the decimal one place and suddenly I've gone from Exactopia to Inexactville. Mathematically, there should be no intrinsic difference between the two numbers -- they're just numbers.
By contrast, if I move the decimal one place in the other direction to produce the number 610, I'm still in Exactopia. I can keep going in that direction (6100, 610000000, 610000000000000) and they're still exact, exact, exact. But as soon as the decimal crosses some threshold, the numbers are no longer exact.
What's going on?
Edit: to clarify, I want to stay away from discussion about industry-standard representations, such as IEEE, and stick with what I believe is the mathematically "pure" way. In base 10, the positional values are:
... 1000 100 10 1 1/10 1/100 ...
In binary, they would be:
... 8 4 2 1 1/2 1/4 1/8 ...
There are also no arbitrary limits placed on these numbers. The positions increase indefinitely to the left and to the right.
Decimal numbers can be represented exactly, if you have enough space - just not by floating binary point numbers. If you use a floating decimal point type (e.g. System.Decimal in .NET) then plenty of values which can't be represented exactly in binary floating point can be exactly represented.
Let's look at it another way - in base 10 which you're likely to be comfortable with, you can't express 1/3 exactly. It's 0.3333333... (recurring). The reason you can't represent 0.1 as a binary floating point number is for exactly the same reason. You can represent 3, and 9, and 27 exactly - but not 1/3, 1/9 or 1/27.
The problem is that 3 is a prime number which isn't a factor of 10. That's not an issue when you want to multiply a number by 3: you can always multiply by an integer without running into problems. But when you divide by a number which is prime and isn't a factor of your base, you can run into trouble (and will do so if you try to divide 1 by that number).
Although 0.1 is usually used as the simplest example of an exact decimal number which can't be represented exactly in binary floating point, arguably 0.2 is a simpler example as it's 1/5 - and 5 is the prime that causes problems between decimal and binary.
Side note to deal with the problem of finite representations:
Some floating decimal point types have a fixed size like System.Decimal others like java.math.BigDecimal are "arbitrarily large" - but they'll hit a limit at some point, whether it's system memory or the theoretical maximum size of an array. This is an entirely separate point to the main one of this answer, however. Even if you had a genuinely arbitrarily large number of bits to play with, you still couldn't represent decimal 0.1 exactly in a floating binary point representation. Compare that with the other way round: given an arbitrary number of decimal digits, you can exactly represent any number which is exactly representable as a floating binary point.
For example, the number 61.0 has an exact binary representation because the integral portion of any number is always exact. But the number 6.10 is not exact. All I did was move the decimal one place and suddenly I've gone from Exactopia to Inexactville. Mathematically, there should be no intrinsic difference between the two numbers -- they're just numbers.
Let's step away for a moment from the particulars of bases 10 and 2. Let's ask - in base b, what numbers have terminating representations, and what numbers don't? A moment's thought tells us that a number x has a terminating b-representation if and only if there exists an integer n such that x b^n is an integer.
So, for example, x = 11/500 has a terminating 10-representation, because we can pick n = 3 and then x b^n = 22, an integer. However x = 1/3 does not, because whatever n we pick we will not be able to get rid of the 3.
This second example prompts us to think about factors, and we can see that for any rational x = p/q (assumed to be in lowest terms), we can answer the question by comparing the prime factorisations of b and q. If q has any prime factors not in the prime factorisation of b, we will never be able to find a suitable n to get rid of these factors.
Thus for base 10, any p/q where q has prime factors other than 2 or 5 will not have a terminating representation.
So now going back to bases 10 and 2, we see that any rational with a terminating 10-representation will be of the form p/q exactly when q has only 2s and 5s in its prime factorisation; and that same number will have a terminating 2-representatiion exactly when q has only 2s in its prime factorisation.
But one of these cases is a subset of the other! Whenever
q has only 2s in its prime factorisation
it obviously is also true that
q has only 2s and 5s in its prime factorisation
or, put another way, whenever p/q has a terminating 2-representation, p/q has a terminating 10-representation. The converse however does not hold - whenever q has a 5 in its prime factorisation, it will have a terminating 10-representation , but not a terminating 2-representation. This is the 0.1 example mentioned by other answers.
So there we have the answer to your question - because the prime factors of 2 are a subset of the prime factors of 10, all 2-terminating numbers are 10-terminating numbers, but not vice versa. It's not about 61 versus 6.1 - it's about 10 versus 2.
As a closing note, if by some quirk people used (say) base 17 but our computers used base 5, your intuition would never have been led astray by this - there would be no (non-zero, non-integer) numbers which terminated in both cases!
The root (mathematical) reason is that when you are dealing with integers, they are countably infinite.
Which means, even though there are an infinite amount of them, we could "count out" all of the items in the sequence, without skipping any. That means if we want to get the item in the 610000000000000th position in the list, we can figure it out via a formula.
However, real numbers are uncountably infinite. You can't say "give me the real number at position 610000000000000" and get back an answer. The reason is because, even between 0 and 1, there are an infinite number of values, when you are considering floating-point values. The same holds true for any two floating point numbers.
More info:
http://en.wikipedia.org/wiki/Countable_set
http://en.wikipedia.org/wiki/Uncountable_set
Update:
My apologies, I appear to have misinterpreted the question. My response is about why we cannot represent every real value, I hadn't realized that floating point was automatically classified as rational.
To repeat what I said in my comment to Mr. Skeet: we can represent 1/3, 1/9, 1/27, or any rational in decimal notation. We do it by adding an extra symbol. For example, a line over the digits that repeat in the decimal expansion of the number. What we need to represent decimal numbers as a sequence of binary numbers are 1) a sequence of binary numbers, 2) a radix point, and 3) some other symbol to indicate the repeating part of the sequence.
Hehner's quote notation is a way of doing this. He uses a quote symbol to represent the repeating part of the sequence. The article: http://www.cs.toronto.edu/~hehner/ratno.pdf and the Wikipedia entry: http://en.wikipedia.org/wiki/Quote_notation.
There's nothing that says we can't add a symbol to our representation system, so we can represent decimal rationals exactly using binary quote notation, and vice versa.
BCD - Binary-coded Decimal - representations are exact. They are not very space-efficient, but that's a trade-off you have to make for accuracy in this case.
This is a good question.
All your question is based on "how do we represent a number?"
ALL the numbers can be represented with decimal representation or with binary (2's complement) representation. All of them !!
BUT some (most of them) require infinite number of elements ("0" or "1" for the binary position, or "0", "1" to "9" for the decimal representation).
Like 1/3 in decimal representation (1/3 = 0.3333333... <- with an infinite number of "3")
Like 0.1 in binary ( 0.1 = 0.00011001100110011.... <- with an infinite number of "0011")
Everything is in that concept. Since your computer can only consider finite set of digits (decimal or binary), only some numbers can be exactly represented in your computer...
And as said Jon, 3 is a prime number which isn't a factor of 10, so 1/3 cannot be represented with a finite number of elements in base 10.
Even with arithmetic with arbitrary precision, the numbering position system in base 2 is not able to fully describe 6.1, although it can represent 61.
For 6.1, we must use another representation (like decimal representation, or IEEE 854 that allows base 2 or base 10 for the representation of floating-point values)
If you make a big enough number with floating point (as it can do exponents), then you'll end up with inexactness in front of the decimal point, too. So I don't think your question is entirely valid because the premise is wrong; it's not the case that shifting by 10 will always create more precision, because at some point the floating point number will have to use exponents to represent the largeness of the number and will lose some precision that way as well.
It's the same reason you cannot represent 1/3 exactly in base 10, you need to say 0.33333(3). In binary it is the same type of problem but just occurs for different set of numbers.
(Note: I'll append 'b' to indicate binary numbers here. All other numbers are given in decimal)
One way to think about things is in terms of something like scientific notation. We're used to seeing numbers expressed in scientific notation like, 6.022141 * 10^23. Floating point numbers are stored internally using a similar format - mantissa and exponent, but using powers of two instead of ten.
Your 61.0 could be rewritten as 1.90625 * 2^5, or 1.11101b * 2^101b with the mantissa and exponents. To multiply that by ten and (move the decimal point), we can do:
(1.90625 * 2^5) * (1.25 * 2^3) = (2.3828125 * 2^8) = (1.19140625 * 2^9)
or in with the mantissa and exponents in binary:
(1.11101b * 2^101b) * (1.01b * 2^11b) = (10.0110001b * 2^1000b) = (1.00110001b * 2^1001b)
Note what we did there to multiply the numbers. We multiplied the mantissas and added the exponents. Then, since the mantissa ended greater than two, we normalized the result by bumping the exponent. It's just like when we adjust the exponent after doing an operation on numbers in decimal scientific notation. In each case, the values that we worked with had a finite representation in binary, and so the values output by the basic multiplication and addition operations also produced values with a finite representation.
Now, consider how we'd divide 61 by 10. We'd start by dividing the mantissas, 1.90625 and 1.25. In decimal, this gives 1.525, a nice short number. But what is this if we convert it to binary? We'll do it the usual way -- subtracting out the largest power of two whenever possible, just like converting integer decimals to binary, but we'll use negative powers of two:
1.525 - 1*2^0 --> 1
0.525 - 1*2^-1 --> 1
0.025 - 0*2^-2 --> 0
0.025 - 0*2^-3 --> 0
0.025 - 0*2^-4 --> 0
0.025 - 0*2^-5 --> 0
0.025 - 1*2^-6 --> 1
0.009375 - 1*2^-7 --> 1
0.0015625 - 0*2^-8 --> 0
0.0015625 - 0*2^-9 --> 0
0.0015625 - 1*2^-10 --> 1
0.0005859375 - 1*2^-11 --> 1
0.00009765625...
Uh oh. Now we're in trouble. It turns out that 1.90625 / 1.25 = 1.525, is a repeating fraction when expressed in binary: 1.11101b / 1.01b = 1.10000110011...b Our machines only have so many bits to hold that mantissa and so they'll just round the fraction and assume zeroes beyond a certain point. The error you see when you divide 61 by 10 is the difference between:
1.100001100110011001100110011001100110011...b * 2^10b
and, say:
1.100001100110011001100110b * 2^10b
It's this rounding of the mantissa that leads to the loss of precision that we associate with floating point values. Even when the mantissa can be expressed exactly (e.g., when just adding two numbers), we can still get numeric loss if the mantissa needs too many digits to fit after normalizing the exponent.
We actually do this sort of thing all the time when we round decimal numbers to a manageable size and just give the first few digits of it. Because we express the result in decimal it feels natural. But if we rounded a decimal and then converted it to a different base, it'd look just as ugly as the decimals we get due to floating point rounding.
I'm surprised no one has stated this yet: use continued fractions. Any rational number can be represented finitely in binary this way.
Some examples:
1/3 (0.3333...)
0; 3
5/9 (0.5555...)
0; 1, 1, 4
10/43 (0.232558139534883720930...)
0; 4, 3, 3
9093/18478 (0.49209871198181621387596060179673...)
0; 2, 31, 7, 8, 5
From here, there are a variety of known ways to store a sequence of integers in memory.
In addition to storing your number with perfect accuracy, continued fractions also have some other benefits, such as best rational approximation. If you decide to terminate the sequence of numbers in a continued fraction early, the remaining digits (when recombined to a fraction) will give you the best possible fraction. This is how approximations to pi are found:
Pi's continued fraction:
3; 7, 15, 1, 292 ...
Terminating the sequence at 1, this gives the fraction:
355/113
which is an excellent rational approximation.
In the equation
2^x = y ;
x = log(y) / log(2)
Hence, I was just wondering if we could have a logarithmic base system for binary like,
2^1, 2^0, 2^(log(1/2) / log(2)), 2^(log(1/4) / log(2)), 2^(log(1/8) / log(2)),2^(log(1/16) / log(2)) ........
That might be able to solve the problem, so if you wanted to write something like 32.41 in binary, that would be
2^5 + 2^(log(0.4) / log(2)) + 2^(log(0.01) / log(2))
Or
2^5 + 2^(log(0.41) / log(2))
The problem is that you do not really know whether the number actually is exactly 61.0 . Consider this:
float a = 60;
float b = 0.1;
float c = a + b * 10;
What is the value of c? It is not exactly 61, because b is not really .1 because .1 does not have an exact binary representation.
The number 61.0 does indeed have an exact floating-point operation—but that's not true for all integers. If you wrote a loop that added one to both a double-precision floating point number and a 64-bit integer, eventually you'd reach a point where the 64-bit integer perfectly represents a number, but the floating point doesn't—because there aren't enough significant bits.
It's just much easier to reach the point of approximation on the right side of the decimal point. If you started writing out all the numbers in binary floating point, it'd make more sense.
Another way of thinking about it is that when you note that 61.0 is perfectly representable in base 10, and shifting the decimal point around doesn't change that, you're performing multiplication by powers of ten (10^1, 10^-1). In floating point, multiplying by powers of two does not affect the precision of the number. Try taking 61.0 and dividing it by three repeatedly for an illustration of how a perfectly precise number can lose its precise representation.
There's a threshold because the meaning of the digit has gone from integer to non-integer. To represent 61, you have 6*10^1 + 1*10^0; 10^1 and 10^0 are both integers. 6.1 is 6*10^0 + 1*10^-1, but 10^-1 is 1/10, which is definitely not an integer. That's how you end up in Inexactville.
A parallel can be drawn with fractions and whole numbers. Some fractions, e.g. 1/7, cannot be represented in decimal form without lots and lots of decimals. Because floating point is binary based, the special cases change, but the same sort of accuracy problems present themselves.
There are an infinite number of rational numbers, and a finite number of bits with which to represent them. See http://en.wikipedia.org/wiki/Floating_point#Accuracy_problems.
You know how integer numbers work, right? Each bit represents 2^n:
2^4=16
2^3=8
2^2=4
2^1=2
2^0=1
Well, it's the same for floating point (with some distinctions), but the bits represent 2^-n:
2^-1=1/2=0.5
2^-2=1/(2*2)=0.25
2^-3=0.125
2^-4=0.0625
Floating point binary representation:
sign   Exponent   Fraction   (I think an invisible leading 1 is appended to the fraction)
B11 B10 B9 B8 B7 B6 B5 B4 B3 B2 B1 B0
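That sketch shows only 12 bits; a real 32-bit IEEE 754 float uses 1 sign bit, 8 exponent bits and 23 fraction bits (plus the implicit leading 1). A minimal sketch for peeking at those fields, assuming float is a 32-bit IEEE type:

#include <stdio.h>
#include <stdint.h>
#include <string.h>

int main(void)
{
    float f = 0.1f;
    uint32_t bits;
    memcpy(&bits, &f, sizeof bits);   /* assumes sizeof(float) == 4 */

    uint32_t sign     = bits >> 31;            /* 1 bit               */
    uint32_t exponent = (bits >> 23) & 0xFFu;  /* 8 bits, bias 127    */
    uint32_t fraction = bits & 0x7FFFFFu;      /* 23 bits, implicit 1 */

    printf("sign=%u exponent=%u fraction=0x%06X\n",
           (unsigned)sign, (unsigned)exponent, (unsigned)fraction);
    return 0;
}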
The high scoring answer above nailed it.
First, you were mixing base 2 and base 10 in your question; then, when you put a number on the right side of the point whose denominator does not divide into a power of the base, you get problems. Like 1/3 in decimal, because 3 doesn't go into a power of 10, or 1/5 in binary, because 5 doesn't go into a power of 2.
Another comment, though: NEVER use equality with floating point numbers, period. Even if a value has an exact representation, there are some numbers in some floating point systems that can be accurately represented in more than one way (IEEE is bad about this; it is a horrible floating point spec to start with, so expect headaches). It is no different here: 1/3 is not EQUAL to the number on your calculator, 0.3333333, no matter how many 3's there are to the right of the decimal point. It is, or can be, close enough, but it is not equal. So you would expect something like 2*1/3 not to equal 2/3, depending on the rounding. Never use equality with floating point.
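In practice that means comparing against a tolerance instead of using ==; a minimal sketch (the tolerance 1e-9 is arbitrary and has to be chosen for the problem at hand):

#include <stdio.h>
#include <math.h>

int main(void)
{
    double a = 0.1 + 0.2;   /* about 0.30000000000000004 in IEEE doubles */
    double b = 0.3;

    printf("a == b: %d\n", a == b);                   /* typically 0 */
    printf("close enough: %d\n", fabs(a - b) < 1e-9); /* 1 */
    return 0;
}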
As we have been discussing, in floating point arithmetic, the decimal 0.1 cannot be perfectly represented in binary.
Floating point and integer representations provide grids or lattices for the numbers they can represent. As arithmetic is done, the results fall off the grid and have to be put back onto the grid by rounding. An example is 1/10 on a binary grid.
If we use binary coded decimal representation as one gentleman suggested, would we be able to keep numbers on the grid?
For a simple answer: the computer doesn't have infinite memory to store the fraction (after the decimal number has been put into the form of scientific notation). According to the IEEE 754 standard for double-precision floating-point numbers, we only have 53 bits to store the fraction.
For more info: http://mathcenter.oxford.emory.edu/site/cs170/ieee754/
I will not bother to repeat what the other 20 answers have already summarized, so I will just answer briefly:
The answer is in your question:
Why can't base two numbers represent certain ratios exactly?
For the same reason that decimals are insufficient to represent certain ratios: irreducible fractions whose denominators contain prime factors other than two or five will always have an infinitely repeating string of digits somewhere in their decimal expansions.
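A small sketch in C of that rule (a reduced fraction 1/den terminates in a given base exactly when every prime factor of den divides the base; terminates() is just an illustrative helper name):

#include <stdio.h>

/* Does the reduced fraction 1/den have a terminating expansion in this base? */
static int terminates(unsigned den, unsigned base)
{
    for (unsigned p = 2; den > 1; p++) {
        if (den % p != 0)
            continue;              /* p is not a factor of den */
        if (base % p != 0)
            return 0;              /* a prime factor of den is missing from base */
        while (den % p == 0)
            den /= p;              /* strip that factor */
    }
    return 1;
}

int main(void)
{
    printf("1/10 in base 10: %d\n", terminates(10, 10)); /* 1: 0.1 terminates   */
    printf("1/10 in base  2: %d\n", terminates(10, 2));  /* 0: repeats forever  */
    printf("1/3  in base 10: %d\n", terminates(3, 10));  /* 0: 0.333... repeats */
    return 0;
}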
Why can't decimal numbers be represented exactly in binary?
This question at face value is based on a misconception about values themselves. No number system is sufficient to represent any quantity or ratio in a way where the representation, in and of itself, tells you both that it is a quantity and what its intrinsic value is. As such, all quantitative representations, and models in general, are symbolic and can only be understood a posteriori, namely, after one has been taught how to read and interpret these numbers.
Since models are subjective things that are true insofar as they reflect reality, we do not strictly need to interpret a binary string as sums of negative and positive powers of two. Instead, one may observe that we can create an arbitrary set of symbols that use base two or any other base to represent any number or ratio exactly. Just consider that we can refer to all of infinity using a single word and even a single symbol without "showing infinity" itself.
As an example, I am designing a binary encoding for mixed numbers so that I can have more precision and accuracy than an IEEE 754 float. At the time of writing, the idea is to have a sign bit, a reciprocal bit, a certain number of bits for a scalar that determines how much to "magnify" the fractional portion, and then the remaining bits divided evenly between the integer portion of the mixed number and a fixed-point fractional portion which, if the reciprocal bit is set, should be interpreted as one divided by that number. This has the benefit of allowing me to represent numbers with infinite decimal expansions by using their reciprocals, which do have terminating decimal expansions, or alternatively as a fraction directly, potentially as an approximation, depending on my needs.
You can't represent 0.1 exactly in binary for the same reason you can't measure 0.1 inch using a conventional English ruler.
English rulers, like binary fractions, are all about halves. You can measure half an inch, or a quarter of an inch (which is of course half of a half), or an eighth, or a sixteenth, etc.
If you want to measure a tenth of an inch, though, you're out of luck. It's less than an eighth of an inch, but more than a sixteenth. If you try to get more exact, you find that it's a little more than 3/32, but a little less than 7/64. I've never seen an actual ruler that had gradations finer than 64ths, but if you do the math, you'll find that 1/10 is less than 13/128, and it's more than 25/256, and it's more than 51/512. You can keep going finer and finer, to 1024ths and 2048ths and 4096ths and 8192nds, but you will never find an exact marking, even on an infinitely-fine base-2 ruler, that exactly corresponds to 1/10, or 0.1.
You will find something interesting, though. Let's look at all the approximations I've listed, and for each one, record explicitly whether 0.1 is less or greater:
fraction    decimal          0.1 is...   as 0/1
1/2         0.5              less        0
1/4         0.25             less        0
1/8         0.125            less        0
1/16        0.0625           greater     1
3/32        0.09375          greater     1
7/64        0.109375         less        0
13/128      0.1015625        less        0
25/256      0.09765625       greater     1
51/512      0.099609375      greater     1
103/1024    0.1005859375     less        0
205/2048    0.10009765625    less        0
409/4096    0.099853515625   greater     1
819/8192    0.0999755859375  greater     1
Now, if you read down the last column, you get 0001100110011. It's no coincidence that the infinitely-repeating binary fraction for 1/10 is 0.0001100110011...
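Those repeating bits can also be generated mechanically by repeated doubling, keeping the arithmetic in exact integers so no rounding sneaks in; a small sketch:

#include <stdio.h>

int main(void)
{
    /* Generate binary digits of 1/10: double the numerator each step
       and emit a 1 whenever it reaches the denominator. */
    unsigned num = 1, den = 10;
    printf("0.");
    for (int i = 0; i < 20; i++) {
        num *= 2;
        if (num >= den) {
            putchar('1');
            num -= den;
        } else {
            putchar('0');
        }
    }
    printf("...\n");   /* 0.00011001100110011001... */
    return 0;
}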

Number of significant digits for a floating point type

The description for type float in C mentions that the number of significant digits is 6. However,
float f = 12345.6;
and then printing it using printf() does not print 12345.6; it prints 12345.599609. So what does "6 significant digits" (or "15 in case of a double") mean for a floating point type?
6 significant digits means that the maximum error is approximately +/- 0.0001%. The single float value actually has about 7.2 digits of precision. This means that the error is about +/- 12345.6/10^7 = 0.00123456, which is on the order of your error (0.000391).
According to the standard, not all decimal numbers can be stored exactly in memory. Depending on the size of the representation, the error can reach a certain maximum. For float this is about 0.0001% (6 significant digits: 10^-6 = 10^-4 %).
In your case the error is (12345.6 - 12345.599609) / 12345.6 = 3.16e-08, far lower than the maximum error for floats.
What you're seeing is not really any issue with significant digits, but the fact that numbers on a computer are stored in binary, and there is no finite binary representation for 3/5 (= 0.6). 3/5 in binary looks like 0.100110011001..., with the "1001" pattern repeating forever. When that sequence is cut off to fit in a float, the stored value is slightly less than 0.6 (roughly 0.599999...). You're actually getting about three correct decimal places to the right of the decimal point before any precision-related error shows up.
This is similar to how there is no finite base-10 representation of 1/3; we have 0.3333 repeating forever.
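One way to see what the six-digit guarantee actually buys you is through FLT_DIG from <float.h>: decimal values with that many significant digits survive a decimal-to-float-to-decimal round trip. A sketch:

#include <stdio.h>
#include <float.h>

int main(void)
{
    float f = 12345.6;

    printf("FLT_DIG = %d\n", FLT_DIG);  /* at least 6 */
    printf("%.*g\n", FLT_DIG, f);       /* 12345.6: round-trips at 6 digits */
    printf("%.9g\n", f);                /* e.g. 12345.5996: the stored value */
    return 0;
}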
The problem here is that you cannot guarantee that a given decimal number can be stored exactly in a float. The number has to be represented with a mantissa, base and exponent, as IEEE 754 explains. The value printf(...) shows you is the actual float number that was stored. You cannot count on a fixed number of exact significant digits in a float.

Resources