Generally we say that a float has a precision of 6 digits after the decimal point. But if we store a large number on the order of 10^30, we won't get 6 digits after the decimal point. So is it correct to say that floats have a precision of 6 digits after the decimal point?
"6 digits after the decimal point" is nonesnse, and your example is a good demonstration of this.
The float type does have an exactly specified precision, just not in decimal digits. The precision of a float is 24 bits: 23 bits denote the fraction after the binary point, plus there's an "implicit leading bit", giving 24 significant bits in total.
Hence in decimal digits this is approximately:
24 * log(2) / log(10) = 7.22
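If you want to check this on your own machine, the standard <float.h> and <math.h> headers expose these quantities directly. A minimal sketch (the commented output assumes IEEE 754 single precision):

#include <float.h>
#include <math.h>
#include <stdio.h>

int main(void) {
    /* FLT_MANT_DIG is the number of base-FLT_RADIX digits in the
       significand -- 24 for an IEEE 754 single-precision float. */
    printf("significand bits: %d\n", FLT_MANT_DIG);                 /* 24 */
    printf("decimal digits:   %.2f\n", FLT_MANT_DIG * log10(2.0));  /* 7.22 */
    printf("FLT_DIG:          %d\n", FLT_DIG);                      /* 6 */
    return 0;
}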
It sounds like you're asking about precision to decimal places (digits after the decimal point), whereas significant figures (the total number of digits excluding leading and trailing zeroes) is a better way to describe the accuracy of numbers.
You're correct that the number of digits after the decimal point will change when the number is larger - but if we're talking precision, the number of significant figures will not change when the number is larger. However, the answer isn't simple for decimal numbers:
Most systems these days use IEEE floating point format to represent numbers in C. However, if you're on something unusual, it's worth checking. Single precision IEEE float numbers are made up of three parts:
The sign bit (is this number positive or negative)
The exponent (effectively signed, stored in biased form)
The fraction (the number before the exponent is applied)
As we'd expect, this is all stored in binary.
How many significant figures?
If you are using IEEE 754 numbers, "how many significant figures" probably isn't an easy way to think about it, because the precision is measured in binary significant figures rather than decimal. A float has only 23 bits of accuracy for the fraction part, but because a normalized binary number always starts with a 1, there is an implicit leading bit, giving 24 effective bits of precision.
This means there are 24 significant binary digits, which does not translate to an exact number of decimal significant figures. You can use the formula 24 * log(2) / log(10) to determine that there are 7.225 digits of decimal precision, which isn't a very good answer to your question, since there are numbers of 24 significant binary digits which only have 6 significant decimal digits.
So, single precision floating point numbers have 6-9 significant decimal digits of precision, depending on the number.
Interestingly, you can also use this precision to work out the largest consecutive integer (counting from zero) that you can successfully represent in a single precision float. It is 2^24, or 16,777,216. You can exactly store larger integers, but only if they can be represented in 24 significant binary digits.
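Here's a minimal sketch of that boundary in C (assuming IEEE 754 single-precision floats):

#include <stdio.h>

int main(void) {
    float f = 16777216.0f;       /* 2^24 */
    printf("%.1f\n", f + 1.0f);  /* 16777216.0 -- 16777217 is not representable */
    printf("%.1f\n", f + 2.0f);  /* 16777218.0 -- even numbers still fit */
    return 0;
}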
Further trivia: The limited size of the fraction component is the same thing that causes this in JavaScript:
> console.log(9999999999999999);
10000000000000000
JavaScript numbers are always represented as double precision floats, which have 53 bits of precision. This means that between 2^53 and 2^54, only even numbers can be represented, because the final bit of any odd number is lost.
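For comparison, here is the same experiment in C, assuming double is IEEE 754 binary64:

#include <stdio.h>

int main(void) {
    /* 9999999999999999 has 16 digits; the nearest double is 10^16. */
    double d = 9999999999999999.0;
    printf("%.17g\n", d);   /* prints 10000000000000000 */
    return 0;
}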
The precision of floating point numbers should be measured in binary digits, not decimal digits. This is because computers operate on binary numbers, and a binary fraction can only approximate a decimal fraction.
Language lawyers will say that the exact width of a float is unspecified by the C standard and therefore implementation-dependent, but on any platform you are likely to encounter, a C float means an IEEE 754 single-precision number.
IEEE 754 specifies that a floating point number is in scientific notation: (-1)^s × 2^e × m
where s is one bit wide, e is eight bits wide, and m is twenty-three bits wide. Mathematically, m is 24 bits wide because the top bit is always assumed to be 1.
So, the maximum number of decimal digits that can be approximated with this representation is: log10(2^24) = 7.22.
That approximates seven significant decimal digits, with an exponent ranging from 2^-126 to 2^127.
Notice that the exponent is measured separately. This is exactly like if you were using ordinary scientific notation, like "A person weighs 72.3 kilograms = 7.23×10^4 grams". Notice that there are three significant digits here, representing that the number is only accurate to within 100 grams. But there is also an exponent which is a different number entirely. You can have a very big exponent with very few significant digits, like "the sun weighs 1.99×10^33 grams." Big number, few digits.
In a nutshell, a float can store about 7-8 significant decimal digits. Let me illustrate this with an example:
1234567001.00
       ^
       +---------------- this information is lost

.01234567001
         ^
         +-------------- this information is lost
Basically, the float stores two values: 1234567 and the position of the decimal point.
Now, this is a simplified example. Floats store binary values instead of decimal values. A 32-bit IEEE 754 float has space for 23 "significant bits" (plus the first one which is always assumed to be 1), which corresponds to roughly 7-8 decimal digits.
1234567001.00 (dec) =
1001001100101011111111101011001.00 (bin) gets rounded to
1001001100101011111111110000000.00 =
 |       23 bits       |
1234567040.00 (dec)
And this is exactly what C produces:
#include <stdio.h>

int main(void) {
    float a = 1234567001;
    printf("%f\n", a);  /* prints 1234567040.000000 */
    return 0;
}
While trying to understand int, if I was given the size of int in bits, I could use the formula of permutations to determine the maximum positive and negative base-10 values of int. So if a signed int is 16 bits wide, I can use 2^16 to determine the number of possible permutations and then can calculate the maximum number of positive numbers and the maximum number of negative numbers by using 2^15.
In a 32 bit float, 24 bits are assigned for the significand and its sign. 2^23 would be the maximum number of permutations, if we consider the sign to be positive. How can I get the maximum value of the significand from this number 2^23? Or is my understanding of floating point numbers flawed?
IEEE 754 uses the term significand rather than mantissa, and so does C: the standard does not define mantissa; it uses significand.
Common float normal¹ values have a 24-bit significand that is made up of 1 implied bit with a value of 1 and 23 explicitly encoded binary fractional bits. All 2^23 combinations of the explicit bits are possible.
The maximum significand is 1.11111111 11111111 1111111 in binary, which is 1.99999988079071044921875 in decimal, or (2.0 - 2^-23).
When this is combined with the maximum binary exponent for finite numbers, 2^(254-127), the maximum float, FLT_MAX, is 340282346638528859811704183484516925440.0, or about 3.402823466e+38.
¹ For subnormal values, there is no implied bit.
The maximum subnormal significand is 0.11111111 11111111 1111111 in binary, or 0.99999988079071044921875 in decimal.
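You can verify these numbers with the constants from <float.h>. A small sketch, assuming IEEE 754 binary32 (0x1p127f is a C99 hexadecimal float literal for 2^127):

#include <float.h>
#include <stdio.h>

int main(void) {
    /* Maximum significand: 2 - 2^-23, i.e. 2.0f - FLT_EPSILON. */
    float max_sig = 2.0f - FLT_EPSILON;
    printf("max significand: %.10f\n", max_sig);    /* 1.9999998808 */
    /* FLT_MAX is that significand scaled by 2^127. */
    printf("%d\n", max_sig * 0x1p127f == FLT_MAX);  /* 1 */
    printf("FLT_MAX: %g\n", FLT_MAX);               /* 3.40282e+38 */
    return 0;
}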
The number of possible values of a normal significand of a float is (FLT_RADIX-1)/FLT_EPSILON, where FLT_RADIX and FLT_EPSILON are defined by including <float.h>.
This is because FLT_EPSILON is the step size from 1 to the next greater representable number, so it is a change of 1 in the significand bits (when they are interpreted as a binary integer and we are starting from the floating-point number 1.000…000). FLT_RADIX/FLT_EPSILON calculates how many steps the significand could go through, starting from 0, until it wraps or overflows its leading digit. However, we do not start at zero; the question requests excluding the implicit leading 1 bit. The leading bit of a normalized binary-based floating-point number is 1, but, when we generalize to other bases, the leading digit of a floating-point number may be something other than 1 for a normalized number; it can be a non-zero integer less than FLT_RADIX. So, starting from 1 instead of 0, there are (FLT_RADIX-1)/FLT_EPSILON possible values of normal significands.
Note that (FLT_RADIX-1)/FLT_EPSILON has an integer value but floating-point type. To use it as an integer type, you may need a cast, such as when printing it with %d.
The floating-point number with the same scale (exponent) as 1 but the maximum significand is FLT_RADIX - FLT_EPSILON. The maximum value of the significand as an integer is FLT_RADIX/FLT_EPSILON - 1. Note that the latter includes the leading digit.
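A quick sketch of both expressions (the printed values assume IEEE 754 binary32):

#include <float.h>
#include <stdio.h>

int main(void) {
    /* Number of distinct normal significands: (FLT_RADIX-1)/FLT_EPSILON. */
    printf("%d\n", (int)((FLT_RADIX - 1) / FLT_EPSILON));  /* 8388608 = 2^23 */
    /* Maximum significand as an integer: FLT_RADIX/FLT_EPSILON - 1. */
    printf("%d\n", (int)(FLT_RADIX / FLT_EPSILON - 1));    /* 16777215 = 2^24 - 1 */
    return 0;
}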
Notes
“Significand” is the preferred term for the fraction portion of a floating-point number. “Mantissa” is an old term for the fraction portion of a logarithm. Significands are linear; multiplying a significand multiplies the number represented. Mantissas are logarithmic; adding to a mantissa multiplies the number represented.
“Permutation” refers to moving things around; (1 2 3 4) and (3 4 2 1) are permutations of each other. You appear to want the number of different values the significand bits can have.
The magnitude of the mantissa has no meaning without considering the exponent. What the 23 tells you is that the number of significant decimal digits is 23 * log(2) ≈ 7.
However a 24th bit is implied, giving 24 * log(2) which is > 7. So all 7-digit integer values can be stored without loss of precision.
Also, any integer that has a power of 2 as a factor, and that has 7 digits or fewer when divided by that factor, can be exactly represented, since the power of 2 is absorbed by the exponent (subject to the limit of the exponent value).
So it is the exponent size that gives the range of values that can be stored, while the mantissa (significand) size gives the precision.
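A small sketch of both claims, assuming IEEE 754 single precision:

#include <stdio.h>

int main(void) {
    float a = 9999999.0f;            /* any 7-digit integer is exact */
    float b = 9999999.0f * 2048.0f;  /* 7 digits times a power of 2: still exact */
    printf("%.1f\n", a);             /* 9999999.0 */
    printf("%.1f\n", b);             /* 20479997952.0 */
    return 0;
}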
While studying C I came to know that the range of long double is 3.4E-4932 to 1.1E+4932. What is E here? The size of long double is 10 bytes. If I assume E is 10, then how does a long double store numbers to 19 places after the decimal?
3.4E-4932 means 3.4 × 10^-4932. Both floats and doubles are stored in a format that keeps the exponent and the mantissa separate. In your example, -4932 will be encoded in the exponent, and 3.4 will be encoded in the mantissa, both as binary numbers.
Note that IEEE floating point formats come in a variety of ranges with availability that varies by platform. Refer to IEEE floating point for more details. As pointed out by Joe Farrell, your range is probably the x86 Extended Precision Format. That format carries 1 bit for sign (s), 15 bits of binary exponent (e) with a bias of 16383, and 1 + 63 bits of binary mantissa (m). For normalized numbers, the value is computed as (-1)^s × m × 2^(e - 16383).
The smallest positive normalized number in this format has a sign bit of 0, an exponent field of 1, and a mantissa of 1.0, corresponding to 2^(1 - 16383) = 2^-16382, or about 3.4 × 10^-4932. In binary, that number looks like:

sign  exponent (15 bits)  mantissa (64 bits, explicit leading 1)
0     000000000000001     10000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
The range of a long double (or, indeed, any floating point width) on Intel hardware is typically [-∞, ∞]. Between those endpoints many finite numbers are also representable:
0
±m × 2^e, where:
m is an integer between 1 and 2^64 - 1, and
e is an integer between -16445 and 16320
That means that the smallest non-zero long double is 2^-16445 and the largest finite long double is (2^64 - 1)·2^16320 (or 2^16384 - 2^16320), which are approximately equal to the decimal numbers in scientific notation in the question.
See the Wikipedia article on the x86 extended precision format for details on the representation (which is binary, not decimal).
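You can query these limits portably through <float.h>. A minimal sketch (the commented values assume the x87 80-bit extended format):

#include <float.h>
#include <stdio.h>

int main(void) {
    printf("significand bits: %d\n", LDBL_MANT_DIG);  /* 64 */
    printf("LDBL_MIN: %Lg\n", LDBL_MIN);              /* ~3.3621e-4932 */
    printf("LDBL_MAX: %Lg\n", LDBL_MAX);              /* ~1.18973e+4932 */
    return 0;
}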
If you print a float with more precision than is stored in memory, aren't the extra places supposed to have zeros in them? I have code that is something like this:
double z[2*N]="0";
...
for( n=1; n<=2*N; n++) {
fprintf( u1, "%.25g", z[n-1]);
fputc( n<2*N ? ',' : '\n', u1);
}
Which is creating output like this:
0,0.7071067811865474617150085,....
A float should have only 17 decimal places (right? Doesn't 53 bits come out to 17 decimal places?). If that's so, then the 18th, 19th, ... 25th places should have zeros. Notice in the above output that they have digits other than 0 in them.
Am I misunderstanding something? If so, what?
No. 53 bits means that 17 decimal digits are what you can trust, but the base-10 notation we use is a different base from the one the double is stored in (binary), so the later digits appear because 1/2^53 is not exactly 1/10^n for any n, i.e.,
1/2^53 = .0000000000000001110223024625156540423631668090820312500000000
The string printed by your implementation shows the exact value of the double in your example, and this is permitted by the C standard, as I show below.
First, we should understand what the floating-point object represents. The C standard does a poor job of this, but, presuming your implementation uses the IEEE 754 floating-point standard, a normal floating-point object represents exactly (-1)^s • 2^e • (1+f) for some sign bit s (0 or 1), exponent e (in range for the specific type, -1022 to 1023 for double), and fraction f (also in range, 52 bits after a radix point for double). Many people use the object to approximate nearby values, but, according to the standard, the object only represents the one value it is defined to be.
The value you show, 0.7071067811865474617150085, is exactly representable as a double (sign bit 0, exponent -1, and fraction bits .6a09e667f3bcc in hexadecimal). It is important to understand that the double with this value represents exactly that value; it does not represent nearby values, such as 0.707106781186547461715.
Now that we know the value being passed to fprintf, we can consider what the C standard says about this. First, the C standard defines a constant named DECIMAL_DIG. C 2011 5.2.4.2.2 11 defines this to be the number of decimal digits such that any floating-point number in the widest supported type can be rounded to that many decimal digits and back again without change to the value. The precision you passed to fprintf, 25, is likely greater than the value of DECIMAL_DIG on your system.
In C 2011 7.21.6.1 13, the standard says “If the number of significant decimal digits is more than DECIMAL_DIG but the source value is exactly representable with DECIMAL_DIG digits, then the result should be an exact representation with trailing zeros. Otherwise, the source value is bounded by two adjacent decimal strings L < U , both having DECIMAL_DIG significant digits; the value of the resultant decimal string D should satisfy L ≤ D ≤ U, with the extra stipulation that the error should have a correct sign for the current rounding direction.”
This wording allows the compiler some wiggle room. The intent is that the result must be accurate enough that it can be converted back to the original double with no error. It may be more accurate, and some C implementations will produce the exactly correct value, which is permitted since it satisfies the paragraph above.
Incidentally, the value you show is not the double closest to sqrt(2)/2. That value is +0x1.6A09E667F3BCDp-1 = 0.70710678118654757273731092936941422522068023681640625.
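To see DECIMAL_DIG and this round-trip behavior on your own system, here is a small sketch (the exact digits printed beyond DECIMAL_DIG are implementation-dependent, as discussed above):

#include <float.h>
#include <math.h>
#include <stdio.h>

int main(void) {
    double d = sqrt(2.0) / 2.0;  /* the double nearest sqrt(2)/2 */
    printf("DECIMAL_DIG = %d\n", DECIMAL_DIG);  /* e.g. 21 with an 80-bit long double */
    printf("%.17g\n", d);  /* 17 digits: enough for a double to round-trip */
    printf("%.25g\n", d);  /* beyond DECIMAL_DIG: digits of the exact binary value */
    printf("%a\n", d);     /* unambiguous hex form, e.g. 0x1.6a09e667f3bcdp-1 */
    return 0;
}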
There is enough precision to represent 0.7071067811865474617150085 in double precision floating point. The 64-bit representation is actually 0x3FE6A09E667F3BCC.
The value is evaluated through a binary exponentiation formula, so you cannot simply say that 53 bits translate into 17 decimal places.
EDIT:
Look at the example below from the Wikipedia article for another instance:
0.333333333333333314829616256247390992939472198486328125
= 2^(-54) × 0x15555555555555
= 2^(-2) × (0x15555555555555 × 2^(-52))
You are asking about float, but your code uses double.
Anyway, neither float nor double always has the same number of decimal digits. A float is assigned 32 bits (4 bytes) for its floating-point representation, according to IEEE 754.
From Wikipedia:
The IEEE 754 standard specifies a binary32 as having:
Sign bit: 1 bit
Exponent width: 8 bits
Significand precision: 24 (23 explicitly stored)
This gives from 6 to 9 significant decimal digits precision (if a decimal string with at most 6 significant decimal digits is converted to IEEE 754 single precision and then converted back to the same number of significant decimal digits, then the final string should match the original; and if an IEEE 754 single precision is converted to a decimal string with at least 9 significant decimal digits and then converted back to single, then the final number must match the original).
In the case of double, from Wikipedia again:
Double-precision binary floating-point is a commonly used format on PCs, due to its wider range over single-precision floating point, in spite of its performance and bandwidth cost. As with single-precision floating-point format, it lacks precision on integer numbers when compared with an integer format of the same size. It is commonly known simply as double. The IEEE 754 standard specifies a binary64 as having:
Sign bit: 1 bit
Exponent width: 11 bits
Significand precision: 53 bits (52 explicitly stored)
This gives from 15 to 17 significant decimal digits precision. If a decimal string with at most 15 significant decimal digits is converted to IEEE 754 double precision and then converted back to the same number of significant decimal digits, then the final string should match the original; and if an IEEE 754 double precision is converted to a decimal string with at least 17 significant decimal digits and then converted back to double, then the final number must match the original.
On the other hand, you can't expect that if you print a float with more precision than is really stored, the remaining places will be filled with 0s. The compiler can't guess the trick you are trying to do.
The code
float x = 3.141592653589793238;
double z = 3.141592653589793238;
printf("x=%f\n", x);
printf("z=%f\n", z);
printf("x=%20.18f\n", x);
printf("z=%20.18f\n", z);
will give you the output
x=3.141593
z=3.141593
x=3.141592741012573242
z=3.141592653589793116
where on the third line of output 741012573242 is garbage and on the fourth line 116 is garbage. Do doubles always have 16 significant figures while floats always have 7 significant figures? Why don't doubles have 14 significant figures?
Floating point numbers in C typically use IEEE 754 encoding.
This type of encoding uses a sign, a significand, and an exponent.
Because of this encoding, many numbers will have small changes to allow them to be stored.
Also, the number of significant digits can change slightly since it is a binary representation, not a decimal one.
Single precision (float) gives you 23 bits of significand, 8 bits of exponent, and 1 sign bit.
Double precision (double) gives you 52 bits of significand, 11 bits of exponent, and 1 sign bit.
Do doubles always have 16 significant figures while floats always have 7 significant figures?
No. Doubles always have 53 significant bits and floats always have 24 significant bits (except for denormals, infinities, and NaN values, but those are subjects for a different question). These are binary formats, and you can only speak clearly about the precision of their representations in terms of binary digits (bits).
This is analogous to the question of how many digits can be stored in a binary integer: an unsigned 32 bit integer can store integers with up to 32 bits, which doesn't precisely map to any number of decimal digits: all integers of up to 9 decimal digits can be stored, but a lot of 10-digit numbers can be stored as well.
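To make the analogy concrete, here is a tiny sketch with a 32-bit unsigned integer:

#include <stdint.h>
#include <stdio.h>

int main(void) {
    /* Every 9-digit decimal integer fits in 32 bits... */
    printf("%lu\n", (unsigned long)999999999u);
    /* ...and so do some, but not all, 10-digit ones. */
    printf("%lu\n", (unsigned long)UINT32_MAX);  /* 4294967295 */
    return 0;
}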
Why don't doubles have 14 significant figures?
The encoding of a double uses 64 bits (1 bit for the sign, 11 bits for the exponent, 52 explicit significant bits and one implicit bit), which is double the number of bits used to represent a float (32 bits).
float: 23 bits of significand, 8 bits of exponent, and 1 sign bit.
double: 52 bits of significand, 11 bits of exponent, and 1 sign bit.
It's usually based on significant figures of both the exponent and significand in base 2, not base 10. From what I can tell in the C99 standard, however, there is no specified precision for floats and doubles (other than the fact that 1 and 1 + 1E-5 / 1 + 1E-7 are distinguishable, for float and double respectively). The number of significant figures is left to the implementer (as well as which base they use internally; in other words, an implementation could decide to make it based on 18 digits of precision in base 3). [1]
If you need to know these values, the constants FLT_RADIX and FLT_MANT_DIG (and DBL_MANT_DIG / LDBL_MANT_DIG) are defined in float.h.
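A minimal sketch that prints those constants (the commented values assume IEEE 754 types with an x87 long double):

#include <float.h>
#include <stdio.h>

int main(void) {
    printf("FLT_RADIX:     %d\n", FLT_RADIX);      /* 2 on virtually all hardware */
    printf("FLT_MANT_DIG:  %d\n", FLT_MANT_DIG);   /* 24 for IEEE 754 float */
    printf("DBL_MANT_DIG:  %d\n", DBL_MANT_DIG);   /* 53 for IEEE 754 double */
    printf("LDBL_MANT_DIG: %d\n", LDBL_MANT_DIG);  /* e.g. 64 for x87 long double */
    return 0;
}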
The reason it's called a double is because the number of bytes used to store it is double the number of a float (but this includes both the exponent and significand). The IEEE 754 standard (used by most compilers) allocates relatively more bits for the significand than the exponent (23 vs. 8 for float and 52 vs. 11 for double), which is why the precision is more than doubled.
1: Section 5.2.4.2.2 ( http://www.open-std.org/jtc1/sc22/wg14/www/docs/n1256.pdf )
A float has 23 explicitly stored bits of precision, and a double has 52.
It's not exactly double the precision because of how IEEE 754 allocates the bits, and because binary doesn't really translate well to decimal. Take a look at the standard if you're interested.