Why don't double and %f print 10 decimals? - c

I am learning the C programming language and am figuring out format specifiers, but it seems as if double and %f are not working correctly.
Here is my code:
#include <stdio.h>

int main(void)
{
    double a = 15.1234567899876;
    printf("%13.10f", a);
}
In my textbook it's stated that in "%13.10f" the 13 stands for the total number of digits we want printed (including the dot) and the 10 is the number of decimals. So I expected to get 15.1234567899 but didn't.
After running it I get 15.1234567900. It's not just that there aren't enough decimals; the decimals are not printed correctly. The variable a has an 8 after the 7 and before the 9, but the printed number does not.
Can someone please tell me where I am wrong?
Thank you. Lp

printf is supposed to round the result to the number of digits you asked for.
you asked:   15.1234567899876
you got:     15.1234567900
digit count:    1234567890
So printf is behaving correctly: the 11th decimal of your value is 8, so the last kept digits ...7899 round up to ...7900.
You should beware, though, that both types float and double have finite precision. Also their finite precision is as a number of binary bits, not decimal digits. So after about 7 digits for a float, and about 16 digits for a double, you'll start seeing results that can seem quite strange if you don't realize what's going on. You can see this if you start printing more digits:
printf("%18.15f\n", a);
you asked: 15.1234567899876
you got: 15.123456789987600
So that's okay. But:
printf("%23.20f\n", a);
you asked: 15.1234567899876
you got: 15.12345678998759979095
Here we see that, at the 15th digit, the number actually stored internally begins to differ slightly from the number you asked for. You can read more about this at Is floating point math broken?
Footnote: What was the number actually stored internally? It was the hexadecimal floating-point number 0xf.1f9add3b7744, or expressed in C's %a format, 0x1.e3f35ba76ee88p+3. Converted back to decimal, it's exactly 15.1234567899875997909475699998438358306884765625. All those other renditions (15.1234567900, 15.123456789987600, and 15.12345678998759979095) are rounded to some smaller number of digits. The internal value makes the most sense, perhaps, expressed in binary, where it's 0b1111.0001111110011010110111010011101101110111010001000, with exactly 53 significant bits, of which 52 are explicit and one implicit, per IEEE-754 double precision.
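If you want to see the internally-stored value yourself, one quick check (a minimal sketch; the digits in the comments are what a typical IEEE-754 implementation produces) is to print the same variable with %a and with far more decimal digits than the type can hold:
#include <stdio.h>

int main(void)
{
    double a = 15.1234567899876;
    printf("%a\n", a);    /* hex significand and binary exponent,
                             e.g. 0x1.e3f35ba76ee88p+3 */
    printf("%.46f\n", a); /* enough digits to show the exact stored value
                             on a typical IEEE-754 implementation */
    return 0;
}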

Related

Why does C print float values after the decimal point different from the input value? [duplicate]

This question already has answers here:
Why IEEE754 single-precision float has only 7 digit precision?
Why does C print float values after the decimal point different from the input value?
Following is the code.
CODE:
#include <stdio.h>
#include <math.h>

void main()
{
    float num = 2118850.132000;
    printf("num:%f", num);
}
OUTPUT:
num:2118850.250000
This should have printed 2118850.132000, but instead the digits after the decimal point come out as .250000. Why is this happening?
Also, what can one do to avoid this?
Please guide me.
Your computer uses binary floating point internally. Type float has 24 bits of precision, which translates to approximately 7 decimal digits of precision.
Your number, 2118850.132, has 10 decimal digits of precision. So right away we can see that it probably won't be possible to represent this number exactly as a float.
Furthermore, due to the properties of binary numbers, no decimal fraction that ends in 1, 2, 3, 4, 6, 7, 8, or 9 (that is, numbers like 0.1 or 0.2 or 0.132) can be exactly represented in binary. So those numbers are always going to experience some conversion or roundoff error.
When you enter the number 2118850.132 as a float, it is converted internally into the binary fraction 1000000101010011000010.01. That's equivalent to the decimal fraction 2118850.25. So that's why the .132 seems to get converted to 0.25.
As I mentioned, float has only 24 bits of precision. You'll notice that 1000000101010011000010.01 is exactly 24 bits long. So we can't, for example, get closer to your original number by using something like 1000000101010011000010.001, which would be equivalent to 2118850.125, which would be closer to your 2118850.132. No, the next lower 24-bit fraction is 1000000101010011000010.00 which is equivalent to 2118850.00, and the next higher one is 1000000101010011000010.10 which is equivalent to 2118850.50, and both of those are farther away from your 2118850.132. So 2118850.25 is as close as you can get with a float.
If you used type double you could get closer. Type double has 53 bits of precision, which translates to approximately 16 decimal digits. But you still have the problem that .132 ends in 2 and so can never be exactly represented in binary. As type double, your number would be represented internally as the binary number 1000000101010011000010.0010000111001010110000001000010 (note 53 bits), which is equivalent to 2118850.132000000216066837310791015625, which is much closer to your 2118850.132, but is still not exact. (Also notice that 2118850.132000000216066837310791015625 begins to diverge from your 2118850.1320000000 after 16 digits.)
So how do you avoid this? At one level, you can't. It's a fundamental limitation of finite-precision floating-point numbers that they cannot represent all real numbers with perfect accuracy. Also, the fact that computers typically use binary floating-point internally means that they can almost never represent "exact-looking" decimal fractions like .132 exactly.
There are two things you can do:
If you need more than about 7 digits worth of precision, definitely use type double, don't try to use type float.
If you believe your data is accurate to three places past the decimal, print it out using %.3f. If you take 2118850.132 as a double, and printf it using %.3f, you'll get 2118850.132, like you want. (But if you printed it with %.12f, you'd get the misleading 2118850.132000000216.)
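As a quick check of that last point, here is a minimal sketch (the outputs in the comments are what a typical IEEE-754 implementation prints):
#include <stdio.h>

int main(void)
{
    double num = 2118850.132;
    printf("%.3f\n", num);    /* 2118850.132  (rounds back to what was typed) */
    printf("%.12f\n", num);   /* 2118850.132000000216  (the misleading extra digits) */
    return 0;
}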
This will work if you use double instead of float:
#include <stdio.h>
#include <math.h>

int main(void)
{
    double num = 2118850.132000;
    printf("num:%f", num);
}

Is there a way to automatically printf a float to the number of decimal places it has?

I've written a program to display floats to the appropriate number of decimal places:
#include <stdio.h>

int main() {
    printf("%.2f, %.10f, %.5f, %.5f\n", 1.27, 345.1415926535, 1.22013, 0.00008);
}
Is there any kind of conversion character that is like %.(however many decimal places the number has)f or do they always have to be set manually?
Is there a way to automatically printf a float to the number of decimal places it has?
Use "%g". "%g" lops off trailing zero digits.
... unless the # flag is used, any trailing zeros are removed from the fractional portion of the result and the decimal-point character is removed if there is no fractional portion remaining. C17dr § 7.21.6.1 8.
All finite floating point values are exactly representable in decimal - some need many digits to print exactly. Up to DBL_DECIMAL_DIG from <float.h> (typically 17) significant digits is sufficient - rarely a need for more.
Pass in a precision to encourage enough output, but not too much.
Remember values like 0.00008 are not exactly encoded in the typical binary floating point double, but a nearby value is used like 8.00000000000000065442...e-05
printf("%.*g\n", DBL_DECIMAL_DIG, some_double);
printf("%.17g, %.17g, %.17g, %.17g\n", 1.27, 345.1415926535, 1.22013, 0.00008);
// 1.27, 345.14159265350003, 1.2201299999999999, 8.0000000000000007e-05
DBL_DIG (e.g. 15) may better meet OP's goal.
printf("%.15g, %.15g, %.15g, %.15g\n", 1.27, 345.1415926535, 1.22013, 0.00008);
// 1.27, 345.1415926535, 1.22013, 8e-05
A function to print a double exactly may take hundreds of digits.
sprintf() could help you here.
There is no direct way to do this, in my experience, but here is a simple approach using that function:
double number;   /* the float/double input */
int n;           /* the number of decimal places in the value */
char format[16];

sprintf(format, "%%.%df", n);   /* builds e.g. "%.3f" when n is 3 */
printf(format, number);
sprintf is like printf but instead of writing to stdout, sprintf writes to a string.
Now you are left with the problem of finding how many digits come after the decimal point.
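Putting that together, a self-contained sketch (the value of n is hard-coded here purely for illustration; determining it automatically is the part that remains open):
#include <stdio.h>

int main(void)
{
    double number = 345.1415926535;
    int n = 10;                      /* assumed known for this example */
    char format[16];

    sprintf(format, "%%.%df", n);    /* builds "%.10f" */
    printf(format, number);          /* prints 345.1415926535 */
    printf("\n");
    return 0;
}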

Why does printf not print the correct value for a float? [duplicate]

This question already has answers here:
Is floating point math broken?
So I have been trying to make my own printf and now I am stuck at %f.
The problem I have is that I don't know what printf does in the background when I give it a float number like f = 1.4769996: it prints 1.477000.
But when I give it f = 1.4759995, it prints the value 1.475999.
float f = 1.4769996;
printf("%f\n", f); // 1.477000
f = 1.4759995;
printf("%f\n", f); // 1.475999
What I thought was that printf sees the 5 at the end and adds one, but that doesn't work for the second example.
What is the logic behind this floating-point behavior?
Your C implementation likely uses the IEEE-754 binary32 and binary64 formats for float and double. Given this, float f = 1.4769996; results in setting f to 1.47699964046478271484375, and f = 1.4759995; results in setting f to 1.47599947452545166015625.
Then it is easy to see that rounding 1.47699964046478271484375 to six digits after the decimal point results in 1.477000 (because the next digit is 6, so we round up), and rounding 1.47599947452545166015625 to six digits after the decimal point results in 1.475999 (because the next digit is 4, so we round down).
When working with floating-point numbers, it is important to understand that each floating-point value represents one number exactly (unless it is a Not a Number [NaN] encoding). When you write 1.4769996 in source code, it is converted to a value representable in double. When you assign it to a float, it is converted to a value representable in float. Operations on the floating-point object behave as if the object has exactly the value it represents, not as if its value is the numeral you wrote in source code.
To provide some further details, the C standard requires (in C 2018 7.21.6.1 13) that formatting with f be correctly rounded if the number of digits requested is at most DECIMAL_DIG. DECIMAL_DIG is the number of decimal digits in the widest floating-point format the implementation supports such that converting any number in that format to a numeral with DECIMAL_DIG significant decimal digits and back to the floating-point format yields the original value (5.2.4.2.2 12). DECIMAL_DIG must be at least 10. If more than DECIMAL_DIG digits are requested, the C standard allows some leeway in rounding. However, high-quality C implementations will round correctly as specified by IEEE-754 (to the nearest number with the requested number of digits, with ties favoring an even low digit).
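For reference, a small sketch that prints these limits and a correctly-rounded rendering of the questioner's value (DBL_DECIMAL_DIG requires C11; the values in the comments are typical for IEEE-754 implementations):
#include <float.h>
#include <stdio.h>

int main(void)
{
    printf("DECIMAL_DIG = %d\n", DECIMAL_DIG);           /* often 17, or 21 where
                                                             long double is 80-bit */
    printf("DBL_DECIMAL_DIG = %d\n", DBL_DECIMAL_DIG);   /* typically 17 */

    float f = 1.4769996f;
    printf("%f\n", f);                                    /* 1.477000 */
    printf("%.*g\n", DBL_DECIMAL_DIG, (double)f);         /* typically 1.4769996404647827 */
    return 0;
}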
If you are trying to write your own printf, and if you are stuck on %f, there are three or four things you need to know:
When a "varargs" function like printf is called, arguments of type float are always implicitly promoted to type double. So when you've seen %f in the format string, and you're using va_arg() to pluck the next argument from the list, you'll want to pluck an argument of type double, not float. (This also means that you have just one case to handle, not two. Inside printf, you don't have to worry about handling type float at all.)
Printing the whole-number part of a double is easy; it's more or less the same problem as printing an int, which I'm guessing you've already figured out if you've got %d working. And to do a straightforward, simpleminded job of printing the fractional part, it usually works pretty well to just repeatedly multiply by 10. That is, if you're trying to print 123.456, and you've already got the 123 part taken care of, you can then proceed to print the rest by taking the fractional part 0.456, multiplying by 10 to get 4.56, truncating to get 4, then taking the new fractional part 0.56 and repeating. (There is a rough sketch of this, together with the va_arg point above, after this list.)
There is no such number as 1.4769996. (There's no such number as the 123.456 I was just using, either.) When we write numbers like 1.4769996 and 123.456 we're thinking about decimal fractions, but most computers (including the one you're using) use binary fractions internally, and you can't represent decimal fractions like 1.4769996 and 123.456 exactly in binary, so the actual numbers are always a little bit different than you expect, which is why you often get slight "roundoff error", or extra 999's at the end when you expected 000.
Doing a proper job on this stuff is really, really hard. If you're trying to write your own printf, and you've gotten to %f, and if you can get it working pretty well most of the time, consider yourself lucky, and call it a day. Don't get bogged down on the last digit -- or if you're bound and determined to get the last digit right in every case (which is certainly a noble goal), do some research and set aside some time, because you're going to be working at it for a while.
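To make the first two items concrete, here is a minimal, non-production sketch (the names tiny_printf_f and print_f are made up for illustration; it truncates the last digit instead of rounding and ignores signs, NaN, and infinity):
#include <stdarg.h>
#include <stdio.h>

/* Print the whole part, then produce 'places' fractional digits by
   repeatedly multiplying the fraction by 10 and truncating. */
static void print_f(double x, int places)
{
    long whole = (long)x;             /* assumes the value fits in a long */
    double frac = x - (double)whole;  /* 0 <= frac < 1 for non-negative x */

    printf("%ld.", whole);
    for (int i = 0; i < places; i++) {
        frac *= 10.0;
        int digit = (int)frac;        /* next decimal digit, truncated */
        putchar('0' + digit);
        frac -= digit;
    }
}

/* A varargs wrapper showing the float -> double promotion: the argument
   is always fetched as a double, even if the caller passed a float. */
static void tiny_printf_f(const char *label, ...)
{
    va_list ap;
    va_start(ap, label);
    double x = va_arg(ap, double);
    va_end(ap);

    printf("%s", label);
    print_f(x, 6);
    putchar('\n');
}

int main(void)
{
    float f = 1.4769996f;
    tiny_printf_f("f = ", f);  /* prints 1.476999 on a typical IEEE-754 system,
                                  not 1.477000, because this sketch truncates
                                  rather than rounds */
    return 0;
}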

What's the difference between printing a floating-point variable and a floating-point constant?

Here's my code:
float x = 21.195;
printf("%.2f\n", x);
printf("%.2f\n", 21.195);
I would expect both print statements to have identical output, but instead, the first prints 21.19, and the second prints 21.20.
Could someone explain why the output is different?
The values are different. The first is a float, which is typically 4 bytes. The second is a double, which is typically 8 bytes.
The rules for rounding are based on the third digit after the decimal place. So, in one case, the value is something like 21.19499997 and the other 21.1950000000001, or something like that. (These are made up to illustrate the issue with rounding and imprecise numeric formats.)
By default 21.195 is a double.
If you want a float, write:
21.195F
or
(float)21.195
Regards
In C, an unsuffixed floating-point constant such as 21.195 has type double by default. Your x holds a float only because you declared it that way; the bare constant 21.195 passed to printf is a double.
Now, as mentioned above, float is usually 4 bytes and double is 8 bytes. So a float has 24 significant bits, giving about 7 decimal digits of precision, and a double has 53 significant bits, giving 15 to 16 decimal digits of precision.
The conversion %.2f rounds the number to 2 decimal places, looking at the third digit after the decimal point to decide which way to round.
So 21.195 stored as a float is actually about 21.19499969, which %.2f rounds down to 21.19, while 21.195 as a double is about 21.19500000000000028, which %.2f rounds up to 21.20.
Hope it helps!
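For reference, a short sketch that prints both representations with extra digits so you can see which side of 21.195 each one lands on (the digits in the comments are what a typical IEEE-754 implementation prints):
#include <stdio.h>

int main(void)
{
    float x = 21.195f;
    printf("%.20f\n", x);       /* typically 21.19499969482421875000 */
    printf("%.20f\n", 21.195);  /* typically 21.19500000000000028422 */
    printf("%.2f\n", x);        /* 21.19 */
    printf("%.2f\n", 21.195);   /* 21.20 */
    return 0;
}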

Why do I need 17 significant digits (and not 16) to represent a double?

Can someone give me an example of a floating point number (double precision), that needs more than 16 significant decimal digits to represent it?
I have found in this thread that sometimes you need up to 17 digits, but I am not able to find an example of such a number (16 seems enough to me).
Can somebody clarify this?
My other answer was dead wrong.
#include <stdio.h>

int main(int argc, char *argv[])
{
    unsigned long long n = 1ULL << 53;
    unsigned long long a = 2*(n-1);
    unsigned long long b = 2*(n-2);
    printf("%llu\n%llu\n%d\n", a, b, (double)a == (double)b);
    return 0;
}
Compile and run to see:
18014398509481982
18014398509481980
0
a and b are just 2*(2^53-1) and 2*(2^53-2).
Those are 17-digit base-10 numbers. When rounded to 16 digits, they are the same. Yet a and b clearly only need 53 bits of precision to represent in base-2. So if you take a and b and cast them to double, you get your counter-example.
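The same counter-example from C, as a quick sketch (the output comments assume IEEE-754 doubles):
#include <stdio.h>

int main(void)
{
    double a = 18014398509481982.0;  /* 2*(2^53 - 1) */
    double b = 18014398509481980.0;  /* 2*(2^53 - 2) */

    printf("%.16g\n%.16g\n", a, b);  /* both print 1.801439850948198e+16 */
    printf("%.17g\n%.17g\n", a, b);  /* 18014398509481982 and 18014398509481980 */
    return 0;
}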
The correct answer is the one by Nemo above. Here I am just pasting a simple Fortran program showing an example of two numbers that need 17 digits of precision to print, demonstrating that one does need the (es23.16) format to print double-precision numbers if one doesn't want to lose any precision:
program test
    implicit none
    integer, parameter :: dp = kind(0.d0)
    real(dp) :: a, b

    a = 1.8014398509481982e+16_dp
    b = 1.8014398509481980e+16_dp
    print *, "First we show, that we have two different 'a' and 'b':"
    print *, "a == b:", a == b, "a-b:", a-b
    print *, "using (es22.15)"
    print "(es22.15)", a
    print "(es22.15)", b
    print *, "using (es23.16)"
    print "(es23.16)", a
    print "(es23.16)", b
end program
it prints:
First we show, that we have two different 'a' and 'b':
a == b: F a-b: 2.0000000000000000
using (es22.15)
1.801439850948198E+16
1.801439850948198E+16
using (es23.16)
1.8014398509481982E+16
1.8014398509481980E+16
I think the guy on that thread is wrong, and 16 base-10 digits are always enough to represent an IEEE double.
My attempt at a proof would go something like this:
Suppose otherwise. Then, necessarily, two distinct double-precision numbers must be represented by the same 16-significant-digit base-10 number.
But two distinct double-precision numbers must differ by at least one part in 2^53, which is greater than one part in 10^16. And no two numbers differing by more than one part in 10^16 could possibly round to the same 16-significant-digit base-10 number.
This is not completely rigorous and could be wrong. :-)
Dig into the single- and double-precision basics and wean yourself off the notion of this or that (16-17) many DECIMAL digits and start thinking in (53) BINARY digits. The necessary examples may be found here at stackoverflow if you spend some time digging.
And I fail to see how you can award a best answer to anyone giving a DECIMAL answer without qualified BINARY explanations. This stuff is straightforward, but it is not trivial.
The largest continuous range of integers that can be exactly represented by a double (8-byte IEEE) is -2^53 to 2^53 (-9007199254740992. to 9007199254740992.). The numbers -(2^53)-1 and 2^53+1 cannot be exactly represented by a double.
Therefore, no more than 16 significant decimal digits to the left of the decimal point will exactly represent a double in the continuous range.
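A short sketch of that boundary, assuming an 8-byte IEEE-754 double:
#include <stdio.h>

int main(void)
{
    double limit = 9007199254740992.0;  /* 2^53 */
    double above = 9007199254740993.0;  /* 2^53 + 1: not representable, so the
                                           constant is rounded to the nearest
                                           double, which is 2^53 again */

    printf("%.1f\n%.1f\n", limit, above);  /* both print 9007199254740992.0 */
    printf("%d\n", limit == above);        /* 1 */
    return 0;
}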
