Why does printf("%.6g, <value>) ignore zeroes after decimal point? - c

I want to print a maximum of 6 digits after the decimal point, so I used the following:
printf("%.6g",0.127943989);
the output of printf is 0.127944 which is correct, but when I tried this:
printf("%.6g",0.007943989);
the output became 0.00794399 which is not what I expected to see!
It seems that printf ignores zeros after the decimal point. So how can I force it to output a maximum of 6 digits of precision?

Is the "%f" format specifier:
printf("%.6f\n",0.007943989) // prints 0.007944
what you're looking for? Keep in mind that this won't automatically switch over to the %e format style if the number warrants it, so this might not be exactly what you're looking for either.
For example, the following:
printf("%.6g\n",0.00007943989);
printf("%.6f\n",0.00007943989);
prints:
7.94399e-05
0.000079
and it's not clear if you want the exponential form for the smaller numbers that "%g" provides.
Note that the spec for %f and %g has a slightly different behavior for how the precision specification is handled.
for %f: "the number of digits after the decimal-point character is equal to the precision specification"
for %g: "with the precision specifying the number of significant digits"
Note also that the run of zeros after the decimal point does not count toward significant digits (which explains the behavior you see for %g).
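If what you want is fixed-point output with at most 6 digits after the decimal point and no trailing zeros, one option is to format with "%.6f" and trim the result yourself. Here's a minimal sketch; print_max6 is just an illustrative name:
#include <stdio.h>
#include <string.h>

/* Print x with at most 6 digits after the decimal point,
   stripping trailing zeros (and a lone '.'). */
static void print_max6(double x) {
    char buf[64];
    snprintf(buf, sizeof buf, "%.6f", x);       /* e.g. "0.007944" */
    char *dot = strchr(buf, '.');
    if (dot) {
        char *end = buf + strlen(buf);
        while (end > dot + 1 && end[-1] == '0') /* drop trailing zeros */
            --end;
        if (end == dot + 1)                     /* drop a bare '.' too */
            --end;
        *end = '\0';
    }
    puts(buf);
}

int main(void) {
    print_max6(0.127943989);  /* 0.127944 */
    print_max6(0.007943989);  /* 0.007944 */
    print_max6(2.5000);       /* 2.5      */
    return 0;
}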

The ISO C standard requires that
An optional precision, in the form of a period ('.') followed by an optional decimal digit string... This gives ... the maximum number of significant digits for g and G conversions
Here is the output that you would get with GNU libc (GCC 4.5.1, libc 2.11; GCC 4.6.3, libc 2.15)
printf("%.6g\n", 0.12);
0.12
printf("%.6g\n", 0.1234567890);
0.123457
printf("%.6g\n", 0.001234567890);
0.00123457
printf("%#.6g\n", 0.001234567890);
0.00123457
printf("%.6g\n", 0.00001234567890);
1.23457e-05
printf("%#.6g\n", 0.00001234567890);
1.23457e-05
printf("%.6f\n", 0.12);
0.120000
printf("%.6f\n", 0.1234567890);
0.123457
printf("%.6f\n", 0.001234567890);
0.001235
printf("%#.6f\n", 0.001234567890);
0.001235
printf("%.6f\n", 0.001234567890);
0.000012
printf("%#.6f\n", 0.001234567890);
0.000012

Related

Is there a way to automatically printf a float to the number of decimal places it has?

I've written a program to display floats to the appropriate number of decimal places:
#include <stdio.h>
int main() {
    printf("%.2f, %.10f, %.5f, %.5f\n", 1.27, 345.1415926535, 1.22013, 0.00008);
}
Is there any kind of conversion character that is like %.(however many decimal places the number has)f or do they always have to be set manually?
Is there a way to automatically printf a float to the number of decimal places it has?
Use "%g". "%g" lops off trailing zero digits.
... unless the # flag is used, any trailing zeros are removed from the fractional portion of the result and the decimal-point character is removed if there is no fractional portion remaining. C17dr § 7.21.6.1 8.
All finite floating-point values are exactly representable in decimal - some just need many digits to print exactly. Up to DBL_DECIMAL_DIG significant digits, from <float.h> (typically 17), is sufficient; there is rarely a need for more.
Pass in a precision to encourage enough output, but not too much.
Remember that values like 0.00008 are not exactly encoded in a typical binary floating point double; a nearby value is used, like 8.00000000000000065442...e-05
printf("%.*g\n", DBL_DECIMAL_DIG, some_double);
printf("%.17g, %.17g, %.17g, %.17g\n", 1.27, 345.1415926535, 1.22013, 0.00008);
// 1.27, 345.14159265350003, 1.2201299999999999, 8.0000000000000007e-05
DBL_DIG (e.g. 15) may better meet OP's goal.
printf("%.15g, %.15g, %.15g, %.15g\n", 1.27, 345.1415926535, 1.22013, 0.00008);
// 1.27, 345.1415926535, 1.22013, 8e-05
Note that printing a double exactly may take hundreds of digits.
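For reference, here is a self-contained version of the snippets above (DBL_DECIMAL_DIG requires C11, and the printed values assume IEEE-754 doubles):
#include <float.h>
#include <stdio.h>

int main(void) {
    /* DBL_DECIMAL_DIG (typically 17) preserves the exact double;
       DBL_DIG (typically 15) hides the representation noise. */
    printf("%.*g\n", DBL_DECIMAL_DIG, 0.00008);  /* 8.0000000000000007e-05 */
    printf("%.*g\n", DBL_DIG, 0.00008);          /* 8e-05 */
    return 0;
}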
sprintf() could help you.
There is no direct way to do this in my experience, but here is a simple approach using that function:
double number;   /* the float/double input          */
int n;           /* number of decimal places it has */
char format[999];
sprintf(format, "%%.%df", n);   /* builds e.g. "%.5f" when n == 5 */
printf(format, number);
sprintf is like printf, but instead of writing to stdout, sprintf writes to a string.
Now you are left with finding the number of digits after the decimal point.
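A double itself doesn't remember how many decimal places it was written with, so in practice you count them while you still have the number as text. A rough sketch; decimal_places is a hypothetical helper:
#include <stdio.h>
#include <string.h>

/* Count the digits after the '.' in the textual form of a number. */
static int decimal_places(const char *text) {
    const char *dot = strchr(text, '.');
    return dot ? (int)strlen(dot + 1) : 0;
}

int main(void) {
    const char *text = "1.22013";
    double number;
    sscanf(text, "%lf", &number);

    char format[16];
    sprintf(format, "%%.%df", decimal_places(text)); /* "%.5f"  */
    printf(format, number);                          /* 1.22013 */
    return 0;
}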

Why does printf() with %f lose a digit after decimal point sometimes?

Why does the statement
printf("%f", sensorvalue)
print out a string like “11312.96” (with two digits after the decimal point) most of the time, but sometimes print out a string like “11313.1” (with one digit after the decimal point)? sensorvalue is read from a power meter continuously. The values at different times are supposed to have the same format.
It's C running on Linux.
Why does the statement printf("%f", sensorvalue) print out a string like 11312.96 (with two digits after the decimal point) most of the time, but sometimes print a string like 11313.1 (with one digit after the decimal point)?
The library is simply not C compliant even if "It's C running on Linux."
The output of
printf("%f\n", 11312.96f);
printf("%f\n", 11312.96);
printf("%f\n", 11313.1f);
printf("%f\n", 11313.1);
... is expected to look like the output below, with 6 digits after the '.', perhaps with some variation in the least significant digits. Even with implementations of varying quality, the output should have 6 digits after the '.'.
11312.959961
11312.960000
11313.099609
11313.100000
Had the format been "%g", output like below could have occurred.
11313
11313
11313.1
11313.1
If you're using %f exactly as stated, your implementation actually violates the standard (this would be unusual, but certainly not unheard of). C11 7.21.6.1 The fprintf function /8 states:
F, f: A double argument representing a floating-point number is converted to decimal notation in the style [−]ddd.ddd, where the number of digits after the decimal-point character is equal to the precision specification. If the precision is missing, it is taken as 6.
In other words, this program:
#include <stdio.h>
int main() {
    double d1 = 11312.96, d2 = 11313.1;
    printf("%f\n%f\n", d1, d2);
    return 0;
}
should generate:
11312.960000
11313.100000
If you want it to have a different format (in both your seemingly incorrect case, and the case that complies with the standard), use the precision argument to force it, such as with:
printf("%.2f\n", d1); // gives "11312.96"
You may also want to specify the minimum field width to ensure your numbers are lined up on the right, such as with:
// posn: 123456789
// ---------
printf("%9.2f\n", d1); // gives " 11312.96"
printf("%9.2f\n", 3.1); // gives " 3.10"

What precisely does the %g printf specifier mean?

The %g specifier doesn't seem to behave in the way that most sources document it as behaving.
According to most sources I've found, across multiple languages that use printf specifiers, the %g specifier is supposed to be equivalent to either %f or %e - whichever would produce shorter output for the provided value. For instance, at the time of writing this question, cplusplus.com says that the g specifier means:
Use the shortest representation: %e or %f
And the PHP manual says it means:
g - shorter of %e and %f.
And here's a Stack Overflow answer that claims that
%g uses the shortest representation.
And a Quora answer that claims that:
%g prints the number in the shortest of these two representations
But this behaviour isn't what I see in reality. If I compile and run this program (as C or C++ - it's a valid program with the same behaviour in both):
#include <stdio.h>
int main(void) {
    double x = 123456.0;
    printf("%e\n", x);
    printf("%f\n", x);
    printf("%g\n", x);
    printf("\n");

    double y = 1234567.0;
    printf("%e\n", y);
    printf("%f\n", y);
    printf("%g\n", y);
    return 0;
}
... then I see this output:
1.234560e+05
123456.000000
123456
1.234567e+06
1234567.000000
1.23457e+06
Clearly, the %g output doesn't quite match either the %e or %f output for either x or y above. What's more, it doesn't look like %g is minimising the output length either; y could've been formatted more succinctly if, like x, it had not been printed in scientific notation.
Are all of the sources I've quoted above lying to me?
I see identical or similar behaviour in other languages that support these format specifiers, perhaps because under the hood they call out to the printf family of C functions. For instance, I see this output in Python:
>>> print('%g' % 123456.0)
123456
>>> print('%g' % 1234567.0)
1.23457e+06
In PHP:
php > printf('%g', 123456.0);
123456
php > printf('%g', 1234567.0);
1.23457e+6
In Ruby:
irb(main):024:0* printf("%g\n", 123456.0)
123456
=> nil
irb(main):025:0> printf("%g\n", 1234567.0)
1.23457e+06
=> nil
What's the logic that governs this output?
This is the full description of the g/G specifier in the C11 standard:
A double argument representing a floating-point number is converted in style f or e (or in style F or E in the case of a G conversion specifier), depending on the value converted and the precision. Let P equal the precision if nonzero, 6 if the precision is omitted, or 1 if the precision is zero. Then, if a conversion with style E would have an exponent of X:
- if P > X ≥ −4, the conversion is with style f (or F) and precision P − (X + 1).
- otherwise, the conversion is with style e (or E) and precision P − 1.
Finally, unless the # flag is used, any trailing zeros are removed from the fractional portion of the result and the decimal-point character is removed if there is no fractional portion remaining.
A double argument representing an infinity or NaN is converted in the style of an f or F conversion specifier.
This behaviour is somewhat similar to simply using the shortest representation out of %f and %e, but not equivalent. There are two important differences:
Trailing zeros (and, potentially, the decimal point) get stripped when using %g, which can cause the output of a %g specifier to not exactly match what either %f or %e would've produced.
The decision about whether to use %f-style or %e-style formatting is made based purely upon the size of the exponent that would be needed in %e-style notation, and does not directly depend on which representation would be shorter. There are several scenarios in which this rule results in %g selecting the longer representation, like the one shown in the question where %g uses scientific notation even though this makes the output 4 characters longer than it needs to be.
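To see the second point concretely, you can render the same value with all three conversions and compare lengths; a quick sketch:
#include <stdio.h>
#include <string.h>

int main(void) {
    char e[32], f[32], g[32];
    double y = 1234567.0;
    snprintf(e, sizeof e, "%e", y);
    snprintf(f, sizeof f, "%f", y);
    snprintf(g, sizeof g, "%g", y);
    /* %g gives 1.23457e+06 (11 chars), even though the plain
       digits 1234567 would take only 7: the style is chosen by
       exponent, not by output length. */
    printf("%%e: %s (%zu chars)\n", e, strlen(e));
    printf("%%f: %s (%zu chars)\n", f, strlen(f));
    printf("%%g: %s (%zu chars)\n", g, strlen(g));
    return 0;
}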
In case the C standard's wording is hard to parse, the Python documentation provides another description of the same behaviour:
General format. For a given precision p >= 1, this rounds the number to p significant digits and then formats the result in either fixed-point format or in scientific notation, depending on its magnitude.
The precise rules are as follows: suppose that the result formatted with presentation type 'e' and precision p-1 would have exponent exp. Then if -4 <= exp < p, the number is formatted with presentation type 'f' and precision p-1-exp. Otherwise, the number is formatted with presentation type 'e' and precision p-1. In both cases insignificant trailing zeros are removed from the significand, and the decimal point is also removed if there are no remaining digits following it.
Positive and negative infinity, positive and negative zero, and nans, are formatted as inf, -inf, 0, -0 and nan respectively, regardless of the precision.
A precision of 0 is treated as equivalent to a precision of 1. The default precision is 6.
The many sources on the internet that claim that %g just picks the shortest out of %e and %f are simply wrong.
My favorite format for doubles is "%.15g". It seems to do the right thing in every case. I'm pretty sure 15 is the maximum reliable decimal precision in a double as well.
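For what it's worth, "%.15g" also tends to hide the binary representation noise that "%.17g" exposes. A small illustration, assuming IEEE-754 doubles:
#include <stdio.h>

int main(void) {
    double d = 0.1 + 0.2;   /* not exactly 0.3 in binary   */
    printf("%.17g\n", d);   /* 0.30000000000000004         */
    printf("%.15g\n", d);   /* 0.3                         */
    return 0;
}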

double precision lost when parsing csv file in C

I'm trying to read in a file in c with the following format:
6.43706064058,4.15417249035
3.43706064058,1.15417249035
...
I'm able to parse out the two doubles, but when I print out what I've parsed, I notice that I only get up to 6 decimal places. Here is my code:
long double d1;
long double d2;
fscanf(file, "%Lf,%Lf", &d1, &d2);
printf("x:%Lf, y:%Lf", d1, d2);
Output:
x:6.437061, y:4.154172
...
Where am I losing the precision? Is it possible that its being read in correctly, but my printf statement isn't showing all the precision?
Is it possible that its being read in correctly, but my printf statement isn't showing all the precision?
That's exactly what's happening. From the printf(3) man page:
... the number of digits after the
decimal-point character is equal to the precision specification.
If the precision is missing, it is taken as 6 ...
Tell printf to show more precision by changing your format string:
printf("x:%.11Lf, y:%.11Lf", d1, d2);
The default %f format only prints 6 places after the decimal point, which gives you much less precision than the actual floating point value (unless the exponent is large) and possibly no precision at all (if the exponent is more than slightly negative). Unless you know all your values are bounded away from zero (e.g. all greater than 1), you really need to use the %g format (which can switch to exponential notation as needed) or the %e format (which always uses exponential notation) to print floating point values in a way that preserves their precision.
You also need to use sufficiently many decimal places. For IEEE double precision, 17 decimal places is sufficient, so %.17g would be the preferred format. For long double, it depends on the type used on your particular implementation. Thankfully, C offers a macro, DECIMAL_DIG, that gives you exactly the number of places you need. So you would use:
printf("%.*Lg", DECIMAL_DIG, x);
or similar. Note that this will print more places than were originally present in your input file. If you know your input always has a particular number of places, you could perhaps just hard-code that instead of using DECIMAL_DIG to get a more uniform output.
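Putting that together for the input shown in the question, a sketch (data.csv is a hypothetical file name):
#include <float.h>
#include <stdio.h>

int main(void) {
    FILE *file = fopen("data.csv", "r");
    if (!file)
        return 1;

    long double d1, d2;
    /* DECIMAL_DIG covers the widest floating type (long double here). */
    while (fscanf(file, "%Lf,%Lf", &d1, &d2) == 2)
        printf("x:%.*Lg, y:%.*Lg\n", DECIMAL_DIG, d1, DECIMAL_DIG, d2);

    fclose(file);
    return 0;
}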
The reason you are not getting as much precision as you'd like is that the default precision for %Lf is 6 digits after the decimal point. The first number, 6.43706064058, has 11 digits after the decimal point, so you need to request a precision of 11 for both numbers:
printf("x:%.11Lf, y:%.11Lf", d1, d2);
and that will print:
x:6.43706064058, y:4.15417249035
Note that a plain field width such as %13Lf only pads the output to 13 characters; it does not add digits after the decimal point. You must specify the precision to see all of the digits of the number when using printf.
Hope that helped!

Clear trailing 0's on a double?

I have a double that's got a value of something like 0.50000, but I just want 0.5 - is there any way to get rid of those trailing 0's? :)
In C,
printf("%g", 0.5000);
Note (from the GNU libc manual):
The %g and %G conversions print the argument in the style of %e or %E (respectively) if the exponent would be less than -4 or greater than or equal to the precision; otherwise they use the ‘%f’ style. A precision of 0 is taken as 1. Trailing zeros are removed from the fractional portion of the result and a decimal-point character appears only if it is followed by a digit.
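A quick check of that behaviour:
#include <stdio.h>

int main(void) {
    printf("%g\n", 0.50000);  /* 0.5  */
    printf("%g\n", 2.0);      /* 2    */
    printf("%g\n", 1.250);    /* 1.25 */
    return 0;
}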
NSLog accepts the standard C format specifiers:
NSLog(@"%.2f", .5000);
