Why does the statement
printf("%f", sensorvalue)
print out a string like “11312.96” (with two digits after the decimal point) most of the time, but sometimes print out a string like “11313.1” (with one digit after the decimal point)? sensorvalue is read from a power meter continuously. The values at different times are supposed to have the same format.
It's C running on Linux.
If that is really the output of %f, the library is simply not C compliant, even if "It's C running on Linux."
The output of
printf("%f\n", 11312.96f);
printf("%f\n", 11312.96);
printf("%f\n", 11313.1f);
printf("%f\n", 11313.1);
... is expected to be like the below, with 6 digits after the '.', perhaps with some variation in the least significant digits. Even with implementations of varying quality, the output should have 6 digits after the '.'.
11312.959961
11312.960000
11313.099609
11313.100000
Had the format been "%g", output like below could have occurred.
11313
11313
11313.1
11313.1
If you're using %f exactly as stated, this output actually violates the standard (which would be unusual, but certainly not unheard of). C11 7.21.6.1 The fprintf function /8 states:
F, f: A double argument representing a floating-point number is converted to decimal notation in the style [−]ddd.ddd, where the number of digits after the decimal-point character is equal to the precision specification. If the precision is missing, it is taken as 6.
In other words, this program:
#include <stdio.h>

int main(void) {
    double d1 = 11312.96, d2 = 11313.1;
    printf("%f\n%f\n", d1, d2);
    return 0;
}
should generate:
11312.960000
11313.100000
If you want it to have a different format (in both your seemingly incorrect case, and the case that complies with the standard), use the precision argument to force it, such as with:
printf("%.2f\n", d1); // gives "11312.96"
You may also want to specify the minimum field width to ensure your numbers are lined up on the right, such as with:
// posn: 123456789
// ---------
printf("%9.2f\n", d1); // gives " 11312.96"
printf("%9.2f\n", 3.1); // gives " 3.10"
Related
The %g specifier doesn't seem to behave in the way that most sources document it as behaving.
According to most sources I've found, across multiple languages that use printf specifiers, the %g specifier is supposed to be equivalent to either %f or %e - whichever would produce shorter output for the provided value. For instance, at the time of writing this question, cplusplus.com says that the g specifier means:
Use the shortest representation: %e or %f
And the PHP manual says it means:
g - shorter of %e and %f.
And here's a Stack Overflow answer that claims that
%g uses the shortest representation.
And a Quora answer that claims that:
%g prints the number in the shortest of these two representations
But this behaviour isn't what I see in reality. If I compile and run this program (as C or C++ - it's a valid program with the same behaviour in both):
#include <stdio.h>

int main(void) {
    double x = 123456.0;
    printf("%e\n", x);
    printf("%f\n", x);
    printf("%g\n", x);

    printf("\n");

    double y = 1234567.0;
    printf("%e\n", y);
    printf("%f\n", y);
    printf("%g\n", y);
    return 0;
}
... then I see this output:
1.234560e+05
123456.000000
123456
1.234567e+06
1234567.000000
1.23457e+06
Clearly, the %g output doesn't quite match either the %e or %f output for either x or y above. What's more, it doesn't look like %g is minimising the output length either; y could've been formatted more succinctly if, like x, it had not been printed in scientific notation.
Are all of the sources I've quoted above lying to me?
I see identical or similar behaviour in other languages that support these format specifiers, perhaps because under the hood they call out to the printf family of C functions. For instance, I see this output in Python:
>>> print('%g' % 123456.0)
123456
>>> print('%g' % 1234567.0)
1.23457e+06
In PHP:
php > printf('%g', 123456.0);
123456
php > printf('%g', 1234567.0);
1.23457e+6
In Ruby:
irb(main):024:0* printf("%g\n", 123456.0)
123456
=> nil
irb(main):025:0> printf("%g\n", 1234567.0)
1.23457e+06
=> nil
What's the logic that governs this output?
This is the full description of the g/G specifier in the C11 standard:
A double argument representing a floating-point number is converted in style f or e (or in style F or E in the case of a G conversion specifier), depending on the value converted and the precision. Let P equal the precision if nonzero, 6 if the precision is omitted, or 1 if the precision is zero. Then, if a conversion with style E would have an exponent of X:

- if P > X ≥ −4, the conversion is with style f (or F) and precision P − (X + 1).

- otherwise, the conversion is with style e (or E) and precision P − 1.

Finally, unless the # flag is used, any trailing zeros are removed from the fractional portion of the result and the decimal-point character is removed if there is no fractional portion remaining.

A double argument representing an infinity or NaN is converted in the style of an f or F conversion specifier.
This behaviour is somewhat similar to simply using the shortest representation out of %f and %e, but not equivalent. There are two important differences:
Trailing zeros (and, potentially, the decimal point) get stripped when using %g, which can cause the output of a %g specifier to not exactly match what either %f or %e would've produced.
The decision about whether to use %f-style or %e-style formatting is made based purely upon the size of the exponent that would be needed in %e-style notation, and does not directly depend on which representation would be shorter. There are several scenarios in which this rule results in %g selecting the longer representation, like the one shown in the question where %g uses scientific notation even though this makes the output 4 characters longer than it needs to be.
In case the C standard's wording is hard to parse, the Python documentation provides another description of the same behaviour:
General format. For a given precision p >= 1, this rounds the number to p significant digits and then formats the result in either fixed-point format or in scientific notation, depending on its magnitude. The precise rules are as follows: suppose that the result formatted with presentation type 'e' and precision p-1 would have exponent exp. Then if -4 <= exp < p, the number is formatted with presentation type 'f' and precision p-1-exp. Otherwise, the number is formatted with presentation type 'e' and precision p-1. In both cases insignificant trailing zeros are removed from the significand, and the decimal point is also removed if there are no remaining digits following it.

Positive and negative infinity, positive and negative zero, and nans, are formatted as inf, -inf, 0, -0 and nan respectively, regardless of the precision.

A precision of 0 is treated as equivalent to a precision of 1. The default precision is 6.
The many sources on the internet that claim that %g just picks the shortest out of %e and %f are simply wrong.
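To make the selection rule concrete, here is a minimal sketch of the decision %g makes for a finite, nonzero value. It mirrors the standard's rule rather than any real library implementation, and the log10-based exponent estimate ignores rounding edge cases near powers of ten (link with -lm on Linux):

#include <stdio.h>
#include <math.h>

/* Report which style %g would pick for value v at precision P.
   X approximates the exponent that style e would use. */
static void g_style(double v, int P) {
    int X = (int)floor(log10(fabs(v)));
    if (P > X && X >= -4)
        printf("%g -> style f, precision %d\n", v, P - (X + 1));
    else
        printf("%g -> style e, precision %d\n", v, P - 1);
}

int main(void) {
    g_style(123456.0, 6);  /* X = 5, 6 > 5 >= -4, so style f, precision 0 */
    g_style(1234567.0, 6); /* X = 6, 6 > 6 fails, so style e, precision 5 */
    return 0;
}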
My favorite format for doubles is "%.15g". It seems to do the right thing in every case. I'm pretty sure 15 (DBL_DIG on IEEE-754 systems) is the maximum reliable decimal precision in a double as well.
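For illustration, here is a quick demo of "%.15g"; the comments show the output I would expect on an IEEE-754 system:

#include <stdio.h>

int main(void) {
    /* "%.15g" prints at most 15 significant digits and strips
       trailing zeros, so exactly representable values stay short. */
    printf("%.15g\n", 3.14159265358979323846); /* 3.14159265358979 */
    printf("%.15g\n", 23.01);                  /* 23.01 */
    printf("%.15g\n", 1234567.0);              /* 1234567 */
    return 0;
}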
I'm confused about the behavior of printf("%f", M_PI). It prints out 3.141593, but M_PI is 3.14159265358979323846264338327950288. Why does printf do this, and how can I get it to print out the whole float? I'm aware of the %1.2f format specifiers, but if I use them I get a bunch of useless 0s and the output is ugly. I want the entire precision of the float, but nothing extra.
Why does printf do this, and how can I get it to print out the whole float?
By default, the printf() function assumes a precision of 6 for the %f and %F format specifiers. From C11 (N1570) §7.21.6.1/p8 The fprintf function (emphasis mine going forward):
If the precision is missing, it is taken as 6; if the precision is
zero and the # flag is not specified, no decimal-point character
appears. If a decimal-point character appears, at least one digit
appears before it. The value is rounded to the appropriate number
of digits.
Thus the call is just equivalent to:
printf("%.6f", M_PI);
There is nothing like a "whole float", at least not directly in the way you think. double objects are likely stored in the binary IEEE-754 double-precision representation. You can see that exact representation using the %a or %A format specifier, which prints it as a hexadecimal float. For instance:
printf("%a", M_PI);
outputs it as:
0x1.921fb54442d18p+1
which you can think of as the "whole float".
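As an aside, a handy property of %a output is that it can be parsed back to the exact same value; strtod accepts hexadecimal floats in C99 and later. A small sketch (assuming M_PI is available, as in the rest of this answer):

#include <stdio.h>
#include <stdlib.h>
#include <math.h>

int main(void) {
    char buf[64];
    snprintf(buf, sizeof buf, "%a", M_PI);  /* e.g. 0x1.921fb54442d18p+1 */
    double back = strtod(buf, NULL);        /* exact hex-float round trip */
    printf("%s -> %s\n", buf, back == M_PI ? "exact" : "lossy");
    return 0;
}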
If what you need is the longest decimal approximation that makes sense, then use DBL_DIG from the <float.h> header. C11 5.2.4.2.2/p11 Characteristics of floating types:
number of decimal digits, q, such that any floating-point number with
q decimal digits can be rounded into a floating-point number with p
radix b digits and back again without change to the q decimal digits
For instance:
printf("%.*f", DBL_DIG-1, M_PI);
may print:
3.14159265358979
You can use sprintf to print a floating-point value to a string with an overkill display precision and then use a function to trim trailing zeros before passing the string to printf, using %s to display it. Proof of concept:
#include <math.h>
#include <string.h>
#include <stdio.h>

/* Strip trailing '0' characters in place. */
void trim_zeros(char *x) {
    int i = strlen(x) - 1;
    while (i > 0 && x[i] == '0')
        x[i--] = '\0';
}

int main(void) {
    char s1[100];
    char s2[100];
    sprintf(s1, "%1.20f", 23.01);
    sprintf(s2, "%1.20f", M_PI);
    trim_zeros(s1);
    trim_zeros(s2);
    printf("s1 = %s, s2 = %s\n", s1, s2);
    // vs:
    printf("s1 = %1.20f, s2 = %1.20f\n", 23.01, M_PI);
    return 0;
}
Output:
s1 = 23.010000000000002, s2 = 3.1415926535897931
s1 = 23.01000000000000200000, s2 = 3.14159265358979310000
This illustrates that this approach probably isn't quite what you want. Rather than simply trimming zeros you might want to truncate if the number of consecutive zeros in the decimal part exceeds a certain length (which could be passed as a parameter to trim_zeros). Also, you might want to make sure that 23.0 displays as 23.0 rather than 23. (so maybe keep one zero after the decimal place), as in the sketch below. This is mostly proof of concept: if you are unhappy with printf, use sprintf and then massage the result.
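A minimal sketch of that variant; trim_zeros_keep_one is a hypothetical name, not part of the answer above:

#include <stdio.h>
#include <string.h>

/* Like trim_zeros, but never trims past the first digit after the
   decimal point, so "23.000..." becomes "23.0" rather than "23.". */
void trim_zeros_keep_one(char *x) {
    char *dot = strchr(x, '.');
    if (dot == NULL)
        return;                    /* no fractional part to trim */
    size_t i = strlen(x) - 1;
    while (x + i > dot + 1 && x[i] == '0')
        x[i--] = '\0';
}

int main(void) {
    char s[100];
    sprintf(s, "%1.20f", 23.0);
    trim_zeros_keep_one(s);
    printf("%s\n", s);             /* 23.0 */
    return 0;
}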
Once a piece of text is converted to a float or double, "all" the digits is no longer a meaningful concept. There's no way for the computer to know, for example, that it converted "3.14" or "3.14000000000000000275", and they both happened to produce the same float. You'll simply have to pick the number of digits appropriate to your task, based on what you know about the precision of the numbers involved.
If you want to print as many digits as are likely to be distinctly represented by the format, floats are about 7 digits and doubles are about 15, but that's an approximation.
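The exact guarantees live in <float.h>; a quick way to check what your implementation promises:

#include <stdio.h>
#include <float.h>

int main(void) {
    /* Decimal digits guaranteed to survive a text-to-float-to-text
       round trip; typically 6 for float and 15 for double. */
    printf("FLT_DIG = %d\n", FLT_DIG);
    printf("DBL_DIG = %d\n", DBL_DIG);
    return 0;
}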
I have an ASCII string "15605632.68128593" and I wish to convert it to a double without losing accuracy.
double d;
d = (double)atof("15605632.68128593");
printf("%f", d);
printed result is 15605632.681286
Any ideas?
It's likely you're not getting all the trailing decimal places. Try printf("%.8f", d).
You might also try sscanf("15605632.68128593", "%lf", &d) in place of the atof call.
It's also not necessary to cast the result of atof to double. It's already a double. But the cast does no harm.
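As another aside, strtod from <stdlib.h> (which atof is defined in terms of) also lets you detect a failed conversion, which atof cannot. A minimal sketch:

#include <stdio.h>
#include <stdlib.h>

int main(void) {
    const char *s = "15605632.68128593";
    char *end;
    double d = strtod(s, &end);   /* atof(s) behaves like strtod(s, NULL) */
    if (end == s) {
        fprintf(stderr, "no conversion performed\n");
        return 1;
    }
    printf("%.8f\n", d);          /* 15605632.68128593 */
    return 0;
}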
Note that, at least about 6 years ago when I looked at this in detail, many printf and scanf implementations were buggy in the sense that they didn't function as the perfect inverses you'd assume. Visual C/C++ and gcc both had problems in their native implementations. This paper is a useful reference.
Cygwin with gcc 4.3.4:
#include <stdio.h>

int main(void)
{
    double x;
    sscanf("15605632.68128593", "%lf", &x);
    printf("%.8f\n", x);
    return 0;
}
And then:
# gcc foo.c
# ./a
15605632.68128593
Goal: Convert "15605632.68128593" to a double without losing accuracy.
atof() accomplished that as best the program could. But since "15605632.68128593" (a 16-digit number) is not exactly representable as a double in your C implementation, it was approximated to 1.560563268128593080...e+07. Thus accuracy was lost, albeit a small loss.
A typical double can represent about 2^64 different values. The nearby candidates and the OP's string are shown below for reference.
15605632.68128 592893... previous double
"15605632.68128 593" code's string
15605632.68128 593080... closest double
15605632.68128 6 output
The grief comes when attempting to print, thinking that what was printed is the exact value of x. Instead, the nearby double value was printed, and the printout is also rounded. The %f specifier defaults to 6 places to the right of the '.', giving the reported 15605632.681286, a 14-digit number.
A better way to see all the significant digits of any double is to use the %e format with DBL_DIG or DBL_DECIMAL_DIG. DBL_DIG is the number of digits to the right of the '.', in decimal exponential notation %e, needed to show all the digits that "round-trip" a double (string to double to string without a string difference). Since %e always shows 1 digit to the left of the '.', the print below shows 1 + DBL_DIG significant digits. DBL_DECIMAL_DIG is 17 on mine and many C environments, but it may vary.
If you wish to show all the significant digits, you need to qualify what is significant. The nextafter() function gives the next representable double, so we might want to show at least enough digits to distinguish x from the next x. I recommend DBL_DECIMAL_DIG.
The exact value the program used for your "1.560563268128593e+07" is 15605632.68128593079745769500732421875. There are few situations where you need to see all those digits. Even if you request lots of digits, at some point printf() just gives you zeros.
#include <stdio.h>
#include <stdlib.h>   /* atof */
#include <float.h>
#include <tgmath.h>

int main(void) {
    double x = atof("15605632.68128593");
    printf("%.*le\n", DBL_DIG, x);       /* all digits that "round-trip" string-to-double-to-string without loss */
    printf("%.*le\n", DBL_DIG + 1, x);   /* all the significant digits, "one-way" double-to-string */
    printf("%.*le\n", DBL_DIG + 1, nextafter(x, 2 * x)); /* the next representable double */
    printf("%.*le\n", DBL_DIG + 3, x);   /* what happens with a few more */
    printf("%.*le\n", DBL_DIG + 30, x);  /* what happens if you are a bit loony */
    return 0;
}
1.560563268128593e+07
1.5605632681285931e+07
1.5605632681285933e+07
1.560563268128593080e+07
1.560563268128593079745769500732421875000000000e+07
double does not have that much precision. It can only guarantee round-tripping 15 (DBL_DIG from <float.h>) significant decimal digits from decimal string to double back to decimal string.

Edit: While, in general, my claim is true, it doesn't seem to be your problem here. While there exist 16-digit numbers which can't be round-tripped, this particular input can.
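If you want to check whether a particular string round-trips, a sketch like this (reprinting with the same number of fractional digits as the input) will do:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void) {
    const char *in = "15605632.68128593";
    double d = strtod(in, NULL);
    char out[32];
    snprintf(out, sizeof out, "%.8f", d);  /* input has 8 fractional digits */
    printf("%s -> %s (%s)\n", in, out,
           strcmp(in, out) == 0 ? "round-trips" : "does not round-trip");
    return 0;
}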
I'm trying to read in a file in c with the following format:
6.43706064058,4.15417249035
3.43706064058,1.15417249035
...
I'm able to parse out the two doubles, but when I print out what I've parsed, I notice that I only get up to 6 decimal places. Here is my code:
long double d1;
long double d2;
fscanf(file, "%Lf,%Lf", &d1, &d2);
printf("x:%Lf, y:%Lf", d1, d2);
Output:
x:6.437061, y:4.154172
...
Where am I losing the precision? Is it possible that its being read in correctly, but my printf statement isn't showing all the precision?
Is it possible that its being read in correctly, but my printf statement isn't showing all the precision?
That's exactly what's happening. From the printf(3) man page:
... the number of digits after the
decimal-point character is equal to the precision specification.
If the precision is missing, it is taken as 6 ...
Tell printf to show more precision by changing your format string:
printf("x:%.11Lf, y:%.11Lf", d1, d2);
The default %f format only prints 6 places after the decimal point, which gives you much less precision than the actual floating point value (unless the exponent is large) and possibly no precision at all (if the exponent is more than slightly negative). Unless you know all your values are bounded away from zero (e.g. all greater than 1), you really need to use the %g format (which can switch to exponential notation as needed) or the %e format (which always uses exponential notation) to print floating point values in a way that preserves their precision.
You also need to use sufficiently many digits. For IEEE double precision, 17 significant digits are sufficient, so %.17g would be the preferred format. For long double, it depends on the type used by your particular implementation. Thankfully, C offers a macro in <float.h>, DECIMAL_DIG, that gives you exactly the number of places you need. So you would use:
printf("%.*Lg", DECIMAL_DIG, x);
or similar. Note that this will print more places than were originally present in your input file. If you know your input always has a particular number of places, you could perhaps just hard-code that instead of using DECIMAL_DIG to get a more uniform output.
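A self-contained version of that suggestion, using the two values from the question (how many digits appear beyond the input depends on your implementation's long double format):

#include <stdio.h>
#include <float.h>

int main(void) {
    long double d1 = 6.43706064058L, d2 = 4.15417249035L;
    /* DECIMAL_DIG (from <float.h>) is enough digits to round-trip
       the widest supported floating type, long double included. */
    printf("x:%.*Lg, y:%.*Lg\n", DECIMAL_DIG, d1, DECIMAL_DIG, d2);
    return 0;
}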
The reason you are not getting as much precision as you'd like is that the precision, not the field width, controls how many digits appear after the decimal point; by default %Lf prints only 6. A width such as %13Lf merely pads the output to at least 13 characters and adds no decimal digits. Count the digits after the decimal point instead: the first number, 6.43706064058, has 11 of them, and so does the second, 4.15417249035, so you'd put

printf("x:%.11Lf, y:%.11Lf", d1, d2);

and that will print:

x:6.43706064058, y:4.15417249035

You must allow for all of the digits you want after the decimal point when choosing the precision.
Hope that helped!
The data type float displays decimal numbers, and by default my compiler displays up to 6 decimals. I want to see only two decimals. For example, when the compiler performs the operation c = 2.0 / 3.0, it displays "0.666667"; I want to see only "0.67" on my output screen.

What changes should I make in the C program?
You can use a format specifier to limit it to 2 decimal places when outputting the number using printf.
#include <stdio.h>

int main(void) {
    double d = 2.0 / 3.0;
    printf("%.2f\n", d);
    return 0;
}
Here's the output:

0.67
You did not tell us how you display the value at all, therefore I guess you’re using something like printf("%f", x). You can prefix the “f” with a precision specification, which is a dot followed by a number, for example printf("%.2f", x).
The printf formatting for decimals is "%." followed by the number of digits of decimal precision to display, followed by "f".

So displaying two decimal places would be
printf("%.2f", i);
and displaying six decimal places would be
printf("%.6f", i);