Extra precision values seen for float values - c

Below is the solution code for an exercise from K. N. King's C Programming: A Modern Approach.
#include <stdio.h>

int main(void)
{
    int i;
    float j, x;

    scanf("%f%d%f", &x, &i, &j);
    printf("i = %d\n", i);
    printf("x = %f\n", x);
    printf("j = %f\n", j);
    return 0;
}
Input:
12.3 45.6 789
Expected Result :
i = 45
x = 12.3
j = 0.6
Actual Result:
i = 45
x = 12.300000
j = 0.600000
Question:
Why are there extra decimal values seen for floats? Is there any default precision set for float, and how can I change it? (Only the default - I know I can use format strings to control precision in printf.) I am using gcc (MinGW) on Windows 7.

The floating-point formats record only a mathematical value. They do not record how much precision the value had when scanned (or otherwise obtained). (For that matter, this is true of the integer formats too.)
When you display a floating-point value, you must specify how many digits to display. The C standard defines a default of six digits for the %f specifier [1]:
A double argument representing a floating-point number is converted to decimal notation in the style [−]ddd.ddd, where the number of digits after the decimal-point character is equal to the precision specification. If the precision is missing, it is taken as 6; if the precision is zero and the # flag is not specified, no decimal-point character appears. If a decimal-point character appears, at least one digit appears before it. The value is rounded to the appropriate number of digits.
You specify the precision by writing the number of digits after a period in the format specification, such as %.2f [2]:
The precision takes the form of a period (.) followed either by an asterisk * (described later) or by an optional decimal integer; if only the period is specified, the precision is taken as zero.
(The asterisk indicates that the precision is supplied as an int argument passed to printf before the value to be converted.)
Notes
[1] C 2011 (N1570) 7.21.6.1 8.
[2] C 2011 (N1570) 7.21.6.1 4.

You can't change the precision of a float, but (as it sounds like you know) you can change the precision of the display of a float.

Why are there extra decimal values seen for floats?
That's just how floats are displayed: by default, %f shows 6 digits after the decimal point.
That has nothing to do with the precision of the stored number. On most systems, numbers are stored according to the IEEE 754 floating-point representation, and they are not stored with decimal digits at all. Since you do not control how numbers are stored, you cannot change their precision; it is fixed by the type.
Is there any default precision set for float and how can I change it?
Again, the precision is not a default of how the value is stored; look at the IEEE 754 representation for 32-bit or 64-bit types. What you can change is how many digits are printed after the decimal point. You can do this:
float i = 2.89674534423f;
printf("%.10f", i); // prints 10 digits after the decimal point

A double is not infinitely accurate, and it does not store numbers in decimal, so it cannot store 0.085 precisely. If you want to print a number to two significant figures, use the precision in the format specifier - see http://www.cprogramming.com/tutorial/printf-format-strings.html for examples.
double value = 0.085;
NSLog(@"%.2g", value);
This will print "0.085". (NSLog is Objective-C's printf-style logging function; the format specifiers behave the same way in C's printf.)
Be aware that the precision specifier normally controls the number of digits displayed after the decimal point - but in the case of %g, it controls significant figures.
float value = 0.085;
NSLog(@"%.2f", value);
This will print "0.09".

Related

How do I print in double precision?

I'm completely new to C and I'm trying to complete an assignment. The exercise is to print tan(x) with x incrementing from 0 to pi/2.
We need to print this in float and double. I wrote a program that seems to work, but I only printed floats, while I expected double.
#include <stdio.h>
#include <stdlib.h>
#include <math.h>

int main()
{
    double x;
    double pi;

    pi = M_PI;
    for (x = 0; x <= pi/2; x += pi/20)
    {
        printf("x = %lf, tan = %lf\n", x, tan(x));
    }
    exit(0);
}
My question is:
Why do I get floats, while I defined the variables as double and used %lf in the printf function?
What do I need to change to get doubles as output?
"...but I only printed floats, while I expected double"
You are actually outputting double values.
float arguments to variadic functions (including printf()) are implicitly promoted to double (see the default argument promotions in the C standard).
So even if your statement
printf("x = %lf, tan = %lf\n",x, tan(x));
were changed to:
printf("x = %f, tan = %f\n",x, tan(x));
it would still output double values, as both "%f" and "%lf" are double format specifiers for printf() (and other variadic functions).
Edit to address following statement/questions in comments:
"I know that a double notation has 15 digits of [precision]."
Yes. But there is a difference between the actual IEEE 754 characteristics of the float/double data types, and the way they can be made to appear using format specifiers in the printf() function.
In simplest terms:
double has double (2x) the precision of a float.
float is a 32-bit IEEE 754 single-precision floating-point number, with 1 bit for the sign, 8 bits for the exponent, and 24* bits for the value, resulting in about 7 decimal digits of precision.
double is a 64-bit IEEE 754 double-precision floating-point number, with 1 bit for the sign, 11 bits for the exponent, and 53* bits for the value, resulting in about 15 decimal digits of precision.
* including the implicit leading bit (which is 1 for normal numbers and 0 for subnormal numbers, and is not stored in memory), but not the sign bit.
"...But with %.20f I was able to print more digits, how is that possible and where do the digits come from?"
The extra digits are inaccuracies caused by the binary representation of decimal values, coupled with using a precision specifier to force more digits to display than the stored value actually carries.
Although precision specifiers have their rightful place, they can also produce misleading results.
Why do I get floats, while I defined the variables as double and used %lf in the printf function?
Code is not getting "floats", output is simply text. Even if the argument coded is a float or a double, the output is the text translation of the floating point number - often rounded.
printf() simply follows the behavior of "%lf": print a floating point value with 6 places after the decimal point. With printf(), "%lf" performs exactly like "%f".
printf("%lf\n%lf\n%f\n%f\n", 123.45, 123.45f, 123.45, 123.45f);
// 123.450000
// 123.449997
// 123.450000
// 123.449997
What do I need to change to get doubles as output?
Nothing, the output is text, not double. To see more digits, print with greater precision.
printf("%.50f\n%.25f\n", 123.45, 123.45f);
// 123.45000000000000284217094304040074348449710000000000
// 123.4499969482421875000000000
how do I manipulate the code so that my output is in float notation?
Try "%e", "%a" for exponential notation. For a better idea of how many digits to print: Printf width specifier to maintain precision of floating-point value.
printf("%.50e\n%.25e\n", 123.45, 123.45f);
printf("%a\n%a\n", 123.45, 123.45f);
// 1.23450000000000002842170943040400743484497100000000e+02
// 1.2344999694824218750000000e+02
// 0x1.edccccccccccdp+6
// 0x1.edccccp+6
printf("%.*e\n%.*e\n", DBL_DECIMAL_DIG-1, 123.45, FLT_DECIMAL_DIG-1,123.45f);
// 1.2345000000000000e+02
// 1.23449997e+02

Decimal precision vs. number of digits in printf(), fprintf() in format %g vs. %f

After surfing for a while I could not find a clear explanation for this issue. Maybe anyone could clarify me why it works so.
In some code I am saving some double numbers to file by fprintf (after properly initializing the file stream). Because, a priori, I don't know what number is passed to my program, and in particular, what its format is, e.g. 0.00011 vs. 1.1e-4, I thought to use the format specifier %.5g instead of %.5f, where, I want to save my data with a 5-digit decimal precision.
However, it turns out that with %g the decimal precision of my saved numbers is correct only if the numbers have an integer part equal to 0; otherwise it is not, as in this example:
FILE *fp;
fp = fopen("mydata.dat","w+"); // Neglecting error check for brevity
double value[2] = {0.00011, 1.00011};
printf("\ng-format\n");
for (int i = 0; i < 2; i++) {
    fprintf(fp, "%.5g\n", value[i]);
    printf("%.5g\n", value[i]);
}
printf("\n\nf-format\n");
for (int i = 0; i < 2; i++) {
    fprintf(fp, "%.5f\n", value[i]);
    printf("%.5f\n", value[i]);
}
fclose(fp);
This produces the following output to file (and on the std stream):
g-format
0.00011
1.0001
f-format
0.00011
1.00011
So, why does the choice of %g 'eat' decimal digits as soon as the integer part is non-zero?
%g prints x significant digits, counted from the first non-zero digit.
So if the value has more digits than that, it is rounded; and if the integer part alone needs more than x digits, the number is displayed in scientific notation (rounded too).
%f just displays the whole integer part plus x digits after the decimal point.
It's not eating decimal digits. With %g the precision specifies the number of significant digits; 1.0001 has 5 significant digits, which is what "%.5g" calls for. That's different from %f, where the precision specifies the number of digits to the right of the decimal point.
To answer what appears to be OP's higher problem:
I want to save my data with a 5-digit decimal precision.
If code needs to save values with 6 total significant figures, use "%.5e", which prints all values* with a non-zero leading digit and 5 places after the decimal point in exponential notation. Do not bother with "%g".
*Of course a value of 0.0 does not print with a leading non-zero digit.

what's the difference between printf a floating-point variable and constant?

Here's my code:
float x = 21.195;
printf("%.2f\n", x);
printf("%.2f\n", 21.195);
I would expect both print statements to have identical output, but instead, the first prints 21.19, and the second prints 21.20.
Could someone explain why the output is different?
The values are different. The first is a float, which is typically 4 bytes. The second is a double, which is typically 8 bytes.
The rules for rounding are based on the third digit after the decimal place. So, in one case, the value is something like 21.19499997 and the other 21.1950000000001, or something like that. (These are made up to illustrate the issue with rounding and imprecise numeric formats.)
By default 21.195 is a double.
If you want a float, write :
21.195F
or
(float)21.195
Regards
In C, a floating-point constant is a double by default. So x is a float because you declared it that way, while the constant 21.195 is a double.
Now, as mentioned above, a float is typically 4 bytes and a double 8 bytes. A float has 24 significant bits, giving about 7 decimal digits of precision, and a double has 53 significant bits, giving 15 to 16 digits.
%.2f rounds the number to 2 decimal places, looking at the third digit after the decimal point to decide which way to round.
So 21.195 as a float is stored as 21.19499969482421875, which %.2f rounds down to 21.19, while as a double it is stored as roughly 21.19500000000000028, which rounds up to 21.20.
Hope it helps!

Get printf to print all float digits

I'm confused about the behavior of printf("%f", M_PI). It prints out 3.141593, but M_PI is 3.14159265358979323846264338327950288. Why does printf do this, and how can I get it to print out the whole float? I'm aware of format specifiers like %1.2f, but if I use them then I get a bunch of unused 0s and the output is ugly. I want the entire precision of the float, but not anything extra.
Why does printf do this, and how can I get it to print out the whole
float.
By default, the printf() function takes precision of 6 for %f and %F format specifiers. From C11 (N1570) §7.21.6.1/p8 The fprintf function (emphasis mine going forward):
If the precision is missing, it is taken as 6; if the precision is
zero and the # flag is not specified, no decimal-point character
appears. If a decimal-point character appears, at least one digit
appears before it. The value is rounded to the appropriate number
of digits.
Thus your call is just equivalent to:
printf("%.6f", M_PI);
There is nothing like a "whole float", at least not directly in the way you think. double objects are most likely stored in the binary IEEE 754 double-precision representation. You can see the exact representation using the %a or %A format specifier, which prints it as a hexadecimal float. For instance:
printf("%a", M_PI);
outputs it as:
0x1.921fb54442d18p+1
which you can think as "whole float".
If all you need is the "longest decimal approximation" that makes sense, then use DBL_DIG from the <float.h> header. C11 §5.2.4.2.2/p11, Characteristics of floating types:
number of decimal digits, q, such that any floating-point number with
q decimal digits can be rounded into a floating-point number with p
radix b digits and back again without change to the q decimal digits
For instance:
printf("%.*f", DBL_DIG-1, M_PI);
may print:
3.14159265358979
You can use sprintf to print a float to a string with an overkill display precision and then use a function to trim 0s before passing the string to printf using %s to display it. Proof of concept:
#include <math.h>
#include <string.h>
#include <stdio.h>

void trim_zeros(char *x){
    int i = strlen(x) - 1;
    while (i > 0 && x[i] == '0') x[i--] = '\0';
}

int main(void){
    char s1[100];
    char s2[100];

    sprintf(s1, "%1.20f", 23.01);
    sprintf(s2, "%1.20f", M_PI);
    trim_zeros(s1);
    trim_zeros(s2);
    printf("s1 = %s, s2 = %s\n", s1, s2);
    // vs:
    printf("s1 = %1.20f, s2 = %1.20f\n", 23.01, M_PI);
    return 0;
}
Output:
s1 = 23.010000000000002, s2 = 3.1415926535897931
s1 = 23.01000000000000200000, s2 = 3.14159265358979310000
This illustrates that this approach probably isn't quite what you want. Rather than simply trimming zeros you might want to truncate if the number of consecutive zeros in the decimal part exceeds a certain length (which could be passed as a parameter to trim_zeros). Also, you might want to make sure that 23.0 displays as 23.0 rather than 23. (so maybe keep one zero after the decimal place). This is mostly proof of concept: if you are unhappy with printf, use sprintf and then massage the result.
Once a piece of text is converted to a float or double, "all" the digits is no longer a meaningful concept. There's no way for the computer to know, for example, that it converted "3.14" or "3.14000000000000000275", and they both happened to produce the same float. You'll simply have to pick the number of digits appropriate to your task, based on what you know about the precision of the numbers involved.
If you want to print as many digits as are likely to be distinctly represented by the format, floats are about 7 digits and doubles are about 15, but that's an approximation.

e format in printf() and precision modifiers

Could you explain me why
printf("%2.2e", 1201.0);
gives a result 1.20e+03 and not just 12.01e2?
My way of thinking: the number is 1201.0, and the specifier tells us that there should be 2 digits after the decimal point.
What is wrong?
According to Wikipedia:
In normalized scientific notation, the exponent b is chosen so that the absolute value of a remains at least one but less than ten (1 ≤ |a| < 10). Thus 350 is written as 3.5×10^2. This form allows easy comparison of numbers, as the exponent b gives the number's order of magnitude. In normalized notation, the exponent b is negative for a number with absolute value between 0 and 1 (e.g. 0.5 is written as 5×10^-1). The 10 and exponent are often omitted when the exponent is 0.
Normalized scientific form is the typical form of expression of large numbers in many fields, unless an unnormalized form, such as engineering notation, is desired. Normalized scientific notation is often called exponential notation - although the latter term is more general and also applies when a is not restricted to the range 1 to 10 (as in engineering notation, for instance) and to bases other than 10 (as in 3.15×2^20).
The first 2 in "%2.2e" is the minimum character width to print. 1.20e+03 is 8 characters which is more than 2.
e directs that the number is printed: (sign), 1 digit, '.', followed by some digits and an exponent.
The 2nd 2 in "%2.2e" is the number of digits after the decimal point to print. 6 is used if this 2nd value is not provided.
The %e format uses scientific notation, i.e. one digit before the decimal separator and an exponent for scaling. You can't set the digits before the decimal separator using this format.
This is just how the scientific notation is defined. The result you expect is a very weird notation. I don't think you can get it with printf.
The number before the dot in the format specifier defines the minimum width of the resulting sub-string. Try %20.2e to see what that means.
