Problem regarding displaying decimal values in a C program

The data type 'float' displays decimal numbers, and by default my compiler displays up to 6 decimal places. I want to see only two. For example, when the program performs the operation c = 2.0/3.0,
it displays "0.666667". I want to see only "0.67" on my output screen.
What changes do I need to make in the C program?

You can use a format specifier to limit it to 2 decimal places when outputting the number using printf.
#include <stdio.h>

int main(void) {
    double d = 2.0 / 3.0;
    printf("%.2f\n", d);
    return 0;
}
Here's the output:
---------- Capture Output ----------
> "c:\windows\system32\cmd.exe" /c c:\temp\temp.exe
0.67
> Terminated with exit code 0.

You did not tell us how you display the value, so I will guess you are using something like printf("%f", x). You can prefix the "f" with a precision specification, which is a dot followed by a number, for example printf("%.2f", x).

The printf format for fixed decimal output is "%." followed by the number of decimal places you want displayed, followed by "f".
So displaying two decimal places would be
printf("%.2f", x);
and displaying six decimal places would be
printf("%.6f", x);

Related

Why does printf() with %f lose a digit after decimal point sometimes?

Why does the statement
printf("%f", sensorvalue)
print out a string like "11312.96" (with two digits after the decimal point) most of the time, but sometimes print a string like "11313.1" (with only one digit after the decimal point)? sensorvalue is read continuously from a power meter. The values at different times are supposed to have the same format.
It's C running on Linux.
Why does the statement printf("%f", sensorvalue) print a string like 11312.96 (two digits after the decimal point) most of the time, but sometimes a string like 11313.1 (one digit after the decimal point)?
If that is really what you are seeing, the library is simply not C compliant, even if "It's C running on Linux."
The output of
printf("%f\n", 11312.96f);
printf("%f\n", 11312.96);
printf("%f\n", 11313.1f);
printf("%f\n", 11313.1);
... is expected to look like the following, with 6 digits after the '.', perhaps with some variation in the least significant digits. Even with implementations of varying quality, the output should still have 6 digits after the '.'.
11312.959961
11312.960000
11313.099609
11313.100000
Had the format been "%g", output like below could have occurred.
11313
11313
11313.1
11313.1
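For reference, a small self-contained sketch that reproduces the contrast between the two conversions (using only the double literals):

#include <stdio.h>

int main(void) {
    /* "%f" always prints 6 digits after the '.' unless a precision is given */
    printf("%f\n", 11312.96);   /* 11312.960000 */
    printf("%f\n", 11313.1);    /* 11313.100000 */
    /* "%g" works with significant digits (6 by default) and trims trailing zeros */
    printf("%g\n", 11312.96);   /* 11313 */
    printf("%g\n", 11313.1);    /* 11313.1 */
    return 0;
}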
If you're using %f exactly as stated, the behaviour you describe actually violates the standard (this would be unusual but certainly not unheard of). C11 7.21.6.1 The fprintf function /8 states:
F, f: A double argument representing a floating-point number is converted to decimal notation in the style [−]ddd.ddd, where the number of digits after the decimal-point character is equal to the precision specification. If the precision is missing, it is taken as 6.
In other words, this program:
#include <stdio.h>

int main(void) {
    double d1 = 11312.96, d2 = 11313.1;
    printf("%f\n%f\n", d1, d2);
    return 0;
}
should generate:
11312.960000
11313.100000
If you want it to have a different format (in both your seemingly incorrect case, and the case that complies with the standard), use the precision argument to force it, such as with:
printf("%.2f\n", d1); // gives "11312.96"
You may also want to specify the minimum field width to ensure your numbers are lined up on the right, such as with:
// posn: 123456789
//       ---------
printf("%9.2f\n", d1); // gives " 11312.96"
printf("%9.2f\n", 3.1); // gives " 3.10"

Decimal precision vs. number of digits in printf(), fprintf() in format %g vs. %f

After surfing for a while I could not find a clear explanation for this issue. Perhaps someone could clarify why it works this way.
In some code I am saving some double numbers to a file with fprintf (after properly initializing the file stream). Because I do not know a priori what numbers are passed to my program, and in particular what their format is (e.g. 0.00011 vs. 1.1e-4), I thought I would use the format specifier %.5g instead of %.5f; I want to save my data with 5-digit decimal precision.
However, it turns out that with %g the decimal precision of my saved numbers is correct if the integer part is 0, but not otherwise. For example:
#include <stdio.h>

int main(void) {
    FILE *fp = fopen("mydata.dat", "w+");   /* error check omitted for brevity */
    double value[2] = {0.00011, 1.00011};

    printf("\ng-format\n");
    for (int i = 0; i < 2; i++) {
        fprintf(fp, "%.5g\n", value[i]);
        printf("%.5g\n", value[i]);
    }

    printf("\n\nf-format\n");
    for (int i = 0; i < 2; i++) {
        fprintf(fp, "%.5f\n", value[i]);
        printf("%.5f\n", value[i]);
    }
    fclose(fp);
    return 0;
}
This produces the following output to the file (and on the standard output stream):
g-format
0.00011
1.0001
f-format
0.00011
1.00011
So why is %g 'eating' decimal digits as soon as the integer part is not zero?
%g prints x significant digits, counted from the first digit that is not 0.
So if the (x + 1)-th significant digit lies after the decimal point, the number is rounded there. And if the (x + 1)-th significant digit is still in the integer part, the number is displayed in scientific notation (rounded as well).
%f simply displays the integer part plus x digits after the decimal point.
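A minimal sketch of that behaviour (the value 123456.7 is an extra illustration, not from the question):

#include <stdio.h>

int main(void) {
    /* %.5g keeps 5 significant digits, counted from the first non-zero digit */
    printf("%.5g\n", 0.00011);    /* 0.00011    (all 5 significant digits fit)     */
    printf("%.5g\n", 1.00011);    /* 1.0001     (6th significant digit is rounded) */
    printf("%.5g\n", 123456.7);   /* 1.2346e+05 (switches to scientific notation)  */
    return 0;
}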
It's not eating decimal digits. With %g the precision specifies the number of significant digits; 1.0001 has 5 significant digits, which is what "%.5g" calls for. That's different from %f, where the precision specifies the number of digits to the right of the decimal point.
To answer what appears to be OP's higher-level problem:
I want to save my data with a 5-digit decimal precision.
If code needs to save values with 6 total significant figures, use "%.5e", which will print all values* with a non-zero leading digit and 5 places after the decimal point in exponential notation. Do not bother with "%g".
*Of course a value of 0.0 does not print with a leading non-zero digit.
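A small sketch of that suggestion, reusing the two values from the question:

#include <stdio.h>

int main(void) {
    double value[2] = { 0.00011, 1.00011 };
    for (int i = 0; i < 2; i++) {
        /* exponential notation: one leading digit, then exactly 5 decimal places */
        printf("%.5e\n", value[i]);   /* 1.10000e-04 and 1.00011e+00 */
    }
    return 0;
}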

double precision lost when parsing csv file in C

I'm trying to read in a file in C with the following format:
6.43706064058,4.15417249035
3.43706064058,1.15417249035
...
I'm able to parse out the two doubles, but when I print out what I've parsed, I notice that I only get up to 6 decimal places. Here is my code:
long double d1;
long double d2;
fscanf(file, "%Lf,%Lf", &d1, &d2);
printf("x:%Lf, y:%Lf", d1, d2);
Output:
x:6.437061, y:4.154172
...
Where am I losing the precision? Is it possible that it's being read in correctly, but my printf statement isn't showing all the precision?
Is it possible that it's being read in correctly, but my printf statement isn't showing all the precision?
That's exactly what's happening. From the printf(3) man page:
... the number of digits after the decimal-point character is equal to the precision specification. If the precision is missing, it is taken as 6 ...
Tell printf to show more precision by changing your format string:
printf("x:%.11Lf, y:%.11Lf", d1, d2);
The default %f format only prints 6 places after the decimal point, which gives you much less precision than the actual floating point value (unless the exponent is large) and possibly no precision at all (if the exponent is more than slightly negative). Unless you know all your values are bounded away from zero (e.g. all greater than 1), you really need to use the %g format (which can switch to exponential notation as needed) or the %e format (which always uses exponential notation) to print floating point values in a way that preserves their precision.
You also need to use sufficiently many digits. For IEEE double precision, 17 significant digits are sufficient, so %.17g would be the preferred format. For long double, it depends on the type used on your particular implementation. Thankfully, C offers a macro, DECIMAL_DIG, that gives you exactly the number of digits you need. So you would use:
printf("%.*Lg", DECIMAL_DIG, x);
or similar. Note that this will print more places than were originally present in your input file. If you know your input always has a particular number of places, you could perhaps just hard-code that instead of using DECIMAL_DIG to get a more uniform output.
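For example, a fuller sketch of the read-and-reprint loop, assuming the same two-column layout (the file name points.csv is hypothetical):

#include <stdio.h>
#include <float.h>   /* DECIMAL_DIG */

int main(void) {
    FILE *file = fopen("points.csv", "r");   /* hypothetical file name */
    if (file == NULL)
        return 1;

    long double d1, d2;
    while (fscanf(file, "%Lf,%Lf", &d1, &d2) == 2) {
        /* DECIMAL_DIG digits are enough to round-trip any long double value */
        printf("x:%.*Lg, y:%.*Lg\n", DECIMAL_DIG, d1, DECIMAL_DIG, d2);
    }
    fclose(file);
    return 0;
}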
The reason you are not seeing the precision you expect is that it is the precision, not the field width, that controls how many digits appear after the decimal point. The first number, 6.43706064058, has 11 digits after the decimal point, so you would put
printf("x:%.11Lf, y:%.11Lf", d1, d2);
asking for 11 digits after the decimal point for each value, and that will print:
x:6.43706064058, y:4.15417249035
A width such as %13Lf only reserves 13 characters for the whole field, padding with spaces on the left; it does not change how many decimal digits are shown.
Hope that helped!

Floating point results

In my C code:
I see that some of the floating-point results come out as, for example, 2.404567E+1. It seems to me that for results less than 1 the output turns out to be in some kind of exponential notation.
So, I have 2 questions:
How can I get the result rounded to a certain number of digits, i.e. instead of 5.23542342734 I just want the result to be 5.23?
How can I get rid of the exponential notation and get results like 0.1648 instead of 1.6483517E-1?
You can control the output format of printf() (I'm assuming you're talking about printf()?) in a number of ways. e.g.:
printf("%.2f\n", 5.23542342734); // Prints "5.23"
printf("%.4f\n", 1.6483517E-1); // Prints "0.1648"
See e.g. http://www.cplusplus.com/reference/clibrary/cstdio/printf/ (or a million other references out there on the internet) for more details on format specifiers for printf().
Adjust format string:
printf ("%.2f", float_data);
http://linux.die.net/man/3/printf
Or, to truncate/approximate the value to some number of decimal places, I think the following should work:
truncated = floor(float_val * 10000) / 10000;
The above will preserve only up to 4 decimal places of float_val and store the result in truncated. Use round() (from <math.h>) if you want rounding instead of truncation.
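A compilable version of that idea (may need -lm when linking; the values are just illustrative):

#include <stdio.h>
#include <math.h>   /* floor, round */

int main(void) {
    double x = 5.23542342734;
    double truncated = floor(x * 10000.0) / 10000.0;  /* keep 4 decimal places, truncating */
    double rounded   = round(x * 100.0) / 100.0;      /* keep 2 decimal places, rounding   */
    printf("%f\n", truncated);   /* 5.235400 */
    printf("%f\n", rounded);     /* 5.240000 */
    return 0;
}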

What Comes After The %?

I've searched for this a little but I have not gotten a particularly straight answer. In C (and I guess C++), how do you determine what comes after the % when using printf? For example:
double radius = 1.0;
double area = 0.0;
area = calculateArea( radius );
printf( "%10.1f %10.2\n", radius, area );
I took this example straight from a book that I have on the C language. This does not make sense to me at all. Where do you come up with 10.1f and 10.2f? Could someone please explain this?
http://en.wikipedia.org/wiki/Printf#printf_format_placeholders is Wikipedia's reference for format placeholders in printf. http://www.cplusplus.com/reference/clibrary/cstdio/printf.html is also helpful
Basically, in its simple form it's %[width].[precision][type]. Width lets you make sure that the printed value is at least a certain number of characters wide (useful for tables etc.). Precision lets you specify the precision a number is printed to (e.g. decimal places), and the type informs C/C++ what kind of variable you have given it (character, integer, double etc.).
Hope this helps
UPDATE:
To clarify using your examples:
printf( "%10.1f %10.2\n", radius, area );
%10.1f (referring to the first argument: radius) means make it 10 characters long (ie. pad with spaces), and print it as a float with one decimal place.
%10.2 (referring to the second argument: area) means make it 10 character long (as above) and print with two decimal places.
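To make the book's fragment runnable, here is a sketch with calculateArea() replaced by an inline computation, since its body isn't shown:

#include <stdio.h>

int main(void) {
    double radius = 1.0;
    double area = 3.14159265358979 * radius * radius;   /* stand-in for calculateArea() */
    printf("%10.1f %10.2f\n", radius, area);
    /* prints "       1.0       3.14": each value right-justified in a 10-character field */
    return 0;
}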
man 3 printf
on a Linux system will give you all the information you need. You can also find these manual pages online, for example at http://linux.die.net/man/3/printf
10.1f means floating point with 1 place after the decimal point and the 10 places before the decimal point. If the number has less than 10 digits, it's padded with spaces. 10.2f is the same, but with 2 places after the decimal point.
On every system I've seen, from Unix to Rails Migrations, this is not the case. #robintw expresses it best:
Basically in a simple form it's %[width].[precision][type].
That is, not "10 places before the decimal point," but "10 places, both before and after, and including the decimal point."
10.1f means floating point with 10 characters wide with 1 place after the decimal point.
If the number has less than 10 digits, it's padded with spaces.
10.2f is the same, but with 2 places after the decimal point.
You have these basic types:
%d - integer
%x - hex integer
%s - string
%c - char (only one)
%f - floating point (float)
%d - signed int (decimal)
%i - signed int (integer) (same as decimal).
%u - unsigned int
%ld - long (signed) int
%lu - long unsigned int
%lld - long long (signed) int
%llu - long long unsigned int
Edit: there are several others listed in #Eli's response (man 3 printf).
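If it helps, here is a quick demo (with arbitrary values) exercising the most common specifiers from the list above:

#include <stdio.h>

int main(void) {
    int i = -42;
    unsigned int u = 42u;
    long long ll = 1234567890123LL;
    double f = 3.5;
    char c = 'A';
    const char *s = "hello";

    printf("%d %i %x %u\n", i, i, 255u, u);       /* -42 -42 ff 42 */
    printf("%ld %lld %llu\n", 42L, ll, 9876543210ULL);
    printf("%f %c %s\n", f, c, s);                /* 3.500000 A hello */
    return 0;
}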
10.1f means you want to display a float with 1 decimal and the displayed number should be 10 characters long.
In short, those values after the % tell printf how to interpret (or output) all of the variables coming later. In your example, radius is interpreted as a float (thus the 'f'), and the 10.1 gives the field width and the number of decimal places to use when printing it.
See this link for more details about all of the modifiers you can use with printf.
Man pages contain the information you want. To read what you have above:
printf( "%10.2f", 1.5 )
This will print:
      1.50
Whereas:
printf("%.2f", 1.5 )
Prints:
1.50
Note the justification of both.
Similarly:
printf("%10.1f", 1.5 )
Would print:
       1.5
Any number after the . is the precision you want printed. Any number before the . is the minimum field width; the value is padded with spaces on the left to fill it.
One issue that hasn't been raised by others is whether a double is the same as a float. On some systems a different format specifier was needed for a double compared to a float, not least because the parameters passed could be of different sizes.
%f - float
%lf - double
%g - double
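To be precise about where that distinction matters: in printf a float argument is promoted to double, so %f covers both (and %lf is accepted as a synonym since C99), whereas scanf really does distinguish the two. A small sketch:

#include <stdio.h>

int main(void) {
    float  f = 1.5f;
    double d = 2.5;

    /* printf: the float is promoted to double, so %f handles both arguments */
    printf("%f %f\n", f, d);

    /* scanf: %f expects a float*, %lf expects a double* */
    float  fin;
    double din;
    if (sscanf("1.25 2.75", "%f %lf", &fin, &din) == 2)
        printf("%f %f\n", fin, din);
    return 0;
}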
