Why is this formatted output getting printed when using float? - C

In the following code:
float j=9.01;
printf("%2.3f \n",j);
the output is 9.010. Shouldn't the field width of 2 reserve only 2 characters for the whole of j, including the 3 decimal places, so that the digit before the decimal point, i.e. 9, disappears and only 01 is printed?
Where am I going wrong?

Let's quote the man page:
Field Width
In no case does a nonexistent or small field width cause truncation of a field; if the result of a conversion is wider than the field width, the field is expanded to contain the conversion result.
and
The precision
This gives the minimum number of digits to appear for d, i, o, u, x, and X
conversions, the number of digits to appear after the radix character for
a, A, e, E, f, and F conversions [...]
The field width is too small, but fields are never truncated, so that doesn't matter.
The part after the period is the precision, and you asked for three digits after the decimal point.
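A small sketch showing both rules in action (the outputs in the comments are what a conforming implementation prints):

#include <stdio.h>

int main(void)
{
    float j = 9.01f;
    printf("%2.3f\n", j);  // "9.010" - the result is 5 characters wide, so width 2 is expanded
    printf("%9.3f\n", j);  // "    9.010" - width 9 is large enough, so spaces pad on the left
    return 0;
}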


What are the proper rules for printf conversions?

I have this function call for printf:
printf("%-+05.6d", 133);
With this output:
+000133
I'm having trouble understanding how the printf flags interact with each other, since the rules for them seem to conflict.
According to the man(man 3 printf):
0
The value should be zero padded. For d, i, o, u, x, X, a, A, e, E, f, F, g, and G conversions, the converted value is padded on the left with zeros rather than blanks. If the 0 and - flags both appear, the 0 flag is ignored. If a precision is given with a numeric conversion (d, i, o, u, x, and X), the 0 flag is ignored. For other conversions, the behavior is undefined.
So according to this, the 0 flag should be ignored since there is both a '-' flag and a precision specified, yet it is still printing 0's.
According to the man for diouxX conversions:
The precision, if any, gives the minimum number of digits that must appear; if the converted value requires fewer digits, it is padded on the left with zeros.
Given both of these rules, it would seem the specific diouxX conversion rule applies here, since 0s are still being printed, but then why does the man page say the 0 flag gets ignored?
Since precision is applied here, shouldn't the '0' flag be ignored and padded with spaces instead?
You have a mess. The 0 is ignored, try it:
printf("%-+5.6d", 133);
Output
+000133
The problem is that you have multiple overriding flags specified. You correctly cite that - (left-justified) overrides the 0. The + flag forces a sign (+ or -) to be printed before the number in all cases.
The next conflict is 5.6. You specify a field-width modifier of 5, but then specify the precision as 6. For d, i, o, u, x, and X conversions the precision specifies the minimum number of digits that will appear (see C11 Standard §7.21.6.1 The fprintf function). So while not specifically stated, the precision being greater than the field width effectively overrides the field width, resulting in 6 digits plus the + sign being printed for 133.
So the crux of the issue is "Why, if - overrides the 0 flag, are 0s still printed?" Here there are some technical hairs to be split. Since you have specified - (left-justified) and both a field width and a precision, the 0 is ignored, but you have stated that 6 digits shall be printed and that the result shall be left-justified. So you get 0s from the precision, and the total number of digits is left-justified, with an explicit +/- provided for the sign.
Remove the precision and you will see the effect of - overriding 0:
printf("%-+05.d", 133);
Output:
+133
(there is a space following the final 3 to make a total field width of 5)
(note: can you rely on this being completely portable? Probably not; there is likely some implementation-defined leeway left in the standard for flag-sequence interpretation that may result in slightly different ordering of flags with some compilers)
Based on the documentation that you quote, the 0 in the format is ignored, meaning it has no effect on whether there are zeros in the output. Then, you provide a precision of .6 with an integer format, which means the result must show 6 or more digits. The zeros you see are needed to fill up that minimum number of digits.
If the format did not include - or .6, the 05 would mean that the entire number would take up at least 5 characters, padded with zeros, so the output would be +0133.
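A short sketch putting the variants side by side may help; the expected outputs in the comments assume a conforming implementation:

#include <stdio.h>

int main(void)
{
    printf("[%-+05.6d]\n", 133);  // [+000133]  precision 6 forces the zeros; the 0 flag is ignored
    printf("[%-+5.6d]\n", 133);   // [+000133]  identical output: dropping the 0 changes nothing
    printf("[%-+8.6d]\n", 133);   // [+000133 ] width 8, left-justified, padded with a space
    printf("[%+08d]\n", 133);     // [+0000133] no precision, so the 0 flag pads to the width
    return 0;
}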

C printf difference between 0 flag & width attribute and precision flag

I'm currently learning the printf function of libc and I don't understand the difference between:
printf("Test : %010d", 10);
using the 0 flag and 10 as width specifier
and
printf("Test : %.10d", 10);
using 10 as precision specifier
Both produce the same output: Test : 0000000010
We'll start with the docs for printf() and I'll highlight their relevant bits.
First 0 padding.
`0' (zero)
Zero padding. For all conversions except n, the converted value is padded on the left with zeros rather than blanks. If a precision is given with a numeric conversion (d, i, o, u, x, and X), the 0 flag is ignored.
And then precision.
An optional precision, in the form of a period . followed by an optional digit string. If the digit string is omitted, the precision is taken as zero. This gives the minimum number of digits to appear for d, i, o, u, x, and X conversions, the number of digits to appear after the decimal-point for a, A, e, E, f, and F conversions, the maximum number of significant digits for g and G conversions, or the maximum number of characters to be printed from a string for s conversions.
%010d says to zero-pad to a minimum width of 10 digits. No problem there.
%.10d", because you're using %d, says the minimum number of digits to appear is 10. So the same thing as zero padding. %.10f would behave more like you expected.
I would recommend you use %010d to zero pad. The %.10d form is a surprising feature that might confuse readers. I didn't know about it and I'm surprised it isn't simply ignored.
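For comparison, here is a quick sketch of %010d, %.10d and the %.10f form mentioned above (outputs per a conforming libc):

#include <stdio.h>

int main(void)
{
    printf("%010d\n", 10);    // 0000000010    zero-padded to a field width of 10
    printf("%.10d\n", 10);    // 0000000010    precision = minimum digits for %d
    printf("%.10f\n", 10.0);  // 10.0000000000 precision = digits after the decimal point
    return 0;
}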
Both formats produce the same output for positive numbers, but the output differs for negative numbers greater than -1000000000:
printf("Test : %010d", -10); produces -000000010
whereas
printf("Test : %.10d", -10); produces -0000000010
Format %010d pads the output with leading zeroes up to a width of 10 characters.
Format %.10d pads the converted number with leading zeroes up to 10 digits.
The second form is useful if you want to produce no output for value 0 but otherwise produce the normal conversion like %d:
printf("%.0d", 0); // no output
printf("%.0d", 10); // outputs 10
Also note that the initial 0 in the first form is a flag: it can be combined with other flags in any order as in %0+10d which produces +000000010 and it can be used with an indirect width as in printf("%0*d", 10, 10); which produces 0000000010.
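Written out as a runnable sketch, those variants look like this (outputs assume a conforming libc):

#include <stdio.h>

int main(void)
{
    printf("%0+10d\n", 10);           // +000000010  the flags may appear in any order
    printf("%0*d\n", 10, 10);         // 0000000010  field width 10 supplied as an int argument
    printf("[%.0d][%.0d]\n", 0, 10);  // [][10]      precision 0 suppresses a zero value
    return 0;
}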
There's no difference besides maybe a purely conceptual one.
In the first case you are just filling the blank area with completely independent padding 0 characters. In the second case these zeros are leading zeros created when converting your argument value. (This is admittedly very contrived.)
In any case these zeros look, smell and quack the same.
However, in the general case there is one obscure situation where precision behaves differently from a zero-padded field width: printing a zero value with zero width or precision. With zero precision, a zero value is simply not printed at all; with zero field width, it appears as usual
printf("%00d\n", 0); // prints '0'
printf("%.0d\n", 0); // prints nothing
Obviously this is also a very contrived situation, since no padding occurs in this case.
In your second case you probably expected 10.0000000000 - but %d is only for integers. The specification says:
For integer specifiers (d, i, o, u, x, X): precision specifies the minimum number of digits to be written.
(The precision is the part starting with ., so in your case 10.)
So with %.10d you asked for at least 10 digits to express the two-digit number, so it is completed with 8 leading zeroes.
It means that both %010d and %.10d will produce the same result.

How to control the number of digits to appear after the decimal-point character for a double variable?

I want to print n digits after the decimal point when printing a number of type double. However, the integer n must be obtained from the user using scanf().
double pi = acos(-1);
int n;
printf("\nEnter the number of decimal digits required : ");
scanf("%d",&n);
Now, how to use printf() to print n number of decimal digits of pi?
Quoting C11, chapter §7.21.6.1/p4, for the precision option,
Each conversion specification is introduced by the character %. After the %, the following
appear in sequence:
An optional precision that gives [...] the number of digits to appear after the decimal-point
character for a, A, e, E, f, and F conversions, [...] The precision takes the form of a period (.) followed either by an
asterisk * (described later) or by an optional decimal integer; [...]
and, in paragraph 5,
As noted above, a field width, or precision, or both, may be indicated by an asterisk. In
this case, an int argument supplies the field width or precision. The arguments
specifying field width, or precision, or both, shall appear (in that order) before the
argument (if any) to be converted. [...]
So, you can use the format
printf("%.*f", precision, variable);
like
printf("%.*f", n, pi);
to use the precision n taken from user.
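Putting it together, a minimal complete sketch (the check on scanf's return value is added defensively; linking with -lm may be needed on some systems):

#include <math.h>
#include <stdio.h>

int main(void)
{
    double pi = acos(-1);
    int n;

    printf("\nEnter the number of decimal digits required : ");
    if (scanf("%d", &n) == 1 && n >= 0)
        printf("%.*f\n", n, pi);  // precision supplied by n at run time
    return 0;
}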

How can variable field width be implemented with printf()?

The question is:
How can variable field width be implemented using printf()? That is, instead of %8d, the width should be specified at run time.
I came across some C code on the Internet based on the question above, but as I am new to C programming, I haven't been able to make heads or tails of it.
I am posting the code below:
#include <stdio.h>
int main()
{
const char text[] = "Hello world";
int i;
for ( i = 1; i < 12; ++i )
{
printf("\"%.*s\"\n", i, text);
}
return 0;
}
First of all, let me tell you: the code you have shown is about controlling the precision, not the field width. In the shortened form
%A.B<conversion specifier>
A denotes the field width and B the precision.
Now, quoting the C11 standard, chapter §7.21.6.1, fprintf() (emphasis mine)
Each conversion specification is introduced by the character %. After the %, the following
appear in sequence:
[..]
An optional precision that gives the minimum number of digits to appear for the d, i,
o, u, x, and X conversions, the number of digits to appear after the decimal-point
character for a, A, e, E, f, and F conversions, the maximum number of significant
digits for the g and G conversions, or the maximum number of bytes to be written for s conversions. The precision takes the form of a period (.) followed either by an
asterisk * (described later) or by an optional decimal integer; if only the period is
specified, the precision is taken as zero. If a precision appears with any other
conversion specifier, the behavior is undefined.
and
As noted above, a field width, or precision, or both, may be indicated by an asterisk. In
this case, an int argument supplies the field width or precision. [...]
So, in your case,
printf("\"%.*s\"\n", i, text);
the precision will be supplied by i which can hold different values at run-time.
The complete format (broken down into separate lines for ease of readability):
%
<Zero or more flags>
<optional minimum field width>
<optional precision>
<optional length modifier>
<A conversion specifier character>
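To answer the original question directly: the field width, not just the precision, can also be supplied with an asterisk. A minimal sketch using %*d:

#include <stdio.h>

int main(void)
{
    int width;
    for (width = 4; width <= 8; ++width)
        printf("[%*d]\n", width, 42);  // field width taken from the int argument 'width'
    return 0;
}

which prints 42 right-justified in fields of 4 through 8 characters.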

Leading zeros default behaviour with ISO C99 printf ("%Nd")?

I've just spotted the following in the C99 ISO standard, 7.19.6.1 The fprintf function, subsection 6, detailing the conversion flags, specifically the 0 flag:
0: For d, i, o, u, x, X, a, A, e, E, f, F, g, and G conversions, leading zeros (following any indication of sign or base) are used to pad to the field width rather than performing space padding, except when converting an infinity or NaN.
So far so good, I know that the following lines will produce the shown output:
printf ("%5d\n", 7); // produces " 7"
printf ("%05d\n",7); // produces "00007"
However, in subsection 8 detailing the conversion modifiers, I see:
d,i: The int argument is converted to signed decimal in the style [−]dddd. The precision specifies the minimum number of digits to appear; if the value being converted can be represented in fewer digits, it is expanded with leading zeros.
That's plainly not the case since the default behaviour is to pad with spaces, not zeroes. Or am I misreading something here?
You're confusing precision and field width:
printf("%.5i", 1); // prints "00001", precision = 5
printf("%5i", 1); // prints " 1", field width = 5
printf("%5.3i", 1); // prints " 001", field width = 5, precision = 3
