format string in printf [closed] - c

Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
Questions asking for code must demonstrate a minimal understanding of the problem being solved. Include attempted solutions, why they didn't work, and the expected results. See also: Stack Overflow question checklist
Closed 9 years ago.
What is the output?
main()
{
    float a=4;
    int i=2;
    printf("%f %d",i/a,i/a);
    printf("%d %f",i/a,i/a);
}
The answer I'm receiving is: 0.500000 00 0.000000
Reason: In the first printf, %f corresponds to i/a = 2/4 = int/float, so an implicit conversion is done: i becomes float and the result is a float (i.e. 0.500000).
The default precision of %f is 6, so six digits are printed after the decimal point. The next specifier, %d, also gets i/a = 2/4 = 0.500000, but %d prints only an integer, so 0 is printed and the fractional part is discarded.
In the next printf, %d for i/a = 2/4 printing 0 follows the same idea; however, the last result, %f for i/a = 2/4 printing 0.000000, I did not understand.

This is plain undefined behaviour: you are specifying the wrong conversion specifier to printf. In both calls the i/a expression is promoted to double, yet for one argument you tell printf it is an int. The C99 draft standard, in section 7.19.6.1 The fprintf function (to which printf's section refers back for the format string), says in paragraph 9:
If a conversion specification is invalid, the behavior is undefined.[...]
You should enable warnings; both gcc and clang will warn about this without cranking them up at all. With gcc I obtain the following message:
warning: format ‘%d’ expects argument of type ‘int’, but argument 3 has type ‘double’ [-Wformat]

The i/a expression will always evaluate to float, since one of the operands is a float. While printing, we are using the specifiers %d and %f. So when we use %f it will (should) always print 0.5, and when we use %d the behaviour is undefined.
On Linux (Ubuntu) with the gcc compiler I get the following output (a \n was added after the first printf for clarity):
0.500000 2047229448
899608576 0.500000

Undefined behaviour: every argument passed to printf here is a floating-point value, because in i/a the int operand is converted to float (and the float result is then promoted to double as a variadic argument). So you must use %f exclusively in these printf calls.
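For reference, a minimal corrected version of the program from the question, with every conversion specifier matching its argument (a sketch; the casts to int are one way to get the truncated value the %d conversions seemed to be after):

#include <stdio.h>

int main(void)
{
    float a = 4;
    int i = 2;

    /* i is converted to float for the division; the float result is
       promoted to double when passed to the variadic printf. */
    printf("%f %d\n", i / a, (int)(i / a));   /* prints: 0.500000 0 */
    printf("%d %f\n", (int)(i / a), i / a);   /* prints: 0 0.500000 */
    return 0;
}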

Related

Why is the result of -5/2 being printed as 0? [duplicate]

This question already has answers here:
What happens when I use the wrong format specifier?
(2 answers)
Closed 1 year ago.
Can you tell me why this statement in C gives 0 as output:
printf("Hello %f",-5/2);
whereas
printf("Hello %d",-5/2);
gives -2 as output?
Division of two integers produces an integer result (an int in this case). The %f format specifier expects an argument of type double. Using the wrong format specifier for a given argument triggers undefined behavior, which in this case gives you an incorrect result.
If you want a floating point result, at least one of the operands to / must have a floating point type, i.e.
printf("Hello %f",-5.0/2);

C's format specifier

I have this code in C:
#include <stdio.h>
void main(void)
{
    int a=90;
    float b=4;
    printf("%f",90%4);
}
It gives the output 0.0000, and I am unable to understand why.
I know that 90%4 returns 2 and that the specifier given is %f, which is for double, but what I expected is that it would give an error; instead it shows 0.0000 as output.
Can someone please explain why?
The type of 90%4 will be int.
The behaviour on using %f as the format specifier for an int is undefined.
The output could be 2, it could be 0. The compiler could even eat your cat.
This discrepancy comes about because the compiler and library do not communicate regarding types. What happens is that your C compiler observes that printf is a variadic function taking any number of arguments, so the extra arguments get passed per their individual types. If you're lucky, it also parses the format string and warns you that the type doesn't match:
$ gcc -Wformat -o fmterr fmterr.c
fmterr.c: In function ‘main’:
fmterr.c:6:2: warning: format ‘%f’ expects argument of type ‘double’,
but argument 2 has type ‘int’ [-Wformat=]
printf("%f",90%4);
^
But this is still just a warning; you might have replaced printf with a function with different behaviour, as far as the compiler is concerned. At run time, floating point and integer arguments may not even be placed in the same place, and certainly don't have the same format, so the particular result of 0.0 is not guaranteed. What really happens may be related to the platform ABI. You can get specified behaviour by changing the printf argument to something like (float)(90%4).
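As a complete, minimal sketch of that last suggestion (not the original poster's code), the cast gives printf an argument whose type matches the conversion:

#include <stdio.h>

int main(void)
{
    /* 90 % 4 has type int; the cast produces a float, which is then
       promoted to double, matching what %f expects. */
    printf("%f\n", (float)(90 % 4));   /* prints 2.000000 */
    return 0;
}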
printf is a variadic function. Such functions are obscure, with no type safety worth mentioning. All they do is implicitly promote small integer types to int and float to double, but that is it; they cannot do more subtle things such as an integer-to-float conversion.
So printf by itself can't tell what you passed to it; it relies on the programmer to specify the correct type. Integer literals such as 90 are of type int. With the %f specifier you told printf that you passed a double, but you actually passed an int, so you invoke undefined behavior, meaning that anything can happen: incorrect prints, correct prints, a program crash, etc.
An explicit cast (double)(90%4) will solve the problem.
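To illustrate why the cast is needed here but not for ordinary functions, a short sketch (takes_double is a hypothetical helper used only for this illustration): with a prototyped, fixed-argument function the compiler converts the int argument to double automatically, while for the variadic printf it cannot.

#include <stdio.h>

static void takes_double(double d)     /* hypothetical fixed-argument function */
{
    printf("%f\n", d);
}

int main(void)
{
    takes_double(90 % 4);              /* int implicitly converted to double: prints 2.000000 */
    printf("%f\n", (double)(90 % 4));  /* variadic call: the conversion must be explicit */
    return 0;
}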
%f expects a double, but you pass an int (90%4 = 2) to printf. This leads to undefined behaviour and the output can be anything.
You need to cast explicitly:
printf("%f",(double)(90%4));
Don't try the following, as the compiler will generate an error (as pointed out by @chux), because the % operator does not accept floating-point operands:
printf("%f",90%(double)4);
In short: there is no error checking for format specifiers. If your format is looking for a double, then whatever you pass as an argument (or even if you pass nothing) will be interpreted as a double.
By default, 90%4 gives an integer.
If you print an integer with the %f specifier, the behaviour is undefined; here it happens to print 0.
For example, printf("%f", 2); may print 0.000000.
If you cast the result to float, you will get 2.000000:
printf("%f",(float) (90%4)); will print 2.000000
Hope that clarifies it.

Why does the division of two integers return 0.00? [duplicate]

This question already has answers here:
printf("%f", aa) when aa is of type int [duplicate]
(2 answers)
Closed 7 years ago.
Every time I run this program I get different and weird results. Why is that?
#include <stdio.h>
int main(void) {
    int a = 5, b = 2;
    printf("%.2f", a/b);
    return 0;
}
printf("%.2f", a/b);
The result of the division is again of type int, not float.
You are using the wrong format specifier, which leads to undefined behavior.
You need variables of floating-point type to perform the operation you want.
The right format specifier to print an int is %d.
In your code, a and b are of type int, so the division is essentially an integer division, the result being an int.
You can never use a mismatched format specifier: %f requires the corresponding argument to be of type double, and you need to use %d for an int.
FWIW, using the wrong format specifier invokes undefined behaviour.
From C11 standard, chapter §7.21.6.1, fprintf()
If any argument is not the correct type for the corresponding conversion specification, the behavior is undefined.
If you want a floating point division, you need to do so explicitly by either
promoting one of the variables before the division to enforce a floating-point division, the result of which will be of floating-point type:
printf("%.2f", (float)a/b);
use float as the type for a and b, as in the sketch below.
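A minimal sketch of that second option (assuming the same values as in the question):

#include <stdio.h>

int main(void)
{
    float a = 5.0f, b = 2.0f;   /* floating-point operands from the start */
    printf("%.2f\n", a / b);    /* the float result is promoted to double: prints 2.50 */
    return 0;
}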
You need to change the type to float or double.
Something like this:
printf("%.2f", (float)a/b);
The %f format specifier is for floating-point values (a float argument is promoted to double). Using the wrong format specifier leads you into undefined behavior. The division of an int by an int gives you an int.
Use this instead of your printf()
printf("%.2lf",(double)a/b);

printf giving wrong output [closed]

Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
Questions asking for code must demonstrate a minimal understanding of the problem being solved. Include attempted solutions, why they didn't work, and the expected results. See also: Stack Overflow question checklist
Closed 9 years ago.
#include<stdio.h>
#define square(x) x*x
void main()
{
    int i;
    i = 8 / square(4);
    printf("%d %d", i, 8/square(4));
}
This gives the output: 8 8
but if I write the code below:
#include<stdio.h>
#define square(x) x*x
void main()
{
    float i;
    i = 8 / square(4);
    printf("%f %f", i, 8/square(4));
}
it gives the output: 8.000000 0.000000
Why is that? Please explain.
The problems are not just with the format specifier but also the way you have defined your macro. It should be:
#define square(x) ((x)*(x))
Also, macros are not type safe. With your original macro, 8/square(4) expands to 8/4*4, which evaluates left to right as (8/4)*4 = 8; that is why the first program prints 8. With the fixed macro the square of 4 is 16, and 8/16 is 0.5, which integer arithmetic truncates to 0. Now, if you cast your results you will see what is happening; for proper values this is how you should typecast:
printf("%d %d", (int)i, (int)(8/square(4)));
printf("\n%f %f", (float)i, (float)8/((float)square(4)));
Sample Output:
0 0
0.000000 0.500000
First of all correct this:
#define square(x) x*x
to
#define square(x) ((x)*(x))
for correct results after macro replacement.
Now, in your second program, as others explained, you are using the wrong format specifier %f to print an integer (8/square(4) evaluates to an integer), which is undefined behavior.
In that program the result is stored in float i, so it is converted to floating point and you get 8.000000 for the first conversion. The second printed value is wrong for the same reason as above.
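To make the macro issue concrete, here is a small sketch contrasting the two definitions (the names SQUARE_BAD and SQUARE_GOOD are illustrative only):

#include <stdio.h>

#define SQUARE_BAD(x)  x*x          /* unparenthesized, as in the question */
#define SQUARE_GOOD(x) ((x)*(x))

int main(void)
{
    /* 8/SQUARE_BAD(4) expands to 8/4*4, i.e. (8/4)*4 == 8 */
    printf("%d\n", 8 / SQUARE_BAD(4));     /* prints 8 */
    /* 8/SQUARE_GOOD(4) expands to 8/((4)*(4)), i.e. 8/16 == 0 */
    printf("%d\n", 8 / SQUARE_GOOD(4));    /* prints 0 (integer division) */
    /* With a floating-point operand the fraction is kept */
    printf("%f\n", 8.0 / SQUARE_GOOD(4));  /* prints 0.500000 */
    return 0;
}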
The first program is easy to understand, so I focus on the second only. You use %f for the second argument, which requires a floating-point value, while the C compiler treats 8/square(4) as an integer. This mismatch corrupts your result.
8/square(4) results in an int, and trying to print an integer using %f is undefined behaviour, so there is no use debugging the value you got in the second case.
If you are using the gcc compiler, then the command cc -E filename.c (which shows the preprocessed source, including the macro expansion) may clarify your doubts.
It is because you used float as the data type in the second program.
8/square(4) gives an integer result, and hence your output becomes wrong: you used %f to print an integer.
That is simple: %f means the argument must have type double, and it is printed with the default precision of six digits.

printf prints the float read by scanf instead of the integer parameter [closed]

As it currently stands, this question is not a good fit for our Q&A format. We expect answers to be supported by facts, references, or expertise, but this question will likely solicit debate, arguments, polling, or extended discussion. If you feel that this question can be improved and possibly reopened, visit the help center for guidance.
Closed 10 years ago.
The following program is supposed to read a float and then print an integer. But when I run it, it doesn't print the integer; instead it prints the value that I entered in scanf.
Can anyone explain the output?
#include<stdio.h>
void main()
{
    long x;
    float t;
    scanf("%f",&t);
    printf("%d\n",t);
    x=90;
    printf("%f\n",x);
    {
        x=1;
        printf("%f\n",x);
        {
            x=30;
            printf("%f\n",x);
        }
        x=9;
        printf("%f\n",x);
    }
}
t has type float, so printf("%d\n",t) invokes undefined behaviour, since %d expects an argument of type int. Anything can happen. (The same is true of printf("%f\n",x): %f expects a double, but the type of x is long int.)
I once answered one of those cases in detail; perhaps that's of some interest. The upshot is that in practice you can explain the observed behaviour by studying the anatomy of IEEE754 floating point numbers, and by knowing the sizes of integral types on your platform.
printf("%f\n",x);
x has the type long, but the %f converter expects a floating-point value (of type double). Normally, when you pass an integer to a function that expects a floating-point value, the value is silently converted. However, this only applies to functions with a fixed number of arguments and a prototype indicating what the type of each argument is. It does not apply to variadic functions such as printf, because the compiler doesn't have enough information to know what to convert the value to: the format string may only be analyzed at runtime.
The language specification leaves the behavior of this program undefined: it can do anything. What is probably happening in this case is that x is stored in some integer register, and t is stored in some floating-point register. Since the printf calls are looking for a floating-point value, the compiled code goes to look in the first floating-point register. If you had passed a floating-point value to printf, the argument would end up in that register. But you didn't, so the value in that register was the last value that was stored there: the value read by scanf.
A good compiler would warn you that you're doing something wrong. For example, here's what I get when I compile your code with gcc -O -Wall:
a.c:2: warning: return type of 'main' is not 'int'
a.c: In function 'main':
a.c:7: warning: format '%d' expects type 'int', but argument 2 has type 'double'
a.c:9: warning: format '%f' expects type 'double', but argument 2 has type 'long int'
a.c:12: warning: format '%f' expects type 'double', but argument 2 has type 'long int'
a.c:15: warning: format '%f' expects type 'double', but argument 2 has type 'long int'
a.c:18: warning: format '%f' expects type 'double', but argument 2 has type 'long int'
I recommend configuring your compiler to print such warnings and paying attention to them.
To make your program work, either pass a floating-point value where one is expected, or tell printf to expect an integer value. Either
printf("%f", (double)x);
or
printf("%ld", x);
You should use %d when printing an int variable, and %ld for a long, like this:
printf("%ld", x);
You are using %f.
x is a long integer, yet you're printing it as a float; and t is a float, yet you're printing it as an integer. Swap the %d and %f format specifiers in your printf calls (and use %ld rather than %d for the long x).
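Putting those fixes together, a corrected sketch of the first part of the program might look like this:

#include <stdio.h>

int main(void)
{
    long x;
    float t;

    if (scanf("%f", &t) == 1)   /* read a float, checking that the read succeeded */
        printf("%f\n", t);      /* %f matches: the float t is promoted to double */

    x = 90;
    printf("%ld\n", x);         /* %ld matches the long argument */
    return 0;
}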

Resources