int main()
{
    int x = 4;
    int y = 987634;
    printf("%f %f", x, y);
}
On compiling this code I get the output 0.000000 0.000000. Shouldn't there be a type promotion of x and y to floating-point numbers? Shouldn't the output be 4.000000 and 987634.000000?
Can anyone help me with this? Thanks in advance.
Conversions happen to arguments of functions whose prototype declares specific parameters. The prototype for printf() does not declare specific parameters after the first one
int printf(const char *format, ...);
so no arguments after the first one get automatically converted, except as defined by the "default argument promotions" (basically, any integer type with a rank lower than int is converted to int, and any floating-point type with a rank lower than double is converted to double; thank you, Pascal Cuoq). You need to convert them explicitly yourself with a cast operation
printf("%f %f\n", (double)x, (double)y);
Ohhh ... and you really, really, really should include the header that has the prototype in question (under penalty of undefined behaviour):
#include <stdio.h>
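Putting both fixes together, a corrected version of the question's program might look like this (a minimal sketch):
#include <stdio.h>   /* declares printf() */

int main(void)
{
    int x = 4;
    int y = 987634;
    /* the casts make the arguments match the %f specifiers */
    printf("%f %f\n", (double)x, (double)y);   /* 4.000000 987634.000000 */
    return 0;
}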
The compiler has no idea that your printf format string is going to interpret the arguments as floats. It passes them straight through as ints.
Because printf is a varargs function, it's really up to you to pass parameters that make sense.
Try printf("%i %i",x,y); to print integers as 4 987634. For printf formatting details see http://www.cplusplus.com/reference/cstdio/printf/.
ints and floats are stored differently in memory, but your compiler does not know that you want floats. You need to convert them explicitly.
printf("%f %f",(float)x,(float)y);
Variadic functions (printf() is one) aren't type-checked, because variadic signatures don't contain any type information. Thus, there's no implicit conversion of the arguments. You have to do it manually:
printf("%f %f", (double)x, (double)y);
Related
void main()
{
    clrscr();
    float f = 3.3;
    /* In printf() I intentionally put the %d format specifier to see
       what type of output I may get */
    printf("value of variable a is: %d", f);
    getch();
}
In effect, %d tells printf to look in a certain place for an integer argument. But you passed a float argument, which is put in a different place. The C standard does not specify what happens when you do this. In this case, it may be there was a zero in the place printf looked for an integer argument, so it printed “0”. In other circumstances, something different may happen.
Using an invalid format specifier to printf invokes undefined behavior. This is specified in section 7.21.6.1p9 of the C standard:
If a conversion specification is invalid, the behavior is undefined. If any argument is not the correct type for the corresponding conversion specification, the behavior is undefined.
What this means is that you can't reliably predict what the output of the program will be. For example, the same code on my system prints -1554224520 as the value.
As to what's most likely happening, the %d format specifier is looking for an int as a parameter. Assuming that an int is passed on the stack and that an int is 4 bytes long, the printf function looks at the next 4 bytes on the stack for the value given. Many implementations don't pass floating point values on the stack but in registers instead, so it instead sees whatever garbage values happen to be there. Even if a float is passed on the stack, a float and an int have very different representations, so printing the bytes of a float as an int will most likely not give you the same value.
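To see concretely how different the two representations are, here is a small sketch that prints the same 32 bits once as a float and once as an unsigned int (it assumes float and unsigned int are both 32 bits wide; in C, reading a union member other than the one last written reinterprets the bytes):
#include <stdio.h>

int main(void)
{
    /* assumes 32-bit float and unsigned int */
    union { float f; unsigned int u; } pun;
    pun.f = 4.0f;
    printf("as float: %f  as unsigned int: %u\n", pun.f, pun.u);
    return 0;
}
On an IEEE-754 machine this prints as float: 4.000000 as unsigned int: 1082130432, because the bit pattern of 4.0f is 0x40800000.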
Let's look at a different example for a moment. Suppose I write
#include <string.h>

int main(void)
{
    char buf[10];
    float f = 3.3;
    memset(buf, 'x', f);
}
The third argument to memset is supposed to be an integer (actually a value of type size_t) telling memset how many characters of buf to set to 'x'. But I passed a float value instead. What happens? Well, the compiler knows that the third argument is supposed to be an integer, so it automatically performs the appropriate conversion, and the code ends up setting the first three (three point zero) characters of buf to 'x'.
(Significantly, the way the compiler knew that the third argument of memset was supposed to be an integer was based on the prototype function declaration for memset which is part of the header <string.h>.)
Now, when you called
printf("value of variable f is: %d", f);
you might think the same thing happens. You passed a float, but %d expects an int, so an automatic conversion will happen, right?
Wrong. Let me say that again: Wrong.
The perhaps surprising fact is, printf is different. printf is special. The compiler can't necessarily know what the right types of the arguments passed to printf are supposed to be, because that depends on the details of the %-specifiers buried in the format string. So there are no automatic conversions to just the right type. It's your job to make sure that the types of the arguments you actually pass are exactly right for the format specifiers. If they don't match, the compiler does not step in with a corresponding conversion; what happens instead is that you get crazy, wrong results.
(What does the prototype function declaration for printf look like? It literally looks like this: extern int printf(const char *, ...);. Those three dots ... indicate a variable-length argument list or "varargs", they tell the compiler it can't know how many more arguments there are, or what their types are supposed to be. So the compiler performs a few basic conversions -- such as upconverting types char and short int to int, and float to double -- and leaves it at that.)
I said "The compiler can't necessarily know what the right types of the arguments passed to printf are supposed to be", but these days, good compilers go the extra mile and try to figure it out anyway, if they can. They still won't perform automatic conversions (they're not really allowed to, by the rules of the language), but they can at least warn you. For example, I tried your code under two different compilers. Both said something along the lines of warning: format specifies type 'int' but the argument has type 'float'. If your compiler isn't giving you warnings like these, I encourage you to find out if those warnings can be enabled, or consider switching to a better compiler.
Try
printf("... %f",f);
That's how you print floating-point numbers.
Maybe you only want to print a certain number of digits of f, e.g.:
printf("... %.3f", f);
That will print your float number with 3 digits after the decimal point.
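For instance, this small sketch shows the effect of the precision field:
#include <stdio.h>

int main(void)
{
    float f = 3.14159f;
    printf("%f\n", f);     /* default: 3.141590 (six digits after the dot) */
    printf("%.3f\n", f);   /* precision 3: 3.142 (rounded) */
    return 0;
}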
Please read through this list:
%c - Character
%d or %i - Signed decimal integer
%e - Scientific notation (mantissa/exponent) using e character
%E - Scientific notation (mantissa/exponent) using E character
%f - Decimal floating point
%g - Uses the shorter of %e or %f
%G - Uses the shorter of %E or %f
%o - Unsigned octal
%s - String of characters
%u - Unsigned decimal integer
%x - Unsigned hexadecimal integer
%X - Unsigned hexadecimal integer (capital letters)
%p - Pointer address
%n - Nothing printed; the number of characters written so far is stored into the int that the argument points to
The code is printing a 0, because you are using the format tag %d, which represents Signed decimal integer (http://devdocs.io).
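To see several of these specifiers side by side, here is a quick sketch:
#include <stdio.h>

int main(void)
{
    int n = 255;
    double d = 0.000123;
    printf("%d %o %x %X\n", n, n, n, n);   /* 255 377 ff FF */
    printf("%f %e %g\n", d, d, d);         /* 0.000123 1.230000e-04 0.000123 */
    return 0;
}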
Could you please try:
void main() {
    clrscr();
    float f = 3.3;
    /* %f is the matching specifier for a float argument
       (the float is promoted to double when passed to printf) */
    printf("value of variable a is: %f", f);
    getch();
}
I tried to print a character as a float in printf and got the output 0. What is the reason for this?
Also:
char c='z';
printf("%f %X",c,c);
is giving some weird output for the hexadecimal value, while the output is correct when I do this:
printf("%X",c);
Why is it so?
The printf() function is a variadic function, which means that you can pass a variable number of arguments of unspecified types to it. This also means that the compiler doesn't know what type of arguments the function expects, and so it cannot convert the arguments to the correct types. (Modern compilers can warn you if you get the arguments wrong to printf, if you invoke it with enough warning flags.)
For historical reasons, you cannot pass an integer argument of smaller rank than int, or a floating type of smaller rank than double, to a variadic function. A float will be converted to double and a char will be converted to int (or unsigned int on bizarre implementations) through a process called the default argument promotions.
When printf parses its parameters (arguments are passed to a function, parameters are what the function receives), it retrieves them using whatever method is appropriate for the type specified by the format string. The "%f" specifier expects a double. The "%X" specifier expects an unsigned int.
If you pass an int and printf tries to retrieve a double, you invoke undefined behaviour.
If you pass an int and printf tries to retrieve an unsigned int, you invoke undefined behaviour.
Undefined behaviour may include (but is not limited to) printing strange values, crashing your program or (the most insidious of them all) doing exactly what you expect.
Source: n1570 (the final public draft of the C11 standard)
You need to use a cast operator like this:
char c = 'z';
printf("%f %X", (float)c, c);
or
printf("%f %X", (double)c, c);
In Xcode, if I do not do this, I get the warning:
Format specifies type 'double' but the argument has type 'char', and the output is 0.000000.
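A complete corrected version of the snippet might look like this (a minimal sketch; 'z' is 122, i.e. 0x7A, in ASCII):
#include <stdio.h>

int main(void)
{
    char c = 'z';
    /* the cast gives %f the double it expects; c is promoted to int for %X */
    printf("%f %X\n", (double)c, c);   /* 122.000000 7A on ASCII systems */
    return 0;
}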
I tried to print a character as a float in printf and got the output 0. What is the reason for this?
The question is, what value did you expect to see? Why would you expect something other than 0?
The short answer to your question is that the behavior of printf is undefined if the type of the argument doesn't match the conversion specifier. The %f conversion specifier expects its corresponding argument to have type double; if it doesn't, all bets are off, and the exact output will vary.
To understand the floating point issue, consider reading: http://en.wikipedia.org/wiki/IEEE_floating_point
As for the hexadecimal, let me guess: the output was something like 99?
This is because of encodings. The machine has to represent information in some format, and usually that format entails either giving meanings to certain bits in a number, or having a table mapping symbols to numbers, or both.
Floating-point numbers are often represented as a (sign, mantissa, exponent) triplet packed into a 32- or 64-bit number; characters are usually represented in an encoding named ASCII, which establishes which number corresponds to each character you type.
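As a small illustration of that character-to-number mapping (assuming an ASCII execution character set):
#include <stdio.h>

int main(void)
{
    char c = 'z';
    /* the same value shown as a character, in decimal, and in hex */
    printf("'%c' = %d = 0x%X\n", c, c, c);   /* 'z' = 122 = 0x7A */
    return 0;
}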
Because printf, like any function that works with varargs, e.g. int foobar(const char *fmt, ...), tries to interpret its parameters according to the types named in the format string.
If you say "%f" but pass c (a char), printf will try to read a double (the type %f actually expects, since a float argument is promoted to double).
You can read more here: var_arg (even if this is C++, it still applies).
Every time I run this program I get different and weird results. Why is that?
#include <stdio.h>

int main(void) {
    int a = 5, b = 2;
    printf("%.2f", a/b);
    return 0;
}
printf("%.2f", a/b);
The result of the division is again of type int, not float.
You are using the wrong format specifier, which leads to undefined behavior.
You need variables of type float (or a cast) to get the division you intended.
The right format specifier to print out an int is %d.
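A short sketch contrasting the two divisions:
#include <stdio.h>

int main(void)
{
    int a = 5, b = 2;
    printf("%d\n", a / b);            /* integer division: prints 2 */
    printf("%.2f\n", (double)a / b);  /* floating-point division: prints 2.50 */
    return 0;
}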
In your code, a and b are of type int, so the division is essentially an integer division, the result being an int.
You cannot use a mismatched format specifier: %f requires the corresponding argument to be of type double, and you need to use %d for an int.
FWIW, using the wrong format specifier invokes undefined behaviour.
From the C11 standard, §7.21.6.1 (fprintf()):
If any argument is not the correct type for the corresponding conversion specification, the behavior is undefined.
If you want floating-point division, you need to ask for it explicitly, either by
promoting one of the operands before the division to enforce floating-point division, the result of which will be of floating-point type:
printf("%.2f", (float)a/b);
or by using a floating-point type (float or double) for a and b.
You need to change the type to float or double.
Something like this:
printf("%.2f", (float)a/b);
The %f format specifier is for double (a float argument is promoted to double). Using the wrong format specifier leads to undefined behavior. Dividing an int by an int gives you an int.
Use this instead of your printf()
printf("%.2lf",(double)a/b);
Here is part of my code:
float a = 12.5;
printf("%d\n", a);
printf("%d\n", (int)a);
printf("%d\n", *(int *)&a);
When I compile and run this on Windows, I get:
0
12
1094713344
and when I run it on Linux, I get:
-1437851864
12
1094713344
The -1437851864 changes every time I execute it.
My question is: how does the printf function work on Linux?
It works very well, but why are you passing the wrong sort of data to it? The %d specifier expects an int, but you're passing something else. Bad idea.
If float and int are differently sized across the varargs barrier, this is undefined behavior. And since float is typically promoted to double with varargs calls, if your int is smaller than your double this will break.
In short, this is really bad and broken code. Don't do this.
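As a quick way to check the sizes involved on your own platform, a minimal sketch (on typical 64-bit systems int is 4 bytes and double is 8):
#include <stdio.h>

int main(void)
{
    printf("int: %zu bytes, float: %zu bytes, double: %zu bytes\n",
           sizeof(int), sizeof(float), sizeof(double));
    return 0;
}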
To print a floating point number in C, you should do:
float a = 12.5;
printf("%f\n", a);
As has been mentioned, passing arguments with types not matching the format string invokes undefined behaviour, so the language standard doesn't place any restrictions on what
float a = 12.5;
printf("%d\n", a);
actually does.
To find out what it does, you'd need to analyse your implementation, or at least the assembly the compiler produced for that code.
A common way of translating that code is to pass the promoted (to double) float argument in a floating-point register and to tell printf how many arguments are passed in floating-point registers. But since the format tells printf to look for an int, it doesn't look in a floating-point register for it; it looks in another register. So the printed value is whatever happened to be in that register when printf was called.
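If what you actually wanted was the third behaviour, reinterpreting the bytes of the float as an int, a well-defined way to do it is memcpy rather than *(int *)&a (this sketch assumes int and float are both 32 bits):
#include <stdio.h>
#include <string.h>

int main(void)
{
    float a = 12.5f;
    int bits;                        /* assumes 32-bit int and float */
    memcpy(&bits, &a, sizeof bits);  /* defined, unlike the pointer cast */
    printf("%d\n", bits);            /* the raw IEEE-754 bit pattern of a */
    return 0;
}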