I wrote this very simple and short piece of code, but it doesn't work: when I compile and execute it, the value returned from calculateCharges() prints as 0 when I'm expecting 2.
Can anybody explain why, please?
#include <stdio.h>
#include <stdlib.h>

float calculateCharges(float timeIn);

int main()
{
    printf("%d", calculateCharges(3.0));
    return 0;
}

float calculateCharges(float timeIn)
{
    float Total;
    if (timeIn <= 3.0)
        Total = 2.0;
    return Total;
}
There are at least three problems here, two of which should be easy to spot if you enable compiler warnings (the -Wall command-line option), and which lead to undefined behavior.
One is the wrong format specifier in your printf statement. You're printing a floating-point value with %d, the format specifier for a signed integer. The correct specifier is %f.
The other is the use of an uninitialized value. The variable Total is left uninitialized whenever the if branch in your function isn't taken, and using it in that case is undefined behavior.
It's most likely the wrong format specifier that caused the wrong output, but you should fix the second problem as well.
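For reference, a minimal corrected sketch addressing both points (one possible fix among several; here Total also gets a default value, and float constants are used throughout):

#include <stdio.h>

float calculateCharges(float timeIn);

int main(void)
{
    /* %f matches the float argument (promoted to double by printf) */
    printf("%f\n", calculateCharges(3.0f));
    return 0;
}

float calculateCharges(float timeIn)
{
    float Total = 0.0f;   /* initialized so every path returns a defined value */
    if (timeIn <= 3.0f)
        Total = 2.0f;
    return Total;
}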
The third problem has to do with floating point precision. Casting values between float and double may not be a safe round-trip operation.
Your 3.0 double constant is cast to float when passed to calculateCharges(). That value is then cast up to a double in the timeIn <= 3.0 comparison (to match the type of 3.0).
It's probably okay with a value like 3.0 but it's not safe in the general case. See, for example, this piece of code which exhibits the problem.
#include <stdio.h>

#define EPI 2.71828182846314159265359

void checkDouble(double x) {
    printf("double %s\n", (x == EPI) ? "okay" : "bad");
}

void checkFloat(float x) {
    printf("float %s\n", (x == EPI) ? "okay" : "bad");
}

int main(void) {
    checkFloat(EPI);
    checkDouble(EPI);
    return 0;
}
You can see from the output that treating it as a double is always okay, but not when you convert to float and lose precision:
float bad
double okay
Of course, the problem goes away if you ensure you always use and check against the correct constant types, such as by using 3.0F.
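As a sketch of that fix (EPI_F is a hypothetical float counterpart of the constant above, not part of the original code):

#include <stdio.h>

#define EPI_F 2.71828182846314159265359F   /* hypothetical float constant */

void checkFloat(float x) {
    /* both sides are now the same float value, promoted identically */
    printf("float %s\n", (x == EPI_F) ? "okay" : "bad");
}

int main(void) {
    checkFloat(EPI_F);   /* prints "float okay" */
    return 0;
}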
%d will print integers.
Total is a float, so it will not work.
You must use the proper specifier for a float.
(You should research that yourself, rather than have us give you the answer)
Related
I am a newbie to the C language. While learning about floating-point numbers today, I ran into the following problem.
float TEST = 3.0f;
printf("%x\n", TEST);
printf("%d\n", TEST);
first output:
9c9e82a0
-1667333472
second output:
61ea32a0
1642738336
As shown above, each execution outputs different results. I have read a lot about the IEEE 754 format and still don't understand the reason. Can anyone explain, or suggest keywords for me to study? Thank you.
-----------------------------------Edit-----------------------------------
Thank you for your replies. I know how to print the IEEE 754 bit pattern. However, as Nate Eldredge and chux-Reinstate Monica said, using %x and %d in printf here is undefined behavior. If there is no floating-point register on our device, how does it work? Is this described in the C99 specification?
Most of the time, when you call a function with the "wrong kind" (wrong type) of argument, an automatic conversion happens. For example, if you write
#include <stdio.h>
#include <math.h>

int main(void)
{
    printf("%f\n", sqrt(144));
    return 0;
}
this works just fine. The compiler knows (from the function prototype in <math.h>) that the sqrt function expects an argument of type double. You passed it the int value 144, so the compiler automatically converted that int to double before passing it to sqrt.
But this is not true for the printf function. printf accepts arguments of many different types, and as long as each argument is right for the particular % format specifier it goes with in the format string, it's fine. So if you write
double f = 3.14;
printf("%f\n", f);
it works. But if you write
printf("%d\n", f); /* WRONG */
it doesn't work. %d expects an int, but you passed a double. In this case (because printf is special), there's no good way for the compiler to insert an automatic conversion. So, instead, it just fails to work.
And when it "fails", it really fails. You don't even necessarily get anything "reasonable", like an integer representing the bit pattern of the IEEE-754 floating-point number you thought you passed. If you want to inspect the bit pattern of a float or double, you'll have to do that another way.
If what you really wanted to do was to see the bits and bytes making up a float, here's a completely different way:
#include <stdio.h>

int main(void) {
    float test = 3.14f;
    unsigned char *p = (unsigned char *)&test;
    size_t i;
    printf("bytes in %f:", test);
    for (i = 0; i < sizeof(test); i++) printf(" %02x", p[i]);
    printf("\n");
    return 0;
}
There are some issues here with byte ordering ("endianness"), but this should get you started.
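If the byte order matters for reading that dump, here is a minimal self-contained sketch (an addition, not part of the original answer) that probes it at run time:

#include <stdio.h>

int main(void)
{
    unsigned int probe = 1;
    unsigned char *p = (unsigned char *)&probe;
    /* on a little-endian machine the least significant byte is stored first */
    printf("%s-endian\n", (*p == 1) ? "little" : "big");
    return 0;
}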
To print the hex representation of the float (i.e. how it is represented in memory):
float TEST = 3.0f;
int y = 0;
memcpy(&y, &TEST, sizeof(TEST));   /* needs <string.h>; copy the float's bytes */
printf("%x\n", (unsigned)y);
printf("%d\n", y);
or
union
{
    float TEST;
    int y;
} uf = {.y = 0};

uf.TEST = 3.0f;
printf("%x\n", (unsigned)uf.y);
printf("%d\n", uf.y);
Both examples assume sizeof(float) <= sizeof(int) (if the sizes are not equal, the integer needs to be zeroed first, as above).
And the result (same for both):
40400000
1077936128
As you can see, it is completely different from yours.
https://godbolt.org/z/Kr61x6Kv3
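If you prefer the size assumption checked rather than merely stated, a C11 _Static_assert can enforce it at compile time; a minimal sketch:

#include <stdio.h>
#include <string.h>

int main(void)
{
    _Static_assert(sizeof(float) <= sizeof(int), "float does not fit in int");
    float TEST = 3.0f;
    int y = 0;   /* zeroed in case float is narrower than int */
    memcpy(&y, &TEST, sizeof(TEST));
    printf("%x\n", (unsigned)y);
    return 0;
}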
Related
Every time I run this program I get different and weird results. Why is that?
#include <stdio.h>

int main(void) {
    int a = 5, b = 2;
    printf("%.2f", a/b);
    return 0;
}
printf("%.2f", a/b);
The result of the division is again of type int, not float.
You are using the wrong format specifier, which leads to undefined behavior.
You need variables of type float to perform the division you intend.
The right format specifier to print an int is %d.
In your code, a and b are of type int, so the division is essentially an integer division, the result being an int.
You cannot use a mismatched format specifier: %f requires the corresponding argument to be of type double, and you need %d for an int.
FWIW, using the wrong format specifier invokes undefined behaviour.
From C11 standard, chapter §7.21.6.1, fprintf()
If any argument is not the correct type for the corresponding conversion specification, the behavior is undefined.
If you want a floating-point division, you need to ask for it explicitly (both options are sketched below), either by
promoting one of the variables before the division to force a floating-point division, the result of which will be of floating-point type:
printf("%.2f", (float)a/b);
or by using float as the type of a and b.
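A small runnable sketch of both options:

#include <stdio.h>

int main(void)
{
    int a = 5, b = 2;
    printf("%.2f\n", (float)a / b);   /* option 1: promote one operand */

    float fa = 5.0f, fb = 2.0f;
    printf("%.2f\n", fa / fb);        /* option 2: float variables throughout */
    return 0;
}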
You need to convert the value to float or double.
Something like this:
printf("%.2f", (float)a/b);
The %f format specifier is for double (a float argument is promoted to double). Using the wrong format specifier leads to undefined behavior. Division of an int by an int gives you an int.
Use this instead of your printf()
printf("%.2lf",(double)a/b);
The program below prints 123828749, 0.000000 but I expected 123828749, 123828749.0. Where does the 0.000000 come from?
#include <stdio.h>

void main()
{
    double x = 123828749.66;
    int y = x;
    printf("%d\n", y);
    printf("%lf\n", y);
}
Thanks
In the second call to printf you are passing an int, but the format string is %lf which expects a floating point value to be passed. This invokes undefined behaviour.
If you want to treat y as a floating point value when you pass it to printf, you'll need an explicit conversion:
printf("%lf\n", (double)y);
To answer your question more precisely (even though David's answer is completely accurate): when printf parses the format string at run time, it expects a double for %lf. However, you supplied an int, and since an int is most likely not the same size as a double (nor necessarily passed the same way), your program read some junk memory and printed this unexpected result. That is where the zeros came from.
Related
Here is part of my code.
float a = 12.5;
printf("%d\n", a);
printf("%d\n", (int)a);
printf("%d\n", *(int *)&a);
when I compile in windows, I got:
0
12
1094713344
and then, I compile in linux, I got:
-1437851864
12
1094713344
-1437851864 changes every time I execute it.
My question is: how does the printf function work on Linux?
It works very well, but why are you passing the wrong sort of data to it? The %d specifier expects an int, but you're passing something else. Bad idea.
Passing a mismatched type across the varargs barrier is undefined behavior. And since float is promoted to double in varargs calls, if your int is smaller than your double this is practically guaranteed to break.
In short, this is really bad and broken code. Don't do this.
To print a floating point number in C, you should do:
float a = 12.5;
printf("%f\n", a);
As has been mentioned, passing arguments with types not matching the format string invokes undefined behaviour, so the language standard doesn't place any restrictions on what
float a = 12.5;
printf("%d\n", a);
actually does.
To find out what it does, you'd need to analyse your implementation, or at least the assembly the compiler produced for that code.
A common way for translating that code is to pass the promoted (to double) float argument in a floating point register and tell printf how many arguments are passed in floating point registers. But since the format tells printf to look for an int, it doesn't look in a floating point register for it, but in another register. So the printed value would be whatever happened to be in that register when printf was called.
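As an illustration of that explanation (undefined behavior, shown only to make the register story concrete; the exact output depends entirely on the platform's calling convention):

#include <stdio.h>

int main(void)
{
    float f = 12.5f;
    int n = 42;
    /* Undefined behavior: on a typical x86-64 System V build, f (promoted
       to double) lands in an XMM register, so %d reads the next integer
       register instead -- which here may well hold n, printing 42. */
    printf("%d\n", f, n);
    return 0;
}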
Related
I am trying out a rather simple piece of code like this:
float a = 1.5;
printf("%d",a);
It prints out 0. However, for other values, like 1.4 or 1.21, it prints a garbage value. And not only for 1.5: for 1.25, 1.75, 1.3125 (in other words, decimal numbers that convert exactly to binary), it also prints 0. What is the reason behind this? I found a similar post here, and the first answer looks like an awesome answer, but I couldn't follow it. Can anybody explain why this is happening? What has endianness got to do with it?
You're not casting the float; printf is just interpreting it as an integer, which is why you're getting seemingly garbage values.
Edit:
Check this example C code, which shows how a double is stored in memory:
#include <stdio.h>

int main()
{
    double a = 1.5;
    unsigned char *p = (unsigned char *)&a;   /* cast needed: double* to unsigned char* */
    int i;
    for (i = 0; i < sizeof(double); i++) {
        printf("%.2x", *(p + i));
    }
    printf("\n");
    return 0;
}
If you run that with 1.5 it prints
000000000000f83f
If you try it with 1.41 it prints
b81e85eb51b8f63f
So when printf interprets 1.5 as an int, it prints zero because the 4 least-significant bytes are all zero, and some other value when trying 1.41.
That being said, this is undefined behaviour and you should avoid it; you won't always get the same result, since it depends on the machine and on how the arguments are passed.
Note: the bytes are reversed because this was compiled on a little-endian machine, which means the least significant byte comes first.
You didn't take argument promotion into account. Because printf is a variadic function, the arguments are promoted:
C11 (n1570), § 6.5.2.2 Function calls
arguments that have type float are promoted to double.
So printf tries to interpret your (promoted) double variable as an integer type, which leads to undefined behavior. Just add a cast:
double a = 1.5;
printf("%d", (int)a);
A mismatch of arguments in printf is undefined behaviour.
Either cast a to int or use %f:
printf("%d", (int)a);
or
printf("%f", a);
The d stands for decimal. So whether a is a float, double, integer, or char, when you use %d, C will print that argument as a decimal integer. If a is an integer type (int, long), there is no problem. If a is a char, then since char is an integer type, C will print the char's value in ASCII.
But the problem appears when a is a floating type (float or double), because C stores floating-point values in a special format rather than as plain integers, so you get a strange result.
Why this strange result?
Here is a short explanation: in a computer, a real number is represented by two parts, an exponent and a mantissa. If you tell C this is a real number, it knows which bits are the exponent and which are the mantissa. But if you say "hey, this is an integer", it makes no distinction between the exponent part and the mantissa part, hence the strange result.
If you want to understand exactly which integer it will print (and yes, you can work that out), you can visit this link: represent FLOAT number in memory in C
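To make the exponent/mantissa split concrete, here is a minimal sketch (assuming the common case of a 32-bit IEEE-754 float and a 32-bit unsigned int):

#include <stdio.h>
#include <string.h>

int main(void)
{
    float a = 1.5f;
    unsigned int bits;
    memcpy(&bits, &a, sizeof bits);   /* reinterpret the bytes safely */

    unsigned int sign     = bits >> 31;
    unsigned int exponent = (bits >> 23) & 0xFFu;
    unsigned int mantissa = bits & 0x7FFFFFu;
    /* for 1.5f this prints: sign=0 exponent=127 mantissa=0x400000 */
    printf("sign=%u exponent=%u mantissa=0x%06x\n", sign, exponent, mantissa);
    return 0;
}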
If you don't want this strange result, you can cast the float to int; it will then print the integer part of the float number.
float a = 1.5;
printf("%d",(int)a);
Hope this helps :)