This question already has answers here:
strange output in comparison of float with float literal
(8 answers)
Closed 4 years ago.
Below is a small program using if and else:
#include <stdio.h>
#include <conio.h>

int main()
{
    float a = 0.7;   // a declared as a float variable
    if (a == 0.7)    // why does it take only the integral part of 0.7?
    {
        printf("Hi");
    }
    else
    {
        printf("hello");
    }
    return 0;
}
Shouldn't this program display Hi instead of hello, since 0.7 is equal to 0.7?
(I'm new to C programming)
Note that
float a = 0.7;
converts a double value 0.7 to a float and then
if(a == 0.7)
converts a to a double.
Since 0.7 cannot be exactly represented in floating point, the float and double representations are not quite the same.
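Printing both values makes the mismatch visible. A minimal sketch (the exact digits assume the usual IEEE 754 float and double, but these are the typical values):

#include <stdio.h>

int main(void)
{
    float a = 0.7;                   /* 0.7 rounded to the nearest float */
    printf("%.17g\n", (double)a);    /* typically 0.69999998807907104    */
    printf("%.17g\n", 0.7);          /* typically 0.69999999999999996    */
    return 0;
}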
You can force the compiler to stay with float like this:
#include <stdio.h>

int main()
{
    float a = 0.7f;
    if (a == 0.7f)
    {
        printf("Hi");
    }
    else
    {
        printf("hello");
    }
    return 0;
}
And now the program prints "Hi" because it is comparing like with like.
More generally, you must be very wary of floating point comparisons for equality:
#include <stdio.h>

int main()
{
    double d = 2.1;
    d /= 3.0;
    if (d == 0.7) {
        puts("Equal");
    }
    else {
        puts("Unequal");
    }
    printf("%.17f\n", d);
    printf("%.17f\n", 0.7);
    return 0;
}
Program output
Unequal
0.70000000000000007
0.69999999999999996
If you change the datatype of a to double, it will print Hi.
This is because floating point constants like 0.7 have type double by default (plain integer constants have type int).
double has higher precision than float: a double is stored in 64 bits, while a float is stored in 32 bits. It becomes completely clear once you look at how floating point numbers are converted to binary.
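For instance, a sketch of the same program with a declared as double (this prints Hi, because both sides of the comparison are now the same double value):

#include <stdio.h>

int main(void)
{
    double a = 0.7;   /* the constant 0.7 is already a double */
    if (a == 0.7)
    {
        printf("Hi");
    }
    else
    {
        printf("hello");
    }
    return 0;
}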
This question already has answers here:
Is floating point math broken?
(31 answers)
Closed last year.
Can someone please explain the output here?
#include <stdio.h>

int main(void) {
    float f = 0.1;
    printf("%f\n", f);
    if (f == 0.100000) {
        printf("true");
    }
    else {
        printf("False");
    }
    return 0;
}
output:
0.100000
False
You are trying to compare a float with a double. The constant 0.100000 is treated as type double unless you mark it as a float with a trailing f. Thus:
#include <stdio.h>

int main(void) {
    float x = 0.100000;
    if (x == 0.100000)
        printf("NO");
    else if (x == 0.100000f)
        // THE CODE GOES HERE
        printf("YES");
    else
        printf("NO");
    return 0;
}
You may refer to single precision and double precision for theoretical details.
Objects of the types float and double are stored with different precisions.
You should write
if (f == 0.100000f) {
when trying to compare two floats.
Also it would be better to initialize the variable f with a constant of the type float
float f = 0.1f;
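Putting both suggestions together, a minimal sketch of the corrected program (it prints true, because both operands are now the same float value):

#include <stdio.h>

int main(void) {
    float f = 0.1f;      /* float constant, no double involved */
    if (f == 0.1f) {
        printf("true");
    }
    else {
        printf("False");
    }
    return 0;
}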
0.100000 is a double by default; try 0.100000f. If you need to compare with a double, you need to try something like what is described here.
This question already has answers here:
Why are floating point numbers inaccurate?
(5 answers)
Closed 6 years ago.
#include <stdio.h>

void main()
{
    float a = 2.3;
    if (a == 2.3) {
        printf("hello");
    }
    else {
        printf("hi");
    }
}
It prints "hi" in output, or we can say that if condition is getting false value.
#include <stdio.h>

void main()
{
    float a = 2.5;
    if (a == 2.5)
        printf("Hello");
    else
        printf("Hi");
}
It prints hello.
The variable a is a float that holds some value close to the mathematical value 2.3.
The literal 2.3 is a double that also holds some value close to the mathematical value 2.3, but because double has greater precision than float, this may be a different value from the value of a. Both float and double can only represent a finite number of values, so there are necessarily mathematical real numbers that cannot be represented exactly by either of those two types.
In the comparison a == 2.3, the left operand is promoted from float to double. This promotion is exact and preserves the value (as all promotions do), but as discussed above, that value may be a different one from that of the 2.3 literal.
To make a comparison between floats, you can use an appropriate float literal:
assert(a == 2.3f);
// ^
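A runnable version of that check (assert needs <assert.h>; the program finishes silently when the assertion holds):

#include <assert.h>

int main(void)
{
    float a = 2.3;       /* as in the question: the double 2.3 is rounded to float */
    assert(a == 2.3f);   /* holds: both sides are the same float value */
    return 0;
}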
Stored as a 32-bit float, 2.3 has the binary representation 01000000000100110011001100110011, which corresponds to approximately 2.299999952316284, so you are not able to store exactly 2.3 in a float.
With double precision you get something closer (approximately 2.2999999999999998), but still not exactly 2.3.
You converted a double to a float when you wrote:
float a = 2.3;
The if then checks whether the float a (about 2.299999952316284, converted back to double) is equal to the double constant 2.3 (about 2.2999999999999998), which is false.
You should write:
float a = 2.3f;
and then you can check:
if (a == 2.3f) {
    ...
}
I would rather test with:
if (fabs(a - 2.3f) < 0.00001) {
    ...
}
The value 2.5, represented in bits, is 01000000001000000000000000000000; 2.5 can be represented exactly in both float and double, which is why the second program prints Hello.
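If you want to see those bit patterns for yourself, here is a minimal sketch (it assumes float is the usual 32-bit IEEE 754 type):

#include <stdio.h>
#include <stdint.h>
#include <string.h>

/* Print the raw bits of a 32-bit float, followed by its value. */
static void print_float_bits(float f)
{
    uint32_t bits;
    memcpy(&bits, &f, sizeof bits);   /* reinterpret the bytes */
    for (int i = 31; i >= 0; --i)
        putchar((bits >> i) & 1u ? '1' : '0');
    printf("  (%.9g)\n", f);
}

int main(void)
{
    print_float_bits(2.3f);   /* 01000000000100110011001100110011 */
    print_float_bits(2.5f);   /* 01000000001000000000000000000000 */
    return 0;
}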
EDIT: fabs is declared in <math.h> (or <cmath> in C++).
Read this: article
Comparing floating point values is not as easy as it might seem; have a look at Most effective way for float and double comparison.
It all boils down to the fact that floating point numbers are not exact (well, most are not). Usually you compare two floats by allowing a small error window (epsilon):
if (fabs(a - 2.3f) < epsilon) { ... }
where epsilon is small enough for your calculation, but not too small (bigger than the machine epsilon).
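As a complete sketch of that idea (the tolerance 1e-5 is just an illustrative choice; pick one that suits your calculation):

#include <stdio.h>
#include <math.h>    /* fabs */

int main(void)
{
    const double epsilon = 1e-5;     /* assumed tolerance, adjust as needed */
    float a = 2.3f;

    if (fabs(a - 2.3) < epsilon) {   /* compare within a tolerance, not with == */
        printf("close enough to 2.3\n");
    }
    else {
        printf("not close to 2.3\n");
    }
    return 0;
}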
I wrote this code and only 'Hello' got printed.
#include <stdio.h>

int main(void)
{
    float x = 1.1;
    printf("Hello\n");
    while (x - 1.1 == 0)
    {
        printf("%f\n", x);
        x = x - 1;
    }
    return 0;
}
When you are dealing with floating point operations, you don't always get the results you would expect.
What you are seeing is:
1.1 is represented as a float in x.
In the while statement, 1.1 is of type double, not float. Hence, x is promoted to a double before the subtraction and comparison is made.
Precision is lost when 1.1 is stored in the float x, so x - 1.1 does not evaluate to 0.0.
You can see expected results if you use appropriate floating point constants.
#include <stdio.h>

void test1()
{
    printf("In test1...\n");
    float x = 1.1;
    // Use a literal of type float, not double.
    if (x - 1.1f == 0)
    {
        printf("true\n");
    }
    else
    {
        printf("false\n");
    }
}

void test2()
{
    printf("In test2...\n");
    // Use a variable of type double, not float.
    double x = 1.1;
    if (x - 1.1 == 0)
    {
        printf("true\n");
    }
    else
    {
        printf("false\n");
    }
}

int main()
{
    test1();
    test2();
    return 0;
}
Output:
In test1...
true
In test2...
true
This is because x is a single-precision floating-point number, but you subtract the constant 1.1 from it, which is double-precision. So your single-precision 1.1 is converted to double-precision, and the subtraction is performed, but the result is non-zero (since 1.1 cannot be exactly represented, but the double-precision value is closer than the single-precision value). Try the following:
#include <stdio.h>

int main()
{
    float x = 1.1;
    double y = 1.1;
    printf("%.20g\n", x - 1.1);
    printf("%.20g\n", y - 1.1);
    return 0;
}
On my computer, the result is:
2.384185782133840803e-08
0
Compare floats within a tolerance, e.g. -0.000001 < x - 1.1 && x - 1.1 < 0.000001
You need to read up on floating point precision.
TL;DR: x - 1.1 is not exactly 0.
For example, in my debugger x equals 1.10000002; this is due to the nature of floating point precision.
Relevant read:
http://floating-point-gui.de/
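You can reproduce that debugger observation with a quick sketch (the printed digits assume the usual 32-bit IEEE 754 float):

#include <stdio.h>

int main(void)
{
    float x = 1.1;
    printf("%.8f\n", x);         /* typically prints 1.10000002  */
    printf("%.20g\n", x - 1.1);  /* small non-zero difference    */
    return 0;
}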
This question already has answers here:
Is floating point math broken?
(31 answers)
Closed 8 years ago.
The following piece of code:
#include <stdio.h>

int main()
{
    float a = 0.7;
    printf("%.10f %.10f\n", 0.7, a);
    return 0;
}
gives 0.7000000000 0.6999999881 as output, while the following piece of code:
#include <stdio.h>

int main()
{
    float a = 1.7;
    printf("%.10f %.10f\n", 1.7, a);
    return 0;
}
gives 1.7000000000 1.7000000477 as output.
Why is it that, in the first case, printing a gives a value less than 0.7, while in the second case it gives a value greater than 1.7?
When you pass a floating-point constant to printf, it is passed as a double.
So it is not the same as passing a float variable with "the same" value to printf.
Change this constant value to 0.7f (or 1.7f), and you'll get the same results.
Alternatively, change float a to double a, and you'll also get the same results.
Option #1:
double a = 0.7;
printf("%.10f %.10f\n", 0.7, a);
// Passing two double values to printf
Option #2:
float a = 0.7;
printf("%.10f %.10f\n", 0.7f, a);
// The two float values are promoted to double and passed to printf
This question already has answers here:
problems in floating point comparison [duplicate]
(2 answers)
Explain this floating point behavior
(6 answers)
Closed 9 years ago.
Why does the code given below print b on the screen?
#include <stdio.h>

int main()
{
    float a = 5.6;
    if (a == 5.6)
    {
        printf("a");
    }
    else
    {
        printf("b");
    }
}
Floating point numbers can't always be matched exactly: between any two numbers you choose there are infinitely many others, so a machine can't represent them all and has to make do with a finite set of representable floating point values.
So in your case the system is not storing 5.6, because that's a number your machine can't represent exactly. Instead it stores something in memory that is pretty close to 5.6.
Therefore, when comparing floating point numbers you should never check for exact equality. Instead you can use the standard C constant FLT_EPSILON and check for
if (((a - 5.6) > -FLT_EPSILON) && ((a - 5.6) < FLT_EPSILON))
{
...
}
Here FLT_EPSILON is the machine epsilon for float: the difference between 1.0 and the next larger representable float value.
So if the absolute difference between a and 5.6 is smaller than FLT_EPSILON, you can treat the values as equal: the machine has simply stored the nearest number it can represent instead of exactly 5.6.
The equivalent for the double type is DBL_EPSILON.
These constants are defined in <float.h>.
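As a complete, runnable sketch of that check (FLT_EPSILON is strictly a relative quantity, so for larger values you would normally scale it, but the absolute check is enough for this example):

#include <stdio.h>
#include <math.h>    /* fabs */
#include <float.h>   /* FLT_EPSILON, DBL_EPSILON */

int main(void)
{
    float a = 5.6;   /* 5.6 rounded to the nearest representable float */

    /* Treat the values as equal when they differ by less than FLT_EPSILON. */
    if (fabs(a - 5.6) < FLT_EPSILON)
    {
        printf("a");
    }
    else
    {
        printf("b");
    }
    return 0;
}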
When you want a float value, don't forget to add the f suffix:
#include <stdio.h>

int main()
{
    float a = 5.6f;
    if (a == 5.6f)
    {
        printf("a");
    }
    else
    {
        printf("b");
    }
}
prints a as expected.
The problem was that 5.6 was written as a double literal in both places: a got converted to a float, while in the if it was still being compared to a double value, so the comparison was false.
Actually, adding the f suffix only inside the if would be enough, but better safe than sorry.
A floating point constant in C is double by default. If you want it used as a float, you need to add f at the end of the number. Try the code below; it prints a.
#include <stdio.h>

int main()
{
    float a = 5.6;
    if (a == 5.6f)
    {
        printf("a");
    }
    else
    {
        printf("b");
    }
}