Comparison of float and double variables [duplicate] - c

I am using Visual C++ 6.0, and in a program I am comparing float and double variables.
For example, for this program
#include <stdio.h>
int main()
{
    float a = 0.7f;
    double b = 0.7;
    printf("%d %d %d", a < b, a > b, a == b);
    return 0;
}
I am getting 1 0 0 as output
and for
#include <stdio.h>
int main()
{
    float a = 1.7f;
    double b = 1.7;
    printf("%d %d %d", a < b, a > b, a == b);
    return 0;
}
I am getting 0 1 0 as output.
Please tell me why I am getting these weird outputs, and whether there is any way to predict them on the same processor. Also, how is the comparison of two variables done in C?

It has to do with the internal representation of floats and doubles in the computer. Computers store numbers in binary, which is base 2. A base-10 fraction may have an infinitely repeating expansion in binary, so the "exact" value stored in the computer is not the same as the decimal value you wrote.
When you compare floats, it's common to use an epsilon to denote a small difference in values. For example:
#include <math.h>   /* for fabs() */

double epsilon = 0.000000001;
float a = 0.7f;
double b = 0.7;
if (fabs(a - b) < epsilon)
    /* they are close enough to be equal. */

1.7 (a double) and 1.7f (a float) are very likely to be different values: one is the closest you can get to the mathematical value 1.7 in the double representation, and the other is the closest you can get to it in the float representation.
To put it into simpler-to-understand terms, imagine that you had two types, shortDecimal and longDecimal. shortDecimal is a decimal value with 3 significant digits. longDecimal is a decimal value with 5 significant digits. Now imagine you had some way of representing pi in a program, and assigning the value to shortDecimal and longDecimal variables. The short value would be 3.14, and the long value would be 3.1416. The two values aren't the same, even though they're both the closest representable value to pi in their respective types.

1.7 is a decimal numeral. In binary, it has no finite representation.
Therefore, 1.7 and 1.7f differ.
Heuristic proof: if a number has a finite binary representation, then shifting its bits to the left (i.e., multiplying by 2) repeatedly will eventually yield an integer.
But multiply 1.7 by 2 repeatedly and you only ever obtain non-integers (the decimal part cycles through .4, .8, .6 and .2). Therefore 1.7 is not a finite sum of powers of 2.

You can't reliably compare floating-point variables for equality. The reason is that decimal fractions are stored as binary fractions, which generally entails a loss of precision.


Strange difference in output for int and float values in C using while loop? [duplicate]

I wrote the following code to compare between a float variable and a double variable in C.
#include <stdio.h>
int main()
{
    float f = 1.1;
    double d = 1.1;
    if (f == d)
        printf("EQUAL");
    if (f < d)
        printf("LESS");
    if (f > d)
        printf("GREATER");
    return 0;
}
I am using an online C compiler to compile my code.
I know that EQUAL will never be printed for recurring decimals. However, what I expect to be printed is LESS, since double has higher precision and should therefore be closer to the actual value of 1.1 than float is. As far as I know, when C compares a float and a double, the mantissa of the float is zero-extended to double, and that zero-extended value should always be smaller.
Instead in all situations GREATER is being printed. Am I missing something here?
The exact value of the closest float to 1.1 is 1.10000002384185791015625. The binary equivalent is 1.00011001100110011001101
The exact value of the closest double to 1.1 is 1.100000000000000088817841970012523233890533447265625. The binary equivalent is 1.0001100110011001100110011001100110011001100110011010.
Lining up the two binary numbers next to each other:
1.00011001100110011001101
1.0001100110011001100110011001100110011001100110011010
The first few truncated bits for rounding to float are 11001100, which is greater than half, so the conversion to float rounded up, making its least significant bits 11001101. That rounding resulted in the most significant difference being a 1 in the float in a bit position that is 0 in the double. The float is greater than the double, regardless of values of bits of lower significance being zero in the float extended to double, but non-zero in the double.
If you add the following two lines after declaring the two variables:
printf("%.9g\n", f);
printf("%.17g\n", d);
you will get the following output:
1.10000002
1.1000000000000001
so it is easy to see that, due to precision, the float is greater than the double, and thus the printing of GREATER is fine.

Float point numbers are not equal, despite being the same? [duplicate]

The program below outputs This No. is not same. Why does it do this when both numbers are the same?
#include <stdio.h>

int main() {
    float f = 2.7;
    if (f == 2.7) {
        printf("This No. is same");
    } else {
        printf("This No. is not same");
    }
    return 0;
}
Why does it do this when both numbers are the same?
The numbers are not the same value.
double can typically represent exactly about 2^64 different values.
float can typically represent exactly about 2^32 different values.
2.7 is not one of those values; some approximation is made, given the binary nature of floating-point encoding versus the decimal text 2.7.
The compiler converts 2.7 to the nearest representable double or
2.70000000000000017763568394002504646778106689453125
given the typical binary64 representation of double.
The next best double is
2.699999999999999733546474089962430298328399658203125.
Knowing the exact value beyond 17 significant digits has reduced usefulness.
When the value is assigned to a float, it becomes the nearest representable float or
2.7000000476837158203125.
2.70000000000000017763568394002504646778106689453125 does not equal 2.7000000476837158203125.
Should a compiler use a float/double that represents 2.7 exactly, as with decimal32/decimal64, the code would work as the OP expected. A double representation using an underlying decimal format is rare, though. Far more often the underlying double representation is base 2, and these conversion artifacts need to be considered when programming.
Had the code been float f = 2.5;, the float and double value, using a binary or decimal underlying format, would have made if (f == 2.5) true. The same value as a high precision double is representable exactly as a low precision float.
(Assuming binary32/binary64 floating point)
double has 53 bits of significand and float has 24. The key is that if the number as a double has its least significant (53 - 24) bits set to 0, then after conversion to float it will have the same numeric value whether held as a float or a double. Numbers like 1, 2.5 and 2.7000000476837158203125 fulfill that. (Range, sub-normal, and NaN issues are ignored here.)
This is a reason why exact floating point comparisons are typically only done in select situations.
Check this program:
#include <stdio.h>
int main()
{
    float x = 0.1;
    printf("%zu %zu %zu\n", sizeof(x), sizeof(0.1), sizeof(0.1f));
    return 0;
}
Output is 4 8 4.
Floating-point constants in an expression are treated as double (double-precision floating-point format) unless an f suffix is appended. So the expression x == 0.1 has a double on the right side and a float, which is stored in the single-precision floating-point format, on the left side.
In such situations the float is promoted to double.
The double-precision format uses more bits of precision than the single-precision format.
In your case, use 2.7f to get the expected result.
The literal 2.7 in f == 2.7 has type double; that's why 2.7 is not equal to f.

Explain the output when converting Float to integer? [duplicate]

I don't know exactly why this output is produced:
float f = 1.4, t;
int d, s;

d = (int)f;
printf("d=%d\n", d);

t = f - d;
printf("t=%f\n", t);

t = t * 10;
printf("t=%f\n", t);

s = (int)t;
printf("s=%d\n", s);
the output is
d=1
t=0.400000
t=4.000000
s=3
and similarly when f=1.1
the output is
d=1
t=0.100000
t=1.000000
s=1
Is this related to the way integers and floats are stored in memory, or to something else?
You have initialized f = 1.4. Now when you do
d=(int)f;
you are converting a float to an integer, and when a float is converted to an integer, everything after the decimal point "." is truncated. d now holds 1, so
t=f-d;
will be 1.4 - 1 = 0.4.
t=t*10;
gives t = 0.4 * 10 = 4, and as t is a float it is printed as 4.000000 (printf pads the output with trailing zeros).
s=(int)t;
Here again you are converting a float to an integer, and here is the tricky part: the values above only look exact because they are rounded when printed. t actually holds 3.99999976, so when it is converted to an integer the result is 3.
This all happens because when you initialize f = 1.4, f is actually initialized to about 1.39999998.
During the first assignment
float f=1.4;
there is an approximation, because the constant 1.4 is a double (not a float), and neither type can represent 1.4 exactly; something like 1.39999998 is assigned to f. (Note that writing float f=1.4f; does not avoid the problem: the float value is still slightly below 1.4, so the final truncation still yields 3.)
Let's take it step-by-step and see how floating point and int interact.
Assume a typical platform where
float is IEEE 754 single-precision binary floating-point format and
double is IEEE 754 double-precision binary floating-point format
float f=1.4, t;
// 1.4 isn't exactly representable in FP & takes on the closest `double` value of
// 1.399999999999999911182158029987476766109466552734375
// which when assigned to a float becomes
// 1.39999997615814208984375
int d,s;
d=(int)f;
// d gets the truncated value of 1 and prints 1, no surprise.
printf("d=%d\n",d);
t=f-d;
// t gets the value 0.39999997615814208984375
// A (double) version of t, with the same value is passed to printf()
// This is printed out, rounded to 6 (default) decimal places after the '.' as
// 0.400000
printf("t=%f\n",t);
t=t*10;
// t is multiplied by exactly 10 and gets the value
// 3.9999997615814208984375
// A (double) version of t, with the same value is passed to printf()
// which prints out, rounded to 6 decimal places after the '.' as
// 4.000000
printf("t=%f\n",t);
s=(int)t;
// s gets the truncated value of 3.9999997615814208984375
// which is 3 and prints out 3. - A bit of a surprise.
printf("s=%d\n",s);

Float value confusion in C

My code is
#include <stdio.h>

int main()
{
    float a = 0.7;
    if (a < 0.7)
        printf("c");
    else
        printf("c++");
    return 0;
}
It prints c, and this is fine, as 0.7 is treated as a double constant and a's value will be about 0.699999, which is less than 0.7.
Now if I change the value to 0.1, 0.2, 0.3 and so on up to 0.9 (both in a and in the if condition), it prints c++ for every value except 0.7 and 0.9, which means that for those two the values are equal or a is greater.
Why doesn't this behavior hold for all the values?
What "concept" are you talking about?
None of the numbers you mentioned are representable precisely in binary floating-point format (regardless of precision). All of the numbers you mentioned end up having an infinite number of binary digits after the dot.
Since neither float nor double have infinite precision, in float and double formats the implementation will represent these values approximately, most likely by a nearest representable binary floating-point value. These approximate values will be different for float and double. And the approximate float value might end up being greater or smaller than the approximate double value. Hence the result you observe.
For example in my implementation, the value of 0.7 is represented as
+6.9999998807907104e-0001 - float
+6.9999999999999995e-0001 - double
Meanwhile the value of 0.1 is represented as
+1.0000000149011611e-0001 - float
+1.0000000000000000e-0001 - double
As you can see, double representation is greater than float representation in the first example, while in the second example it is the other way around. (The above are decimal notations, which are rounded by themselves, but they do have enough precision to illustrate the effect well enough.)

I can't understand this program [duplicate]

I can't understand this code. How can two same numbers be compared?
#include <stdio.h>
int main()
{
    float a = 0.8;
    if (0.8 > a)    // how can we compare same numbers?
        printf("c");
    else
        printf("c++");
    return 0;
}
How can this problem be solved?
I do not understand why you ask whether the same two numbers can be compared. Why would you not expect a comparison such as 3 > 3 to work, even though 3 is the same as 3? The > operator returns true if the left side is greater than the right side, and it returns false otherwise. In this case, .8 is not greater than a, so the result is false.
Other people seem to have assumed that you are asking about some floating-point rounding issue. However, even with exact mathematics, .8 > .8 is false, and that is the result you get with the code you showed; the else branch is taken. So there is no unexpected behavior here to explain.
What is your real question?
In case you are asking about floating-point effects, some information is below.
In C source, the text “.8” stands for one of the numbers that is representable in double-precision that is nearest .8. (A good implementation uses the nearest number, but the C standard allows some slack.) In the most common floating-point format, the nearest double-precision value to .8 is (as a hexadecimal floating-point numeral) 0x1.999999999999ap-1, which is (in decimal) 0.8000000000000000444089209850062616169452667236328125.
The reason for this is that binary floating-point represents numbers only with bits that stand for powers of two. (Which powers of two depends on the exponent of the floating-point value, but, regardless of the exponent, every bit in the fraction portion represents some power of two, such as .5, .25, .125, .0625, and so on.) Since .8 is not an exact multiple of any power of two, then, when the available bits in the fraction portion are all used, the resulting value is only close to .8; it is not exact.
The initialization float a = .8; converts the double-precision value to single-precision, so that it can be assigned to a. In single-precision, the representable number closest to .8 is 0x1.99999ap-1 (in decimal, 0.800000011920928955078125).
Thus, when you compare “.8” to a, you find that .8 > a is false.
For some other values, such as .7, the nearest representable numbers work out differently, and the relational operator returns true. For example, .7 > .7f is true. (The text “.7f” in source code stands for a single-precision floating-point value near .7.)
0.8 is a double. When a is set to it, it is converted into a float and at that point loses precision. The comparison then promotes the float back to a double, so the values are almost surely different.
EDIT: I can prove my point. I just compiled and ran this program:
float a = 0.8;
int b = a == 0.8 ? 1 : 0;
int c = a < 0.8 ? 1 : 0;
int d = a > 0.8 ? 1 : 0;
printf("b=%d, c=%d, d=%d, a=%.12f 0.8=%.12f \n", b, c, d, a, 0.8);
b=0, c=0, d=1, a=0.800000011921 0.8=0.800000000000
Notice how a now has a very small extra fractional part, introduced by the conversion to float (and revealed when it is promoted back to double).
