Why is the statement "f == (float)(double)f;" wrong? - c

I recently took a lecture on systems programming, and my professor told me that f == (float)(double)f is wrong, which I cannot understand.
I know that a double loses data when converted to float, but I believe the loss happens only if the number stored in the double cannot be expressed as a float.
Shouldn't it be true, just as x == (int)(double)x; is true?
I'm sorry that I didn't make my question clear: it is not about declarations, but about the conversion to double and back.
I hope you don't lose your precious time because of my mistake.

Assuming IEC 60559, the result of f == (float)(double) f depends on the type of f.
Further assuming f is a float, then there's nothing "wrong" about the expression - it will evaluate to true (unless f held NaN, in which case the expression will evaluate to false).
On the other hand, x == (int)(double)x (assuming x is an int) is potentially problematic, since a double-precision IEC 60559 floating-point value has only 53 bits for the significand¹, which cannot represent all possible values of an int if int uses more than 53 bits for its value on your platform (admittedly rare). So it will evaluate to true on platforms where ints are 32-bit (using 31 bits for the value), and might evaluate to false on platforms where ints are 64-bit (using 63 bits for the value), depending on the value.
Relevant quotes from the C standard (6.3.1.4 and 6.3.1.5):
When a value of integer type is converted to a real floating type, if the value being converted can be represented exactly in the new type, it is unchanged.
When a finite value of real floating type is converted to an integer type other than _Bool, the fractional part is discarded (i.e., the value is truncated toward zero). If the value of the integral part cannot be represented by the integer type, the behavior is undefined.
When a value of real floating type is converted to a real floating type, if the value being converted can be represented exactly in the new type, it is unchanged.
¹ A double-precision IEC 60559 floating-point value consists of 1 bit for the sign, 11 bits for the exponent, and 53 bits for the significand (of which 1 is implied and not stored), totaling 64 (stored) bits.
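To illustrate the footnote's point, here is a minimal sketch (assuming a 64-bit long long and IEC 60559 doubles). 2^53 + 1 is the first positive integer that does not survive a round trip through double:

#include <stdio.h>

int main(void) {
    /* 2^53 fits in a 53-bit significand; 2^53 + 1 does not, and is
       rounded to 2^53 on conversion to double. */
    long long ok  = 1LL << 53;
    long long bad = (1LL << 53) + 1;

    printf("%d\n", ok  == (long long)(double)ok);  /* 1: round trip is exact */
    printf("%d\n", bad == (long long)(double)bad); /* 0: came back as 2^53   */
    return 0;
}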

Taking the question as posed in the title literally,
Why is the statement “f == (float)(double)f;” wrong?
the statement is "wrong" not in any way related to the representation of floating-point values, but because it is trivially optimized away by any compiler, so you might as well have saved the electrons used to store it. It is exactly equivalent to the statement
1;
or, if you like, to the statement (from the original question)
x == (int)(double)x;
(which has exactly the same effect as the one in the title, regardless of the available precision of the types int, float, and double, i.e. none whatsoever).
Programming being somewhat concerned with precision, you should perhaps take note of the difference between a statement and an expression. An expression has a value, which might be true or false or something else; but when you add a semicolon (as you did in the question) it becomes a statement (as you called it in the question), and in the absence of side effects the compiler is free to throw it away.
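To make the distinction concrete, here is a small sketch: the bare comparison is a statement with no effect that the compiler may discard, while using the expression's value gives it one.

#include <stdio.h>

int main(void) {
    float f = 1.5f;

    f == (float)(double)f;  /* statement with no side effects: the compiler
                               is free to discard it, and will typically
                               warn about it */

    int equal = (f == (float)(double)f);  /* same expression, but now its
                                             value is actually used */
    printf("%d\n", equal);                /* 1 */
    return 0;
}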

NaNs are retained through float => double => float, but they do not compare equal to themselves.
#include <math.h>
#include <stdio.h>

int main(void) {
    float f = HUGE_VALF;
    printf("%d\n", f == (float)(double) f);
    f = NAN;
    printf("%d\n", f == (float)(double) f);
    printf("%d\n", f == f);
}
Prints
1
0
0

Related

Nonintuitive result of the assignment of a double precision number to an int variable in C

Could someone explain why I get two different numbers, 14 and 15 respectively, as output from the following code?
#include <stdio.h>

int main()
{
    double Vmax = 2.9;
    double Vmin = 1.4;
    double step = 0.1;
    double a = (Vmax - Vmin) / step;
    int b = (Vmax - Vmin) / step;
    int c = a;
    printf("%d %d", b, c); // 14 15, why?
    return 0;
}
I expect to get 15 in both cases but it seems I'm missing some fundamentals of the language.
I am not sure if it's relevant, but I was doing the test in Code::Blocks. However, if I type the same lines of code into some online compiler (this one, for example) I get an answer of 15 for both printed variables.
... why I get two different numbers ...
Aside from the usual floating-point issues, the values of b and c are arrived at by different computation paths. c is calculated by first saving the value in double a.
double a =(Vmax-Vmin)/step;
int b = (Vmax-Vmin)/step;
int c = a;
C allows intermediate floating-point math to be computed using wider types. Check the value of FLT_EVAL_METHOD from <float.h>.
Except for assignment and cast (which remove all extra range and precision), ...
-1 indeterminable;
0 evaluate all operations and constants just to the range and precision of the type;
1 evaluate operations and constants of type float and double to the range and precision of the double type, evaluate long double operations and constants to the range and precision of the long double type;
2 evaluate all operations and constants to the range and precision of the long double type.
C11 §5.2.4.2.2 ¶9
The OP's platform reported FLT_EVAL_METHOD == 2.
By saving the quotient in double a = (Vmax-Vmin)/step;, precision is forced down to double, whereas int b = (Vmax-Vmin)/step; may be computed as long double.
The subtle difference results from (Vmax-Vmin)/step (computed perhaps as long double) being saved as a double versus remaining a long double: one is 15 (or just above), the other just under 15. int truncation amplifies this difference into 15 and 14.
On another compiler, the results may both be the same, due to FLT_EVAL_METHOD < 2 or other floating-point characteristics.
Conversion to int from a floating-point number is unforgiving with numbers near a whole number. It is often better to round() or lround(). The best solution is situation-dependent.
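A minimal sketch of that advice, using C99's lround() from <math.h>: rounding to nearest before converting is robust against the quotient landing one bit below 15, whereas truncation is not.

#include <math.h>
#include <stdio.h>

int main(void) {
    double Vmax = 2.9, Vmin = 1.4, step = 0.1;

    int b = (Vmax - Vmin) / step;           /* 14 or 15, depending on
                                               FLT_EVAL_METHOD */
    long r = lround((Vmax - Vmin) / step);  /* reliably 15 here */

    printf("%d %ld\n", b, r);
    return 0;
}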
This is indeed an interesting question; here is what happens precisely in your hardware. This answer gives the exact calculations with the precision of IEEE double-precision floats, i.e. a 52-bit mantissa plus one implicit bit. For details on the representation, see the Wikipedia article.
Ok, so you first define some variables:
double Vmax = 2.9;
double Vmin = 1.4;
double step = 0.1;
The respective values in binary will be
Vmax = 10.111001100110011001100110011001100110011001100110011
Vmin = 1.0110011001100110011001100110011001100110011001100110
step = .00011001100110011001100110011001100110011001100110011010
If you count the bits, you will see that I have given the first bit that is set plus 52 bits to the right. This is exactly the precision at which your computer stores a double. Note that the value of step has been rounded up.
Now you do some math on these numbers. The first operation, the subtraction, results in the precise result:
10.111001100110011001100110011001100110011001100110011
- 1.0110011001100110011001100110011001100110011001100110
--------------------------------------------------------
1.1000000000000000000000000000000000000000000000000000
Then you divide by step, which has been rounded up by your compiler:
1.1000000000000000000000000000000000000000000000000000
/ .00011001100110011001100110011001100110011001100110011010
--------------------------------------------------------
1110.1111111111111111111111111111111111111111111111111100001111111111111
Due to the rounding of step, the result is a tad below 15. Unlike before, I have not rounded immediately, because that is precisely where the interesting stuff happens: Your CPU can indeed store floating point numbers of greater precision than a double, so rounding does not take place immediately.
So, when you convert the result of (Vmax-Vmin)/step directly to an int, your CPU simply cuts off the bits after the binary point (this is how the implicit double -> int conversion is defined by the language standard):
1110.1111111111111111111111111111111111111111111111111100001111111111111
cutoff to int: 1110
However, if you first store the result in a variable of type double, rounding takes place:
1110.1111111111111111111111111111111111111111111111111100001111111111111
rounded: 1111.0000000000000000000000000000000000000000000000000
cutoff to int: 1111
And this is precisely the result you got.
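You can watch this happen without doing the long division by hand: C99's %a conversion prints the exact bits of a double in hexadecimal. A small sketch (the comments assume IEEE 754 doubles):

#include <stdio.h>

int main(void) {
    double Vmax = 2.9, Vmin = 1.4, step = 0.1;
    double a = (Vmax - Vmin) / step;    /* quotient rounded to double */

    printf("Vmax = %a\n", Vmax);        /* 0x1.7333333333333p+1 */
    printf("Vmin = %a\n", Vmin);        /* 0x1.6666666666666p+0 */
    printf("step = %a\n", step);        /* 0x1.999999999999ap-4, rounded up */
    printf("diff = %a\n", Vmax - Vmin); /* 0x1.8p+0, exactly 1.5 */
    printf("a    = %a\n", a);           /* 0x1.ep+3, exactly 15 once rounded
                                           to double, as derived above */
    return 0;
}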
The "simple" answer is that those seemingly-simple numbers 2.9, 1.4, and 0.1 are all represented internally as binary floating point, and in binary, the number 1/10 is represented as the infinitely-repeating binary fraction 0.00011001100110011...[2] . (This is analogous to the way 1/3 in decimal ends up being 0.333333333... .) Converted back to decimal, those original numbers end up being things like 2.8999999999, 1.3999999999, and 0.0999999999. And when you do additional math on them, those .0999999999's tend to proliferate.
And then the additional problem is that the path by which you compute something -- whether you store it in intermediate variables of a particular type, or compute it "all at once", meaning that the processor might use internal registers with greater precision than type double -- can end up making a significant difference.
The bottom line is that when you convert a double back to an int, you almost always want to round, not truncate. What happened here was that (in effect) one computation path gave you 15.0000000001 which truncated down to 15, while the other gave you 14.999999999 which truncated all the way down to 14.
See also question 14.4a in the C FAQ list.
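A short sketch of the proliferation described above; %.17g prints enough digits to expose the values actually stored:

#include <stdio.h>

int main(void) {
    /* None of these decimal constants is exact in binary: */
    printf("%.17g\n", 0.1);        /* 0.10000000000000001 */
    printf("%.17g\n", 2.9);        /* 2.8999999999999999  */
    printf("%.17g\n", 1.4);        /* 1.3999999999999999  */

    /* ...and the errors proliferate through arithmetic: */
    printf("%.17g\n", 0.1 * 3.0);  /* 0.30000000000000004 */
    return 0;
}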
An equivalent problem is analyzed in analysis of C programs for FLT_EVAL_METHOD==2.
If FLT_EVAL_METHOD==2:
double a =(Vmax-Vmin)/step;
int b = (Vmax-Vmin)/step;
int c = a;
computes b by evaluating a long double expression and truncating it to an int, whereas for c it evaluates from long double, truncates to double, and then to int.
So the two values are not obtained by the same process, and this may lead to different results, because floating types do not provide exact arithmetic.

value of variable in c language with 100 digits

So I'm new to C, and I have just learned about data types. What confuses me is that the value range of a double, for example, is from 2.3E-308 to 1.7E+308,
and mathematically a number of 100 digits ∈ [2.3E-308, 1.7E+308].
Writing this simple program
#include <stdio.h>

int main()
{
    double c = 5416751717547457918597197587615765157415671579185765176547645735175197857989185791857948797847984848;
    printf("%le", c);
    return 0;
}
The result is 7.531214e+18; changing %le to %lf gives 7531214226330737664.000000,
which doesn't equal c.
So what is the problem?
This long number is actually a numeric literal of type long long. But since this type cannot hold such a large number, it is truncated modulo (LLONG_MAX + 1), resulting in 7531214226330737360.
Demo.
Edit:
@JohnBollinger: ... and then converted to double, with a resulting loss of a few (binary) digits of precision.
@rici: Demo2, where the constant is of type double because of the added decimal point.
It might seem that, if we can store a number of up to 10 to the power 308, we are storing 308 digits or so but, in floating point arithmetic, that isn't the case. Floating point numbers are not stored as huge strings of digits.
Broadly, a floating-point number is stored as a mantissa -- typically a number between zero and one -- and an exponent -- some number raised to the power of some other number. The different kinds of floating point number (float, double, long double) each has a different number of bits allocated to the mantissa and exponent. These bit counts, particularly in the mantissa, control the precision with which the number can be represented.
A double on most platforms gives 16-17 decimal digits of precision, regardless of the magnitude (power of ten). It's possible to use libraries that will do arithmetic to any degree of precision required, although such features are not built into C.
An additional complication is that, in your example, the number you assign to c is not actually defined to be a floating point number at all. Lacking any indication that it should be so represented, the compiler will treat it as an integer and, as it's too large to fit even the largest integer type on most platforms, it gets truncated down to integer range.
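A small sketch of those limits, using the constants from <float.h> (the printed values are typical for IEEE 754 doubles):

#include <float.h>
#include <stdio.h>

int main(void) {
    printf("DBL_DIG      = %d\n", DBL_DIG);      /* 15: decimal digits that
                                                    always survive a round trip */
    printf("DBL_MANT_DIG = %d\n", DBL_MANT_DIG); /* 53: significand bits */
    printf("DBL_MAX      = %e\n", DBL_MAX);      /* about 1.8e+308 */

    /* Near 1e99, one whole unit is far below the precision: */
    double c = 1e99;
    printf("%d\n", c + 1.0 == c);                /* 1 */
    return 0;
}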
You should get a proper compiler or enable warnings on it. A recent GCC, with just default settings will output the following warning:
% gcc float.c
float.c: In function ‘main’:
float.c:4:12: warning: integer constant is too large for its type
double c = 5416751717547457918597197587615765157415671579185765176547645735175197857989185791857948797847984848;
^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Notice that it says integer, i.e. a whole number, not floating point. In C a constant of that form denotes an integer. Unless suffixed with U, it is additionally a signed integer, of the first signed integer type in which it fits. However, neither standard C nor common implementations have a type big enough to fit this value. So what happens is (C11 6.4.4.1p6, http://port70.net/~nsz/c/c11/n1570.html#6.4.4.1p6):
If an integer constant cannot be represented by any type in its list and has no extended integer type, then the integer constant has no type.
Use of such a typeless integer constant in arithmetic leads to undefined behaviour; that is, the whole execution of the program is now meaningless. You should have read the warnings.
The "fix" would have been to add a . after the number!
#include <stdio.h>

int main(void)
{
    double c = 54167517175474579185971975876157651574156715791\
85765176547645735175197857989185791857948797847984848.;
    printf("%le\n", c);
}
And running it:
% ./a.out
5.416752e+99
Notice that even then, a double is precise to only about 15 significant decimal digits.

Why does (int) float == float.truncate instead of garbage (How does casting actually work?)

Going on my understanding of these datatypes as primitives:
(int) char and (char) int are interpretations of data. (int) c gives the integer value of that character, and (char) 14 gives you back the character encoded by 14.
I've always understood this as being a "memory parse", such that it just takes the value at that position and then applies a type filter to it.
Given that floating-point numbers are stored as some version of scientific notation, what is stored in memory should be garbage when read as an integer. Looking at this utility http://www.h-schmidt.net/FloatConverter/IEEE754.html it appears that the whole-number portion is separated out.
However, since this is in the higher portion of memory, how does the int cast know to "reformat"? Does the compiler identify that it was a float and apply special handling, or what's going on?
Your understanding of casts is completely wrong. Casts are nothing but explicit requests for a value conversion from one type to another. They do not reinterpret the representation of one type as if it had a different type. The source code:
float f = 42.5;
int x;
x = (int)f;
simply instructs the compiler to produce code that truncates the floating point value of the expression f to an integer and store the result in the object x.
I've always understood this as being a "memory parse", such that it just takes the value at that position and then applies a type filter to it.
That is an incorrect understanding.
The language specifies conversions between the fundamental arithmetic types. Look up "usual arithmetic conversions" on the web; you will find a lot of links that describe them. For converting a floating-point type to an integral type, this is what the C99 standard has to say:
6.3.1.4 Real floating and integer
1 When a finite value of real floating type is converted to an integer type other than _Bool, the fractional part is discarded (i.e., the value is truncated toward zero). If the value of the integral part cannot be represented by the integer type, the behavior is undefined.
float f = 4.5;
int i = (int)f; // i is 4
f = -6.3;
i = (int)f;     // i is -6
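To see the difference between conversion and reinterpretation directly, here is a sketch (assuming IEEE 754 single precision and a 32-bit unsigned int) contrasting the cast with a memcpy of the raw bits:

#include <stdio.h>
#include <string.h>

int main(void) {
    float f = 42.5f;

    /* The cast converts the value: truncation toward zero. */
    int converted = (int)f;

    /* memcpy reinterprets the bits: this is the "memory parse" the
       question imagined, and it is not what a cast does. */
    unsigned int bits;
    memcpy(&bits, &f, sizeof bits);

    printf("converted = %d\n", converted); /* 42 */
    printf("bits      = 0x%08X\n", bits);  /* 0x422A0000: the sign, exponent,
                                              and fraction fields of 42.5f */
    return 0;
}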

When a double with an integer value is cast to an integer, is it guaranteed to do it 'properly'?

When a double has an 'exact' integer value, like so:
double x = 1.0;
double y = 123123;
double z = -4.000000;
Is it guaranteed that it will round properly to 1, 123123, and -4 when cast to an integer type via (int)x, (int)y, (int)z, and not truncate to 0, 123122, or -5 because of floating-point weirdness? I ask because, according to this page (which is about FPs in Lua, a language whose only numeric type by default is double), integer operations on doubles are exact according to IEEE 754; but I'm not sure whether, when calling C functions with integer-type parameters, I need to worry about rounding doubles manually, or whether it is taken care of when the doubles have exact integer values.
Yes, if the integer value fits in an int.
A double could represent integer values that are out of range for your int type. For example, 123123.0 cannot be converted to an int if your int type has only 16 bits.
It's also not guaranteed that a double can represent every value a particular type can represent. IEEE 754 uses something like 52 or 53 bits for the mantissa. If your long has 64 bits, then converting a very large long to double and back might not give the same value.
As Daniel Fischer stated, if the value of the integer part of the double (in your case, the double exactly) is representable in the type you are converting to, the result is exact. If the value is out of range of the destination type, the behavior is undefined. “Undefined” means the standard allows any behavior: You might get the closest representable number, you might get zero, you might get an exception, or the computer might explode. (Note: While the C standard permits your computer to explode, or even to destroy the universe, it is likely the manufacturer’s specifications impose a stricter limit on the behavior.)
It will do it correctly if it really is a true integer, which you might be assured of in some contexts. But if the value is the result of previous floating-point calculations, you cannot easily know that.
Why not explicitly calculate the value with the floor() function, as in long value = floor(x + 0.5);? Or, even better, use the modf() function to check for an integer value.
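A sketch of both suggestions (modf() and floor() are standard <math.h> functions):

#include <math.h>
#include <stdio.h>

int main(void) {
    double x = 123123.0;

    /* modf() splits x into integral and fractional parts, so we can
       verify the value really is a whole number before converting. */
    double ipart;
    if (modf(x, &ipart) == 0.0)
        printf("exact integer: %ld\n", (long)ipart); /* 123123 */

    /* floor(x + 0.5) rounds to nearest before truncating. */
    long value = (long)floor(x + 0.5);
    printf("%ld\n", value);                          /* 123123 */
    return 0;
}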
Yes, it will hold the exact value you give it, because you entered it in code. A calculation can sometimes yield 0.99999999999, for example, but that is due to error in calculating with doubles, not their storage capacity.

Does floor() return something that's exactly representable?

In C89, floor() returns a double. Is the following guaranteed to work?
double d = floor(3.0 + 0.5);
int x = (int) d;
assert(x == 3);
My concern is that the result of floor might not be exactly representable in IEEE 754. So d gets something like 2.99999, and x ends up being 2.
For the answer to this question to be yes, all integers within the range of an int have to be exactly representable as doubles, and floor must always return that exactly represented value.
All integers can have exact floating point representation if your floating point type supports the required mantissa bits. Since double uses 53 bits for mantissa, it can store all 32-bit ints exactly. After all, you could just set the value as mantissa with zero exponent.
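A quick sketch checking that claim at the extremes (assuming a 32-bit int):

#include <limits.h>
#include <stdio.h>

int main(void) {
    /* With a 53-bit mantissa, every 32-bit int fits exactly. */
    printf("%d\n", INT_MAX == (int)(double)INT_MAX); /* 1 */
    printf("%d\n", INT_MIN == (int)(double)INT_MIN); /* 1 */
    return 0;
}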
If the result of floor() isn't exactly representable, what do you expect the value of d to be? Surely if you've got the representation of a floating-point number in a variable, then by definition it's exactly representable, isn't it? You've got the representation in d...
(In addition, Mehrdad's answer is correct for 32-bit ints. In a compiler with a 64-bit double and a 64-bit int, you've got more problems, of course...)
EDIT: Perhaps you meant "the theoretical result of floor(), i.e. the largest integer value less than or equal to the argument, may not be representable as an int". That's certainly true. Simple way of showing this for a system where int is 32 bits:
int max = 0x7fffffff;
double number = max;
number += 10.0;
double f = floor(number);
int oops = (int) f;
I can't remember offhand what C does when a conversion from floating point to integer overflows (per 6.3.1.4, quoted earlier, the behavior is undefined)... but it's going to happen here.
EDIT: There are other interesting situations to consider too. Here's some C# code and results - I'd imagine at least similar things would happen in C. In C#, double is defined to be 64 bits and so is long.
using System;

class Test
{
    static void Main()
    {
        FloorSameInteger(long.MaxValue / 2);
        FloorSameInteger(long.MaxValue - 2);
    }

    static void FloorSameInteger(long original)
    {
        double convertedToDouble = original;
        double flooredToDouble = Math.Floor(convertedToDouble);
        long flooredToLong = (long) flooredToDouble;

        Console.WriteLine("Original value: {0}", original);
        Console.WriteLine("Converted to double: {0}", convertedToDouble);
        Console.WriteLine("Floored (as double): {0}", flooredToDouble);
        Console.WriteLine("Converted back to long: {0}", flooredToLong);
        Console.WriteLine();
    }
}
Results:
Original value: 4611686018427387903
Converted to double: 4.61168601842739E+18
Floored (as double): 4.61168601842739E+18
Converted back to long: 4611686018427387904

Original value: 9223372036854775805
Converted to double: 9.22337203685478E+18
Floored (as double): 9.22337203685478E+18
Converted back to long: -9223372036854775808
In other words:
(long) floor((double) original)
isn't always the same as original. This shouldn't come as any surprise - there are more long values than doubles (given the NaN values) and plenty of doubles aren't integers, so we can't expect every long to be exactly representable. However, all 32 bit integers are representable as doubles.
I think you're a bit confused about what you want to ask. floor(3 + 0.5) is not a very good example, because 3, 0.5, and their sum are all exactly representable in any real-world floating point format. floor(0.1 + 0.9) would be a better example, and the real question here is not whether the result of floor is exactly representable, but whether inexactness of the numbers prior to calling floor will result in a return value different from what you would expect, had all numbers been exact. In this case, I believe the answer is yes, but it depends a lot on your particular numbers.
I invite others to criticize this approach if it's bad, but one possible workaround might be to multiply your number by (1.0+0x1p-52) or something similar prior to calling floor (perhaps using nextafter would be better). This could compensate for cases where an error in the last binary place of the number causes it to fall just below rather than exactly on an integer value, but it will not account for errors which have accumulated over a number of operations. If you need that level of numeric stability/exactness, you need to either do some deep analysis or use an arbitrary-precision or exact-math library which can handle your numbers correctly.
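Here is a sketch of that workaround (using C99 hex-float notation for the nudge factor; whether a nudge is appropriate depends entirely on your numbers, as the answer says):

#include <math.h>
#include <stdio.h>

int main(void) {
    /* Ten additions of 0.1 accumulate to just below 1.0 ... */
    double x = 0.0;
    for (int i = 0; i < 10; i++)
        x += 0.1;

    printf("%.17g\n", x);                       /* 0.99999999999999989 */
    printf("%g\n", floor(x));                   /* 0 */

    /* ... so nudging up by one part in 2^52 before floor() recovers
       the intended result. */
    printf("%g\n", floor(x * (1.0 + 0x1p-52))); /* 1 */
    return 0;
}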
