Why might casting double to int give different results? - c

I am using a fixed-point decimal number (stored in a uint16_t) to hold a percentage with 2 fractional digits. I have found that the way I cast the double value to an integer makes a difference in the resulting value.
const char* testString = "99.85";
double percent = atof(testString);
double hundred = 100;
uint16_t reInt1 = (uint16_t)(hundred * percent);
double stagedDouble = hundred * percent;
uint16_t reInt2 = (uint16_t)stagedDouble;
Example output:
percent: 99.850000
stagedDouble: 9985.000000
reInt1: 9984
reInt2: 9985
The error is visible in about 47% of all values between 0 and 10000 (of the fixed-point representation). It does not appear at all when casting via stagedDouble, and I do not understand why the two integers are different. I am using GCC 6.3.0.
Edit:
Improved the code snippet to show the percent variable and to unify the coefficient between the two statements. Changing 100 into a double might look like a change that could affect the output, but it does not change a thing in my program.

Is percent a float? If so, look at what types you're multiplying.
reInt1 is double * float and stagedDouble is int * float. Mixing up floating point math can cause these types of rounding errors.
Changing the 100's to be both double or both int results in the same answer.

The reported behavior is consistent with percent being declared float, and the use of IEEE-754 basic 32-bit and 64-bit binary floating-point for float and double.
uint16_t reInt1 = (uint16_t)(100.0 * percent);
Since 100.0 is a double constant, this converts percent to double, performs a multiplication in double, and converts the result to uint16_t. The multiplication may have a very slight rounding error, up to ½ ULP of the double format, a relative error around 2^−53.
double stagedDouble = 100 * percent;
uint16_t reInt2 = (uint16_t)stagedDouble;
Since 100 is an int constant, this converts 100 to float, performs a multiplication in float, and converts the result to uint16_t. The rounding error in the multiplication may be up to ½ ULP of the float format, a relative error around 2^−24.
Since all of the values are near hundredths of an integer, a 50:50 ratio of errors up:down would make about half the results just under what is needed for the integer threshold. In the multiplications, all those with values that are 0, 25, 50, or 75 one-hundredths would be exact (because 25/100 is ¼, which is exactly representable in binary floating-point), so 96/100 would have rounding errors. If the directions of the float and double rounding errors behave as independent, uniform random variables, about half would round in different directions, producing different results, giving about 48% mismatches, which is consistent with the 47% reported in the question.
(However, when I measure the actual results, I get 42% differences between the float and double methods. I suspect that has something to do with the trailing bits in the float multiplication before rounding—the distribution might not act like a uniform distribution of two possibilities. It may be the OP’s code prepares the percent values in some way other than dividing an integer value by 100.)
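For reference, here is a minimal sketch that counts how often the two conversion paths disagree, assuming (as this answer does) that percent is held in a float and produced by dividing an integer by 100; it is not the OP's actual code:
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    int mismatches = 0;
    for (int i = 0; i <= 10000; i++) {
        float percent = (float)i / 100.0f;              /* e.g. 99.85f */
        uint16_t reInt1 = (uint16_t)(100.0 * percent);  /* multiply in double */
        double stagedDouble = 100 * percent;            /* multiply in float, then widen */
        uint16_t reInt2 = (uint16_t)stagedDouble;
        if (reInt1 != reInt2)
            mismatches++;
    }
    printf("mismatches: %d of 10001\n", mismatches);
    return 0;
}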

Related

Nonintuitive result of the assignment of a double precision number to an int variable in C

Could someone give me an explanation why I get two different numbers, resp. 14 and 15, as output from the following code?
#include <stdio.h>
int main()
{
double Vmax = 2.9;
double Vmin = 1.4;
double step = 0.1;
double a =(Vmax-Vmin)/step;
int b = (Vmax-Vmin)/step;
int c = a;
printf("%d %d",b,c); // 14 15, why?
return 0;
}
I expect to get 15 in both cases but it seems I'm missing some fundamentals of the language.
I am not sure if it's relevant, but I was doing the test in CodeBlocks. However, if I type the same lines of code into some online compiler (this one, for example) I get an answer of 15 for both printed variables.
... why I get two different numbers ...
Aside from the usual floating-point issues, the values of b and c are arrived at by different routes. c is calculated by first saving the quotient in the double a.
double a =(Vmax-Vmin)/step;
int b = (Vmax-Vmin)/step;
int c = a;
C allows intermediate floating-point math to be computed using wider types. Check the value of FLT_EVAL_METHOD from <float.h>.
Except for assignment and cast (which remove all extra range and precision), ...
-1 indeterminable;
0 evaluate all operations and constants just to the range and precision of the type;
1 evaluate operations and constants of type float and double to the range and precision of the double type, evaluate long double operations and constants to the range and precision of the long double type;
2 evaluate all operations and constants to the range and precision of the long double type.
C11dr §5.2.4.2.2 9
The OP reported 2.
By saving the quotient in double a = (Vmax-Vmin)/step;, precision is forced to double whereas int b = (Vmax-Vmin)/step; could compute as long double.
This subtle difference results from (Vmax-Vmin)/step (computed perhaps as long double) being saved as a double versus remaining a long double: one value ends up as 15 (or just above), the other just under 15. Truncation to int amplifies this difference into 15 versus 14.
On another compiler, the results may both have been the same due to FLT_EVAL_METHOD < 2 or other floating-point characteristics.
Conversion from a floating-point number to int is unforgiving for values near a whole number. It is often better to round() or lround(). The best solution is situation dependent.
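As a minimal sketch of both suggestions (printing FLT_EVAL_METHOD and rounding instead of truncating; link with -lm on some systems):
#include <float.h>
#include <math.h>
#include <stdio.h>

int main(void)
{
    double Vmax = 2.9, Vmin = 1.4, step = 0.1;

    /* How wide the intermediate arithmetic may be on this implementation. */
    printf("FLT_EVAL_METHOD = %d\n", FLT_EVAL_METHOD);

    int truncated = (Vmax - Vmin) / step;        /* sensitive to a result just below 15 */
    long rounded = lround((Vmax - Vmin) / step); /* rounds to the nearest integer */

    printf("truncated = %d, rounded = %ld\n", truncated, rounded);
    return 0;
}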
This is indeed an interesting question, here is what happens precisely in your hardware. This answer gives the exact calculations with the precision of IEEE double precision floats, i.e. 52 bits mantissa plus one implicit bit. For details on the representation, see the wikipedia article.
Ok, so you first define some variables:
double Vmax = 2.9;
double Vmin = 1.4;
double step = 0.1;
The respective values in binary will be
Vmax = 10.111001100110011001100110011001100110011001100110011
Vmin = 1.0110011001100110011001100110011001100110011001100110
step = .00011001100110011001100110011001100110011001100110011010
If you count the bits, you will see that I have given the first bit that is set plus 52 bits to the right. This is exactly the precision at which your computer stores a double. Note that the value of step has been rounded up.
Now you do some math on these numbers. The first operation, the subtraction, results in the precise result:
10.111001100110011001100110011001100110011001100110011
- 1.0110011001100110011001100110011001100110011001100110
--------------------------------------------------------
1.1000000000000000000000000000000000000000000000000000
Then you divide by step, which has been rounded up by your compiler:
1.1000000000000000000000000000000000000000000000000000
/ .00011001100110011001100110011001100110011001100110011010
--------------------------------------------------------
1110.1111111111111111111111111111111111111111111111111100001111111111111
Due to the rounding of step, the result is a tad below 15. Unlike before, I have not rounded immediately, because that is precisely where the interesting stuff happens: Your CPU can indeed store floating point numbers of greater precision than a double, so rounding does not take place immediately.
So, when you convert the result of (Vmax-Vmin)/step directly to an int, your CPU simply cuts off the bits after the binary point (this is how the implicit double -> int conversion is defined by the language standards):
1110.1111111111111111111111111111111111111111111111111100001111111111111
cutoff to int: 1110
However, if you first store the result in a variable of type double, rounding takes place:
1110.1111111111111111111111111111111111111111111111111100001111111111111
rounded: 1111.0000000000000000000000000000000000000000000000000
cutoff to int: 1111
And this is precisely the result you got.
The "simple" answer is that those seemingly-simple numbers 2.9, 1.4, and 0.1 are all represented internally as binary floating point, and in binary, the number 1/10 is represented as the infinitely-repeating binary fraction 0.00011001100110011...[2] . (This is analogous to the way 1/3 in decimal ends up being 0.333333333... .) Converted back to decimal, those original numbers end up being things like 2.8999999999, 1.3999999999, and 0.0999999999. And when you do additional math on them, those .0999999999's tend to proliferate.
And then the additional problem is that the path by which you compute something -- whether you store it in intermediate variables of a particular type, or compute it "all at once", meaning that the processor might use internal registers with greater precision than type double -- can end up making a significant difference.
The bottom line is that when you convert a double back to an int, you almost always want to round, not truncate. What happened here was that (in effect) one computation path gave you 15.0000000001 which truncated down to 15, while the other gave you 14.999999999 which truncated all the way down to 14.
See also question 14.4a in the C FAQ list.
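To make that last point concrete, a small sketch with hypothetical values standing in for the two computation paths:
#include <math.h>
#include <stdio.h>

int main(void)
{
    double just_under = 14.999999999;  /* what one computation path might produce */
    double just_over  = 15.000000001;  /* what the other might produce */

    /* Truncation pulls the two results apart... */
    printf("truncated: %d and %d\n", (int)just_under, (int)just_over);
    /* ...while rounding to nearest gives the same answer for both. */
    printf("rounded:   %d and %d\n", (int)round(just_under), (int)round(just_over));
    return 0;
}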
An equivalent problem is analyzed in analysis of C programs for FLT_EVAL_METHOD==2.
If FLT_EVAL_METHOD==2:
double a =(Vmax-Vmin)/step;
int b = (Vmax-Vmin)/step;
int c = a;
computes b by evaluating a long double expression and then truncating it to an int, whereas c is evaluated as a long double, truncated to a double, and then truncated to an int.
So the two values are not obtained by the same process, and this may lead to different results, because floating types do not provide exact arithmetic.
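A sketch of one way to make the two paths identical, relying on the fact that a cast removes all extra range and precision (per the standard text quoted above):
#include <stdio.h>

int main(void)
{
    double Vmax = 2.9;
    double Vmin = 1.4;
    double step = 0.1;

    double a = (Vmax - Vmin) / step;
    /* The cast to double discards any extra (long double) precision,
       so this follows the same route as the assignment to a. */
    int b = (int)(double)((Vmax - Vmin) / step);
    int c = a;

    printf("%d %d\n", b, c);  /* both now come from a double-rounded quotient */
    return 0;
}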

Precision loss / rounding difference when directly assigning double result to an int

Is there a reason why converting from a double to an int performs as expected in this case:
double value = 45.33;
double multResult = (double) value*100.0; // assign to double
int convert = multResult; // assign to int
printf("convert = %d\n", convert); // prints 4533 as expected
But not in this case:
double value = 45.33;
int multResultInt = (double) value*100.0; // assign directly to int
printf("multResultInt = %d\n", multResultInt); // prints 4532??
It seems to me there should be no difference. In the second case the result is still first stored as a double before being converted to an int unless I am not understanding some difference between casts and hard assignments.
There is indeed no difference between the two in the abstract, but compilers are allowed to take some liberties when it comes to floating point computations. For example, they are free to use higher precision for intermediate results, and higher precision still means different precision, so the results may vary.
Some compilers provide switches to always drop extra precision and convert all intermediate results to the prescribed floating point format (say 64-bit double-precision numbers). This will make the code slower, however.
Specifically, the number 45.33 cannot be represented exactly as a floating point value (it is a periodic number when expressed in binary and would require an infinite number of bits). When you multiply that value by 100 you may not get an integer, but something very close to one (just below or just above).
Conversion or casting to int is performed using truncation: something very close to 4533 but just below it becomes 4532, while just above becomes 4533, even if the difference is incredibly tiny, say 1E-300.
To avoid problems, account for numeric accuracy in your code. If you are doing a computation that depends on exact values of floating point numbers, then you are using the wrong tool.
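If the intent is "nearest integer" rather than "whatever truncation gives", a sketch of the usual fix is to round explicitly before the conversion:
#include <math.h>
#include <stdio.h>

int main(void)
{
    double value = 45.33;
    /* lround() rounds to the nearest integer, so products that land at
       4532.999... or 4533.000...1 both end up as 4533. */
    long multResult = lround(value * 100.0);
    printf("multResult = %ld\n", multResult);
    return 0;
}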
#6502 has given you the theory, here's how to look at things experimentally
double v = 45.33;
int x = v * 100.0;
printf("x=%d v=%.20lf v100=%.20lf\n", x, v, v * 100.0 );
On my machine, this prints
x=4533 v=45.32999999999999829470 v100=4533.00000000000000000000
The value 45.33 does not have an exact representation when encoded as a 64-bit IEEE-754 floating point number. The actual value of v is slightly lower than the intended value due to the limited precision of the encoding.
So why does multiplying by 100.0 fix the problem on some machines? One possibility is that the multiplication is done with 80 bits of precision and then rounded to fit into a 64-bit result. The 80-bit number 4532.999... will round to 4533 when converted to 64 bits.
On your machine, the multiplication is evidently done with 64 bits of precision, and I would expect that v100 will print as 4532.999....
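A sketch of how to check that on a given machine: print FLT_EVAL_METHOD and force the product down to 64 bits with a cast before the conversion to int (a cast is required to discard any extra precision):
#include <float.h>
#include <stdio.h>

int main(void)
{
    double v = 45.33;

    printf("FLT_EVAL_METHOD = %d\n", FLT_EVAL_METHOD);

    int direct = v * 100.0;           /* may truncate a wider (e.g. 80-bit) intermediate */
    int forced = (double)(v * 100.0); /* the cast rounds the product to 64 bits first */

    printf("direct = %d, forced = %d\n", direct, forced);
    return 0;
}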

c floating point round down

If I run the following:
int tokenIdx=ERROR; //ERROR=8
tokens[tokenIdx] = 57; //tokens is of type int[]
int totalTokens = 100;
int percent = (int)((100.0 * tokens[tokenIdx])/(float)totalTokens);
printf("%d%%\n", percent);
int percent2 = (int)(100.0*(tokens[tokenIdx]/(1.0*totalTokens)));
printf("%d%%\n", percent2);
the output is:
57%
56%
Why is this happening?
Because 5700.0 and 100.0 can both be represented exactly as floating point numbers, and their ratio is exactly 57.0. On the other hand, 57.0/100.0 cannot be represented exactly as a floating-point number, and multiplying it by 100.0 will not produce exactly 57.0. If it produces slightly less than 57.0 (as seems to be the case), then casting to (int), which truncates, will result in the integer 56.
Floating point rounding error. In your first case, you're doing a double divided by a float, in the second, it's a double divided by a double. The int conversion is throwing away the fractional portion, which in the second case is probably .999999999998 or something like it.
Lessons learned: floating point isn't precise, mind your conversions, watch your types
If you really honestly want an integer percentage, do:
int percent = 100 * value / total;
where value and total are both ints.
If you need more precision (say tenths), consider doing the arithmetic in thousandths and dividing down to a float:
float percent = (1000 * value / total) / 10.0f;
Provided that 1000 * value won't overflow in your problem domain.
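A self-contained sketch of both suggestions, using the numbers from the question:
#include <stdio.h>

int main(void)
{
    int value = 57;   /* tokens[tokenIdx] in the question */
    int total = 100;  /* totalTokens */

    /* Pure integer arithmetic: no rounding error at all. */
    int percent = 100 * value / total;
    printf("%d%%\n", percent);            /* 57% */

    /* Tenths of a percent: scale up in integers, divide down once. */
    float fine = (1000 * value / total) / 10.0f;
    printf("%.1f%%\n", fine);             /* 57.0% */
    return 0;
}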

Multiplying two floats doesn't give exact result

I am trying to multiply two floats as follows:
float number1 = 321.12;
float number2 = 345.34;
float rexsult = number1 * number2;
The result I want to see is 110895.582, but when I run the code it just gives me 110896. Most of the time I'm having this issue. Any calculator gives me the exact result with all decimals. How can I achieve that result?
edit : It's C code. I'm using XCode iOS simulator.
There's a lot of rounding going on.
float a = 321.12; // this number will be rounded
float b = 345.34; // this number will also be rounded
float r = a * b; // and this number will be rounded too
printf("%.15f\n", r);
I get 110895.578125000000000 after the three separate roundings.
If you want more than 6 decimal digits' worth of precision, you will have to use double and not float. (Note that I said "decimal digits' worth", because you don't get decimal digits, you get binary.) As it stands, 1/2 ULP of error (a worst-case bound for a perfectly rounded result) is about 0.004.
If you want exactly rounded decimal numbers, you will have to use a specialized decimal library for such a task. A double has more than enough precision for scientists, but if you work with money everything has to be 100% exact. No floating point numbers for money.
Unlike integers, floating point numbers take some real work before you can get accustomed to their pitfalls. See "What Every Computer Scientist Should Know About Floating-Point Arithmetic", which is the classic introduction to the topic.
Edit: Actually, I'm not sure that the code rounds three times. It might round five times, since the constants for a and b might be rounded first to double-precision and then to single-precision when they are stored. But I don't know the rules of this part of C very well.
You will never get the exact result that way.
First of all, number1 ≠ 321.12 because that value cannot be represented exactly in a base-2 system. You'll need an infinite number of bits for it.
The same holds for number2 ≠ 345.34.
So you start with inexact values in the first place.
Then the product gets rounded as well, because multiplication can double the number of significant digits, but the result has to be stored back into a float if you multiply floats.
You probably want to use a base-10 representation for your numbers. Or, if your numbers only have 2 fractional decimal digits, you can use integers (32-bit integers are sufficient in this case, but you may end up needing 64-bit):
32112 * 34534 = 1108955808.
That represents 321.12 * 345.34 = 110895.5808.
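A sketch of that integer approach, keeping two implied decimal places per operand (so the product carries four):
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    /* Two implied decimal places: 32112 means 321.12, 34534 means 345.34. */
    int32_t a = 32112;
    int32_t b = 34534;

    /* The product has four implied decimal places: 1108955808 means 110895.5808. */
    int64_t product = (int64_t)a * b;

    printf("%" PRId64 ".%04" PRId64 "\n", product / 10000, product % 10000);
    return 0;
}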
Since you are using C, you can easily control the printed precision with "%.xf", where x is the number of decimal places you want.
For example:
float n1 = 321.12;
float n2 = 345.34;
float result = n1 * n2;
printf("%.20f", result);
Output:
110895.57812500000000000000
However, note that float only gives six digits of precision. For better precision use double.
Floating point variables are only approximate representations, not precise ones. Not every number can "fit" into a float variable. For example, there is no way to put 1/10 (0.1) into a binary variable exactly, just as it is not possible to write 1/3 exactly in decimal (you can only approximate it with an endless 0.33333...).
When outputting such variables, various rounding options are usually applied. Unless you set them all, you can never be sure which are in effect. This is especially true for the C++ << stream operators, since the stream can be told how to round before << is used.
Printf also does some rounding. Consider http://codepad.org/LLweoeHp:
float t = 0.1f;
printf("result: %f\n", t);
--
result: 0.100000
Well, it looks fine. Why? Because printf defaulted to some precision and rounded the output. Let's dial in 50 places after the decimal point: http://codepad.org/frUPOvcI
float t = 0.1f;
printf("result: %.50f\n", t);
--
result: 0.10000000149011611938476562500000000000000000000000
That's different, isn't it? After the 625 the float ran out of precision to hold more data; that's why we see zeroes.
A double can hold more digits, but 0.1 in binary is not finite. Double has to give up, eventually: http://codepad.org/RAd7Yu2r
double t = 0.1;
printf("result: %.70f\n", t);
--
result: 0.1000000000000000055511151231257827021181583404541015625000000000000000
In your example, 321.12 alone is enough to cause trouble: http://codepad.org/cgw3vUKn
float t = 321.12f;
printf("and the result is: %.50f\n", t);
result: 321.11999511718750000000000000000000000000000000000000
This is why one has to round floating point values before presenting them to humans.
Calculator programs don't use floats or doubles at all. They implement a decimal number format, e.g.:
struct decimal
{
    int mantissa; // meaningful digits
    int exponent; // number of decimal zeroes
};
Of course that requires reimplementing all the operations: addition, subtraction, multiplication and division. Or just look for a decimal library.
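A rough sketch of how multiplication might look with such a struct (a hypothetical helper, widening the mantissa and treating the exponent as a power of ten; a real decimal library does much more):
#include <stdio.h>

struct decimal
{
    long long mantissa;  /* meaningful digits */
    int exponent;        /* power of ten: value = mantissa * 10^exponent */
};

/* Multiplication is exact: mantissas multiply, exponents add. */
static struct decimal decimal_mul(struct decimal a, struct decimal b)
{
    struct decimal r = { a.mantissa * b.mantissa, a.exponent + b.exponent };
    return r;
}

int main(void)
{
    struct decimal n1 = { 32112, -2 };  /* 321.12 */
    struct decimal n2 = { 34534, -2 };  /* 345.34 */
    struct decimal r = decimal_mul(n1, n2);

    printf("%lld * 10^%d\n", r.mantissa, r.exponent);  /* 1108955808 * 10^-4 */
    return 0;
}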

How to set precision of a float

Can someone explain me how to choose the precision of a float with a C function?
Examples:
theFatFunction(0.666666666, 3) returns 0.667
theFatFunction(0.111111111, 3) returns 0.111
You can't do that, since precision is determined by the data type (i.e. float or double or long double). If you want to round it for printing purposes, you can use the proper format specifiers in printf(), i.e. printf("%0.3f\n", 0.666666666).
You can't. Precision depends entirely on the data type. You've got float and double and that's it.
Floats have a static, fixed precision. You can't change it. What you can sometimes do is round the number.
See this page, and consider scaling the values yourself by powers of 10. Note that not all numbers are exactly representable as floats, either.
Most systems follow the IEEE-754 floating point standard, which defines several floating point types.
On these systems, float is usually the IEEE-754 binary32 single precision type: it has 24 bits of precision. double is the binary64 double precision type; it has 53 bits of precision. The precision in bits is defined by the IEEE-754 standard and cannot be changed.
When you print values of floating point types using functions of the fprintf family (e.g., printf), the precision specifies the number of digits printed after the decimal point for %f conversions and defaults to 6 digits. You can change the default precision with a . followed by a decimal number in the conversion specification. For example:
printf("%.10f\n", 4.0 * atan(1.0)); // prints 3.1415926536
whereas
printf("%f\n", 4.0 * atan(1.0)); // prints 3.141593
It might be roughly the following steps:
Add 0.666666666 with 0.0005 (we get 0.667166666)
Multiply by 1000 (we get 667.166666)
Shift the number to an int (we get 667)
Shift it back to float (we get 667.0)
Divide by 1000 (we get 0.667)
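Those steps translate into a small sketch (theFatFunction is the hypothetical name from the question; this version only handles non-negative inputs; link with -lm on some systems):
#include <math.h>
#include <stdio.h>

/* Round x to the given number of decimal places, following the steps above. */
float theFatFunction(float x, int digits)
{
    float scale = powf(10.0f, (float)digits);
    return (float)(long)(x * scale + 0.5f) / scale;
}

int main(void)
{
    printf("%.3f\n", theFatFunction(0.666666666f, 3));  /* 0.667 */
    printf("%.3f\n", theFatFunction(0.111111111f, 3));  /* 0.111 */
    return 0;
}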
Precision is determined by the data type (i.e. float or double or long double).
If you want to round it for printing purposes, you can use the proper format specifiers in printf(), i.e.
printf("%0.3f\n", 0.666666666) //will print 0.667 in c
Now if you want to round it for calculation purposes, first multiply the float by 10^(number of digits), typecast to int, do the calculation, then typecast back to float and divide by the same power of 10:
float f=0.66666;
f *= 1000; // 666.660
int i = (int)f; // 666
i = 2*i; // 1332
f = i; // 1332
f /= 1000; // 1.332
printf("%f",f); //1.332000
