How to overflow a float? - c

Working my way through exercise 2-1 from "The C Programming Language", where one should calculate, on the local machine, the range of different types like char, short, int etc., but also float and double. For everything except float and double I watch for the overflow to happen and can thus calculate the max/min values. However, with floats this is still not working.
So, the question is: why does this code print the same value twice? I thought the second line should print inf.
float f = 1.0;
printf("%f\n",FLT_MAX);
printf("%f\n",FLT_MAX + f);

Try multiplying by 10, and it will overflow. The reason it doesn't overflow is the same reason why adding a small float to an already very large float doesn't actually change the value at all: it's a floating-point format, meaning the number of digits of precision is limited.
Or, adding at least that last significant digit would likely work:
float f = 3.402823e38f; // FLT_MAX
f = f + 0.000001e38f; // this should result in overflow

The reason why it prints the same value twice is that 1.0 is too small to be added to FLT_MAX. A float usually has 24 bits for the mantissa and 8 bits for the exponent. If you have a very large value with an exponent of 127, you would need a mantissa with at least 127 bits to be able to add 1.0.
As an example, the same problem exists with decimal (and any other) exponential values:
If you have a number with 3 significant digits like 1.00*10^6, you can't add 1 to it, because the result would be 1'000'001, and that requires 7 significant digits.
You could overflow a float by doubling the value repeatedly.
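For instance, a minimal sketch (assuming an IEEE-754 float and isinf() from <math.h>) that counts how many doublings it takes to push 1.0f past FLT_MAX:

#include <math.h>
#include <stdio.h>

int main(void)
{
    float f = 1.0f;
    int doublings = 0;

    // Keep doubling until the value overflows to infinity.
    while (!isinf(f)) {
        f *= 2.0f;
        doublings++;
    }

    printf("overflowed to %f after %d doublings\n", f, doublings);
    return 0;
}

With a 32-bit IEEE float this reports 128 doublings, since 2^127 is still representable but 2^128 is not.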

Related

What happens if we keep dividing float 1.0 by 2 until it reaches zero?

float f = 1.0;
while (f != 0.0) f = f / 2.0;
This loop runs 150 times using 32-bit precision. Why is that so? Is it getting rounded to zero?
In common C implementations, the IEEE-754 binary32 format is used for float. It is also called “single precision.” It is a binary based format where finite numbers are represented as ±f•2^e, where f is a 24-bit binary numeral in [1, 2) and e is an integer in [−126, 127].
In this format, 1 is represented as +1.00000000000000000000000₂•2^0. Dividing that by 2 yields ½, which is represented as +1.00000000000000000000000₂•2^−1. Dividing that by 2 yields +1.00000000000000000000000₂•2^−2, then +1.00000000000000000000000₂•2^−3, and so on until we reach +1.00000000000000000000000₂•2^−126.
When that is divided by two, the mathematical result is +1.00000000000000000000000₂•2^−127, but −127 is below the normal exponent range, [−126, 127]. Instead, the significand becomes denormalized; 2^−127 is represented by +0.10000000000000000000000₂•2^−126. Dividing that by 2 yields +0.01000000000000000000000₂•2^−126, then +0.00100000000000000000000₂•2^−126, +0.00010000000000000000000₂•2^−126, and so on until we get to +0.00000000000000000000001₂•2^−126.
At this point, we have done 149 divisions by 2; +0.00000000000000000000001₂•2^−126 is 2^−149.
When the next division is performed, the result would be 2^−150, but that is not representable in this format. Even with the lowest non-zero significand, 0.00000000000000000000001₂, and the lowest exponent, −126, we cannot get to 2^−150. The next lower representable number is +0.00000000000000000000000₂•2^−126, which equals 0.
So, the real-number-arithmetic result of the division would be 2^−150, but we cannot represent that in this format. The two nearest representable numbers are +0.00000000000000000000001₂•2^−126 just above it and +0.00000000000000000000000₂•2^−126 just below it. They are equally near 2^−150. The default rounding method is to take the nearest representable number and, in case of ties, to take the number with the even low digit. So +0.00000000000000000000000₂•2^−126 wins the tie, and that is produced as the result for the 150th division.
What happens is simply that your system has only a limited number of bits available for a variable, and hence limited precision; even though, mathematically, you can halve a number (!= 0) indefinitely without ever reaching zero, in a computer implementation that has a limited precision for a float variable, that variable will inevitably, at some stage, become indistinguishable from zero. The more bits your system uses, the more precision it has and the later this will happen, but at some stage it will.
Since I suppose this is meant to be C, I just implemented it in C (with a counter counting each iteration), and indeed it ran for 150 rounds until the loop ended. I also implemented it with a double, where it ran for 1075 iterations. Keep in mind, however, that the C standard does not define the exact precision of a float variable. In most implementations it's 32 bits for a float and 64 for a double. With a long double, I get 16,446 iterations.
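For reference, a minimal sketch of that counting experiment (assuming IEEE-754 binary32 for float; the count changes with the format):

#include <stdio.h>

int main(void)
{
    float f = 1.0f;
    int count = 0;

    // Halve until the value underflows to zero, counting the iterations.
    while (f != 0.0f) {
        f /= 2.0f;
        count++;
    }

    printf("reached zero after %d halvings\n", count); // 150 with binary32
    return 0;
}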

Nonintuitive result of the assignment of a double precision number to an int variable in C

Could someone give me an explanation why I get two different
numbers, resp. 14 and 15, as an output from the following code?
#include <stdio.h>
int main()
{
    double Vmax = 2.9;
    double Vmin = 1.4;
    double step = 0.1;
    double a = (Vmax-Vmin)/step;
    int b = (Vmax-Vmin)/step;
    int c = a;
    printf("%d %d", b, c); // 14 15, why?
    return 0;
}
I expect to get 15 in both cases but it seems I'm missing some fundamentals of the language.
I am not sure if it's relevant, but I was doing the test in CodeBlocks. However, if I type the same lines of code into some online compiler (this one, for example) I get an answer of 15 for both printed variables.
... why I get two different numbers ...
Aside from the usual floating-point issues, the values of b and c are arrived at by different computation paths. c is calculated by first saving the value as double a.
double a =(Vmax-Vmin)/step;
int b = (Vmax-Vmin)/step;
int c = a;
C allows intermediate floating-point math to be computed using wider types. Check the value of FLT_EVAL_METHOD from <float.h>.
Except for assignment and cast (which remove all extra range and precision), ...
-1 indeterminable;
0 evaluate all operations and constants just to the range and precision of the type;
1 evaluate operations and constants of type float and double to the range and precision of the double type, evaluate long double operations and constants to the range and precision of the long double type;
2 evaluate all operations and constants to the range and precision of the long double type.
C11dr §5.2.4.2.2 9
The OP reported FLT_EVAL_METHOD == 2.
By saving the quotient in double a = (Vmax-Vmin)/step;, precision is forced to double, whereas int b = (Vmax-Vmin)/step; could be computed as long double.
This subtle difference results from (Vmax-Vmin)/step (computed perhaps as long double) being saved as a double versus remaining a long double. One ends up as 15 (or just above), the other just under 15. int truncation amplifies this difference to 15 and 14.
On another compiler, the results may both have been the same due to FLT_EVAL_METHOD < 2 or other floating-point characteristics.
Conversion to int from a floating-point number is unforgiving with numbers near a whole number. It is often better to round() or lround(). The best solution is situation dependent.
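A sketch of that suggestion using the question's values, with lround() from <math.h>:

#include <math.h>
#include <stdio.h>

int main(void)
{
    double Vmax = 2.9;
    double Vmin = 1.4;
    double step = 0.1;

    // Truncation amplifies tiny floating-point errors near whole numbers...
    int truncated = (Vmax - Vmin) / step;        // may come out as 14 or 15

    // ...whereas rounding to the nearest integer is robust here.
    long rounded = lround((Vmax - Vmin) / step); // 15

    printf("truncated: %d  rounded: %ld\n", truncated, rounded);
    return 0;
}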
This is indeed an interesting question; here is what happens precisely in your hardware. This answer gives the exact calculations with the precision of IEEE double-precision floats, i.e. a 52-bit mantissa plus one implicit bit. For details on the representation, see the Wikipedia article.
Ok, so you first define some variables:
double Vmax = 2.9;
double Vmin = 1.4;
double step = 0.1;
The respective values in binary will be
Vmax = 10.111001100110011001100110011001100110011001100110011
Vmin = 1.0110011001100110011001100110011001100110011001100110
step = .00011001100110011001100110011001100110011001100110011010
If you count the bits, you will see that I have given the first bit that is set plus 52 bits to the right. This is exactly the precision at which your computer stores a double. Note that the value of step has been rounded up.
Now you do some math on these numbers. The first operation, the subtraction, results in the precise result:
10.111001100110011001100110011001100110011001100110011
- 1.0110011001100110011001100110011001100110011001100110
--------------------------------------------------------
1.1000000000000000000000000000000000000000000000000000
Then you divide by step, which has been rounded up by your compiler:
1.1000000000000000000000000000000000000000000000000000
/ .00011001100110011001100110011001100110011001100110011010
--------------------------------------------------------
1110.1111111111111111111111111111111111111111111111111100001111111111111
Due to the rounding of step, the result is a tad below 15. Unlike before, I have not rounded immediately, because that is precisely where the interesting stuff happens: Your CPU can indeed store floating point numbers of greater precision than a double, so rounding does not take place immediately.
So, when you convert the result of (Vmax-Vmin)/step directly to an int, your CPU simply cuts off the bits after the fractional point (this is how the implicit double -> int conversion is defined by the language standards):
1110.1111111111111111111111111111111111111111111111111100001111111111111
cutoff to int: 1110
However, if you first store the result in a variable of type double, rounding takes place:
1110.1111111111111111111111111111111111111111111111111100001111111111111
rounded: 1111.0000000000000000000000000000000000000000000000000
cutoff to int: 1111
And this is precisely the result you got.
The "simple" answer is that those seemingly-simple numbers 2.9, 1.4, and 0.1 are all represented internally as binary floating point, and in binary, the number 1/10 is represented as the infinitely-repeating binary fraction 0.00011001100110011...[2] . (This is analogous to the way 1/3 in decimal ends up being 0.333333333... .) Converted back to decimal, those original numbers end up being things like 2.8999999999, 1.3999999999, and 0.0999999999. And when you do additional math on them, those .0999999999's tend to proliferate.
And then the additional problem is that the path by which you compute something -- whether you store it in intermediate variables of a particular type, or compute it "all at once", meaning that the processor might use internal registers with greater precision than type double -- can end up making a significant difference.
The bottom line is that when you convert a double back to an int, you almost always want to round, not truncate. What happened here was that (in effect) one computation path gave you 15.0000000001 which truncated down to 15, while the other gave you 14.999999999 which truncated all the way down to 14.
See also question 14.4a in the C FAQ list.
An equivalent problem is analyzed in "Analysis of C programs for FLT_EVAL_METHOD==2".
If FLT_EVAL_METHOD==2:
double a =(Vmax-Vmin)/step;
int b = (Vmax-Vmin)/step;
int c = a;
computes b by evaluating a long double expression and truncating it to an int, whereas for c it evaluates the long double expression, truncates it to double and then to int.
So the two values are not obtained by the same process, and this may lead to different results, because floating types do not provide the usual exact arithmetic.

Using floorf to reduce the number of decimals

I would like to use the first five digits of a number for computation.
For example,
A floating point number: 4.23654897E-05
I wish to use 4.2365E-05. I tried the following:
#include <math.h>
#include <stdio.h>
float num = 4.23654897E-05;
int main(){
    float rounded_down = floorf(num * 10000) / 10000;
    printf("%f", rounded_down);
    return 0;
}
The output is 0.000000. The desired output is 4.2365E-05.
In short, say 52 bits are allocated for storing the mantissa. Is there a way to reduce the number of bits being allocated?
Any suggestions on how this can be done?
A number x that is positive and within the normal range can be rounded down approximately to five significant digits with:
double l = pow(10, floor(log10(x)) - 4);
double y = l * floor(x / l);
This is useful only for tinkering with floating-point arithmetic as a learning tool. The exact mathematical result is generally not exactly representable, because binary floating-point cannot represent most decimal values exactly. Additionally, rounding errors can occur in the pow, /, and * operations that may cause the result to differ slightly from the true mathematical result of rounding x to five significant digits. Also, poor implementations of log10 or pow can cause the result to differ from the true mathematical result.
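Wrapped into a runnable sketch with the question's value (the printed result is only approximately 4.2365e-05, for the reasons above):

#include <math.h>
#include <stdio.h>

int main(void)
{
    double x = 4.23654897E-05;

    // Scale so that five significant digits sit left of the decimal point,
    // round down, then scale back.
    double l = pow(10, floor(log10(x)) - 4);
    double y = l * floor(x / l);

    printf("%.4e\n", y); // approximately 4.2365e-05
    return 0;
}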
I'd go:
printf("%.6f", num);
Or you can try using snprintf() from stdio.h:
float num = 4.23654897E-05; char output[50];
snprintf(output, 50, "%f", num);
printf("%s", output);
The result is expected. The multiplication by 10000 yields 0.4236..., and the largest integer not greater than that is 0, so the result is 0. Rounding for display can be done with the %f format specifier, which prints the result to a given number of decimal places.
If you check the documentation of floorf you will see that, if no errors occur, it returns the largest integer value not greater than arg, that is ⌊arg⌋, where arg is the passed argument.
Without using floorf you can use the %e (or %E) format specifier to print it accordingly:
printf("%.4E",num);
which outputs:
4.2365E-05
After David's comment:
Your way of doing things is right, but the number you multiplied by is wrong. The thing is, 4.2365E-05 is 0.000042365.... If you multiply it by 10000 you get 0.42365.... floorf returns a float in this case; store it in a variable and you will be good to go. The rounded value will be in that variable. But you will see that the rounded-down value will be 0. That is what you got.
float rounded_down = floorf(num * 10000) / 10000;
This will hold the value rounded down to 4 digits after the decimal point (not in exponent notation with E or e). Don't confuse the value with the format specifier used to represent it.
What you need to do in order to get the result you want is move the significant digits to the left of the decimal point. To do that, multiply by a larger number (1e7 or 1e8, or as large as you need).
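For example, a sketch using a scale of 1e9 (chosen here, rather than the 1e7 or 1e8 mentioned above, so that five significant digits of this particular value land before the decimal point):

#include <math.h>
#include <stdio.h>

int main(void)
{
    float num = 4.23654897E-05f;

    // 1e9 moves five significant digits of this value left of the decimal point.
    float rounded_down = floorf(num * 1e9f) / 1e9f;

    printf("%.4E\n", rounded_down); // approximately 4.2365E-05
    return 0;
}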
I would like to use the first five digits of a number for computation.
In general, floating point numbers are encoded using binary and OP wants to use 5 significant decimal digits. This is problematic as numbers like 4.23654897E-05 and 4.2365E-05 are not exactly representable as a float/double. The best we can do is get close.
The floor*() approach has problems with 1) negative numbers (trunc() should have been used) and 2) values near x.99995, where rounding may change the number of digits. I strongly recommend against it here, as solutions employing it fail many corner cases.
The multiply-by-a-power-of-10, round, divide-by-the-same-power approach suffers from 1) the power-of-10 calculation (1e5 in this case) possibly not being exact, 2) rounding errors in the multiplication, and 3) overflow potential. The multiplication errors show up when the product is close to xxxxx.5. Often this intermediate calculation is done using wider double math, so the corner cases are rare. Converting with a cast to (some_int_type), which has limited range and truncates, is worse than the better round() or rint().
An approach that gets close to OP's goal: print to 5 significant digits using %e and convert back. Not highly efficient, yet handles all cases well.
#include <stdio.h>
#include <stdlib.h>

int main(void) {
    float num = 4.23654897E-05f;
    // sign d . dddd e sign expo + \0
    #define N (1 + 1 + 1 + 4 + 1 + 1 + 4 + 1)
    char buf[N*2]; // Use a generous buffer - I like 2x what I think is needed.
    // OP wants 5 significant digits so print 4 digits after the decimal point.
    sprintf(buf, "%.4e", num);
    float rounded = (float) atof(buf);
    printf("%.5e %s\n", rounded, buf);
}
Output
4.23650e-05 4.2365e-05
Why 5 in %.5e: Typical float will print up to 6 significant decimal digits as expected (research FLT_DIG), so 5 digits after the decimal point are printed. The exact value of rounded in this case was about 4.236500171...e-05 as 4.2365e-05 is not exactly representable as a float.

Multiplying two floats doesn't give exact result

I am trying to multiply two floats as follows:
float number1 = 321.12;
float number2 = 345.34;
float result = number1 * number2;
The result I want to see is 110895.582, but when I run the code it just gives me 110896. Most of the time I'm having this issue. Any calculator gives me the exact result with all decimals. How can I achieve that result?
edit : It's C code. I'm using XCode iOS simulator.
There's a lot of rounding going on.
float a = 321.12; // this number will be rounded
float b = 345.34; // this number will also be rounded
float r = a * b; // and this number will be rounded too
printf("%.15f\n", r);
I get 110895.578125000000000 after the three separate roundings.
If you want more than 6 decimal digits' worth of precision, you will have to use double and not float. (Note that I said "decimal digits' worth", because you don't get decimal digits, you get binary.) As it stands, 1/2 ULP of error (a worst-case bound for a perfectly rounded result) is about 0.004.
If you want exactly rounded decimal numbers, you will have to use a specialized decimal library for such a task. A double has more than enough precision for scientists, but if you work with money everything has to be 100% exact. No floating point numbers for money.
Unlike integers, floating point numbers take some real work before you can get accustomed to their pitfalls. See "What Every Computer Scientist Should Know About Floating-Point Arithmetic", which is the classic introduction to the topic.
Edit: Actually, I'm not sure that the code rounds three times. It might round five times, since the constants for a and b might be rounded first to double-precision and then to single-precision when they are stored. But I don't know the rules of this part of C very well.
You will never get the exact result that way.
First of all, number1 ≠ 321.12 because that value cannot be represented exactly in a base-2 system. You'll need an infinite number of bits for it.
The same holds for number2 ≠ 345.34.
So you start out with values that are already inexact.
Then the product will get rounded because multiplication gives you double the number of significant digits but the product has to be stored in float again if you multiply floats.
You probably want to use a base-10 representation for your numbers. Or, in case your numbers only have 2 fractional decimal digits, you can use integers (32-bit integers are sufficient in this case, but you may end up needing 64-bit):
32112 * 34534 = 1108955808.
That represents 321.12 * 345.34 = 110895.5808.
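A sketch of that integer approach, keeping two implied decimal places per factor (so the product has four):

#include <stdio.h>
#include <stdint.h>

int main(void)
{
    // Store the values as hundredths: 321.12 -> 32112, 345.34 -> 34534.
    int64_t a = 32112;
    int64_t b = 34534;

    // hundredths * hundredths = ten-thousandths, i.e. four implied decimal places.
    int64_t product = a * b; // 1108955808, meaning 110895.5808

    printf("%lld.%04lld\n", (long long)(product / 10000), (long long)(product % 10000));
    return 0;
}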
Since you are using C you could easily set the precision by using "%.xf" where x is the wanted precision.
For example:
float n1 = 321.12;
float n2 = 345.34;
float result = n1 * n2;
printf("%.20f", result);
Output:
110895.57812500000000000000
However, note that float only gives six digits of precision. For better precision use double.
Floating-point variables are only an approximate representation, not a precise one. Not every number can "fit" into a float variable. For example, there is no way to put 1/10 (0.1) into a binary variable, just as it's not possible to put 1/3 into a decimal one (you can only approximate it with an endless 0.33333...).
When outputting such variables, it's usual to apply various rounding options. Unless you set them all, you can never be sure which of them are applied. This is especially true for the C++ << stream operators, as the stream can be told how to round BEFORE <<.
Printf also does some rounding. Consider http://codepad.org/LLweoeHp:
float t = 0.1f;
printf("result: %f\n", t);
--
result: 0.100000
Well, it looks fine. Why? Because printf defaulted to some precision and rounded up the output. Let's dial in 50 places after decimal point: http://codepad.org/frUPOvcI
float t = 0.1f;
printf("result: %.50f\n", t);
--
result: 0.10000000149011611938476562500000000000000000000000
That's different, isn't it? After 625 the float ran out of capacity to hold more data, that's why we see zeroes.
A double can hold more digits, but 0.1 in binary is not finite. Double has to give up, eventually: http://codepad.org/RAd7Yu2r
double t = 0.1;
printf("result: %.70f\n", t);
--
result: 0.1000000000000000055511151231257827021181583404541015625000000000000000
In your example, 321.12 alone is enough to cause trouble: http://codepad.org/cgw3vUKn
float t = 321.12f;
printf("and the result is: %.50f\n", t);
result: 321.11999511718750000000000000000000000000000000000000
This is why one has to round floating-point values before presenting them to humans.
Calculator programs don't use floats or doubles at all. They implement a decimal number format, e.g.:
struct decimal
{
    int mantissa; // meaningful digits
    int exponent; // number of decimal zeroes
};
Of course, that requires reinventing all operations: addition, subtraction, multiplication and division. Or just look for a decimal library.
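For instance, multiplication for such a struct could look roughly like this (a sketch only, with a negative exponent used for fractional digits; a real decimal library would also normalize and guard against mantissa overflow):

#include <stdio.h>

struct decimal
{
    int mantissa; // meaningful digits
    int exponent; // decimal scale: value = mantissa * 10^exponent
};

// Multiply two decimals: multiply the mantissas, add the exponents.
struct decimal decimal_mul(struct decimal a, struct decimal b)
{
    struct decimal r;
    r.mantissa = a.mantissa * b.mantissa; // may overflow for large mantissas
    r.exponent = a.exponent + b.exponent;
    return r;
}

int main(void)
{
    struct decimal x = { 32112, -2 }; // 321.12
    struct decimal y = { 34534, -2 }; // 345.34
    struct decimal p = decimal_mul(x, y);

    printf("%d * 10^%d\n", p.mantissa, p.exponent); // 1108955808 * 10^-4, i.e. exactly 110895.5808
    return 0;
}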

Why is the addition of two float numbers incorrect in C?

I have a problem with the addition of two float numbers.
Code below:
float a = 30000.0f;
float b = 449972832.0f;
printf("%f\n", a+b);
Why is the output 450002816.000000? (The correct result should be 450002832.)
Floats are not represented exactly in C - see http://en.wikipedia.org/wiki/Floating_point#IEEE_754:_floating_point_in_modern_computers and http://en.wikipedia.org/wiki/Single_precision - so calculations with float can only give an approximate result.
This is especially apparent for larger values, since the gap between representable values is roughly a fixed percentage of the magnitude, so the absolute error grows with the value. When adding or subtracting two values, you get the worse precision of the two operands (and of the result).
Floating-point values cannot represent all integer values.
Remember that single-precision floating-point numbers only have 24 (or 23, depending on how you count) bits of precision (i.e. significant figures). So as values get larger, you begin to lose low-end precision, which is why the result of your calculation isn't quite "correct".
From wikipedia
Single precision, called "float" in the C language family, and "real" or "real*4" in Fortran. This is a binary format that occupies 32 bits (4 bytes) and its significand has a precision of 24 bits (about 7 decimal digits).
So your number doesn't actually fit in float. You can use double instead.
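A minimal check of that suggestion with the question's values (assuming the float expression is evaluated at single precision, i.e. FLT_EVAL_METHOD == 0):

#include <stdio.h>

int main(void)
{
    float  fa = 30000.0f, fb = 449972832.0f;
    double da = 30000.0,  db = 449972832.0;

    printf("float : %f\n", fa + fb); // 450002816.000000 - rounded to 24 bits of precision
    printf("double: %f\n", da + db); // 450002832.000000 - exact, fits easily in 53 bits
    return 0;
}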

Resources