Different answers when removing a printf statement - C

This is the link to the question on UVa online judge.
https://uva.onlinejudge.org/index.php?option=com_onlinejudge&Itemid=8&category=29&page=show_problem&problem=1078
My C code is
#include <stdio.h>

double avg(double *arr, int students)
{
    int i;
    double average = 0;
    for (i = 0; i < students; i++) {
        average = average + (*(arr + i));
    }
    average = average / students;
    int temp = average * 100;
    average = temp / 100.0;
    return average;
}

double mon(double *arr, int students, double average)
{
    int i;
    double count = 0;
    for (i = 0; i < students; i++) {
        if (*(arr + i) < average) {
            double temp = average - *(arr + i);
            int a = temp * 100;
            temp = a / 100.0;
            count = count + temp;
        }
    }
    return count;
}

int main(void)
{
    // your code goes here
    int students;
    scanf("%d", &students);
    while (students != 0) {
        double arr[students];
        int i;
        for (i = 0; i < students; i++) {
            scanf("%lf", &arr[i]);
        }
        double average = avg(arr, students);
        //printf("%lf\n", average);
        double money = mon(arr, students, average);
        printf("$%.2lf\n", money);
        scanf("%d", &students);
    }
    return 0;
}
One of the sample inputs and outputs is:
Input
3
0.01
0.03
0.03
0
Output
$0.01
My output is $0.00.
However, if I uncomment the line printf("%lf\n",average); the output is as follows:
0.02 //This is the average
$0.01
I am running the code on ideone.com.
Please explain why this is happening.

I believe I've found the culprit and a reasonable explanation.
On x86 processors, the FPU operates internally with extended precision, which is an 80-bit format. All of the floating point instructions operate with this precision. If a double is actually required, the compiler will generate code to convert the extended precision value down to a double precision value. Crucially, the commented-out printf forces such a conversion because the FPU registers must be saved and restored across that function call, and they will be saved as doubles (note that avg and mon are both inlined so no save/restore happens there).
In fact, instead of printf we can use the line static double dummy = average; to force the double conversion to occur, which also causes the bug to disappear: http://ideone.com/a1wadn
Your value of average is close to but not exactly 0.02 because of floating-point inaccuracies. When I do all the calculations explicitly with long double, and I print out the value of average, it is the following:
long double: 0.01999999999999999999959342418532
double: 0.02000000000000000041633363423443
Now you can see the problem. When you add the printf, average is forced to a double which pushes it above 0.02. But, without the printf, average will be a little less than 0.02 in its native extended precision format.
When you do int a=temp*100; the bug appears. Without the conversion, this makes a = 1. With the conversion, this makes a = 2.
To fix this, simply use int a=round(temp*100); - all your weird errors should vanish.
Of note, this bug is extremely sensitive to changes in the code. Anything that causes the registers to be saved (such as a printf pretty much anywhere) will actually cause the bug to vanish. Hence, this is an extremely good example of a heisenbug: a bug that vanishes when you try to investigate it.
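For reference, here is a minimal sketch (mine, not the original poster's code) of the round() fix applied to mon(); the helper name mon_fixed is illustrative, and the same change would be made to the int temp = average*100; line in avg(). round() lives in libm, so link with -lm on most toolchains.
#include <math.h>   /* round() */

double mon_fixed(double *arr, int students, double average)
{
    int i;
    double count = 0;
    for (i = 0; i < students; i++) {
        if (arr[i] < average) {
            double temp = average - arr[i];
            int a = (int)round(temp * 100);  /* round to the nearest cent instead of truncating */
            temp = a / 100.0;
            count = count + temp;
        }
    }
    return count;
}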

#nneonneo answered most of the issue well: slightly different compilations result in slightly different floating-point code that produces nearly the same answer, except that one result is just below 2.0 and the other is at or just above 2.0.
I'd like to add a note about the importance of not using conversion to int for floating-point rounding.
Code like double temp; ... int a=temp*100; accentuates this difference, giving a a value of 1 or 2, because conversion to int is effectively "truncate toward zero" - the fraction is dropped.
Rather than rounding to the nearest 0.01 with code like:
double temp;
...
int a = temp*100; // Problem is here
temp = a/100.0;
Do not use int at all. Use
double temp;
...
temp = round(temp*100.0)/100.0;
Not only does this provide more consistent answers (as temp is unlikely to have values near a half-cent), it also allows temp values outside the int range. temp = 1e13; int a = temp/100; certainly results in undefined behavior.
Do not use conversion to int to round floating-point numbers: use round()
roundf(), roundl(), floor(), ceil(), etc. may also be useful. #jeff
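A small illustrative program (mine, not from the answer above) showing why truncation and round() diverge for a value sitting just under a cent boundary, as in the question:
#include <math.h>
#include <stdio.h>

int main(void)
{
    double temp = 0.019999999999999997;       /* a value just below 0.02 */
    int truncated = temp * 100;               /* conversion to int truncates toward zero: 1 */
    int rounded = (int)round(temp * 100);     /* round to nearest: 2 */
    printf("truncated: %d  rounded: %d\n", truncated, rounded);
    return 0;
}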

You're dividing a double by an integer, which is integer division. In this case it will give a value of 0.
If you cast student to a double it should give you proper output.
average=average/(double)students;
There might be other locations this is needed depending on your arithmetic.

Related

VS Code shows different answers for two identical programs

I ran two programs that should compute the same thing, but they show different answers.
Code 1:
#include <stdio.h>

int main() {
    float far = 98.6;
    printf("%f", (far - 32) * 5 / 9);
    return 0;
}
Code 2:
#include <stdio.h>

int main() {
    float far = 98.6;
    float cel;
    cel = (far - 32) * 5 / 9;
    printf("%f", cel);
    return 0;
}
The first program gives 36.99999 as output and the second gives 37.00000.
Research FLT_EVAL_METHOD. This reports the intermediate floating-point math allowed.
printf("%d\n", FLT_EVAL_METHOD);
When this is non-zero, the two programs may produce different output, because printf("%f", (far-32)*5/9); can print the result of (far-32)*5/9 computed using double or float math.
In the 2nd case, (far-32)*5/9 is computed using float or double math, then saved as a float, and then printed. Its promotion to double as a printf() argument does not affect the value.
For deeper understanding, print far, cel, (far-32)*5/9 with "%a" and "%.17g" for greater detail.
In both cases, far is the float value 0x1.8a6666p+6 or 98.599998474121094...
As I see it, the first used double math in printf("%f", (far-32)*5/9); and the second used double math too, yet rounded to a float result with cel = (far-32)*5/9;. To be certain, we need to see the intermediate results as suggested above.
Avoid double constants with float objects. It sometimes makes a difference.
// float far = 98.6;
float far = 98.6f;
Use double objects as a default. Save float for select speed/space cases. #Some programmer dude.
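Putting those suggestions together, a small diagnostic sketch (mine, assuming a hosted C99 compiler) that prints FLT_EVAL_METHOD and the %a / %.17g views of each value might look like this:
#include <float.h>   /* FLT_EVAL_METHOD */
#include <stdio.h>

int main(void)
{
    float far = 98.6f;
    float cel = (far - 32) * 5 / 9;

    printf("FLT_EVAL_METHOD : %d\n", FLT_EVAL_METHOD);
    /* %a shows the exact bits; %.17g shows enough decimal digits to
       distinguish any two doubles. */
    printf("far             : %a  %.17g\n", far, far);
    printf("cel             : %a  %.17g\n", cel, cel);
    printf("(far-32)*5/9    : %a  %.17g\n", (far - 32) * 5 / 9, (far - 32) * 5 / 9);
    return 0;
}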
The difference lies in the types used and the printf call.
Variable-argument functions like printf will promote arguments of smaller types. So for example a float argument will be promoted to double.
The type double have much higher precision than float.
Since in the first program you do the calculation as part of the actual printf call, not storing the result in a variable of lesser precision, the compiler might perform the whole calculation using double, increasing the precision - precision that is lost when you store the result in cel in the second example.
Unless you have very specific requirements, you should generally always use double for all your floating-point variables and values and calculations.

Pow function returning wrong result

When I use the pow() function, sometimes the results are off by one. For example, this code produces 124, but I know that 5³ should be 125.
#include <stdio.h>
#include <math.h>

int main() {
    int i = pow(5, 3);
    printf("%d", i);
}
Why is the result wrong?
Your problem is that you are mixing integer variables with floating-point math. My bet is that the result of 5^3 is something like 124.999999 due to rounding problems, and when stored into an integer variable it gets truncated down to 124.
There are 3 ways to deal with this:
Mix floating-point math and integers more safely
int x = 5, y = 3, z;
z = floor(pow(x, y) + 0.5);
// or
z = round(pow(x, y));
but using this will always present a possible risk of rounding errors affecting the result especially for higher exponents.
compute on floating variables only
Replace int with float or double. This is a bit safer than #1, but in some cases it is still not usable (depends on the task), and it may need occasional floor, ceil, or round calls along the way to get the wanted result correctly.
Use integer math only
This is the safest way (unless you exceed the integer limits). The power can be computed with integer math relatively easily; a sketch follows the link below. See:
Power by squaring for negative exponents
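As an illustration of the integer-only route (the function name ipow is mine, not from the linked answer), exponentiation by squaring can be sketched like this:
#include <stdio.h>

/* Exponentiation by squaring using integers only: no floating point,
   so no truncation surprises. Overflow is still the caller's problem. */
long long ipow(long long base, unsigned int exp)
{
    long long result = 1;
    while (exp > 0) {
        if (exp & 1)       /* current low bit set: multiply it in */
            result *= base;
        base *= base;      /* square the base for the next bit */
        exp >>= 1;
    }
    return result;
}

int main(void)
{
    printf("%lld\n", ipow(5, 3));  /* 125, exactly */
    return 0;
}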
pow(x, y) is most likely implemented as exp(y * log(x)): modern CPUs can evaluate exp and log in a couple of flicks of the wrist.
Although adequate for many scientific applications, when truncating the result to an integer, the result can be off for even trivial arguments. That's what is happening here.
Your best bet is to roll your own version of pow for integer arguments, or find one in a good library. As a starting point, see The most efficient way to implement an integer based power function pow(int, int)
Use Float Data Type
#include <stdio.h>
#include <math.h>

int main()
{
    float x = 2;
    float y = 2;
    float p = pow(x, y);
    printf("%f", p);
    return 0;
}
You can use this function instead of pow:
// Computes base^exp using integer multiplication only (simple recursion).
long long int Pow(long long int base, unsigned int exp)
{
    if (exp > 0)
        return base * Pow(base, exp - 1);
    return 1;
}

Division of two floats giving incorrect answer

Attempting to divide two floats in C, using the code below:
#include <stdio.h>
#include <math.h>

int main() {
    float fpfd = 122.88e6;
    float flo = 10e10;
    float int_part, frac_part;

    int_part = (int)(flo / fpfd);
    frac_part = (flo / fpfd) - int_part;

    printf("\nInt_Part = %f\n", int_part);
    printf("Frac_Part = %f\n", frac_part);

    return (0);
}
To compile and run this code, I use the commands:
>> gcc test_prog.c -o test_prog -lm
>> ./test_prog
I then get this output:
Int_Part = 813.000000
Frac_Part = 0.802063
Now, this Frac_part it seems is incorrect. I have tried the same equation on a calculator first and then in Wolfram Alpha and they both give me:
Frac_Part = 0.802083
Notice the number at the fifth decimal place is different.
This may seem insignificant to most, but for the calculations I am doing it is of paramount importance.
Can anyone explain to me why the C code is making this error?
When you have inadequate precision from floating point operations, the first most natural step is to just use floating point types of higher precision, e.g. use double instead of float. (As pointed out immediately in the other answers.)
Second, examine the different floating point operations and consider their precisions. The one that stands out to me as being a source of error is the method above of separating a float into integer part and fractional part, by simply casting to int and subtracting. This is not ideal, because, when you subtract the integer part from the original value, you are doing arithmetic where the three numbers involved (two inputs and result) have very different scales, and this will likely lead to precision loss.
I would suggest to use the C <math.h> function modf instead to split floating point numbers into integer and fractional part. http://www.techonthenet.com/c_language/standard_library_functions/math_h/modf.php
(In greater detail: When you do an operation like f - (int)f, the floating point addition procedure is going to see that two numbers of some given precision X are being added, and it's going to naturally assume that the result will also have precision X. Then it will perform the actual computation under that assumption, and finally reevaluate the precision of the result at the end. Because the initial prediction turned out not to be ideal, some low order bits are going to get lost.)
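A minimal sketch of that suggestion (mine), using the same values as the question and double throughout, since modf() takes and returns double:
#include <math.h>
#include <stdio.h>

int main(void)
{
    double fpfd = 122.88e6;
    double flo = 10e10;
    double int_part, frac_part;

    /* modf() splits a value into integral and fractional parts without
       going through an int, so nothing is lost to narrowing conversions. */
    frac_part = modf(flo / fpfd, &int_part);

    printf("Int_Part = %f\n", int_part);
    printf("Frac_Part = %f\n", frac_part);
    return 0;
}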
float is single-precision floating point; you should instead try to use double. The following code gives me the right result:
#include <stdio.h>
#include <math.h>

int main() {
    double fpfd = 122.88e6;
    double flo = 10e10;
    double int_part, frac_part;

    int_part = (int)(flo / fpfd);
    frac_part = (flo / fpfd) - int_part;

    printf("\nInt_Part = %f\n", int_part);
    printf("Frac_Part = %f\n", frac_part);

    return (0);
}
Why?
As I said, float is single-precision floating point; it is smaller than double (on most architectures, sizeof(float) < sizeof(double)).
By using double instead of float you will have more bits to store the mantissa and the exponent of the number (see Wikipedia).
float has only 6~9 significant decimal digits; that is not precise enough for most uses in practice. Changing all float variables to double (which provides 15~17 significant digits) gives this output:
Int_Part = 813.000000
Frac_Part = 0.802083
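The significant-digit limits quoted above come straight from <float.h>; a quick sketch (mine) to print them on your own platform:
#include <float.h>
#include <stdio.h>

int main(void)
{
    /* Decimal digits guaranteed to round-trip through each type. */
    printf("float  : %d significant decimal digits\n", FLT_DIG);
    printf("double : %d significant decimal digits\n", DBL_DIG);
    return 0;
}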

floating point bug in embedded system

On a Rabbit microcontroller..
(1)
Every second I add one second (converted to hours) to f1 and store the result back in the same variable.
void main()
{
    float f1;
    int i;

    f1 = 4096;

    // Assume that I am simulating one second through each iteration of the following loop
    for (i = 0; i < 100; i++)
    {
        f1 += 0.000278;   // f1 does not change from 4096
        printf("\ni: %d f1: %.06f", i, f1);
    }
}
(2)
Another question is when I try to store a 32-bit unsigned long int value into float variable and accessing it does not give me the value I have stored. What am I doing wrong?
void main()
{
    unsigned long L1;
    int temp;
    float f1;

    L1 = 4000000000;   // four billion
    f1 = (float)L1;

    // Now print both
    // You see that L1: 4000000000 while f1: -4000000000.000000
    printf("\nL1: %lu f1:%.6f", L1, f1);
}
The first problem is that single precision (32 bit) binary floating point is good for only approximately 6 significant figures in decimal. So if you start with 4096.00 anything less than .01 cannot be added to the value. Using double precision will improve the result at some significant cost.
It is usually unnecessary and inappropriate to use floating point; it is very expensive on a processor without an FPU - especially an 8-bitter. Moreover, your literal approximation of one second in hours (1.0f/3600.0f hours) will introduce significant cumulative error in any case. You may be better off storing time in integer seconds and converting to hours where necessary for display or output; a sketch of that approach follows at the end of this answer.
The second problem is less clear, but seems likely to be an issue with the Rabbit compiler's implementation of floating point, or possibly of the %f format specifier in the printf() implementation. Check the ISO compliance statement in the compiler documentation - there may be restrictions, especially on floating point. Again, you may find that using a double resolves the problem - especially as, strictly, that is the type expected by the %f format specifier in an ISO-conforming implementation. As I said, you are probably best off avoiding floating point altogether on such a target.
Note that if you are using Rabbit's Dynamic C compiler, you should be clear that Dynamic C is not an ISO conforming C compiler. It is a proprietary C-like language, that is similar enough to C to cause a great deal of confusion! Specifically it does not support double precision (double) floating point.
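The integer-seconds suggestion might look like the following sketch in standard C (the names are mine, and Dynamic C may differ in details): no precision is lost no matter how large the count grows, until the unsigned long itself overflows.
#include <stdio.h>

int main(void)
{
    unsigned long elapsed_seconds = 0;
    int i;

    /* Accumulate elapsed time as a whole number of seconds. */
    for (i = 0; i < 100; i++) {
        elapsed_seconds++;             /* one tick per simulated second */
    }

    /* Convert only when printing: hours, minutes, and leftover seconds. */
    printf("%lu:%02lu:%02lu\n",
           elapsed_seconds / 3600UL,
           (elapsed_seconds % 3600UL) / 60UL,
           elapsed_seconds % 60UL);
    return 0;
}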
f1 += (1/3600); should be f1 += (1.0f/3600.0f);.
If you perform integer division, the result will also be an integer.
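A two-line demonstration of that rule (mine):
#include <stdio.h>

int main(void)
{
    printf("1/3600       = %d\n", 1 / 3600);          /* integer division: 0 */
    printf("1.0f/3600.0f = %.9f\n", 1.0f / 3600.0f);  /* approximately 0.000277778 */
    return 0;
}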

Can I calculate error introduced by doubles?

Suppose I have an irrational number like sqrt(3). As it is irrational, it has no finite decimal representation. So when you try to express it as an IEEE 754 double, you will introduce an error.
A decimal representation with a lot of digits is:
1.732050807568877293527446341505872366942805253810380628055806979451933016908800037081146186757248575675...
Now, when I calculate sqrt(3), I get 1.732051:
#include <stdio.h> // printf
#include <math.h>  // needed for sqrt

int main() {
    double myVar = sqrt(3);
    printf("as double:\t%f\n", myVar);
}
According to Wolfram|Alpha, I have an error of 1.11100... × 10^-7.
Is there any way I can calculate the error myself?
(I don't mind switching to C++, Python or Java. I could probably also use Mathematica, if there is no simple alternative)
Just to clarify: I don't want a solution that works only for sqrt(3). I would like to get a function that gives me the error for any number. If that is not possible, I would at least like to know how Wolfram|Alpha gets more accurate values.
My try
While writing this question, I found this:
#include <stdio.h> // printf
#include <math.h>  // needed for sqrtl
#include <float.h> // needed for higher precision

int main() {
    long double r = sqrtl(3.0L);
    printf("Precision: %d digits; %.*Lg\n", LDBL_DIG, LDBL_DIG, r);
}
With this one, I can get the error down to 2.0 * 10^-18 according to Wolfram|Alpha. So I thought this might be close enough to get a good estimation of the error. I wrote this:
#include <stdio.h> // printf
#include <math.h>  // needed for sqrt
#include <float.h>

int main() {
    double myVar = sqrt(3);
    long double r = sqrtl(3.0L);
    long double error = abs(r - myVar) / r;

    printf("Double:\t\t%f\n", myVar);
    printf("Precision:\t%d digits; %.*Lg\n", LDBL_DIG, LDBL_DIG, r);
    printf("Error:\t\t%.*Lg\n", LDBL_DIG, error);
}
But it outputs:
Double: 1.732051
Precision: 18 digits; 1.73205080756887729
Error: 0
How can I fix that to get the error?
What Every Computer Scientist Should Know About Floating-Point Arithmetic by Goldberg is the definitive guide you are looking for.
https://ece.uwaterloo.ca/~dwharder/NumericalAnalysis/02Numerics/Double/paper.pdf
printf rounds doubles to 6 places when you use %f without a precision.
e.g.
double x = 1.3;
long double y = 1.3L;
long double err = y - (double) x;
printf("Error %.20Lf\n", err);
My output: -0.00000000000000004445
If the result is 0, your long double and double are the same.
One way to obtain an interval that is guaranteed to contain the real value of the computation is to use interval arithmetic. Then, comparing the double result to the interval tells you how far the double computation is, at worst, from the real computation.
Frama-C's value analysis can do this for you with option -all-rounding-modes.
double Frama_C_sqrt(double x);

double sqrt(double x)
{
    return Frama_C_sqrt(x);
}

double y;

int main() {
    y = sqrt(3.0);
}
Analyzing the program with:
frama-c -val t.c -float-normal -all-rounding-modes
[value] Values at end of function main:
y ∈ [1.7320508075688772 .. 1.7320508075688774]
This means that the real value of sqrt(3), and thus the value that would be in variable y if the program computed with real numbers, is within the double bounds [1.7320508075688772 .. 1.7320508075688774].
Frama-C's value analysis does not support the long double type, but if I understand correctly, you were only using long double as reference to estimate the error made with double. The drawback of that method is that long double is itself imprecise. With interval arithmetic as implemented in Frama-C's value analysis, the real value of the computation is guaranteed to be within the displayed bounds.
You have a mistake in printing Double: 1.732051 here: printf("Double:\t\t%f\n", myVar); shows only 6 digits after the decimal point.
The actual value of double myVar is
1.732050807568877281 // 18 digits
so 1.732050807568877281 - 1.732050807568877281 is zero.
According to the C standard printf("%f", d) will default to 6 digits after the decimal point. This is not the full precision of your double.
It might be that double and long double happen to be the same on your architecture. I have different sizes for them on my architecture and get a non-zero error in your example code.
You want fabsl instead of abs when calculating the error, at least when using C. (In C, abs is integer.) With this substitution, I get:
Double: 1.732051
Precision: 18 digits; 1.73205080756887729
Error: 5.79643049346087304e-17
(Calculated on Mac OS X 10.8.3 with Apple clang 4.0.)
Using long double to estimate the errors in double is a reasonable approach for a few simple calculations, except:
If you are calculating the more accurate long double results, why bother with double?
Error behavior in sequences of calculations is hard to describe and can grow to the point where long double is not providing an accurate estimate of the exact result.
There exist perverse situations where long double gets less accurate results than double. (Mostly encountered when somebody constructs an example to teach students a lesson, but they exist nonetheless.)
In general, there is no simple and efficient way to calculate the error in a floating-point result in a sequence of calculations. If there were, it would be effectively a means of calculating a more accurate result, and we would use that instead of the floating-point calculations alone.
In special cases, such as when developing math library routines, the errors resulting from a particular sequence of code are studied carefully (and the code is redesigned as necessary to have acceptable error behavior). More often, error is estimated either by performing various “experiments” to see how much results fluctuate with varying inputs or by studying general mathematical behavior of systems.
You also asked “I would like to get a function that gives me the error for any number.” Well, that is easy, given any number x and the calculated result x', the error is exactly x' – x. The actual problem is you probably do not have a description of x that can be used to evaluate that expression easily. In your example, x is sqrt(3). Obviously, then, the error is sqrt(3) – x, and x is exactly 1.732050807568877193176604123436845839023590087890625. Now all you need to do is evaluate sqrt(3). In other words, numerically evaluating the error is about as hard as numerically evaluating the original number.
Is there some class of numbers you want to perform this analysis for?
Also, do you actually want to calculate the error or just a good bound on the error? The latter is somewhat easier, although it remains hard for sequences of calculations. For all elementary operations, IEEE 754 requires the produced result to be the result that is nearest the mathematically exact result (in the appropriate direction for the rounding mode being used). In round-to-nearest mode, this implies that each result is at most 1/2 ULP (unit of least precision) away from the exact result. For operations such as those found in the standard math library (sine, logarithm, et cetera), most libraries will produce results within a few ULP of the exact result.
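For that last point, a small sketch (mine) of how the 1/2 ULP guarantee can be turned into a concrete numeric bound using nextafter(); compile with -lm:
#include <math.h>
#include <stdio.h>

int main(void)
{
    double x = sqrt(3);

    /* The distance to the next representable double gives the local ULP;
       half of it is the worst-case rounding error of a correctly rounded
       operation such as sqrt() under IEEE 754 round-to-nearest. */
    double ulp = nextafter(x, INFINITY) - x;
    printf("result         : %.17g\n", x);
    printf("ULP at result  : %.17g\n", ulp);
    printf("max sqrt error : %.17g\n", ulp / 2);
    return 0;
}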