This question already has answers here:
Dividing 1/n always returns 0.0 [duplicate]
(3 answers)
Closed 9 years ago.
Can anyone explain why b gets rounded off here when I divide by an integer, even though b is a float?
#include <stdio.h>

void main() {
    int a;
    float b, c, d;

    a = 750;
    b = a / 350;
    c = 750;
    d = c / 350;

    printf("%.2f %.2f", b, d);
    // output: 2.00 2.14
}
http://codepad.org/j1pckw0y
This is because of implicit conversion. The variables b, c, and d are of type float, but the / operator sees two integer operands and therefore performs integer division; the integer result (here 2) is only converted to a float afterwards, when it is assigned to b. If you want floating-point division, make at least one of the operands of / a float, like this:
#include <stdio.h>

int main() {
    int a;
    float b, c, d;

    a = 750;
    b = a / 350.0f;
    c = 750;
    d = c / 350;

    printf("%.2f %.2f", b, d);
    // output: 2.14 2.14
    return 0;
}
Use a cast:
#include <stdio.h>

int main() {
    int a;
    float b, c, d;

    a = 750;
    b = a / (float)350;
    c = 750;
    d = c / (float)350;

    printf("%.2f %.2f", b, d);
    // output: 2.14 2.14
}
This is another way to solve that:
#include <stdio.h>

int main() {
    int a;
    float b, c, d;

    a = 750;
    b = a / 350.0;  // if you use 'a / 350' here, it is a division of
                    // two integers, so the result will be an integer
    c = 750;
    d = c / 350;

    printf("%.2f %.2f", b, d);
    // output: 2.14 2.14
}
However, in both cases you are telling the compiler that 350 is a floating-point constant rather than an integer (350.0 is in fact a double, while (float)350 and 350.0f are floats), so the division is performed in floating point and the result is not truncated.
"a" is an integer, when divided with integer it gives you an integer. Then it is assigned to "b" as an integer and becomes a float.
You should do it like this
b = a / 350.0;
Specifically, this is not rounding your result; it's truncating toward zero. So if you divide -3/2, you'll get -1 and not -2. Welcome to integral math! Back before CPUs could do floating-point operations, and before math co-processors arrived, we did everything with integral math. Even though there were libraries for floating-point math, they were too expensive (in CPU instructions) for general-purpose use, so we used a 16-bit value for the whole part of a number and another 16-bit value for the fraction.
EDIT: my answer makes me think of the classic old man saying "when I was your age..."
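A quick way to see the truncation behaviour for yourself; a minimal sketch, nothing beyond standard C:
#include <stdio.h>

int main(void) {
    /* Integer division truncates toward zero; it does not round or floor. */
    printf("%d\n", 7 / 2);   /* 3, not 4  */
    printf("%d\n", -3 / 2);  /* -1, not -2 */
    printf("%d\n", 3 / -2);  /* -1 as well */
    return 0;
}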
Chapter and verse
6.5.5 Multiplicative operators
...
6 When integers are divided, the result of the / operator is the algebraic quotient with any fractional part discarded.105) If the quotient a/b is representable, the expression (a/b)*b + a%b shall equal a; otherwise, the behavior of both a/b and a%b is undefined.
105) This is often called ‘‘truncation toward zero’’.
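A small program to check that identity over the different sign combinations; just a sketch to accompany the quote:
#include <stdio.h>

int main(void) {
    int pairs[][2] = { {7, 2}, {-7, 2}, {7, -2}, {-7, -2} };

    for (int i = 0; i < 4; i++) {
        int a = pairs[i][0], b = pairs[i][1];
        /* 6.5.5p6: (a/b)*b + a%b shall equal a when a/b is representable */
        printf("a=%3d b=%3d  a/b=%3d  a%%b=%3d  (a/b)*b + a%%b = %3d\n",
               a, b, a / b, a % b, (a / b) * b + a % b);
    }
    return 0;
}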
Dividing an integer by an integer gives an integer result. 1/2 yields 0; assigning this result to a floating-point variable gives 0.0. To get a floating-point result, at least one of the operands must be a floating-point type. b = a / 350.0f; should give you the result you want.
Probably the best reason integer division behaves this way is that 0xfffffffffffffff/15 would give you a horribly imprecise answer if it were silently done in floating point...
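To make that concrete, here is a sketch comparing exact integer division with the same division routed through double, whose 53-bit mantissa cannot hold a 60-bit value exactly:
#include <stdio.h>

int main(void) {
    unsigned long long big = 0xfffffffffffffffULL;   /* 2^60 - 1 */

    /* Integer division gives the exact quotient. */
    unsigned long long exact = big / 15;

    /* The same division done in double loses the low bits, because a
       double's 53-bit mantissa cannot represent the 60-bit operand exactly. */
    unsigned long long via_double = (unsigned long long)((double)big / 15.0);

    printf("integer division: %llu\n", exact);
    printf("via double:       %llu\n", via_double);
    return 0;
}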
Dividing two integers will result in an integer (whole number) result.
You need to cast one number as a float, or add a decimal to one of the numbers, like a/350.0.
I have two expressions that do the same calculation. When I do the whole calculation in a single expression and store it in the variable a, the result is 0. When I split the expression into two parts and then divide them, it gives -0.332087, which is obviously the correct answer. Can anybody explain why this program misbehaves like this?
#include <stdio.h>

void main() {
    double a, b, c;
    int n = 0, sumY = 0, sumX = 0, sumX2 = 0, sumY2 = 0, sumXY = 0;

    n = 85;
    sumX = 4276;
    sumY = 15907;
    sumX2 = 288130;
    sumY2 = 3379721;
    sumXY = 775966;

    a = ((n*sumXY) - (sumX*sumY)) / ((n*sumX2) - (sumX*sumX));
    b = ((n*sumXY) - (sumX*sumY));
    c = ((n*sumX2) - (sumX*sumX));

    printf("%lf\n", a);
    printf("%lf\n", b/c);
}
Output:
0.000000
-0.332087
In your program
a = ((n*sumXY) - (sumX*sumY)) / ((n*sumX2)-(sumX*sumX));
all the variables on the right-hand side are of type int, so the expression produces a result of type int. The true answer, -0.332087, is not an int value: the integer division truncates toward zero and yields 0, and that 0 is what gets assigned (as 0.0) to the variable a.
But when you do
b = ((n*sumXY) - (sumX*sumY));
c = ((n*sumX2) - (sumX*sumX));
printf("%lf\n", b/c);
The variables b and c are of type double, so the expression b/c produces a double value, and the true answer, -0.332087, is representable (to within rounding) as a double. Thus this part of your code gives the right result.
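A minimal fix for the single-expression version, as a sketch using the question's own variable names, is to force the division itself into double, for example by casting the numerator:
/* The cast applies to the numerator, so the / below is a double division. */
a = (double)((n * sumXY) - (sumX * sumY)) / ((n * sumX2) - (sumX * sumX));
printf("%lf\n", a);   /* now prints -0.332087 instead of 0.000000 */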
In the first expression, a = ((n*sumXY) - (sumX*sumY)) / ((n*sumX2)-(sumX*sumX)), both the numerator and the denominator are computed as integers, and integer/integer division yields an integer, so the value stored in a is that truncated integer (0), even though a itself is a double. In the second and third statements you compute the two halves separately and store them in the double variables b and c, so b/c is a double division and gives the fractional result.
This problem can be solved with a cast, or, better still, by declaring the variables involved as floating-point types in the first place.
Cast the numerator to double so that the division itself is done in floating point rather than on two integer results:
a = (double)((n*sumXY) - (sumX*sumY)) / ((n*sumX2)-(sumX*sumX));
First of all, (INTEGER)/(INTEGER) is always an INTEGER, so the cast has to happen before the division, not after. Casting the whole expression, as in a = (double)(((n*sumXY) - (sumX*sumY)) / ((n*sumX2)-(sumX*sumX))), still gives 0 because the division has already been done in int. Cast one side of the division instead: a = (double)((n*sumXY) - (sumX*sumY)) / ((n*sumX2)-(sumX*sumX));
OR
We know that any number n, multiplied by 1.0, keeps the same value. So your code could look like this:
a = ((n*sumXY*1.0L) - (sumX*sumY)) / ((n*sumX2)-(sumX*sumX));
Multiplying by 1.0L promotes the surrounding arithmetic to long double (use 1.0 for double or 1.0f for float), so the division is no longer performed in integers.
Now the program prints the expected -0.332087 to stdout.
Non-standard Code
Your code is:
void main()
{
    //// YOUR CODE
}
which is non-standard C. Instead, your code should look like:
int main(int argc, char **argv)
{
    //// YOUR CODE
    return 0;
}
Change all your integers to doubles; that should solve the problem.
The type of the first expression is int, whereas in the second it is double.
I'm a C language newbie. Just trying to figure out why one code example from the textbook "Absolute Beginner's Guide to C, 3rd Edition" works like that:
// Two sets of equivalent variables, with one set
// floating-point and the other integer
float a = 19.0;
float b = 5.0;
float floatAnswer;
int x = 19;
int y = 5;
int intAnswer;
// Using two float variables creates an answer of 3.8
floatAnswer = a / b;
printf("%.1f divided by %.1f equals %.1f\n", a, b, floatAnswer);
floatAnswer = x / y; // Take 2 creates an answer of 3.0
printf("%d divided by %d equals %.1f\n", x, y, floatAnswer);
// This will also be 3, as it truncates and doesn't round up
intAnswer = a / b;
printf("%.1f divided by %.1f equals %d\n", a, b, intAnswer);
The second output is not understandable to me: we divide integers, so why does it print a floating-point "3.0"?
The third output is not clear either: why does it print 3 when we divide floating-point numbers like 19.0 and 5.0?
Please help.
In the second example, the right-hand side (RHS) of the assignment is evaluated first: this is a division of two integers, so it is performed as such (an integer operation), and the result is 3; this is then converted to a float, in order to fulfil the assignment, and the conversion from an integral 3 cannot have any fractional part. However, the left-hand side is a float and, in the printf format, you are explicitly asking for 1 decimal place in the output - even though that is (and must be) zero.
In the third example, the RHS is evaluated as a floating-point division, which will (as you suspect) give an interim value of 3.8; however, when this is then converted to an int, in order to fulfil the assignment, the fractional part (.8) is lost, as an integer cannot have any fractional component. Conversion from a floating-point type to an integer will always truncate (discard) any fractional part, so even converting 1.999999999 will give a value of 1.
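If what you actually want from the two int variables is 3.8, convert one operand before the division; a small sketch using the same variables:
/* Converting one operand makes the / a floating-point division. */
floatAnswer = (float)x / y;   /* 19.0f / 5 == 3.8f */
printf("%d divided by %d equals %.1f\n", x, y, floatAnswer);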
When you divide x/y you get the integer 3 because it performs integer division. When you assign that to floatAnswer, the integer is automatically converted to the equivalent float value, which is 3.0.
printf("%.1f divided by %.1f equals %d\n", a, b, intAnswer);
a/b evaluates to 19.0/5.0, which is 3.8, but assigning it to intAnswer truncates it to 3, and that is what %d reports.
Looking at the second case, we have:
floatAnswer = x / y
floatAnswer = 19 / 5
floatAnswer = 3
printf( 3.0 )
You can see that the integers undergo integer division before the result is ever assigned to floatAnswer. That means the printf of floatAnswer isn't your problem; it's the integer division, which happens before floatAnswer is ever assigned the value.
Second case.
In the second case you assign the result of the int operation to a float.
int a = 19;
int b = 5;
float floatAnswer = a/b;
In that last line you assign an int to a float, which is implicitly converted. But the division operation was done using integer arithmetic; the conversion is the last step (when the value is assigned).
So basically, that's equivalent to
float floatAnswer = 3;
which is doing the same as,
float floatAnswer = (float)3;
Note the (float). That means that the value 3 is being casted (converted) to float.
Third case.
In the third case you assign the result of 19.0/5.0 to an int,
float a = 19.0;
float b = 5.0;
int intAnswer = a/b;
This implicitly converts the value to an int, and converting a float to an int truncates it.
In this case this would be equivalent to
int intAnswer = 3.8;
Which is doing the same as,
int intAnswer = (int)3.8;
the (int) means that the value is casted (converted) to an int type.
You can picture it like this:
floatAnswer = x /y;
The program evaluates the right-hand side first:
temp = x/y: because x and y are integers, this is integer division, so temp = 3.
floatAnswer = temp: temp is 3, so floatAnswer becomes 3.0.
intAnswer = a / b;
Again, the right-hand side first:
temp = a/b will be 3.8.
intAnswer = temp: intAnswer is an integer, so 3.8 is truncated to 3.
2nd case: you have defined floatAnswer as a float variable; that's why it prints a decimal value.
3rd case: you have defined intAnswer as an integer; that's why it prints an integer value.
Example (in C):
#include <stdio.h>

int main()
{
    int a, b = 999;
    float c = 0.0;

    scanf("%d", &a);
    c = (float)a / b;
    printf("%.3lf...", c);
    return 0;
}
If I enter 998, it comes out as 0.999, but I want the result to be 0.998; how?
It looks like you want to truncate instead of round.
The mathematical result of 998/999 is 0.998998998... Rounded to three decimal places, that is 0.999. So if you use %.3f to print it, that's what you're going to get.
When you convert a floating-point number to integer in C, the fractional part is truncated. So if you had the number 998.9989989 and you converted it to an int, you'd get 998. So you can get the result you want by multiplying by 1000, truncating to an int, and dividing by 1000 again:
c = c * 1000;
c = (int)c;
c = c / 1000;
Or you could shorten that to
c = (int)(c * 1000) / 1000.;
This will work fine for problems such as 998/999 ≈ 0.998, but you're close to the edge of where type float's limited precision will start introducing its own rounding issues. Using double would be a better choice. (Type float's limited precision almost always introduces issues.)
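Putting that together, a complete sketch of the program, using double as suggested and truncating to three decimal places:
#include <stdio.h>

int main(void)
{
    int a, b = 999;
    double c;

    if (scanf("%d", &a) != 1)
        return 1;

    c = (double)a / b;             /* 998 / 999 = 0.998998...          */
    c = (int)(c * 1000) / 1000.0;  /* truncate (not round) to 3 places */

    printf("%.3f...\n", c);        /* prints 0.998... for input 998    */
    return 0;
}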
I wrote this short program to test the conversion from double to int:
#include <stdio.h>

int main() {
    int a;
    int d;
    double b = 0.41;

    /* Cast from variable. */
    double c = b * 100.0;
    a = (int)(c);

    /* Cast expression directly. */
    d = (int)(b * 100.0);

    printf("c = %f \n", c);
    printf("a = %d \n", a);
    printf("d = %d \n", d);
    return 0;
}
Output:
c = 41.000000
a = 41
d = 40
Why do a and d have different values even though they are both the product of b and 100?
The C standard allows a C implementation to compute floating-point operations with more precision than the nominal type. For example, the Intel 80-bit floating-point format may be used when the type in the source code is double, for the IEEE-754 64-bit format. In this case, the behavior can be completely explained by assuming the C implementation uses long double (80 bit) whenever it can and converts to double when the C standard requires it.
I conjecture what happens in this case is:
In double b = 0.41;, 0.41 is converted to double and stored in b. The conversion results in a value slightly less than .41.
In double c = b * 100.0;, b * 100.0 is evaluated in long double. This produces a value slightly less than 41.
That expression is used to initialize c. The C standard requires that it be converted to double at this point. Because the value is so close to 41, the conversion produces exactly 41. So c is 41.
a = (int)(c); produces 41, as normal.
In d = (int)(b * 100.0);, we have the same multiplication as before. The value is the same as before, something slightly less than 41. However, this value is not assigned to or used to initialize a double, so no conversion to double occurs. Instead, it is converted to int. Since the value is slightly less than 41, the conversion produces 40.
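If the goal is simply a predictable integer result, one option (not from the answer above, just a suggestion) is to round explicitly with lround from <math.h>, instead of relying on truncation of a value that may or may not have been narrowed to double:
#include <math.h>
#include <stdio.h>

int main(void)
{
    double b = 0.41;

    /* lround rounds to the nearest integer, so a product that lands
       just below 41 still becomes 41. (Link with -lm if needed.) */
    long d = lround(b * 100.0);

    printf("d = %ld\n", d);   /* 41 */
    return 0;
}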
The compiler can also infer that c has to be initialized with 0.41 * 100.0 and compute that value (at compile time) more precisely than the run-time calculation of d.
The crux of the problem is that 0.41 is not exactly representable in IEEE 754 64-bit binary floating point. The actual value (with only enough precision to show the relevant part) is 0.409999999999999975575..., while 100 can be represented exactly. Multiplying these together should yield 40.9999999999999975575..., which is again not quite representable. In the likely case that the rounding mode is towards nearest, zero, or negative infinity, this should be rounded to 40.9999999999999964.... When cast to an int, this is rounded to 40.
The compiler is allowed to do calculations with higher precision, however, and in particular may replace the multiplication in the assignment of c with a direct store of the computed value.
Edit: I miscalculated the largest representable number less than 41, the correct value is approximately 40.99999999999999289.... As both Eric Postpischil and Daniel Fischer correctly point out, even the value calculated as a double should be rounded to 41 unless the rounding mode is towards zero or negative infinity. Do you know what the rounding mode is? It makes a difference, as this code sample shows:
#include <stdio.h>
#include <fenv.h>

#pragma STDC FENV_ACCESS ON

int main(void)
{
    int roundMode = fegetround();

    volatile double d1;
    volatile double d2;
    volatile double result;
    volatile int rounded;

    fesetround(FE_TONEAREST);
    d1 = 0.41;
    d2 = 100;
    result = d1 * d2;
    rounded = result;
    printf("nearest rounded=%i\n", rounded);

    fesetround(FE_TOWARDZERO);
    d1 = 0.41;
    d2 = 100;
    result = d1 * d2;
    rounded = result;
    printf("zero rounded=%i\n", rounded);

    fesetround(roundMode);
    return 0;
}
Output:
nearest rounded=41
zero rounded=40