I have two pieces of code that do the same calculation. When I do the whole calculation in a single expression and store it in the variable a, the expression evaluates to 0. When I split the calculation into two parts and then divide, it gives -0.332087. Obviously, -0.332087 is the correct answer. Can anybody explain why the program behaves like this?
#include<stdio.h>
void main(){
double a, b, c;
int n=0, sumY=0, sumX=0, sumX2=0, sumY2=0, sumXY=0;
n = 85;
sumX = 4276;
sumY = 15907;
sumX2 = 288130;
sumY2 = 3379721;
sumXY = 775966;
a = ((n*sumXY) - (sumX*sumY)) / ((n*sumX2)-(sumX*sumX));
b = ((n*sumXY) - (sumX*sumY));
c = ((n*sumX2) - (sumX*sumX));
printf("%lf\n", a);
printf("%lf\n", b/c);
}
Output:
0.000000
-0.332087
In your program
a = ((n*sumXY) - (sumX*sumY)) / ((n*sumX2)-(sumX*sumX));
all the variables on the right-hand side are of type int, so the expression produces a result of type int. The true answer -0.332087 is not an int value, so it is converted to a valid int value, namely 0, and this 0 is assigned to the variable a.
But when you do
b = ((n*sumXY) - (sumX*sumY));
c = ((n*sumX2) - (sumX*sumX));
printf("%lf\n", b/c);
The variables b and c are of type double, so the expression b/c produces a double value, and the true answer -0.332087 is representable as a double. Thus this part of your code gives the right result.
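A minimal sketch of the difference, using the intermediate numerator and denominator that your constants produce (the exact values are just for illustration):
#include <stdio.h>
int main(void)
{
int num = -2061222;  /* n*sumXY - sumX*sumY for the constants above */
int den = 6206874;   /* n*sumX2 - sumX*sumX for the constants above */
double x = num / den;          /* int / int: truncates to 0 before the assignment */
double y = (double)num / den;  /* converting one operand forces a double division */
printf("%lf\n", x);  /* 0.000000 */
printf("%lf\n", y);  /* -0.332087 */
return 0;
}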
In the first equation, i.e. a = ((n*sumXY) - (sumX*sumY)) / ((n*sumX2)-(sumX*sumX)), both the numerator and the denominator evaluate to integers, and since integer/integer is integer, the value stored in a is the truncated integer quotient. In the second and third expressions, because you evaluate the parts individually, b and c are stored as double, and double/double yields a double, i.e. a value that keeps its decimal part.
This problem can be solved with a type cast, or better still by using floating-point types for the variables.
Add a (double) cast before the numerator: the numerator is then converted to double, so the division is carried out in floating point rather than as an integer division.
a = (double)((n*sumXY) - (sumX*sumY)) / ((n*sumX2)-(sumX*sumX));
First of all, (INTEGER)/(INTEGER) is always an INTEGER, so the cast must be applied to one of the operands, not to the finished quotient (casting the quotient would only convert the already-truncated 0 to 0.0). You can typecast it like a = (double)((n*sumXY) - (sumX*sumY)) / ((n*sumX2)-(sumX*sumX));
OR
We know that any number (n ∈ ℂ) multiplied by 1.0 gives the same number (n). So your code could read:
a = ((n*sumXY*1.0L) - (sumX*sumY)) / ((n*sumX2)-(sumX*sumX));
Multiplying by 1.0L promotes the expression to long double (multiplying by 1.0 would give double, and by 1.0f float); once one operand of the division is a floating-point type, the division is no longer an integer division.
Now you can print the number (-0.332087) to stdout.
Non-standard Code
Your code is:
void main()
{
//// YOUR CODE
}
which is non-standard C. Instead, your code should look like this:
int main(int argc, char **argv)
{
//// YOUR CODE
return 0;
}
Change all your ints to doubles; that should solve the problem.
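For instance, a sketch of the same program with only the types changed (constants taken from the question):
#include <stdio.h>
int main(void)
{
double n = 85, sumX = 4276, sumY = 15907;
double sumX2 = 288130, sumXY = 775966;
/* every operand is now double, so / is a floating-point division */
double a = (n*sumXY - sumX*sumY) / (n*sumX2 - sumX*sumX);
printf("%lf\n", a);  /* -0.332087 */
return 0;
}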
The type of the 1st expression is int, whereas in the 2nd it is double.
Can anyone explain why b gets rounded off here when I divide a by an integer, even though b is a float?
#include <stdio.h>
void main() {
int a;
float b, c, d;
a = 750;
b = a / 350;
c = 750;
d = c / 350;
printf("%.2f %.2f", b, d);
// output: 2.00 2.14
}
http://codepad.org/j1pckw0y
This is because of implicit conversion. The variables b, c, d are of float type, but the / operator sees two integer operands and therefore performs an integer division; the integer result is then implicitly converted to float on assignment. If you want float division, make at least one of the two operands of / a float, as follows.
#include <stdio.h>
int main() {
int a;
float b, c, d;
a = 750;
b = a / 350.0f;
c = 750;
d = c / 350;
printf("%.2f %.2f", b, d);
// output: 2.14 2.14
return 0;
}
Use a type cast:
int main() {
int a;
float b, c, d;
a = 750;
b = a / (float)350;
c = 750;
d = c / (float)350;
printf("%.2f %.2f", b, d);
// output: 2.14 2.14
}
This is another way to solve that:
int main() {
int a;
float b, c, d;
a = 750;
b = a / 350.0; //if you use 'a / 350' here,
//then it is a division of integers,
//so the result will be an integer
c = 750;
d = c / 350;
printf("%.2f %.2f", b, d);
// output: 2.14 2.14
}
However, in both cases you are telling the compiler that 350 is a floating-point number rather than an integer. Consequently, the result of the division will be floating-point, not an integer.
"a" is an integer, when divided with integer it gives you an integer. Then it is assigned to "b" as an integer and becomes a float.
You should do it like this
b = a / 350.0;
Specifically, this is not rounding your result, it's truncating toward zero. So if you divide -3/2, you'll get -1 and not -2. Welcome to integral math! Back before CPUs could do floating-point operations, or math co-processors existed, we did everything with integral math. Even though there were libraries for floating-point math, they were too expensive (in CPU instructions) for general-purpose use, so we used one 16-bit value for the whole portion of a number and another 16-bit value for the fraction.
EDIT: my answer makes me think of the classic old man saying "when I was your age..."
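A quick sketch of the truncation, including the 750/350 case from the question:
#include <stdio.h>
int main(void)
{
printf("%d\n", -3 / 2);    /* -1: truncated toward zero, not floored to -2 */
printf("%d\n", 750 / 350); /* 2: the fractional .14... is simply discarded */
return 0;
}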
Chapter and verse
6.5.5 Multiplicative operators
...
6 When integers are divided, the result of the / operator is the algebraic quotient with any
fractional part discarded.105) If the quotient a/b is representable, the expression
(a/b)*b + a%b shall equal a; otherwise, the behavior of both a/b and a%b is
undefined.
105) This is often called ‘‘truncation toward zero’’.
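A small sketch checking the (a/b)*b + a%b identity the standard guarantees:
#include <stdio.h>
int main(void)
{
int a = -7, b = 2;
printf("%d %d\n", a / b, a % b);      /* -3 -1: quotient truncates toward zero */
printf("%d\n", (a / b) * b + a % b);  /* -7: reconstructs a exactly */
return 0;
}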
Dividing an integer by an integer gives an integer result. 1/2 yields 0; assigning this result to a floating-point variable gives 0.0. To get a floating-point result, at least one of the operands must be a floating-point type. b = a / 350.0f; should give you the result you want.
Probably the best reason is that 0xfffffffffffffff/15 would give you a horribly wrong answer...
Dividing two integers will result in an integer (whole number) result.
You need to cast one of the numbers to float, or write one of them with a decimal point, like a/350.0.
I'm a C language newbie, just trying to figure out why one code example from the textbook "Absolute Beginner's Guide to C, 3rd Edition" works the way it does:
// Two sets of equivalent variables, with one set
// floating-point and the other integer
float a = 19.0;
float b = 5.0;
float floatAnswer;
int x = 19;
int y = 5;
int intAnswer;
// Using two float variables creates an answer of 3.8
floatAnswer = a / b;
printf("%.1f divided by %.1f equals %.1f\n", a, b, floatAnswer);
floatAnswer = x / y; // Take 2 creates an answer of 3.0
printf("%d divided by %d equals %.1f\n", x, y, floatAnswer);
// This will also be 3, as it truncates and doesn't round up
intAnswer = a / b;
printf("%.1f divided by %.1f equals %d\n", a, b, intAnswer);
The second output makes no sense to me: we divide integers, so why is there a floating-point "3.0"?
The third output is not clear either: why is it 3 when we divide floating-point numbers like 19.0 and 5.0?
Please help.
In the second example, the right-hand side (RHS) of the assignment is evaluated first. It is a division of two integers, so it is performed as an integer operation and yields 3; this is then converted to a float to fulfil the assignment, and the converted 3 cannot have a fractional part. The left-hand side is a float, however, and in the printf format you explicitly ask for one decimal place in the output, even though that decimal is (and must be) zero.
In the third example, the RHS is evaluated as a floating-point division, which (as you suspect) gives an interim value of 3.8; but when this is converted to an int to fulfil the assignment, the fractional part (.8) is lost, because an integer cannot have a fractional component. Conversion from a floating-point type to an integer always truncates (discards) the fractional part, so even converting 1.999999999 gives 1.
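A compact sketch of both directions of the conversion:
#include <stdio.h>
int main(void)
{
float f = 19 / 5;    /* integer division first (3), then conversion: 3.0 */
int i = 19.0f / 5;   /* float division first (3.8), then truncation: 3 */
printf("%.1f %d\n", f, i);  /* 3.0 3 */
return 0;
}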
When you divide x/y you get the integer 3 because it performs integer division. When you assign that to floatAnswer, the integer is automatically converted to the equivalent float value, which is 3.0.
printf("%.1f divided by %.1f equals %d\n", a, b, intAnswer);
a/b becomes 19.0/5.0 which is 3.8 and is reported that way.
Looking at the second case, we have:
floatAnswer = x / y
floatAnswer = 19 / 5
floatAnswer = 3
printf( 3.0 )
You can see that the integers undergo integer division before the result is assigned to floatAnswer. So the printf of floatAnswer isn't your problem; it's the integer division, which happens before floatAnswer is ever assigned the value.
Second case.
In the second case you assign the result of the int operation to a float.
int a = 19;
int b = 5;
float floatAnswer = a/b;
In that last line you assign an int to a float, which is implicitly converted. But the division was done using integer arithmetic; the conversion is the last step (when the value is assigned).
So basically, that's equivalent to
float floatAnswer = 3;
which is doing the same as,
float floatAnswer = (float)3;
Note the (float): it means that the value 3 is cast (converted) to float.
Third case.
In the third case you assign the result of 19.0/5.0 to an int,
float a = 19.0;
float b = 5.0;
int intAnswer = a/b;
This implicitly converts the value to an int, and converting a float to an int is done by truncating.
In this case this would be equivalent to
int intAnswer = 3.8;
Which is doing the same as,
int intAnswer = (int)3.8;
the (int) means that the value is cast (converted) to the int type.
You can picture it like this:
floatAnswer = x /y;
From right to left, the program will calculate:
temp = x/y: because x and y are integers, temp = 3.
floatAnswer = temp: temp is 3, so floatAnswer = 3.0.
intAnswer = a / b;
From right to left:
temp = a/b will be 3.8
intAnswer = temp, but intAnswer is an integer, so temp is truncated to 3
2nd case: you defined floatAnswer as a float variable, which is why it prints a decimal value.
3rd case: because you defined intAnswer as an integer, it prints an integer value.
What will be the output of the following C code?
Can the int data type hold floating-point values?
#include <stdio.h>
int main(){
float a = 1.1;
int b = 1.1;
if(a==b)
printf("YES");
else
printf("NO");
}
The output will be NO
It's because an int can't store fractional values. The value stored in b will be 1, which is not equal to 1.1, so the if condition is never satisfied and the output will always be NO.
The value stored in b will be 1. In the comparison, b is converted to the float value 1.0f, so the comparison yields NO.
int b = 1.1;
This truncates the double value 1.1 to 1 before assigning to variable b. The output therefore is NO.
But you can compare int to float. Consider this example:
float a=1.0;
int b=1;
if(a==b)
printf("YES");
else
printf("NO");
Here, b is converted to float before the comparison, and therefore you get a YES output.
You should get the answer NO when you execute it.
That's because when
int b = 1.1;
was executed, it initialized the variable b and assigned it the int value of 1.1, which is 1. You can check this by outputting the value of b.
Also, the type int stores whole numbers only, while float can store fractional numbers; the two types are completely different.
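For example, a quick sketch of what b actually holds:
#include <stdio.h>
int main(void)
{
int b = 1.1;              /* 1.1 is truncated to 1 at initialization */
printf("%d\n", b);        /* 1 */
printf("%d\n", b == 1.1); /* 0: b is promoted to 1.0, which is not 1.1 */
return 0;
}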
It will compile without any errors, and the output of the program will be "NO".
In C, int and float are two different data types. Even when you try to assign the value 1.1 to an integer variable, it will be initialized to 1.
Others have already told you that the answer will be NO. Can the int data type hold floating-point values?
Yes, but only within a single expression: if you want an int variable to behave like a float variable, you need to cast it to float.
For example:
int x = 5;
int y = 3;
printf("%f \n", (float)x/y);
The output will be 1.666667, which is a float value. (float)x is called casting. You can also cast a float value to an integer, (int)some_var, etc.
Note: the types of x and y remain int! They only behave like float in this one statement.
However, in your program that won't help, because as soon as you declare int b = 1.1;, b becomes 1.
I have two int values that I want to combine into a decimal number. For example, I have A = 1234234233 and B = 323444. Both are int, and I do not want to change that if possible.
I want to combine them to get 1234234233.323444.
My initial method was to divide B by 1e6 and add it to A to get my value.
I assigned
int A = 1234234233;
int B = 323444;
double C;
C = A + (B / 1000000);
printf("%.6f\n", C);
I get 1234234233.000000 as a result; the fractional part of C is lost, and I do not want that, as I want 1234234233.323444.
How can I solve this?
Try it like this:
C = A + (B / 1000000.0);
i.e., make the divisor a double, so that the division is no longer an integer-by-integer division that discards the fraction and returns the weird result you are getting.
NOTE:-
Integer/Integer = Integer
Integer/Double = Double
Double/Integer = Double
Double/Double = Double
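A small sketch confirming those four cases:
#include <stdio.h>
int main(void)
{
printf("%d\n", 7 / 2);     /* int / int -> int: 3 */
printf("%f\n", 7 / 2.0);   /* int / double -> double: 3.500000 */
printf("%f\n", 7.0 / 2);   /* double / int -> double: 3.500000 */
printf("%f\n", 7.0 / 2.0); /* double / double -> double: 3.500000 */
return 0;
}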
B is an integer, and dividing an integer by another integer (1000000 here) will always give an integer, which is why you are getting the unexpected result. Changing 1000000, which is of type int, to 1000000.0 (type double) solves the problem. 1000000 and 1000000.0 may be the same number by mathematical definition, but they are of different types in a programming language: the former is of type int, while the latter is of type double.
C = A + (B/ 1000000.0);
or
C = A + ((double)B/ 1000000);
to get the expected result.
I'm currently trying to use AngelScript with a simple piece of code, following the official website's examples.
But when I try to initialise a double variable like below in my script:
double x=1/2;
the variable x ends up initialised with the value 0.
It only works when I write
double x=1/2.0; or double x=1.0/2;
Is there a way to make AngelScript work in double precision when I type double x=1/2, without adding any more code to the script?
Thank you,
Using some macro chicanery:
#include <stdio.h>
#define DIV * 1.0 /
int main(void)
{
double x = 1 DIV 2;
printf("%f\n", x);
return 0;
}
DIV can also be defined as:
#define DIV / (double)
When you divide an int by an int, the result is an int. The quotient is the result, the remainder is discarded. Here, 1 / 2 yields a quotient of zero and a remainder of 1. If you need a double, try 1.0 / 2.
No, there is no way to get a double by dividing two ints without converting one of the operands first; casting the finished result is too late.
That's because 1 and 2 are integers, and:
int x = 1/2;
would be 0. If x is actually a double, you get an implicit conversion:
double x = (double)(1/2);
This means the integer result 1/2 = 0 becomes 0.0. Notice that this is not the same as:
double x = (double)1/2;
Which will do what you want.
Numbers with decimal points are doubles, and dividing an int by a double produces a double. You can also do this by casting each number:
double x = (double)1/(double)2;
Which is handy if 1 and 2 are actually int variables: by casting this way, their values are converted to double before the division, so the result will be a double.