Promotion Order in C-like Languages

We know that types get promoted. For example, if you write:
int i = 2;
double d = 4.0;
double result = i / d;
...then the int will get promoted to a double, resulting in 0.5. However, I wasn't able to find any information on what happens when promotion and evaluation order conflict (it's also surprisingly difficult to Google). For example:
int i = 2;
int j = 4;
double d = 1.0;
double result = d * i / j;
In this example, the value depends on when promotion happens. If i gets promoted before the division, then the result will be 0.5, but if the result of i / j gets promoted, then integer division happens and the result is 0.0.
Is what happens well defined? Is it the same in C++ and other C-derived languages?

Is what happens well defined?
Yes.
Is it the same in C++ and other C-derived languages?
For C++ - yes. But "C-derived languages" is not that well defined, so it is hard to answer.
Since * and / have the same precedence and group left to right, the expression
d * i / j
is parsed as
(d * i) / j
So, first i gets promoted to double due to d * i.
Then, the result (double) has to be divided by j, so j gets promoted to double. So there are two promotions.
However, for
d + i / j
the order of operations is different. First, the i / j division is done using integer arithmetic, and then the result is promoted to double. So there is only one promotion.
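To make the promotion order concrete, here is a minimal sketch (my own example, reusing the question's values) that prints both expressions:
#include <stdio.h>

int main(void) {
    int i = 2, j = 4;
    double d = 1.0;
    /* (d * i) / j: i and then j are promoted, so both operations are floating-point. */
    printf("%f\n", d * i / j);   /* prints 0.500000 */
    /* d + i / j: i / j is integer division (0), then that 0 is promoted and added. */
    printf("%f\n", d + i / j);   /* prints 1.000000 */
    return 0;
}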

I believe promotion order follows the order of operations. When the compiler sees the line
double result = d * i / j;
it breaks the line down into:
double result;
result = d * i;
result = result / j;
before transforming it into machine code.
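As a rough sketch of that breakdown (my own illustration, reusing the question's variables), the temporary already holds a double, so both steps happen in floating-point:
double tmp = d * i;        /* i promoted to double: tmp == 2.0 */
double result = tmp / j;   /* j promoted to double: result == 0.5 */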

Related

Two different answers for same expression in C

I have an expression which does the same calculation two ways. When I try to do the whole calculation in a single expression and store it in variable "a", the expression calculates the answer as 0. When I split the calculation into two parts and then divide them, it gives the answer -0.332087. Obviously, -0.332087 is the correct answer. Can anybody explain why this program is misbehaving like this?
#include <stdio.h>
void main(){
    double a, b, c;
    int n = 0, sumY = 0, sumX = 0, sumX2 = 0, sumY2 = 0, sumXY = 0;
    n = 85;
    sumX = 4276;
    sumY = 15907;
    sumX2 = 288130;
    sumY2 = 3379721;
    sumXY = 775966;
    a = ((n*sumXY) - (sumX*sumY)) / ((n*sumX2) - (sumX*sumX));
    b = ((n*sumXY) - (sumX*sumY));
    c = ((n*sumX2) - (sumX*sumX));
    printf("%lf\n", a);
    printf("%lf\n", b/c);
}
Output:
0.000000
-0.332087
In your program
a = ((n*sumXY) - (sumX*sumY)) / ((n*sumX2)-(sumX*sumX));
all the variables on the right-hand side are of type int, so the division is performed as integer division and produces a result of type int. The mathematically exact quotient, about -0.332087, is not an int value, so it is truncated toward zero, giving 0. And this 0 is assigned to variable a.
But when you do
b = ((n*sumXY) - (sumX*sumY));
c = ((n*sumX2) - (sumX*sumX));
printf("%lf\n", b/c);
The variables b and c are of type double, so the expression b/c produces a value of type double, and the true answer -0.332087 is a valid double value. Thus this part of your code gives the right result.
In the first expression, i.e. a = ((n*sumXY) - (sumX*sumY)) / ((n*sumX2)-(sumX*sumX)), both the numerator and the denominator give integer results, and the value stored in a is also a whole number, because integer/integer is an integer. In the second and third expressions you evaluate the parts individually, so both b and c are stored as doubles, and double/double results in a double, i.e. a value that keeps its decimal part.
This problem can be solved by using a cast - or, better still, by declaring the variables as floating-point (double) in the first place.
Cast the numerator to double: the numerator itself is still computed with integer arithmetic, but the division is then performed in floating-point.
a = (double)((n*sumXY) - (sumX*sumY)) / ((n*sumX2)-(sumX*sumX));
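A minimal sketch of that fix (same values as in the question; the only change is the cast on the numerator):
#include <stdio.h>

int main(void) {
    int n = 85, sumX = 4276, sumY = 15907, sumX2 = 288130, sumXY = 775966;
    /* The cast makes the numerator a double, so the division is done
       in floating-point instead of truncating integer division. */
    double a = (double)((n*sumXY) - (sumX*sumY)) / ((n*sumX2) - (sumX*sumX));
    printf("%lf\n", a);   /* prints -0.332087 */
    return 0;
}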
First of all, (INTEGER)/(INTEGER) is always an INTEGER, so casting the finished quotient won't help; the cast has to be applied to one of the operands, like a = ((double)((n*sumXY) - (sumX*sumY))) / ((n*sumX2)-(sumX*sumX));
OR
We know that any number n ∈ ℂ, multiplied by 1.0, always gives the same number n. So your code can be:
a = ((n*sumXY*1.0L) - (sumX*sumY)) / ((n*sumX2)-(sumX*sumX));
Multiplying by 1.0L converts the whole arithmetic operation to the long double data type (1.0 would give double, and 1.0f would give float).
Now you can print the number (-0.332087) to stdout.
Non-standard Code
Your code is:
void main()
{
//// YOUR CODE
}
Which is non-standard C. Instead, your code should look like:
int main(int argc, char **argv)
{
//// YOUR CODE
return 0;
}
Change all your integers to doubles. That should solve the problem.
The type of the first expression is int, whereas the type of the second expression is double.

c integer division not returning correct quotient

my code (in the C language) is this:
avg = round((r + g + b) / 3);
r = avg;
g = avg;
b = avg;
It should produce the grayscale image effect. There are no syntax errors, but when avg is calculated as shown above with r as 27 and both g and b as 28, it comes out as 27. I know the true average is 27.66666..., which rounds to 28. If you can explain why this happens with round(), and/or give me a solution, it is really appreciated.
Assuming that r, g, and b have integer types, the expression (r + g + b) / 3 performs integer division because all operands have integer type. This means that any fractional part of the division gets truncated.
Change the expression to this:
(r + g + b) / 3.0
The constant 3.0 has type double, so this will perform floating point division. Then rounding the result of this expression will give you the desired result.
You didn't provide the data types, so I assume int.
Dividing the sum by 3 performs an integer division, so 27 is correct.
What you want is round((r + g + b) / 3.0), which will implicitly cast (r + g + b) to double.
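A minimal sketch of that fix, assuming int channel values; round() comes from math.h and returns a double, which is converted back to int on assignment:
#include <stdio.h>
#include <math.h>

int main(void) {
    int r = 27, g = 28, b = 28;
    /* Divide by 3.0 so the division is done in floating-point,
       then round to the nearest integer. */
    int avg = (int)round((r + g + b) / 3.0);
    printf("%d\n", avg);   /* prints 28 */
    r = g = b = avg;
    return 0;
}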

A newbie question about operations with integers and floating-points in C

I'm a C language newbie. Just trying to figure out why one code example from the textbook "Absolute Beginner's Guide to C, 3rd Edition" works like that:
// Two sets of equivalent variables, with one set
// floating-point and the other integer
float a = 19.0;
float b = 5.0;
float floatAnswer;
int x = 19;
int y = 5;
int intAnswer;
// Using two float variables creates an answer of 3.8
floatAnswer = a / b;
printf("%.1f divided by %.1f equals %.1f\n", a, b, floatAnswer);
floatAnswer = x / y; // Take 2 creates an answer of 3.0
printf("%d divided by %d equals %.1f\n", x, y, floatAnswer);
// This will also be 3, as it truncates and doesn't round up
intAnswer = a / b;
printf("%.1f divided by %.1f equals %d\n", a, b, intAnswer);
The second output is not understandable to me. We take integers so why is there a floating-point "3.0"?
The third output is not clear either. Why is there 3 when we take floating-point numbers like 19.0 and 5.0?
Pls help
In the second example, the right-hand side (RHS) of the assignment is evaluated first: this is a division of two integers, so it is performed as such (an integer operation), and the result is 3; this is then converted to a float, in order to fulfil the assignment, and the conversion from an integral 3 cannot have any fractional part. However, the left-hand side is a float and, in the printf format, you are explicitly asking for 1 decimal place in the output - even though that is (and must be) zero.
In the third example, the RHS is evaluated as a floating-point division, which will (as you suspect) give an interim value of 3.8; however, when this is then converted to an int, in order to fulfil the assignment, the fractional part (.8) is lost, as an integer cannot have any fractional component. Conversion from a floating-point type to an integer will always truncate (discard) any fractional part, so even converting 1.999999999 will give a value of 1.
When you divide x/y you get the integer 3 because it performs integer division. When you assign that to floatAnswer, the integer is automatically converted to the equivalent float value, which is 3.0.
printf("%.1f divided by %.1f equals %d\n", a, b, intAnswer);
a/b becomes 19.0/5.0, which is 3.8, but intAnswer can only hold the truncated value 3, and that is what %d reports.
Looking at the second case, we have:
floatAnswer = x / y
floatAnswer = 19 / 5
floatAnswer = 3
printf( 3.0 )
You can see that the integers undergo integer division before the result is assigned to floatAnswer. That means the printf of floatAnswer isn't your problem; it's the integer division, which happens before floatAnswer is ever assigned the value.
Second case.
In the second case you assign the result of the int operation to a float.
int a = 19;
int b = 5;
float floatAnswer = a/b;
In that last line you assign an int to a float, which is implicitly converted. But the division operation was done using integer arithmetic. The conversion is the last step (when the value is assigned).
So basically, that's equivalent to
float floatAnswer = 3;
which is doing the same as,
float floatAnswer = (float)3;
Note the (float). That means that the value 3 is cast (converted) to float.
Third case.
In the third case you assign the result of 19.0/5.0 to an int,
float a = 19.0;
float b = 5.0;
int intAnswer = a/b;
This implicitly converts the value to an int, and converting float to int is done by truncating.
In this case this would be equivalent to
int intAnswer = 3.8;
Which is doing the same as,
int intAnswer = (int)3.8;
the (int) means that the value is cast (converted) to an int type.
You can imagine it as below:
floatAnswer = x / y;
From right to left, the program will calculate:
temp = x / y: because x and y are integers, temp = 3.
floatAnswer = temp: temp is 3, so floatAnswer = 3.0.
intAnswer = a / b;
From right to left:
temp = a / b will be 3.8.
intAnswer = temp: but intAnswer is an integer, so 3.8 is truncated to 3.
2nd case: As I see it, you have defined floatAnswer as a float variable; that is why it prints a decimal value.
3rd case: Because you have defined intAnswer as an integer, it holds a whole-number value.
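For completeness, a small sketch (not from the book) showing how a cast on one operand keeps the fractional part:
#include <stdio.h>

int main(void) {
    int x = 19, y = 5;
    /* Casting one operand forces floating-point division, so the .8 is kept. */
    float floatAnswer = (float)x / y;
    printf("%d divided by %d equals %.1f\n", x, y, floatAnswer);   /* prints 3.8 */
    return 0;
}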

why cast when the conversion is implicit?

I see some code in C like this:
int main()
{
    int x = 4, y = 6;
    long z = (long) x + y;
}
What is the benefit of casting even though in this case it is implicit? Which happens first: x + y, or the cast of x?
In this case the cast can serve a purpose. If you wrote
int x = 4, y = 6;
long z = x + y;
the addition would be performed on int values, and then, afterwards, the sum would be converted to long. So the addition might overflow. In this case, casting one operand causes the addition to be performed using long values, lessening the chance of overflow.
(Obviously in the case of 4 and 6 it's not going to overflow anyway.)
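A minimal sketch of the overflow point, assuming a platform where int is 32 bits and long is 64 bits (on a platform where long is also 32 bits the cast would not help):
#include <stdio.h>

int main(void) {
    int x = 2000000000, y = 2000000000;   /* each value fits in a 32-bit int */
    /* long bad = x + y;  would add in int and overflow: undefined behaviour */
    long good = (long)x + y;              /* the addition is performed in long */
    printf("%ld\n", good);                /* prints 4000000000 */
    return 0;
}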
In answer to your second question, when you write
long z = (long)x + y;
the cast is definitely applied first, and that's important. If, on the other hand, you wrote
long z = (long)(x + y);
the cast would be applied after the addition (and it would be too late, because the addition would have already been performed on ints).
Similarly, if you write
float f = x / y;
or even
float f = (float)(x / y);
the division will be performed on int values, and will discard the remainder, and f will end up containing 0. But if you write
float f = (float)x / y;
the division will be performed using floating-point, and f will end up containing approximately 0.666667.

Dividing integers in C rounds the value down / gives zero as a result

I'm trying to do some arithmetic on integers. The problem is when I'm trying to do division to get a double as a result, the result is always 0.00000000000000000000, even though this is obviously not true for something like ((7 * 207) / 6790). I have tried type-casting the formulas, but I still get the same result.
What am I doing wrong and how can I fix it?
int o12 = 7, o21 = 207, numTokens = 6790;
double e11 = ((o12 * o21) / numTokens);
printf(".%20lf", e11); // prints 0.00000000000000000000
Regardless of the actual values, the following holds:
int / int = int
The result is not converted to a non-int type automatically.
So the quotient is truncated to an int (the fractional part is discarded) when the division is done.
What you want to do is force any of these to happen:
double / int = double
float / int = float
int / double = double
int / float = float
The above involves an automatic widening conversion - note that only one needs to be a floating point value.
You can do this by either:
Putting a (double) or (float) before one of your values to cast it to the corresponding type or
Changing one or more of the variables to double or float
Note that a cast like (double)(int / int) will not work, as this first does the integer division (which returns an int, and thus floors the value) and only then casts the result to double (this will be the same as simply trying to assign it to a double without any casting, as you've done).
It is certainly true for an expression such as ((7 * 207) / 6790) that the result is 0, or 0.0 if you think in double.
The expression only has integers, so it will be computed as an integer multiplication followed by an integer division.
You need to cast to a floating-point type to change that, e.g. ((7 * 207) / 6790.0).
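A minimal sketch of that fix applied to the snippet from the question:
#include <stdio.h>

int main(void) {
    int o12 = 7, o21 = 207, numTokens = 6790;
    /* Casting one operand makes the division floating-point. */
    double e11 = (o12 * o21) / (double)numTokens;
    printf("%.20lf\n", e11);   /* prints roughly 0.2134... instead of 0 */
    return 0;
}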
Many people seem to expect the right-hand side of an assignment to be automatically "adjusted" to the type of the target variable: this is not how it works. The result is converted, but that doesn't affect any "inner" operations in the right-hand expression. In your code:
e11 = ((o12 * o21) / numTokens);
All of o12, o21 and numTokens are integer, so that expression is evaluated as integer, then converted to floating-point since e11 is double.
This is like doing
const double a_quarter = 1 / 4;
this is just a simpler case of the same problem: the expression is evaluated first, then the result (the integer 0) is converted to double and stored. That's how the language works.
The fix is to cast:
e11 = ((o12 * o21) / (double) numTokens);
You must cast these numbers to double before the division. When you perform division on int the result is also an integer, truncated towards zero, e.g. 1 / 2 == 0, but 1.0 / 2.0 == 0.5.
If the operands are integer, C will perform integer arithmetic. That is, 1/4 == 0. However, if you force an operand to be double, then the arithmetic will take fractional parts into account. So:
int a = 1;
int b = 4;
double c = 1.0;
double d = a / b;          // d == 0.0 (integer division happens first)
double e = c / b;          // e == 0.25
double f = (double)a / b;  // f == 0.25
