Why cast when the conversion is implicit? - C

I see some code in C like this:
int main()
{
    int x = 4, y = 6;
    long z = (long) x + y;
}
What is the benefit of casting even though in this case it is implicit? Which operation comes first, x + y or casting x?

In this case the cast can serve a purpose. If you wrote
int x = 4, y = 6;
long z = x + y;
the addition would be performed on int values, and then, afterwards, the sum would be converted to long. So the addition might overflow. In this case, casting one operand causes the addition to be performed using long values, lessening the chance of overflow.
(Obviously in the case of 4 and 6 it's not going to overflow anyway.)
In answer to your second question, when you write
long z = (long)x + y;
the cast is definitely applied first, and that's important. If, on the other hand, you wrote
long z = (long)(x + y);
the cast would be applied after the addition (and it would be too late, because the addition would have already been performed on ints).
Similarly, if you write
float f = x / y;
or even
float f = (float)(x / y);
the division will be performed on int values, and will discard the remainder, and f will end up containing 0. But if you write
float f = (float)x / y;
the division will be performed using floating-point, and f will receive 0.666666.
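Here is a minimal program pulling both examples together (a sketch; the printed values assume typical IEEE floats):
#include <stdio.h>

int main(void)
{
    int x = 4, y = 6;

    long z1 = (long)x + y;      /* cast first: addition performed in long */
    long z2 = (long)(x + y);    /* addition performed in int, then widened */
    printf("z1 = %ld, z2 = %ld\n", z1, z2);   /* both 10 here */

    float f1 = (float)(x / y);  /* integer division first: 0.000000 */
    float f2 = (float)x / y;    /* floating-point division: 0.666667 */
    printf("f1 = %f, f2 = %f\n", f1, f2);
    return 0;
}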

Related

Cubic integer overflow

The following simple calculation causes an integer overflow:
#include <stdio.h>

int main(void) {
    int n = 1291;
    long cube = n*n*n;
    printf("Cube: %ld, n: %d\n", cube, n);
}
Output:
Cube: -2143282125, n: 1291
My thinking was that since the result of n*n*n is assigned to a long, the result should evaluate to 2151685171. However, it appears that the result is calculated first into an int; because if int n = 1291 is changed to long n = 1291, it works as expected.
Question:
Is the 'intermediary' result of n*n*n stored to int (the declared type) before being assigned to the long declaration? Or, more simply: Why does n*n*n cause an integer overflow when being assigned to a long type?
I tried to research the answer first, but I must be searching incorrectly.
Your question closely resembles the typical division question:
int a = 7;
double b = a / 2;
=> b seems to be equal to 3 instead of 3.5, and in order to avoid this, you need to do:
double b = ((double)a) / 2; // or:
double b = a / 2.0; // which is the same as (double)2
So, I believe that you might benefit from the same reasoning, doing something like this:
int n = 1291;
long cube = ((long)n) * n * n;
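Dropped into the original program, the fix might look like this (assuming long is 64 bits; on platforms where long is only 32 bits you would need long long):
#include <stdio.h>

int main(void) {
    int n = 1291;
    /* widening the first operand makes every multiplication happen in long */
    long cube = (long)n * n * n;
    printf("Cube: %ld, n: %d\n", cube, n);   /* Cube: 2151685171 */
    return 0;
}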

Compiler error operand of type 'div_t' where arithmetic or pointer type is required in C

I am trying to take the result of the div function in C, cast it to an int, and then add that int to a larger int value. I get the error in the title every time, and I cannot understand why:
out = div(n, 10);
r = (int) out;
a = a + r;
The compiler flags the second line, and out specifically, as the error.
Thank you in advance!
A div_t, as returned by div(), is a structure containing two numbers, the quotient and the remainder:
typedef struct {
    int quot;
    int rem;
} div_t;
If you've used the div() function then you want either r = out.rem or r = out.quot; it's not clear which from your example.
If all you want is the quotient, though, r = n / 10 is simpler. And if all you want is the remainder, r = n % 10 (for non-negative n). div() is useful in the case where you need both values - the actual divide instruction on many machines can deliver both results from one instruction.
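Applied to the snippet from the question, that might look like this (a sketch; the wrapper function and the choice of quot over rem are assumptions):
#include <stdlib.h>

/* hypothetical wrapper: adds the quotient of n / 10 to a */
int add_quotient(int a, int n)
{
    div_t out = div(n, 10);
    int r = out.quot;   /* already an int; use out.rem for the remainder */
    return a + r;       /* no cast needed */
}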
The div(x, y) function does both x / y and x % y in one operation. It returns a structure whose rem member holds the result of x % y and whose quot member holds the result of x / y. In your case you would access these values as out.quot and out.rem, and both members are already values of type int. Casting a structure containing two integers to an integer does not make sense.
On many processors there is a single division opcode that always calculates both, so if you need both, then div(x, y) is giving the other one for free. One common instance is converting a number into a decimal string which requires repeatedly taking quotient and remainder with 10; here you can use div efficiently for positive numbers:
div_t res = div(n, 10);
next_digit = res.rem;
// place next_digit into the string
n = res.quot;
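A fuller sketch of that idea, using a hypothetical helper that converts a non-negative int to a decimal string:
#include <stdio.h>
#include <stdlib.h>

/* hypothetical helper: writes the decimal digits of non-negative n into buf,
   which must be large enough (12 bytes covers any 32-bit int) */
static void itoa_dec(int n, char *buf)
{
    char tmp[12];
    int i = 0, j = 0;
    do {
        div_t res = div(n, 10);        /* quotient and remainder in one call */
        tmp[i++] = (char)('0' + res.rem);
        n = res.quot;
    } while (n > 0);
    while (i > 0)
        buf[j++] = tmp[--i];           /* digits arrive least-significant first */
    buf[j] = '\0';
}

int main(void)
{
    char buf[12];
    itoa_dec(1291, buf);
    printf("%s\n", buf);               /* prints 1291 */
    return 0;
}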

typecasting long to int and short in C

long x = <some value>;
int y = <some value>;
I want to subtract y from x. Which of the following will give me different or the same results?
x = (int)x - y;
x = x - y;
x = (short)x - (short)y;
The type long is "bigger" than int. Values of smaller types are automatically converted to bigger types. So this code:
someLongNumber + someIntNumber
is effectively
someLongNumber + (long)someIntNumber
So the result will be long. You don't have to cast if you want to place the result in x. However, if you want to place it in y, the long result is converted back to int, possibly losing the high bits; you can make that narrowing explicit by casting the x operand to int ((int)x) or the whole operation ((int)(x - y)).
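A small demonstration of the narrowing (a sketch assuming 64-bit long; the value 5000000000 is arbitrary):
#include <stdio.h>

int main(void)
{
    long x = 5000000000L;   /* needs more than 32 bits */
    int  y = 1;

    long full  = x - y;          /* subtraction performed in long */
    int  trunc = (int)(x - y);   /* narrowed: implementation-defined when
                                    the value doesn't fit in an int */
    printf("full = %ld, trunc = %d\n", full, trunc);
    return 0;
}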

What should I write in between parentheses to define the data type I want a variable to change to?

If I have a variable declared as float var1 = 157.1; and want to convert it to an int, I would do int var2 = (int)var1;
I want to know about the other types of data, such as long int, short int, unsigned short int, long double and so on.
I tried long int var2 = (long int)var1; and it seemed to work, but I'm not sure if it is syntactically correct. If it is, I assume it'd be the same for all the other types, i.e., just the data type and its attributes separated by spaces. If it isn't, I'd like to know if there's a list of them of some sort.
This is the C cast operator, but the operation is more generally "type casting", "casting", or "recasting". This is a directive to the compiler to request a specific conversion.
When casting any valid C type can be specified, so:
int x = 10;
unsigned long long y = (unsigned long long) x;
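A sketch covering the specific types asked about (the variable names are just for illustration):
#include <stdio.h>

int main(void)
{
    float var1 = 157.1f;

    long int var2           = (long int)var1;
    short int var3          = (short int)var1;
    unsigned short int var4 = (unsigned short int)var1;
    long double var5        = (long double)var1;

    printf("%ld %hd %hu %Lf\n", var2, var3, var4, var5);
    return 0;
}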
In many cases the conversion happens implicitly and automatically, so a cast isn't always necessary, but in other cases you must force it. For example:
int x = 10;
float y = x; // Valid, int -> float happens automatically.
You can get caught by surprise though:
int x = 10;
float y = x / 3; // y = 3.0, not 3.333: integer division happens before the implicit conversion
Where you need to cast to get the right result:
int x = 10;
float y = (float) x / 3; // 3.33333...
Note that when using pointers this is a whole different game:
int x = 10;
int* px = &x;
float* y = (float*) px; // The cast compiles, but reading *y as a float is undefined behaviour
Generally C trusts you to know what you're doing, so you can easily shoot yourself in the foot. What "compiles" is syntactically valid by definition, but executing properly without crashing is a whole other concern. Anything not specified by the C "rule book" (C standard) is termed undefined behaviour, so you'll need to be aware of when you're breaking the rules, like in that last example.
Sometimes breaking the rules is necessary, like the Fast Inverse Square Root, which relies on reinterpreting the bits of one type as another.
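When you genuinely need to reinterpret bits, a memcpy-based sketch stays within the rules (this is just the general idea, not the original fast-inverse-square-root code):
#include <stdint.h>
#include <string.h>

/* reinterpret a float's bit pattern as a 32-bit integer without
   violating the aliasing rules */
static uint32_t float_bits(float f)
{
    uint32_t u;
    memcpy(&u, &f, sizeof u);   /* well-defined, unlike *(uint32_t *)&f */
    return u;
}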

Promotion Order in C-like Languages

We know that types get promoted. For example, if you write:
int i = 2;
double d = 4.0;
double result = i / d;
. . . then the int will get promoted to a double, resulting in 0.5. However, I wasn't able to find any information on what happens if promotion and evaluation order conflict (it's also surprisingly difficult to Google). For example:
int i = 2;
int j = 4;
double d = 1.0;
double result = d * i / j;
In this example, the value depends on when promotion happens. If i gets promoted before the division, then the result will be 0.5, but if the result of i / j gets promoted, then integer division happens and the result is 0.0.
Is the result of what happens well defined? Is it the same in C++ and other C-derived languages?
Is the result of what happens well defined?
Yes.
Is it the same in C++ and other C-derived languages?
For C++ - yes. But "C-derived languages" is not that well defined, so it is hard to answer.
Because * and / have the same precedence and associate left to right, the expression
d * i / j
groups as
(d * i) / j
So, first i gets promoted to double due to d * i.
Then, the result (double) has to be divided by j, so j gets promoted to double. So there are two promotions.
However, for
d + i / j
the order of operations is different. First, i / j division is done using integer arithmetics, and then the result is promoted to double. So there is only one promotion.
I believe promotion order follows the order of operations. When the compiler sees the line
double result = d * i / j;
it breaks the line down into:
double result;
result = d * i;
result = result / j;
before transforming it into machine code.
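A short program contrasting the two groupings (a sketch; the commented outputs follow standard C semantics):
#include <stdio.h>

int main(void)
{
    int i = 2, j = 4;
    double d = 1.0;

    /* groups as (d * i) / j: both operations are performed in double */
    printf("d * i / j = %f\n", d * i / j);   /* 0.500000 */

    /* i / j groups first and is integer division (0); only that
       result is promoted to double */
    printf("d + i / j = %f\n", d + i / j);   /* 1.000000 */
    return 0;
}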
