int main() {
    char a = 5;
    float b = 6.0;
    int c = a + b;
    return c;
}
Looking at the instructions generated by gcc, the above code is evaluated like this:
Load 5 and convert it to float as a
Load 6 as b
Add a and b
Convert the result to an integer and return
Does gcc not care about the return type while it's dealing with the expression?
It could have converted b to an int right off the bat as everything else is an integer type.
Is there a rule which explains why one side of an expression is evaluated a certain way regardless of what the other side is?
You ask "Is there a rule?" Of course there is a rule. Any widely used programming language will have a huge set of rules.
You have an expression "a + b". a has type char, b has type float. There's a rule in the C language (the "usual arithmetic conversions") that the compiler has to find a common type and convert both operands to that common type. Since one of the operands has a floating-point type, the common type must be a floating-point type: float, double, or long double. Here the floating-point operand is a float, so the common type is float (the compiler is allowed to evaluate the expression with excess precision, e.g. as double, and documents its choice via FLT_EVAL_METHOD in <float.h>). It seems the compiler chose "float" as the common type.
So a is converted from char to float, b is already float, both are added, and the result has type float. And then there's a rule that to assign a float to an int, a conversion takes place according to very specific rules.
Any C compiler must follow these rules exactly. There is one exception: if the compiler can produce the results that it is supposed to produce, it doesn't matter how it gets them, as long as you can't tell the difference from the outside (the "as-if" rule). So the compiler can change the whole code to "return 11;".
In the C language, partial expressions are evaluated without regard to how they are used later. Whether a+b is assigned to an int, a char, or a double, it is always evaluated in the same way. There are other languages with different rules, where the fact that a+b is assigned to an int could change how it is evaluated. C is not one of those languages.
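For example, here's a minimal sketch of my own illustrating that point; any conforming compiler behaves the same:

#include <stdio.h>

int main(void) {
    char a = 5;
    float b = 6.0f;

    /* a + b is evaluated the same way in all three cases: a is converted
       to float and the addition yields the float value 11.0f. Only the
       final conversion to the destination type differs. */
    int    i = a + b; /* 11.0f converted to int:    11   */
    char   c = a + b; /* 11.0f converted to char:   11   */
    double d = a + b; /* 11.0f converted to double: 11.0 */

    printf("%d %d %f\n", i, c, d);
    return 0;
}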
If you change it to:
int main() {
    char a = 5;
    float b = 6.6;
    int c = a + 2*b;
    return c;
}
then it becomes clear that you have to keep the float 18.2 until the end.
With no optimizations, gcc acts as if this could happen and does a lot of conversions.
With just -O it already does the math itself and directly returns the final integer, even in the above example.
There is no in-between reasoning or shortcut here. Why would it simplify 5+6.0 to 5+6 but not go all the way to 11? Either the compiler acts naively and emits the conversions (cvtsi2ss xmm1, eax, and back, etc.), or it just returns 11 directly.
When I write the following C code, I find that (contrary to what I expected) gcc can accept this code and compile it successfully! I don't know why, since it seems to be a wrong statement.
int main() {
    int a = 1i;
    return 0;
}
I guess it may be accepting 1i as a complex number. Then int a = 1i means int a = 0 + 1i, and since 1i is not a valid integer, only the real part 0 is kept.
#include <stdio.h>

int main() {
    int a = 1i*1i;
    printf("%d\n", a);
    return 0;
}
I tried the code above and found that it prints -1. Maybe my thought is correct. But this is the first time I find that the C compiler can do this work. Is my guess correct?
Your intuition is correct. gcc allows for complex number constants as an extension.
From the gcc documentation:
To write a constant with a complex data type, use the suffix i or j (either one; they are equivalent). For example, 2.5fi has type _Complex float and 3i has type _Complex int. Such a constant always has a pure imaginary value, but you can form any complex value you like by adding one to a real constant. This is a GNU extension; if you have an ISO C99 conforming C library (such as GNU libc), and want to construct complex constants of floating type, you should include <complex.h> and use the macros I or _Complex_I instead.
So 1i is the complex number i. Then when you assign it to a, the imaginary part is discarded and the real part is assigned (and if the real part were of floating-point type, it would be converted to int).
This conversion is spelled out in section 6.3.1.7p2 of the C standard:
When a value of complex type is converted to a real type, the imaginary part of the complex value is discarded and the value of the real part is converted according to the conversion rules for the corresponding real type.
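Here is a tiny example of that rule in action, using the standard <complex.h> macro I rather than the GNU 1i suffix (my own sketch):

#include <complex.h>
#include <stdio.h>

int main(void) {
    double complex z = 3.7 + 2.0 * I;

    /* Per 6.3.1.7p2: the imaginary part (2.0) is discarded, and the
       real part (3.7) is converted to int by truncation. */
    int n = z;

    printf("%d\n", n); /* prints 3 */
    return 0;
}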
I know that in Java I can't return a double from a function that is supposed to return an int value without casting, but now that I'm learning C I see that I can compile something like this (only warnings, no errors):
int calc(double d, char c) {
    return d * c / 3;
}
So my question is: will the C compiler always do this auto-casting for me when needed? Or does this specific case work only because of the char or something?
C has a concept of implicit conversions, i.e. rules that define under which conditions and how values are converted to different types implicitly, without the need to explicitly cast them (see, for example, cppreference.com). So C is not "auto-casting" everything, but only under certain conditions.
Your return type is int, whereas the result of expression d * c / 3 is double. So the following (implicit) conversion applies:
Real floating-integer conversions
A finite value of any real floating type can be implicitly converted to any integer type. Except where covered by boolean conversion above, the rules are: the fractional part is discarded (truncated towards zero). If the resulting value can be represented by the target type, that value is used; otherwise, the behavior is undefined.
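As a quick illustration of the truncation rule (my own example):

#include <stdio.h>

int main(void) {
    int a = 2.9;  /* fractional part discarded: a == 2  */
    int b = -2.9; /* truncated towards zero:    b == -2 */

    printf("%d %d\n", a, b);
    return 0;
}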
Actually, I think that you may not get any warning. The compiler is performing an implicit conversion, and the only way to force a warning is to add the flag -Wconversion (I am talking about gcc; you get something like: warning: conversion to ‘int’ from ‘double’ may alter its value [-Wfloat-conversion]). Anyway, the compiler assumes that you know about it; that is, the decimal part disappears in an integer conversion.
(See YePhIcK's comment about MSVC)
/** Program for internal typecasting by the compiler **/
#include <stdio.h>

int main(void)
{
    float b = 0;
    // The second operand is an integer value which gets added to the first
    // operand, which is of float type. Will the second operand be typecast to float?
    b = (float)15/2 + 15/2;
    printf("b is %f\n", b);
    return 0;
}
OUTPUT : b is 14.500000
Yes, an integral value can be added to a float value.
For the basic math operations (+, -, *, /), when one operand is of type float and the other is an int, the int is converted to float first.
So 15.0f + 2 will convert 2 to float (i.e. to 2.0f) and the result is 17.0f.
In your expression (float)15/2 + 15/2, since / has higher precedence than +, the effect will be the same as computing ((float)15/2) + (15/2).
(float)15/2 explicitly converts 15 to float and therefore implicitly converts 2 to float, yielding the final result of division as 7.5f.
However, 15/2 does an integer division, so produces the result 7 (there is no implicit conversion to float here).
Since (float)15/2 has been computed as a float, the value 7 is then converted to float before addition. The result will therefore be 14.5f.
Note: floating point types are also characterised by finite precision and rounding error that affects operations. I've ignored that in the above (and it is unlikely to have a notable effect with the particular example anyway).
Note 2: Old versions of C (before the C89/90 standard) actually converted float operands to double in expressions (and therefore had to convert values of type double back to float, when storing the result in a variable of type float). Thankfully the C89/90 standard fixed that.
Rule of thumb: When doing an arithmetic calculation between two different built-in types, the "smaller" type will be converted into the "larger" type.
double > float > long long (C99) > long > int > short > char.
b = (float)15/2 + 15/2;
Here the first part, (float)15/2 is equivalent to 15.0f / 2. Because an operation involving a "larger" type and a "smaller" type will yield a result in the "larger" type, (float)15/2 is 7.500000, or 7.5f.
When it comes to 15/2, since both operands are integers, the operation is done purely in integer arithmetic. Therefore the fractional part is stripped, and the result is just 7.
So the expression is calculated into
b = 7.5f + 7;
No doubt you'll have 14.500000 as the final result, because it's exactly 14.5f.
b = (float)15/2 + 15/2;
The first part ((float)15/2) will work fine. The second part will also work, but it is computed as an integer division first, so you lose the fractional part. Like:
b = (float)15/2 + 15/2;
b = 7.500000f + 7
b = 14.500000
It's worth asking: if an integer value could not be added to a floating-point value, what would the symptom be?
Compiler issues error or warning message.
Something gets truncated; you don't get the result you want.
Undefined behavior: you might or might not get the result you want, and the compiler might or might not warn you about it.
But in fact none of these things happen. When you add an integer to a floating-point value, the compiler automatically converts the integer to a floating-point value so it can do the addition that way, and this is perfectly well defined. For example, if you have the code
double d = 7.5;
int i = 7;
double result = d + i;
the compiler interprets this just as if you had written
double result = d + (double)i;
And it works this way for just about all operations: the same logic is applied when you subtract, multiply, or divide a floating-point value and an integer.
And it works this way for just about all types. If you add a long int and an int, the plain int automatically gets converted to a long.
As a general rule (and I really can't think of too many exceptions), the compiler always wants to do arithmetic on two values of the same type. So whenever you have two values of different type, the compiler will just about always convert one of them for you. The full set of rules for how it does this are rather elaborate, but they're all supposed to make sense, and do what you want. The full set of rules is called the usual arithmetic conversions, and if you do a Google search on that phrase you'll find lots of explanations.
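For instance, a minimal sketch of the long/int case (names are mine):

#include <stdio.h>

int main(void) {
    long a = 100000L;
    int  b = 42;

    /* Usual arithmetic conversions: b is converted to long, the
       addition is done in long, and the result has type long. */
    long sum = a + b;

    printf("%ld\n", sum); /* prints 100042 */
    return 0;
}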
One case that does not necessarily do what you want is when the two variables are not different types. In particular, if the two variables are both integers, and the operation you're doing is division, the compiler doesn't have to convert anything: it divides the integer by the integer, discarding any remainder, and gives you an integer result. So if you have
int i = 1;
int j = 2;
int k = i / j;
then k ends up containing 0. And if you have
double d = i / j;
then d ends up containing 0 also, because the compiler follows exactly the same rules when performing the division; it doesn't "peek outside" to see that it's going to need a floating-point result.
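If you do want the floating-point result, the usual fix is to convert one operand yourself:

double d = (double)i / j; /* i converted explicitly, so j is converted
                             implicitly too; d ends up as 0.5 */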
P.S. I said, "As a general rule, the compiler always wants to do arithmetic on two values of the same type", and I said I couldn't think of too many exceptions. But if you're curious, two exceptions are the << and >> operators. If you have x << y, where x is a long int and y is a plain int, the compiler does not have to convert y to a long int first.
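A small sketch of that exception (my own example):

#include <stdio.h>

int main(void) {
    long x = 1L;
    int  y = 20;

    /* Integer promotions are applied to each shift operand separately;
       y is NOT converted to long. The result has the promoted type of
       the left operand, i.e. long. */
    long r = x << y;

    printf("%ld\n", r); /* prints 1048576 */
    return 0;
}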
Why use a double or float literal when you need an integral value and an integer literal will be implicitly cast to a double/float anyway? And when a fractional value is needed, why bother adding the f (to make a floating point literal) where a double will be cast to a float anyway?
For example, I often see code similar to the following
float foo = 3.0f;
double bar = 5.0;
// And, unfortunately, even
double baz = 7.0f;
and
void quux(float foo) {
    ...
}
...
quux(7.0f);
But as far as I can tell those are equivalent to
float foo = 3;
// or
// float foo = 3.0;
double bar = 5;
double baz = 7;
quux(7);
I can understand the method call if you are in a language with overloading (C++, Java) where it can actually make a functional difference if the function is overloaded (or will be in the future), but I'm more concerned with C (and to a lesser extent Objective-C), which doesn't have overloading.
So is there any reason to bother with the extra decimal and/or f? Especially in the initialization case, where the declared type is right there?
Many people learned the hard way that
double x = 1 / 3;
doesn't work as expected. So they (myself included) program defensively by using floating-point literals instead of relying on the implicit conversion.
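Compare (a minimal example of my own):

double wrong = 1 / 3;   /* integer division happens first: wrong == 0.0 */
double right = 1.0 / 3; /* a floating-point operand forces floating-point
                           division: right == 0.333... */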
C doesn't have overloading, but it has something called variadic functions. This is where the .0 matters.
#include <stdarg.h>

void Test(int n, ...)
{
    va_list list;
    va_start(list, n);
    double d = va_arg(list, double);
    ...
    va_end(list);
}
Calling the function without making sure the argument is a double will cause undefined behaviour, since the va_arg macro will interpret the argument's memory as a double, when in reality it is an integer.
Test(1, 3); has to be Test(1, 3.0);
But you might say: "I will never write variadic functions, so why bother?"
printf (and family) are variadic functions.
This call should generate a warning:
printf("%lf", 3); // will cause undefined behaviour
But depending on the warning level, the compiler, and whether you included the correct header, you may get no warning at all.
The problem is also present if the types are switched:
printf("%d" , 3.0 ) ; //undefined behaviour
Why use a double or float literal when you need an integral value and an integer literal will be implicitly cast to a double/float anyway?
First off, "implicit cast" is an oxymoron (casts are explicit by definition). The expression you're looking for is "implicit [type] conversion".
As to why: because it's more explicit (no pun intended). It's better for the eye and the brain if you have some visual indication about the type of the literal.
why bother adding the f (to make a floating point literal) where a double will be cast to a float anyway?
For example, because double and float have different precision. Since floating-point is weird and often unintuitive, it is possible that the conversion from double to float (which is lossy) will result in a value that is different from what you actually want if you don't specify the float type manually.
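A classic illustration of where this can bite (my own example):

#include <stdio.h>

int main(void) {
    float f = 0.1f;

    /* f is promoted to double for the comparison. The double closest to
       0.1 is not the same value as 0.1f widened to double, so this
       prints "different". */
    if (f == 0.1)
        printf("same\n");
    else
        printf("different\n");

    return 0;
}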
In most cases, it's simply a matter of saying what you mean.
For example, you can certainly write:
#include <math.h>
...
const double sqrt_2 = sqrt(2);
and the compiler will generate an implicit conversion (note: not a cast) of the int value 2 to double before passing it to the sqrt function. So the call sqrt(2) is equivalent to sqrt(2.0), and will very likely generate exactly the same machine code.
But sqrt(2.0) is more explicit. It's (slightly) more immediately obvious to the reader that the argument is a floating-point value. For a non-standard function that takes a double argument, writing 2.0 rather than 2 could be much clearer.
And you're able to use an integer literal here only because the argument happens to be a whole number; sqrt(2.5) has to use a floating-point literal.
My question would be this: Why would you use an integer literal in a context requiring a floating-point value? Doing so is mostly harmless, since the compiler will generate an implicit conversion, but what do you gain by writing 2 rather than 2.0? (I don't consider saving two keystrokes to be a significant benefit.)
Why is this program giving unexpected numbers (e.g. 2040866504, -786655336)?
#include <stdio.h>

int main()
{
    int test = 0;
    float fvalue = 3.111f;
    printf("%d", test ? fvalue : 0);
    return 0;
}
Why is it printing unexpected numbers instead of 0? Shouldn't it do an implicit typecast? This program is for learning purposes, nothing serious.
Most likely, your platform passes floating point values in a floating point register and integer values in a different register (or on the stack). You told printf to look for an integer, so it's looking in the register integers are passed in (or on the stack). But you passed it a float, so the zero was placed in the floating point register that printf never looked at.
The ternary operator follows language rules to decide the type of its result. It can't sometimes be an integer and sometimes be a float. Those could be different sizes, stored in different places, and so on, which would make it impossible to generate sane code to handle both possible result types.
This is a guess. Perhaps something completely different is happening. Undefined behavior is undefined for a reason. These kinds of things can be impossible to predict and very difficult to understand without lots of experience and knowledge of platform and compiler details. Never let someone convince you that UB is okay or safe because it seems to work on their system.
Because you are using %d for printing a float value. Use %f. Using %d to print a float value invokes undefined behavior.
EDIT:
Regarding OP's comments:
Why is it printing random numbers instead of 0?
When you compile this code, the compiler should give you a warning:
[Warning] format '%d' expects argument of type 'int', but argument 2 has type 'double' [-Wformat]
This warning is self-explanatory: this line of code invokes undefined behavior. The conversion specification %d specifies that printf is to convert an int value from binary to a string of decimal digits, while %f does the same for a floating-point value. On passing fvalue, the compiler knows that it is of float type, but on the other hand it sees that printf expects an argument of type int. In such cases, sometimes it does what you expect, sometimes it does what I expect. Sometimes it does what nobody expects (nice comment by David Schwartz).
See the test cases 1 and 2. It is working fine with %f.
should it supposed to do implicit typecast?
No.
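If you actually want 0 to be printed, either make the format match the float result (which is promoted to double), or convert the result back to int. A sketch:

printf("%f", test ? fvalue : 0);      /* prints 0.000000 */
printf("%d", test ? (int)fvalue : 0); /* prints 0        */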
Although the existing upvoted answers are correct, I think they are far too technical and ignore the logic a beginner programmer might have:
Let's look at the statement causing confusion in some heads:
printf("%d", test? fvalue : 0);
^ ^ ^ ^
| | | |
| | | - the value we expect, an integral constant, hooray!
| | - a float value, this won't be printed as the test doesn't evaluate to true
| - an integral value of 0, will evaluate to false
- We will print an integer!
What the compiler sees is a bit different. It agrees on the value of test meaning false. It agrees that fvalue is a float and 0 an integer. However, it has learned that the different possible outcomes of the ternary operator must be of the same type! int and float aren't. In this case, "float wins": 0 becomes 0.0f!
Now printf isn't type safe. This means you can falsely say "print me an integer" and pass a float without the compiler complaining. Exactly that happened here. No matter what the value of test is, the compiler deduced that the result will be of type float. Hence, your code is equivalent to:
float x = 0.0f;
printf("%d", x);
At this point, you experience undefined behaviour: a float simply isn't the integral value that %d expects.
The observed behaviour is dependent on the compiler and machine you're using. You may see dancing elephants, although most terminals don't support that afaik.
When we have the expression E1 ? E2 : E3, there are four types involved. Expressions E1, E2 and E3 each have a type (and the types of E2 and E3 can be different). Furthermore, the whole expression E1 ? E2 : E3 has a type.
If E2 and E3 have the same type, then this is easy: the overall expression has that type. We can express this in a meta-notation like this:
(T1 ? T2 : T2) -> T2
"The type of a ternary expression whose alterantives are both of the same type T2
is just T2."
If they don't have the same type, things get somewhat interesting, and the situation is quite similar to E2 and E3 being involved together in an arithmetic operation. For instance, if you add together an int and a float, the int operand is converted to float. That is what is happening in your program. The type situation is:
(int ? float : int) -> float
the test fails, and so the int value 0 is converted to the float value 0.0.
This float value is not compatible with the %d conversion specifier of printf, which requires an int.
More precisely, the float value undergoes one more conversion. When a float is passed as one of the trailing arguments to a variadic function, it is converted to double.
So in fact the double value 0.0 is being passed to printf where it expects int.
In any case, it is undefined behavior: it is nonportable code for which the ISO standard definition of the C language doesn't offer a meaning.
From here on, we can apply platform-specific reasoning about why we don't just see 0. Suppose int is a 32-bit, four-byte type, and double is the common 64-bit, 8-byte IEEE 754 representation, and that an all-bits-zero pattern is used for 0.0. So then, why isn't a 32-bit portion of this all-bits-zero value treated by printf as the int value 0?
Quite possibly, the 64 bit double argument value forces 8 byte alignment when it is put onto the stack, possibly moving the stack pointer by four bytes. And then printf pulls out garbage from those four bytes, rather than zero bits from the double value.