Can casting in C cause any compile error?

I know that in Java I can't return a double from a function that is supposed to return an int value without casting, but now that I am learning C I see that I can compile something like this (only warnings, no errors):
int calc(double d, char c) {
    return d * c / 3;
}
So my question is: will the C compiler always do this automatic casting for me when needed? Or does this specific case only work because of char or something?

C has a concept of implicit conversions, i.e. rules that define under which conditions and how values are converted to different types implicitly, without the need to explicitly cast them (see, for example, cppreference.com). So C is not "auto-casting" everything, but only under certain conditions.
Your return type is int, whereas the result of expression d * c / 3 is double. So the following (implicit) conversion applies:
Real floating-integer conversions

A finite value of any real floating type can be implicitly converted to any integer type. Except where covered by boolean conversion above, the rules are: the fractional part is discarded (truncated towards zero). If the resulting value can be represented by the target type, that value is used; otherwise, the behavior is undefined.

Actually, I think that you may not get any warning by default. The compiler is performing an implicit conversion, and the only way to force a warning is to add the flag -Wconversion (I am talking about gcc; it gives something like: warning: conversion to ‘int’ from ‘double’ may alter its value [-Wfloat-conversion]). Anyway, the compiler assumes that you know about it, i.e. that the decimal part disappears in an integer conversion.
(See YePhIcK's comment about MSVC)

Related

Why does gcc accept "int a = 3i;" as a valid statement?

When I write the following C code, I find that (contrary to what I expected) gcc accepts this code and compiles it successfully! I don't know why, since it looks like an invalid statement to me.
int main() {
    int a = 1i;
    return 0;
}
I guess it may be accepting 1i as a complex number. Then int a = 1i would mean int a = 0+1i, and since 1i is not a valid integer, only the 0 is kept.
#include <stdio.h>

int main() {
    int a = 1i*1i;
    printf("%d\n", a);
    return 0;
}
I tried the code above and found that it prints -1, so maybe my guess is correct. But this is the first time I have seen the C compiler do this. Is my guess correct?
Your intuition is correct. gcc allows for complex number constants as an extension.
From the gcc documentation:
To write a constant with a complex data type, use the suffix i or j (either one; they are equivalent). For example, 2.5fi has type _Complex float and 3i has type _Complex int. Such a constant always has a pure imaginary value, but you can form any complex value you like by adding one to a real constant. This is a GNU extension; if you have an ISO C99 conforming C library (such as GNU libc), and want to construct complex constants of floating type, you should include <complex.h> and use the macros I or _Complex_I instead.
So 1i is the complex number i. Then when you assign it to a, the complex part is truncated and the real part is assigned (and if the real part is a floating point type, that would be converted to int).
This conversion is spelled out in section 6.3.1.7p2 of the C standard:
When a value of complex type is converted to a real type, the
imaginary part of the complex value is discarded and the value of the
real part is converted according to the conversion rules for the
corresponding real type.

What is the order in which C expressions are evaluated

int main(){
    char a = 5;
    float b = 6.0;
    int c = a + b;
    return c;
}
Looking at the generated instructions with gcc, the above code is evaluated like this:
1. Load 5 and convert it to float as a
2. Load 6 as b
3. Add a and b
4. Convert the result to an integer and return
Does gcc not care about the return type while it's dealing with the expression?
It could have converted b to an int right off the bat as everything else is an integer type.
Is there a rule which explains how one side of an expression is evaluated first regardless of what the other side is?
You ask "Is there a rule?" Of course there is a rule. Any widely used programming language will have a huge set of rules.
You have the expression a + b, where a has type char and b has type float. The C language has a rule, the usual arithmetic conversions, that the compiler has to find a common type and convert both operands to it. Since one of the operands has a floating-point type, the common type must also be a floating-point type; here it is float.
So a is converted from char to float, b is already float, the two are added, and the result has type float. Then there is another rule that to assign a float to an int, a conversion takes place according to very specific rules (the fractional part is discarded).
Any C compiler must follow these rules exactly. There is one exception: if the compiler can produce the results that it is supposed to produce, then it doesn't matter how it does so, as long as you can't tell the difference from the outside. So the compiler is free to change the whole function body to "return 11;".
In the C language, partial expressions are evaluated without regard to how they are used later. Whether a+b is assigned to an int, a char, or a double, it is always evaluated in the same way. There are other languages with different rules, where the fact that a+b is assigned to an int could change how it is evaluated. C is not one of those languages.
If you change it to:
int main(){
    char a = 5;
    float b = 6.6;
    int c = a + 2*b;
    return c;
}
then it becomes clear that the float value 18.2 has to be kept until the end.
With no optimizations, gcc acts as if this could happen and does a lot of conversions.
With just -O it already does the math itself and directly returns the final integer, even in above example.
There is no in-between reasoning or shortcut here. Why would the compiler simplify 5 + 6.0 to 5 + 6 but not go all the way to 11? Either it acts literally and emits cvtsi2ss xmm1, eax (and the conversion back, etc.), or it just computes the 11 directly.

What is the purpose of requiring a data type specification of the returned value in a function definition or prototype? Type casting?

Consider the following snippet of code:
#include <stdio.h>

int demo_function(float a);

int main(void)
{
    int b;
    float fraction_number = 3.15f;
    b = demo_function(fraction_number);
    printf("The number returned is %d", b);
    return 0;
}

int demo_function(float a)
{
    float c;
    c = a;
    printf("The number passed is %.2f \n", c);
    return c;
}
The output is:
The number passed is 3.15
The number returned is 3
From this little test code, it seems like the actual purpose of writing int as the data type of demo_function's returned value is to type cast.
Firstly, is that the correct interpretation of what is going on?
Secondly, why does the compiler actually need this information? (Or perhaps a better question is, "How does the compiler use this information?"). Specifically, if the compiler sees that variable b is declared as an int, why does it need to explicitly know that the returned value is of type int?
If we end up storing the returned value of variable c into b, what issues would arise if the compiler did NOT require the explicit mention that the returned value is of type int? Would there be information loss as the float c variable tries to get squished into the smaller memory allocated int b variable?
Thanks!
As a practical matter, the compiler needs to know what type a function returns because it needs to know how to interpret the bits a function returns and it needs to know where those bits are.
Suppose that when a function returns, the register used for the return value contains 0x80. If the function is supposed to return an 8-bit unsigned char, then the return value is 128. If the function is supposed to return an 8-bit two’s complement signed char, then the return value is −128. Knowing the type is necessary to know the interpretation of the bits.
In many systems, a function that returns an integer is supposed to put the bits in a certain general register of the processor, but a function that returns a floating-point value is supposed to put the bits in a certain floating-point register. In this case, the caller needs to know the return type of the function in order to know where the bits of the return value are.
At a more abstract level, the compiler needs to know the return type of the function so that it can interpret the expression the function call appears in. Consider that, in C, the expression 5 / 4 performs integer division with truncation and produces 1, but the expression 5. / 4 performs floating-point division and produces 1.25. So, in the expression f(x) / 4, the compiler needs to know what type f returns so that it knows whether to perform integer division or floating-point division.
For another example, suppose f returns a pointer, and the program uses y = *f(x). To execute this code, the compiler has to take the value returned by f and use it as an address to fetch something from memory. But what does it fetch? Does the address point to a one-byte char, an eight-byte double, or a 100-byte structure? The compiler needs to know the type of the pointer returned by f so that it knows what type of object it points to.
From this little test code, it seems like actual purpose of writing int as the data type of demo_function's returned value is to type cast.
The primary purpose is as described above. The fact that the value in a return statement is converted to the return type of the function is a secondary effect; it is merely a convenience of the language, not a necessary effect. (The implicit conversion is not necessary because we could effect the conversion by specifying it explicitly.)
Also note that a cast is an explicit operator. It is not the operation. For example, + and * are operators; they are things that appear in source code that say we want to do certain operations. The actual operations are addition and multiplication. Similarly, a cast is a type name in parentheses; it is some text that appears in source code that specifies we want to perform a conversion.

Ternary operator in C

Why is this program giving unexpected numbers (e.g. 2040866504, -786655336)?
#include <stdio.h>

int main()
{
    int test = 0;
    float fvalue = 3.111f;
    printf("%d", test ? fvalue : 0);
    return 0;
}
Why is it printing unexpected numbers instead of 0? Shouldn't it do an implicit typecast? This program is for learning purposes, nothing serious.
Most likely, your platform passes floating point values in a floating point register and integer values in a different register (or on the stack). You told printf to look for an integer, so it's looking in the register integers are passed in (or on the stack). But you passed it a float, so the zero was placed in the floating point register that printf never looked at.
The ternary operator follows language rules to decide the type of its result. It can't sometimes be an integer and sometimes be a float. Those could be different sizes, stored in different places, and so on, which would make it impossible to generate sane code to handle both possible result types.
This is a guess. Perhaps something completely different is happening. Undefined behavior is undefined for a reason. These kinds of things can be impossible to predict and very difficult to understand without lots of experience and knowledge of platform and compiler details. Never let someone convince you that UB is okay or safe because it seems to work on their system.
Because you are using %d for printing a float value. Use %f. Using %d to print a float value invokes undefined behavior.
EDIT:
Regarding OP's comments;
Why is it printing random numbers instead of 0?
When you compile this code, compiler should give you a warning:
[Warning] format '%d' expects argument of type 'int', but argument 2 has type 'double' [-Wformat]
This warning is self-explanatory: this line of code invokes undefined behavior. The conversion specification %d tells printf to convert an int value from binary to a string of decimal digits, while %f does the same for a double value. On passing fvalue, the compiler knows that it is of type float, but it also sees that printf expects an argument of type int. In such cases, sometimes it does what you expect, sometimes it does what I expect. Sometimes it does what nobody expects (nice comment by David Schwartz).
See the test cases 1 and 2. It is working fine with %f.
should it supposed to do implicit typecast?
No.
Although the existing upvoted answers are correct, I think they are far too technical and ignore the logic a beginner programmer might have:
Let's look at the statement causing confusion in some heads:
printf("%d", test ? fvalue : 0);
        ^    ^      ^        ^
        |    |      |        |
        |    |      |        +-- the value we expect, an integral constant, hooray!
        |    |      +-- a float value; this won't be printed as the test doesn't evaluate to true
        |    +-- an integral value of 0, will evaluate to false
        +-- we will print an integer!
What the compiler sees is a bit different. It agrees that the value of test means false. It agrees that fvalue is a float and 0 an integer. However, it has learned that the two possible outcomes of the ternary operator must be brought to a common type! int and float aren't the same, and in this case "float wins": 0 becomes 0.0f!
Now printf isn't type safe. This means you can falsely say "print me an integer" and pass a float without the compiler noticing. That is exactly what happened here. No matter what the value of test is, the compiler deduced that the result of the ternary will be of type float. Hence, your code is equivalent to:
float x = 0.0f;
printf("%d", x);
At this point, you experience undefined behaviour. A float simply isn't the integral value that %d expects.
The observed behaviour is dependent on the compiler and machine you're using. You may see dancing elephants, although most terminals don't support that afaik.
When we have the expression E1 ? E2 : E3, there are four types involved. Expressions E1, E2 and E3 each have a type (and the types of E2 and E3 can be different). Furthermore, the whole expression E1 ? E2 : E3 has a type.
If E2 and E3 have the same type, then this is easy: the overall expression has that type. We can express this in a meta-notation like this:
(T1 ? T2 : T2) -> T2
"The type of a ternary expression whose alternatives are both of the same type T2
is just T2."
If they don't have the same type, things get somewhat interesting, and the situation is quite similar to E2 and E3 being involved together in an arithmetic operation. For instance if you add together an int and float, the int operand is converted to float. That is what is happening in your program. The type situation is:
(int ? float : int) -> float
the test fails, and so the int value 0 is converted to the float value 0.0.
This float value is not compatible with the %d conversion specifier of printf, which requires an int.
More precisely, the float value undergoes one more conversion: when a float is passed as one of the trailing arguments to a variadic function, it is converted to double (the default argument promotions).
So in fact the double value 0.0 is being passed to printf where it expects int.
In any case, it is undefined behavior: it is nonportable code for which the ISO standard definition of the C language doesn't offer a meaning.
From here on, we can apply platform-specific reasoning about why we don't just see 0. Suppose int is a 32-bit, four-byte type, and double is the common 64-bit, 8-byte IEEE 754 representation, and that all-bits-zero is used for 0.0. So then, why isn't a 32-bit portion of this all-bits-zero treated by printf as the int value 0?
Quite possibly, the 64 bit double argument value forces 8 byte alignment when it is put onto the stack, possibly moving the stack pointer by four bytes. And then printf pulls out garbage from those four bytes, rather than zero bits from the double value.

sqrt() of int type in C

I am programming in C on Mac OS X. I am using the sqrt function from math.h like this:
int start = Data->start_number;
double localSum = 0.0; /* initialize: += on an uninitialized variable is undefined */
for (; start <= end; start++) {
    localSum += sqrt(start);
}
This works, but why? And why am I getting no warning? On the man page for sqrt, it takes a double as its parameter, but I give it an int - how can it work?
Thanks
Type conversions which do not cause a loss in precision might not produce warnings; they are performed implicitly.

int    --> double  // no loss in precision (e.g. 3 becomes 3.00)
double --> int     // loss in precision (e.g. 3.01222 becomes 3)

What triggers a warning and what doesn't depends largely upon the compiler and the flags supplied to it. However, most compilers (at least the ones I've used) don't consider implicit type conversions dangerous enough to warrant a warning, as they are a feature of the language specification.
To warn or not to warn:
C99 Rationale states it as a guideline:

One of the important outcomes of exploring this (implicit casting) problem is the understanding that high-quality compilers might do well to look for such questionable code and offer (optional) diagnostics, and that conscientious instructors might do well to warn programmers of the problems of implicit type conversions.

C99 Rationale (Apr 2003): page 45
The compiler knows the prototype of sqrt, so it can - and will - produce the code to convert an int argument to double before calling the function.
The same holds the other way round too, if you pass a double to a function (with known prototype) taking an int argument, the compiler will produce the conversion code required.
Whether the compiler warns about such conversions is up to the compiler and the warning-level you requested on the command line.
For the conversion int -> double, which usually (with 32-bit (or 16-bit) ints and 64-bit doubles in IEEE 754 format) is lossless, getting a warning is probably hard, if possible at all.
For the double -> int conversion, with gcc and clang, you need to specifically ask for such warnings using -Wconversion, or they will silently compile the code.
An int can safely be converted to a double automatically because there is no risk of data loss. The reverse is not true: converting a double to an int discards the fractional part, so it is good practice to make the conversion explicit with a cast (the compiler will do it implicitly either way).
C compilers do some automatic conversion between double and int.
you could also do the following:
int start = Data->start_number;
int localSum = 0; /* initialize before += */
for (; start <= end; start++) {
    localSum += sqrt(start);
}

Even though localSum is an int, this will still work, but it will always cut off anything after the decimal point. For example, if the return value of sqrt() is 1.365, it will be stored as just 1.
