Why use a double or float literal when you need an integral value and an integer literal will be implicitly cast to a double/float anyway? And when a fractional value is needed, why bother adding the f (to make a floating point literal) where a double will be cast to a float anyway?
For example, I often see code similar to the following
float foo = 3.0f;
double bar = 5.0;
// And, unfortunately, even
double baz = 7.0f;
and
void quux(float foo) {
...
}
...
quux(7.0f);
But as far as I can tell those are equivalent to
float foo = 3;
// or
// float foo = 3.0;
double bar = 5;
double baz = 7;
quux(7);
I can understand the method call if you are in a language with overloading (C++, Java), where it can actually make a functional difference if the function is overloaded (or will be in the future), but I'm more concerned with C (and to a lesser extent Objective-C), which doesn't have overloading.
So is there any reason to bother with the extra decimal and/or f? Especially in the initialization case, where the declared type is right there?
Many people learned the hard way that
double x = 1 / 3;
doesn't work as expected. So they (myself included) program defensively by using floating-point literals instead of relying on the implicit conversion.
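A minimal demonstration of that pitfall (variable names are mine):

#include <stdio.h>

int main(void) {
    double wrong = 1 / 3;    /* integer division happens first: 1 / 3 == 0 */
    double right = 1.0 / 3;  /* floating-point division: about 0.333333 */
    printf("%f %f\n", wrong, right);  /* prints 0.000000 0.333333 */
    return 0;
}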
C doesn't have overloading, but it has something called variadic functions. This is where the .0 matters.
#include <stdarg.h>

void Test( int n , ... )
{
    va_list list ;
    va_start( list , n ) ;
    double d = va_arg( list , double ) ; /* assumes the first variadic argument is a double */
    ...
    va_end( list ) ;
}
Calling the function without making sure the argument actually is a double causes undefined behaviour, since the va_arg macro will interpret the argument's memory as a double when in reality it holds an integer.
Test( 1 , 3 ) ; has to be Test( 1 , 3.0 ) ;
But you might say: I will never write variadic functions, so why bother?
printf (and its family) are variadic functions.
The following call should generate a warning:
printf("%lf" , 3 ) ; //will cause undefined behavior
But depending on the warning level and the compiler, or if you forget to include the correct header, you will get no warning at all.
The problem is also present if the types are switched:
printf("%d" , 3.0 ) ; //undefined behaviour
Why use a double or float literal when you need an integral value and an integer literal will be implicitly cast to a double/float anyway?
First off, "implicit cast" is an oxymoron (casts are explicit by definition). The expression you're looking for is "implicit [type] conversion".
As to why: because it's more explicit (no pun intended). It's better for the eye and the brain if you have some visual indication about the type of the literal.
why bother adding the f (to make a floating point literal) where a double will be cast to a float anyway?
For example, because double and float have different precision. Since floating-point is weird and often unintuitive, it is possible that the conversion from double to float (which is lossy) will result in a value that is different from what you actually want if you don't specify the float type manually.
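A small sketch of that loss, assuming IEEE 754 floats and doubles (the printed digits are what a typical implementation shows):

#include <stdio.h>

int main(void) {
    float  f = 0.1f;  /* the float closest to 0.1 */
    double d = 0.1;   /* the double closest to 0.1 -- a different, closer value */
    printf("%.17f\n%.17f\n", f, d);
    /* typically prints:
       0.10000000149011612
       0.10000000000000001 */
    return 0;
}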
In most cases, it's simply a matter of saying what you mean.
For example, you can certainly write:
#include <math.h>
...
const double sqrt_2 = sqrt(2);
and the compiler will generate an implicit conversion (note: not a cast) of the int value 2 to double before passing it to the sqrt function. So the call sqrt(2) is equivalent to sqrt(2.0), and will very likely generate exactly the same machine code.
But sqrt(2.0) is more explicit. It's (slightly) more immediately obvious to the reader that the argument is a floating-point value. For a non-standard function that takes a double argument, writing 2.0 rather than 2 could be much clearer.
And you're able to use an integer literal here only because the argument happens to be a whole number; sqrt(2.5) has to use a floating-point literal, and writing floating-point literals consistently, even for whole numbers, keeps the code uniform.
My question would be this: Why would you use an integer literal in a context requiring a floating-point value? Doing so is mostly harmless, since the compiler will generate an implicit conversion, but what do you gain by writing 2 rather than 2.0? (I don't consider saving two keystrokes to be a significant benefit.)
When I write the following C code, I find that (contrary to what I expected) gcc accepts it and compiles it successfully! I don't know why, since it looks like an invalid statement.
int main() {
    int a = 1i;
    return 0;
}
I guess it may be accepting 1i as a complex number. Then int a = 1i means int a = 0+1i, and since 1i is not a valid integer, only the 0 is accepted.
#include <stdio.h>

int main() {
    int a = 1i*1i;
    printf("%d\n", a);
    return 0;
}
I tried the code above and found that it prints -1. Maybe my thought is correct. But this is the first time I have seen a C compiler do this. Is my guess correct?
Your intuition is correct. gcc allows for complex number constants as an extension.
From the gcc documentation:
To write a constant with a complex data type, use the suffix i or j (either one; they are equivalent). For example, 2.5fi has type _Complex float and 3i has type _Complex int. Such a constant always has a pure imaginary value, but you can form any complex value you like by adding one to a real constant. This is a GNU extension; if you have an ISO C99 conforming C library (such as GNU libc), and want to construct complex constants of floating type, you should include <complex.h> and use the macros I or _Complex_I instead.
So 1i is the complex number i. Then when you assign it to a, the imaginary part is discarded and the real part is assigned (and if the real part is a floating-point type, it is converted to int).
This conversion is spelled out in section 6.3.1.7p2 of the C standard:
When a value of complex type is converted to a real type, the imaginary part of the complex value is discarded and the value of the real part is converted according to the conversion rules for the corresponding real type.
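A portable version of the experiment, following the quoted advice to use <complex.h> (a sketch; it assumes a C99 compiler with complex support):

#include <complex.h>
#include <stdio.h>

int main(void) {
    double complex z = I;      /* the imaginary unit, spelled portably */
    double complex w = z * z;  /* i * i == -1 + 0i */
    int a = w;                 /* imaginary part discarded, real part converted to int */
    printf("%d\n", a);         /* prints -1 */
    return 0;
}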
int main(){
    char a = 5;
    float b = 6.0;
    int c = a + b;
    return c;
}
Looking at the generated instructions with gcc, the above code is evaluated like this:
Load 5 and convert it to float as a
Load 6 as b
Add a and b
Convert the result to an integer and return
Does gcc not care about the return type while it's dealing with the expression?
It could have converted b to an int right off the bat as everything else is an integer type.
Is there a rule which explains how one side of an expression is evaluated first regardless of what the other side is?
You ask "Is there a rule?" Of course there is a rule. Any widely used programming language will have a huge set of rules.
You have the expression a + b, where a has type char and b has type float. There's a rule in the C language that the compiler has to find a common type and convert both operands to it. Since one of the operands has a floating-point type, the common type must be a floating-point type: float, double, or long double. For char and float, the usual arithmetic conversions make that common type float (the compiler may evaluate the expression in a wider format, and must document whether it does). It seems the compiler chose float here.
So a is converted from char to float, b is already float, both are added, the result has type float. And then there's a rule that to assign float to int, a conversion takes place according to very specific rules.
Any C compiler must follow these rules exactly. There is one exception, the "as-if" rule: if the compiler can produce the results it is supposed to produce, it doesn't matter how, as long as you can't tell the difference from the outside. So the compiler could compile the whole function to "return 11;".
In the C language, partial expressions are evaluated without regard how they are used later. Whether a+b is assigned to an int, a char, a double, it is always evaluated in the same way. There are other languages with different rules, where the fact that a+b is assigned to an int could change how it is evaluated. C is not one of those languages.
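A small illustration of that last point (a sketch; the variables mirror the question's):

#include <stdio.h>

int main(void) {
    char a = 5;
    float b = 6.0f;
    int    i = a + b;  /* the addition itself is done in float, giving 11.0f */
    double d = a + b;  /* exactly the same float addition; only the final conversion differs */
    printf("%d %f\n", i, d);  /* prints 11 11.000000 */
    return 0;
}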
If you change it to:
int main(){
    char a = 5;
    float b = 6.6;
    int c = a + 2*b;
    return c;
}
then it becomes clear that you have to keep the float 18.2 until the end.
With no optimizations, gcc compiles the code literally, as if the fractional part could matter, and emits all the conversions.
With just -O it already does the math itself and directly returns the final integer, even in the example above.
There is no in-between reasoning or shortcut here. Why would the compiler simplify 5 + 6.0 to 5 + 6 but not go all the way to 11? Either it compiles naively and emits cvtsi2ss xmm1, eax (and the conversion back, and so on), or it folds the whole computation and returns 11 directly.
For example, when I round a float into an int, why should I use:
int i = (int) round(f);
instead of:
int i = int round(f);
What is the difference between int and (int) in C?
int is the built-in integer type, and (int) is a type cast.
In C, float values require more storage, so when you store a float value in an integer variable you are supposed to cast it, which truncates the fractional part before storing it.
i.e. int i = (int) round(f);
int i by itself is a definition.
(int) round(f) casts the double result of the round function to an integer, truncating anything after the decimal point.
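A complete version of the first form, for reference (the values are mine; link with -lm on some systems):

#include <math.h>
#include <stdio.h>

int main(void) {
    float f = 2.7f;
    int i = (int) round(f);  /* round gives 3.0 (a double); the cast makes it the int 3 */
    printf("%d\n", i);       /* prints 3 */
    return 0;
}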
Did you actually try to compile the latter? I will not answer that until you report what the compiler says.
For the first version: that is a typecast (read about the semantics), telling the compiler to convert the value to the type in the parentheses. Otherwise the compiler might (and should) complain about assigning a more "powerful" type to a lesser one.
Note that "powerful" here means the type can hold a larger range of values, not necessarily higher precision.
Edit: Yes, you will get "expected expression" for the latter, because it is simply invalid. So the simple and full answer is: the second form is not valid C and will not even compile!
Well, by convention, applying (int) to a float typecasts the float to an int. I'm not quite sure why you'd write int _ = int _, and as noted above it does not even compile. By convention, many languages (there are exceptions) in which you typecast use the syntax
... = (_type_) yourdatavalue
Hope that helps!
I just wanted to ask that if say I initialize an integer variable like
int a = 9;
Then why can't we use this integer value as part of the initialization of a float variable? For example, if I try to write
float g=a.0;
Why is this statement actually an error?
What is the problem that is caused inside a compiler when I write this statement?
This is just not C syntax. The only way you use a dot like this is when you access struct members.
To cast int to float, just:
int i = 5;
float f = i;
There is no need to cast at all, as pmg suggested.
The problem caused inside the compiler when you write this statement is that the statement does not follow the syntax defined for the C language in the C standard. Thus there is no way to know what it means. To you it is obvious what it should mean, but unlike natural languages, in programming languages things must be defined before being used, and it was never defined what a.0 should mean.
. is an operator (the same way that ! or + is an operator). The compiler looks at a, and decides it can't apply the . operator to it.
If a were a struct, then it might be possible to apply ..
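For example (a made-up struct, just to show the one valid use of the operator):

#include <stdio.h>

struct point { int x; int y; };

int main(void) {
    struct point p = { 3, 4 };
    printf("%d\n", p.x);  /* '.' selects the member x of p */
    return 0;
}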
I think you are trying to print the int value as if it were a float, i.e. with decimal places. For that, you can just use printf()'s float conversion specifier to print the int as a float.
Try this
#include <stdio.h>

int main()
{
    int x = 4;
    printf("%.1f", (float)x);
    return 0;
}
As mentioned, C only uses . as a decimal point in floating-point constants, like 2.171828, or for structure members, like mystruct.member.
However, the idea of using floating point arrays to hold integers is how many large number (arbitrary precision) libraries operate, such as apfloat.
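A toy sketch of the idea (this is not apfloat's actual representation): each double holds a small exact integer "digit" in base 10000, so no precision is lost:

#include <stdio.h>

int main(void) {
    /* least significant digit first: 1234 * 10000 + 7296 = 12347296 */
    double digits[] = { 7296.0, 1234.0 };
    long value = (long)(digits[1] * 10000.0 + digits[0]);
    printf("%ld\n", value);  /* prints 12347296 */
    return 0;
}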
I am programming in the C language on Mac OS X. I am using the sqrt function from math.h, like this:
int start = Data->start_number;
double localSum = 0;
for (; start <= end; start++) {
    localSum += sqrt(start);
}
This works, but why? And why am I getting no warning? According to the man page, sqrt takes a double as its parameter, but I give it an int. How can it work?
Thanks
Type conversions that do not cause a loss in precision might not produce warnings; they are performed implicitly.
int --> double // no loss in precision (e.g. 3 becomes 3.00)
double --> int // loss in precision (e.g. 3.01222 becomes 3)
What triggers a warning and what doesn't depends largely upon the compiler and the flags supplied to it. However, most compilers (at least the ones I've used) don't consider implicit type conversions dangerous enough to warrant a warning, as they are a feature of the language specification.
To warn or not to warn:
The C99 Rationale states it as a guideline:
One of the important outcomes of exploring this (implicit conversion) problem is the understanding that high-quality compilers might do well to look for such questionable code and offer (optional) diagnostics, and that conscientious instructors might do well to warn programmers of the problems of implicit type conversions.
C99 Rationale (Apr 2003): Page 45
The compiler knows the prototype of sqrt, so it can - and will - produce the code to convert an int argument to double before calling the function.
The same holds the other way round, too: if you pass a double to a function (with a known prototype) taking an int argument, the compiler will produce the conversion code required.
Whether the compiler warns about such conversions is up to the compiler and the warning-level you requested on the command line.
The int -> double conversion is usually lossless (with 32-bit or 16-bit ints and 64-bit IEEE 754 doubles), so getting a warning for it is probably hard, if possible at all.
For the double -> int conversion, with gcc and clang, you need to specifically ask for such warnings using -Wconversion, or they will silently compile the code.
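A sketch of what that looks like (the file name is mine; the exact warning text varies by compiler version):

/* demo.c -- compile with: gcc -Wconversion demo.c -lm */
#include <math.h>

int main(void) {
    int i = sqrt(2);  /* implicit double -> int; -Wconversion warns that the value may change */
    return i;
}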
An int can safely be converted to a double automatically because there's no risk of data loss. The reverse is not true: converting a double to an int discards the fractional part, so it is better to cast explicitly to make the truncation obvious.
C compilers do some automatic conversion between double and int.
You could also do the following:
int start = Data->start_number;
int localSum = 0;
for (; start <= end; start++) {
    localSum += sqrt(start);
}
Even if localSum is an int this will still work, but it will always cut off anything after the decimal point.
For example, if the return value of sqrt() is 1.365, it will be stored as a simple 1.