What is the difference between int and (int) in C? - c

For example: when I round a float into an int, why should I use :
int i = (int) round(f);
Instead of :
int i = int round(f);

What is the difference between int and (int) in C?
An int is the built-in integer type, and (int) is a cast (an explicit type conversion).

In C, float values carry a fractional part that an int cannot hold. So when you store a float value in an integer variable, you are supposed to type-cast it, truncating the fractional part before storing it.
i.e. int i = (int) round(f);

int i is a definition: it declares the variable i of type int.
(int) round(f) casts the result of the round function to an integer, converting the double that round returns into an int.

Did you actually try to compile the latter? I will not answer that until you report what the compiler says.
For the first version: That is a typecast (read about the semantics), telling the compiler to convert the value to the type in the parentheses. Otherwise the compiler might (and should) complain about assigning a more "powerful" type to a lesser one.
Note that "powerful" here means the type can hold a larger range of values; not necessarily higher precision.
Edit: Yes, you will get "expected expression" for the latter, because it is simply invalid. So the simple and full answer is: the second form is not valid C and will not even compile!

Well, by convention, applying (int) to a float typecasts the float to an int. int _ = int _ is not valid syntax, so it will not work. By convention, many languages in which you typecast (there are exceptions) use the syntax
... = (_type_) yourdatavalue
Hope that helps!

Related

MISRA violation "441 - Float cast to non-float"

I am trying to correct the MISRA violation "441 - Float cast to non-float" that is occurring with the following code:
tULong frames = (tULong)(runTimeSeconds * 40.0f);
runTimeSeconds is a float and obviously 40.0f is assigned as a float. Any ideas?
There is a rule (MISRA-C:2004 10.4) stating the value of a complex expression of floating type may only be cast to a narrower floating type.
(runTimeSeconds * 40.0f) is such a so-called complex expression (a MISRA-C:2004 term). To dodge the MISRA violation, you can introduce a temporary variable:
float tmp = runTimeSeconds * 40.0f;
tULong frames = (tULong)tmp; // no complex expression, this is fine
The rationale for this rule is that complex expressions could potentially contain implicit type promotions and similar dangerous things.
MISRA-C:2004 is also worried/paranoid about incompetent programmers who think that changing code like uint8_t u8a, u8b; ... u8a + u8b into (uint32_t)(u8a + u8b) would somehow cause the addition to get carried out as an unsigned 32 bit type.
These rules have been improved in MISRA-C:2012 and are more reasonable there. A cast from a float expression to an unsigned one is fine as per MISRA-C:2012 10.5.
<math.h> has a nice family of functions that round and convert in one call, so no cast is needed to convert from float to a long type. Below, a (tULong) cast handles the long-to-tULong integer conversion; it may be unnecessary depending on the (unposted) range requirements and the definition of tULong.
#include <math.h>
// long int lrintf(float x);
// long long int llrintf(float x);
// 4 others
tULong frames = (tULong) llrintf(runTimeSeconds * 40.0f);
Note that this rounds, whereas OP's original code truncates.
If the idea is to truncate the result, use the truncf function:
tULong frames = truncf(runTimeSeconds * 40.0f);
That way, your intention is made explicit.

Why can't we use an integer variable as part of the initializer of a floating-point variable?

I just wanted to ask that if say I initialize an integer variable like
int a = 9;
Then why can't we use this integer value as part of the initialization of a float variable? For example, if I try to write
float g=a.0;
why is this statement actually an error?
What is the problem that is caused inside a compiler when I write this statement?
This is just not C syntax. The only way you use a dot like this is when you access struct members.
To convert an int to a float, just:
int i = 5;
float f = i;
There is even no need to cast, as pmg suggested.
The problem caused inside the compiler when you write this statement is that the statement does not follow the syntax defined for the C language in the C standard. Thus there is no way to know what it means. To you it is obvious what it should mean, but unlike natural languages, in programming languages things must be defined before being used, and it was never defined what a.0 should mean.
. is an operator (the same way that ! or + is an operator). The compiler looks at a, and decides it can't apply the . operator to it.
If a were a struct, then it might be possible to apply ..
I think you are trying to print the int value as if it were a float, i.e. with decimal places. For that you can just use printf()'s float conversion specifier to print the int as a float.
Try this
#include <stdio.h>

int main(void)
{
    int x = 4;
    printf("%.1f", (float)x);
    return 0;
}
As mentioned, C only uses . as a decimal point in floating point constants, like 2.171828 or for structure members, like mystruct.member.
However, the idea of using floating point arrays to hold integers is how many large number (arbitrary precision) libraries operate, such as apfloat.

Why bother using a float / double literal when not needed?

Why use a double or float literal when you need an integral value and an integer literal will be implicitly cast to a double/float anyway? And when a fractional value is needed, why bother adding the f (to make a floating point literal) where a double will be cast to a float anyway?
For example, I often see code similar to the following
float foo = 3.0f;
double bar = 5.0;
// And, unfortunately, even
double baz = 7.0f;
and
void quux(float foo) {
...
}
...
quux(7.0f);
But as far as I can tell those are equivalent to
float foo = 3;
// or
// float foo = 3.0;
double bar = 5;
double baz = 7;
quux(9);
I can understand the method call if you are in a language with overloading (c++, java) where it can actually make a functional difference if the function is overloaded (or will be in the future), but I'm more concerned with C (and to a lesser extent Objective-C), which doesn't have overloading.
So is there any reason to bother with the extra decimal and/or f? Especially in the initialization case, where the declared type is right there?
Many people learned the hard way that
double x = 1 / 3;
doesn't work as expected. So they (myself included) program defensively by using floating-point literals instead of relying on the implicit conversion.
C doesn't have overloading, but it has something called variadic functions. This is where the .0 matters.
#include <stdarg.h>

void Test( int n , ... )
{
    va_list list ;
    va_start( list , n ) ;
    double d = va_arg( list , double ) ;
    ...
    va_end( list ) ;
}
Calling the function without making sure the number passed is a double will cause undefined behaviour, since the va_arg macro will interpret the argument's memory as a double when in reality it is an integer.
Test( 1 , 3 ) ; has to be Test( 1 , 3.0 ) ;
But you might say; I will never write variadic functions, so why bother?
printf( and family ) are variadic functions.
This call should generate a warning:
printf("%lf" , 3 ) ; //will cause undefined behavior
But depending on the warning level, compiler, and forgetting to include the correct header, you will get no warning at all.
The problem is also present if the types are switched:
printf("%d" , 3.0 ) ; //undefined behaviour
Why use a double or float literal when you need an integral value and an integer literal will be implicitly cast to a double/float anyway?
First off, "implicit cast" is an oxymoron (casts are explicit by definition). The expression you're looking for is "implicit [type] conversion".
As to why: because it's more explicit (no pun intended). It's better for the eye and the brain if you have some visual indication about the type of the literal.
why bother adding the f (to make a floating point literal) where a double will be cast to a float anyway?
For example, because double and float have different precision. Since floating-point is weird and often unintuitive, it is possible that the conversion from double to float (which is lossy) will result in a value that is different from what you actually want if you don't specify the float type manually.
In most cases, it's simply a matter of saying what you mean.
For example, you can certainly write:
#include <math.h>
...
const double sqrt_2 = sqrt(2);
and the compiler will generate an implicit conversion (note: not a cast) of the int value 2 to double before passing it to the sqrt function. So the call sqrt(2) is equivalent to sqrt(2.0), and will very likely generate exactly the same machine code.
But sqrt(2.0) is more explicit. It's (slightly) more immediately obvious to the reader that the argument is a floating-point value. For a non-standard function that takes a double argument, writing 2.0 rather than 2 could be much clearer.
And you're able to use an integer literal here only because the argument happens to be a whole number; sqrt(2.5) has to use a floating-point literal.
My question would be this: Why would you use an integer literal in a context requiring a floating-point value? Doing so is mostly harmless, since the compiler will generate an implicit conversion, but what do you gain by writing 2 rather than 2.0? (I don't consider saving two keystrokes to be a significant benefit.)

how to truncate a number with a decimal point into a int? what's the function for this?

The problem occurs when I do a division operation. I would like to know how to truncate a number with a decimal point into a whole number such as 2, 4, 67.
It truncates automatically if you assign the value to an int variable:
int c;
c = a/b;
Or you can cast like this:
c = (int) (a/b);
This truncates it even if c is defined as float or double.
Usually truncation is not the best choice (it depends on what you want to achieve, of course). Usually the result is rounded like this:
c = round(a/b);
which is more intelligent because it rounds the result properly (round is declared in <math.h> and takes a single argument). If you use Linux, you can easily get the reference with "man round" about exact data types etc.
You can use the trunc() function declared in math.h. It removes the fractional part, returning the nearest integer whose magnitude is not larger than that of the given number.
This is how it is defined:
double trunc(double x);
Below is how you can use it:
double a = 18.67;
double b = 3.8;
int c = trunc(a/b);
You can check man trunc on Linux to get more details about this function. As pointed out in previous answers, you can cast division result to integer or it will automatically be truncated if assigned to integer but if you were interested to know about a C function which does the job then trunc() is the one.
int result = (int)ceilf(myFloat);
int result = (int)roundf(myFloat);
int result = (int)floorf(myFloat);
float result = ceilf(myFloat);
float result = roundf(myFloat);
float result = floorf(myFloat);
I think it will be helpful to you.
Converting from a floating-point type to an integral type, whether via an explicit cast or an implicit conversion, truncates toward zero. Keep in mind that if the integral type is not large enough to hold the value, the behavior is undefined. If you simply need to print the value with everything past the decimal point dropped, use printf():
printf("%.0f", floor(float_val));
As Tõnu Samuel has pointed out, that printf() invocation will actually round the floating-point parameter by default.

Handling numbers in C

I couldn't understand how numbers are handled in C. Could anyone point me to a good tutorial?
#include<stdio.h>
main()
{
printf("%f",16.0/3.0);
}
This code gave: 5.333333
But
#include<stdio.h>
main()
{
printf("%d",16.0/3.0);
}
Gave some garbage value: 1431655765
Then
#include<stdio.h>
main()
{
int num;
num=16.0/3.0;
printf("%d",num);
}
Gives: 5
Then
#include<stdio.h>
main()
{
float num;
num=16/3;
printf("%f",num);
}
Gives: 5.000000
printf is declared as
int printf(const char *format, ...);
the first arg (format) is a string, and the rest can be anything. How the rest of the arguments will be used depends on the conversion specifiers in format. If you have:
printf("%d%c", x, y);
x will be treated as int, y will be treated as char.
So,
printf("%f",16.0/3.0);
is OK, since you ask for a double (%f) and pass a double (16.0/3.0).
printf("%d",16.0/3.0);
you ask for an int (%d) but pass a double. Since double and int have different internal representations, the bit representation of 16.0/3.0 (a double) ends up reinterpreted as the bit representation of 1431655765 (an int).
int num;
num=16.0/3.0;
compiler knows that you are assigning to int, and converts it for you. Note that this is different than the previous case.
OK, the first one gives the correct value, as expected.
In the second one you are passing a double while printf treats it as an int (hence the "%d", which is for displaying int data types; it is a little complicated to explain why, and since it appears you're just starting, I wouldn't worry about why "%d" does this when passed a floating-point value). It reads the value wrong, therefore giving you a weird value (not a garbage value, though).
In the third one, 16.0/3.0 is converted to an int when assigned to the int variable, which results in 5, because converting a floating-point value to an int strips the decimals regardless of rounding.
In the fourth, the right-hand side (16/3) is treated as an int because you don't have the .0 at the end. It is evaluated first, and then 5 is assigned to float num, thus explaining the output.
It is because the formatting strings you are choosing do not match the arguments you are passing. I suggest looking at the documentation on printf. If you have "%d" it expects an integer value, how that value is stored is irrelevant and likely machine dependent. If you have a "%f" it expects a floating point number, also likely machine dependent. If you do:
printf( "%f", <<integer>> );
the printf procedure will look for a floating-point number where you have given an integer, but it doesn't know it's an integer; it just reads the appropriate number of bytes and assumes you have put the correct thing there.
16.0/3.0 is a double
int num = 16.0/3.0 is a double converted to an int
16/3 is an int
float num = 16/3 is an int converted to a float
You can search the web for printf documentation. One page is at http://linux.die.net/man/3/printf
You can understand numbers in C by using the concept of implicit type conversion.
During the evaluation of any expression, C adheres to strict type-conversion rules, and the result of an expression depends on them.
If the operands are of different types, the 'lower' type is automatically converted to the 'higher' type before the operation proceeds, and the result is of the higher type:
1: All short and char operands are automatically converted to int; then
2: if one of the operands is an int and the other is a float, the int is converted to a float, because float is higher than int.
If you want more information about implicit conversion, you can refer to the book Programming in ANSI C by E Balagurusamy.
printf formats a bit of memory into a human readable string. If you specify that the bit of memory should be considered a floating point number, you'll get the correct representation of a floating point number; however, if you specify that the bit of memory should be considered an integer and it is a floating point number, you'll get garbage.
printf("%d",16.0/3.0);
The result of 16.0/3.0 is 5.333333..., which is a double, represented in double-precision floating-point format as follows:
0 | 10000000001 | 0101010101010101010101010101010101010101010101010101
(hex 0x4015555555555555). If you read its low 32 bits (0x55555555) as a 32-bit integer value, the result is 1431655765.
num=16.0/3.0;
is equivalent to num = (int)(16.0/3.0). This converts the double value (5.3333...) to the integer 5.
printf("%f",num);
is the same as printf("%f", (double)num), since a float argument is promoted to double when passed to a variadic function like printf.
