Why is the result of (double + int) 0? (C language)

The result of
printf("%d\n", 5.0 + 2);
is 0, but
int num = 5.0 + 2;
printf("%d\n", num);
prints 7.
What's the difference between the two?

The result of 5.0 + 2 is 7.0 and is of type double.
The "%d" format is to print int.
Mismatching format specification and argument type leads to undefined behavior.
To print a float or double value with printf you should use the format "%f".
With
int num = 5.0 + 2;
you convert the result of 5.0 + 2 into the int value 7. Then you print the int value using the correct format specifier.
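For completeness, here is a minimal sketch showing both correct variants discussed above (this just restates the fix; it is not from the original question):

#include <stdio.h>

int main(void)
{
    printf("%f\n", 5.0 + 2);        /* 7.000000: %f matches the double result */
    printf("%d\n", (int)(5.0 + 2)); /* 7: explicit conversion to int, so %d matches */

    int num = 5.0 + 2;              /* implicit conversion to int on assignment */
    printf("%d\n", num);            /* 7 */
    return 0;
}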

In all expressions, every operand has a type. 5.0 has type double. 2 has type int.
Whenever a double and an integer are used as operands of the same operator, the integer is silently converted to a double before calculation. The result is of type double.
So you pass a double to printf, but you have told it to expect an int, since you used %d. That is a bug; the outcome is not defined.
In the case of int num = 5.0 + 2;, however, you first get the result as a double, 7.0, and then force a conversion back to int. That code is equivalent to:
int num = (int)((double)5.0 + (double)2);
More details here: Implicit type promotion rules

The result of expression 5.0 + 2 is of type double, since at least one of the two operands of operator + here is a floating point / double value (so the other one will be converted to double before adding).
If you write printf("%d\n", 5.0 + 2), you pass a floating point value where the format specifier expects an int. This mismatch is undefined behaviour, and the 0 you got could just as well have been something else (another number, a crash, anything at all).
int num = 5.0 + 2, in contrast, converts the double value resulting from 5.0 + 2 back to an integral value (discarding any fractional part). So the value of num will be 7, and, since num is an integral type, it is valid in conjunction with the %d format specifier.

5.0+2 is typed double.
The compile error for
int main() { return _Generic(5.0 + 2, struct foo: 0); }
should tell you as much, if
int main() { return _Generic(5.0 + 2, double: 5.0+2); }
compiling without error doesn't.
Matching "%d" with a double in printf results in undefined behavior.
Any result is legal, including your hard drive getting erased (unlikely to happen unless your program already has such functionality somewhere in it; if it does, UB can well result in it being inadvertently invoked).
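If you want the compiler to tell you the type directly, C11's _Generic can also be used to print a type name. A minimal sketch, assuming a C11 compiler (the TYPE_NAME macro is just for illustration, not a standard facility):

#include <stdio.h>

/* Map a few types to printable names; everything else falls through to "other". */
#define TYPE_NAME(x) _Generic((x), \
    int:     "int",                \
    double:  "double",             \
    float:   "float",              \
    default: "other")

int main(void)
{
    printf("%s\n", TYPE_NAME(5.0 + 2)); /* prints "double" */
    printf("%s\n", TYPE_NAME(5 + 2));   /* prints "int" */
    return 0;
}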

The usual arithmetic conversions are implicitly performed to bring the operand values to a common type. The compiler first performs integer promotion; if the operands still have different types, they are converted to the type that appears highest in the hierarchy long double > double > float, followed by the integer types in order of decreasing rank.
In the snippet int num = 5.0 + 2; you are adding a double to an integer and storing the result back in an integer, so C automatically converts the result to int in order to store it in the int variable. Printing it with %d is then fine.
But in printf("%d\n", 5.0 + 2); the addition is carried out in double, since double ranks higher than int, and yet you print the result with %d. That mismatch of format specifier and argument type causes the unexpected result.

Related

Can an int value be added to a float value?

/* Program for internal typecasting by the compiler */
#include <stdio.h>

int main(void)
{
    float b = 0;
    /* The second operand is an integer value which gets added to the first operand,
       which is of float type. Will the second operand be typecast to float? */
    b = (float)15/2 + 15/2;
    printf("b is %f\n", b);
    return 0;
}
OUTPUT: b is 14.500000
Yes, an integral value can be added to a float value.
For the basic math operations (+, -, *, /), when one operand has type float and the other has type int, the int is converted to float first.
So 15.0f + 2 will convert 2 to float (i.e. to 2.0f) and the result is 17.0f.
In your expression (float)15/2 + 15/2, since / has higher precedence than +, the effect will be the same as computing ((float)15/2) + (15/2).
(float)15/2 explicitly converts 15 to float and therefore implicitly converts 2 to float, yielding 7.5f as the result of the division.
However, 15/2 is an integer division, so it produces the result 7 (there is no implicit conversion to float here).
Since (float)15/2 has been computed as a float, the value 7 is then converted to float before the addition. The result will therefore be 14.5f.
Note: floating point types are also characterised by finite precision and rounding error that affects operations. I've ignored that in the above (and it is unlikely to have a notable effect with the particular example anyway).
Note 2: Old versions of C (before the C89/90 standard) actually converted float operands to double in expressions (and therefore had to convert values of type double back to float, when storing the result in a variable of type float). Thankfully the C89/90 standard fixed that.
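To see the two halves of the expression separately, you could print the intermediate results; a small sketch along those lines (not part of the original program):

#include <stdio.h>

int main(void)
{
    printf("%f\n", (float)15/2);        /* 7.500000: floating-point division */
    printf("%d\n", 15/2);               /* 7: integer division, fraction discarded */
    printf("%f\n", (float)15/2 + 15/2); /* 14.500000: the 7 is converted to float before the addition */
    return 0;
}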
Rule of thumb: When doing an arithmetic calculation between two different built-in types, the "smaller" type will be converted into the "larger" type.
double > float > long long (C99) > long > int (short and char are first promoted to int).
b = (float)15/2 + 15/2;
Here the first part, (float)15/2, is equivalent to 15.0f / 2. Because an operation involving a "larger" type and a "smaller" type yields a result of the "larger" type, (float)15/2 is 7.500000, or 7.5f.
When it comes to 15/2, since both operands are integers, the operation is done entirely in integer arithmetic. Therefore the fractional part is discarded, and the result is just 7.
So the expression is calculated into
b = 7.5f + 7;
No doubt you'll have 14.500000 as the final result, because it's exactly 14.5f.
b = (float)15/2 + 15/2;
The first part ((float)15/2) will work fine. The second part (15/2) will also "work", but it is evaluated with integer division, so the fractional part is lost. Like:
b = (float)15/2 + 15/2;
b = 7.500000f + 7
b = 14.500000
It's worth asking: if an integer value could not be added to floating-point value, what would the symptom be?
Compiler issues error or warning message.
Something gets truncated; you don't get the result you want.
Undefined behavior: you might or might not get the result you want, and the compiler might or might not warn you about it.
But in fact none of these things happen. When you add an integer to a floating-point value, the compiler automatically converts the integer to a floating-point value so it can do the addition that way, and this is perfectly well defined. For example, if you have the code
double d = 7.5;
int i = 7;
double result = d + i;
the compiler interprets this just as if you had written
double result = d + (double)i;
And it works this way for just about all operations: the same logic is applied when you subtract, multiply, or divide a floating-point value and an integer.
And it works this way for just about all types. If you add a long int and an int, the plain int automatically gets converted to a long.
As a general rule (and I really can't think of too many exceptions), the compiler always wants to do arithmetic on two values of the same type. So whenever you have two values of different type, the compiler will just about always convert one of them for you. The full set of rules for how it does this are rather elaborate, but they're all supposed to make sense, and do what you want. The full set of rules is called the usual arithmetic conversions, and if you do a Google search on that phrase you'll find lots of explanations.
One case that does not necessarily do what you want is when the two variables are not different types. In particular, if the two variables are both integers, and the operation you're doing is division, the compiler doesn't have to convert anything: it divides the integer by the integer, discarding any remainder, and gives you an integer result. So if you have
int i = 1;
int j = 2;
int k = i / j;
then k ends up containing 0. And if you have
double d = i / j;
then d ends up containing 0 also, because the compiler follows exactly the same rules when performing the division; it doesn't "peek outside" to see that it's going to need a floating-point result.
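If you do want a floating-point quotient from two int variables, convert one operand yourself before the division; a minimal sketch reusing the i and j from above:

#include <stdio.h>

int main(void)
{
    int i = 1;
    int j = 2;
    double d = (double)i / j; /* one operand is double, so the division is done in double */
    printf("%f\n", d);        /* 0.500000 */
    return 0;
}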
P.S. I said, "As a general rule, the compiler always wants to do arithmetic on two values of the same type", and I said I couldn't think of too many exceptions. But if you're curious, two exceptions are the << and >> operators. If you have x << y, where x is a long int and y is a plain int, the compiler does not have to convert y to a long int first.
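A small sketch of that shift case (per C's rules, the integer promotions are applied to each shift operand separately, and the result has the promoted type of the left operand):

#include <stdio.h>

int main(void)
{
    long int x = 1L;
    int y = 20;
    long int r = x << y; /* y stays a plain int; the result has the type of x */
    printf("%ld\n", r);  /* 1048576 */
    return 0;
}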

Trying to print the answer to an equation and getting zero in C

printf("Percent decrease: ");
printf("%.2f", (float)((orgChar-codeChar)/orgChar));
I'm using this statement to print some results to my command console; however, I end up with zero. Putting the equation into another variable doesn't work either.
orgChar = 91 and codeChar = 13; how do I print out this equation?
Integer division leads to the result 0 here, and you are casting that result to float only afterwards, so you still end up with 0.
Make one of the operands float before the division:
(orgChar-codeChar)/(float)orgChar
As others have mentioned, the subtraction and division are done using integer math before the cast to (float). By that point, the integer division has a truncated result of 0. Instead:
// (float)((orgChar-codeChar)/orgChar)
((float) orgChar - codeChar)/orgChar
// or
(orgChar - codeChar)/ (float) orgChar
As the float argument gets converted to double anyway as part of the default argument promotions applied to the variadic arguments of printf(), you might as well do
printf("%.2f", (orgChar-codeChar)/ (double) orgChar);
Casting, in general, should be avoided, because some casts unintentionally narrow the operation. In the example below, if unsigned is 32-bit and a1 is uint64_t, then a1 is narrowed before the shift and unexpected results may occur. If a1 were a char, it would be converted to unsigned without trouble.
The second form, multiplying by 1u, does not narrow. It ensures a2*1u is at least the width of an unsigned.
unsigned sh1 = (unsigned) a1 >> b1; // avoid
unsigned sh2 = a2*1u >> b2; // better
So, rather than casting to (float) or (double), the recommendation is to use the idiom of multiplying by 1 in the desired type:
printf("%.2f", (orgChar - codeChar) * 1.0 / orgChar);
You don't need to cast the whole expression; you can simply cast either the numerator or the denominator to get the float result with a precision of 2 decimal places.
For example: merely defining a variable c as float to hold the result doesn't guarantee the result will be computed in floating point; to get the precise result you need to cast either the numerator or the denominator (see the sketch below).
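A minimal sketch of that idea, assuming the orgChar and codeChar values from the question (91 and 13):

#include <stdio.h>

int main(void)
{
    int orgChar = 91;
    int codeChar = 13;
    /* Casting just the numerator (or just the denominator) makes the division floating-point. */
    printf("%.2f\n", (float)(orgChar - codeChar) / orgChar); /* prints 0.86 */
    return 0;
}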
You shouldn't need to cast to float at all. Simply make sure both variables are of type float or double before attempting to print them as floats. This means either declaring the variables as floats, or using the correct conversion function, such as atof(), when converting the data to floats (normally this is done when you get the data from the command line or a file).
This should work...
#include <stdio.h>

int main(void)
{
    float orgChar = 91;
    float codeChar = 13;

    printf("%.2f\n", (orgChar - codeChar) / orgChar);
    return 0;
}

lvalue required as left operand of assignment while type casting

I am trying to write some code which does some calculations based on variables the user gives. The variables can be of several types, among them Integer or Decimal, which is where the problem lies. When I do a calculation, I declare the numbers to be calculated and the result as double. However, when BOTH numbers are integers, I want the result to also be an int. So I tried typecasting, as follows:
float result;
float num1, num2;

if (strcmp(varray[var1].type, "Integer\n") == 0 && flag != 1) {
    if (varray[var2].type[0] == 'I') {
        (int)num1 = atoi(varray[var1].value);
        printf("The result type is Integer \n");
        (int)num2 = atoi(varray[var2].value);
        (int)result = num1 + num2;
        printf("Result =%d\n", result);
        flag = 1;
    }
}
I get compiler errors on the lines where I cast to (int), with the message shown in the title. Am I doing something wrong here? Should I just declare the numbers and the result inside the conditional, rather than declaring them before it and casting within?
You need to cast the rvalue (the result of atoi) instead of the lvalue (num1):
num1 = (float)atoi(varray[var1].value);
(int)num1 is considered an rvalue, so called because it's expected to be on the right side of an assignment statement.
If you wanted to cast the right side to something more suitable for the variable on the left, the right side is where you would do it, such as:
num1 = (float) atoi (varray[var1].value);
This is perfectly valid, though you may lose precision if your integers are large enough.[1] It's also unnecessary, since C will happily convert it for you auto-magically. You can just use:
num1 = atoi (varray[var1].value);
You also have a problem with lines like this:
float result = 32.0f;
printf("Result =%d\n", result);
If your format specifiers don't match the types of the parameters, the behaviour is undefined. In this case, you're specifying an integer with %d but giving a floating point value.
If you want to print a floating point value with no decimals, you use %.0f as a format specifier (%f on its own will give a default number of digits after the decimal point, like 32.000000).
[1] Unless you're using arrays of millions of floating point values and space is tight, or you're on a system where there's a clear and needed performance benefit to using float, you really should prefer doubles where possible. Their range and precision are much better.
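Pulling those fixes together, here is a rough sketch of the corrected snippet; the varray lookups are replaced with plain string literals, so this only illustrates the idea, not the questioner's full program:

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    const char *value1 = "5"; /* stand-in for varray[var1].value */
    const char *value2 = "7"; /* stand-in for varray[var2].value */

    /* Both inputs are integers, so keep the whole calculation in int:
       convert on the right-hand side (or let C do it), never cast the left-hand side. */
    int num1 = atoi(value1);
    int num2 = atoi(value2);
    int result = num1 + num2;

    printf("Result = %d\n", result); /* %d matches the int argument */
    return 0;
}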

float variables in C [duplicate]

This question already has answers here:
Using %f to print an integer variable
Maybe this is a simple question, but I am not sure how float variables are stored in memory or why the code below behaves this way. Can someone please explain the following behavior?
#include <stdio.h>

int main(void)
{
    int a = 9/5;
    printf("%f\n", a);
    return 0;
}
Output:
0.000000
I have looked at some information on how float variables are stored in memory; it involves the mantissa, the exponent and the sign bit. But I don't see how to relate that here.
int a = 9/5;
performs integer division and ignores the remainder, so a is set to 1. Attempting to print that using %f gives undefined behavior, but by chance you got 0.000000 out of it.
Do
double a = 9./5.;
instead, or print with %d if integer division was the desired behavior. (float would also work, but a will be promoted to double when passed to printf, so there's no reason not to use double.)
It is undefined behaviour: you are using the float format specifier (%f) to print an int (a). You should use %d to see the correct output.
It is undefined behaviour in C. Use the %d format specifier instead of %f.
The question "Does printf() depend on order of format specifiers?" gives you a detailed answer.
Here's a brief analysis of your code:
int a = 9/5;       // 9/5 is 1.8 mathematically, but this is integer division stored in an int, so a holds 1
printf("%f\n", a); // format specifier that doesn't match the argument type: undefined behavior
printf("%d\n", a); // this is correct and prints 1
Or, if you want the float, use float instead of int:
float a = 9.0f/5;  // this stores 1.800000f
// float a = 9/5 would store 1.000000, not 1.8, because of integer division
printf("%f\n", a); // this prints 1.800000
Also read http://en.wikipedia.org/wiki/IEEE_754-2008 on how floating point works.
Clarification about integer division:
C99:
6.5.5 Multiplicative operators
6 When integers are divided, the result of the / operator is the algebraic quotient with any fractional part discarded.88) If the quotient a/b is representable, the expression (a/b)*b + a%b shall equal a
88) This is often called ‘‘truncation toward zero’’.
Just assuming that your division should result in a fraction in normal math won't magically make it a float.
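The quoted identity is easy to check numerically; a small sketch with arbitrary values:

#include <stdio.h>

int main(void)
{
    int a = 9, b = 5;
    printf("%d\n", a / b);               /* 1: fractional part discarded */
    printf("%d\n", a % b);               /* 4 */
    printf("%d\n", (a / b) * b + a % b); /* 9: equals a, as the standard guarantees */
    return 0;
}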
int a = 9/5;
You need to use
float a = 9.0/5;
First, you need the right data type, i.e. float (or better yet double, for higher precision), to store the result.
Secondly, if you divide two integers, i.e. 9 and 5, it will simply be integer division, i.e. only the integer part of the quotient is kept and the rest is discarded. To avoid that I have added .0 after 9. This forces the compiler to implicitly convert the other operand and do the division in floating point.
Regarding your question about why it prints 0: as already mentioned, using %f on an int is undefined behavior. Technically, a float is 4 bytes consisting of a sign bit, an 8-bit exponent and a 23-bit significand (mantissa), which are combined to produce the resulting value.

Arithmetic expression as printf arg

I am not able to understand why
float x = (4+2%-8);
printf("%f\n", x);
prints 6.000000 and
printf("%f\n", (4+2%-8));
prints 0.000000. Any information will be helpful.
Regards.
The expression (4 + 2 % -8) produces an integer and you are trying to print a float (and they don't match).
In the first case the integer is converted to float because of the assignment, so the later printf works: the value it receives is of the kind %f expects.
Try this:
printf("%f\n", (4.0 + 2 % -8));
^
It's because here:
float x = (4+2%-8);
The (4+2%-8) is of type int but is converted to float because that's the type of x. However, here:
printf("%f\n", (4+2%-8));
No conversion is performed, so you pass an int where printf expects a floating-point value, giving you a garbage value. You can fix this with a simple cast:
printf("%f\n", (float)(4+2%-8));
In the first snippet the resulting int value is implicitly converted to float due to assignment, in the second snippet you are lying to the compiler: you tell it to expect a value of type double but "send" a value of type int instead.
Do not lie to the compiler. It will get its revenge.
Note that the printf conversion specifier "%f" expects a value of type double, but "sending" a float is ok because that value is automagically converted to double before the function is called.
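A short sketch of that last point, showing that %f is fine for both a float and a double argument:

#include <stdio.h>

int main(void)
{
    float  f = 6.0f;
    double d = 6.0;
    printf("%f\n", f); /* f is promoted to double by the default argument promotions */
    printf("%f\n", d); /* %f expects a double, so this matches directly */
    return 0;
}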
