What will be the output of the following C code?
Can the int data type take the floating-point values?
#include <stdio.h>
int main() {
    float a = 1.1;
    int b = 1.1;
    if (a == b)
        printf("YES");
    else
        printf("NO");
}
The output will be NO.
It's because int can't store fractional values. The initializer 1.1 is truncated, so the value stored in b is 1, which is not equal to 1.1. The if condition is therefore never satisfied, and the output is always NO.
The value stored in b will be 1. When comparing, b is converted to the float value 1.0f, so the comparison yields NO.
int b = 1.1;
This truncates the double constant 1.1 to 1 before assigning it to b. The output therefore is NO.
But you can compare an int to a float. Consider this example:
float a=1.0;
int b=1;
if(a==b)
printf("YES");
else
printf("NO");
Here, b is converted to float before the comparison, and since 1.0f equals 1, you get a YES output.
You should get the answer NO when you execute it.
That's because when
int b=1.1;
was executed, it initialized the variable b with the integer part of 1.1, which is 1. You can check this by printing the value of b.
Also, the type int stores whole numbers only, while float can store fractional numbers. The two types are completely different.
It will compile without any errors. The output of the program will be "NO".
In C, int and float are two different data types. Even when you try to assign the value 1.1 to an integer variable, it will be initialized to 1.
Others already told you that the answer will be NO. But can the int data type take floating-point values?
Yes, but only within a single expression. If you want an int variable to behave like a float variable, you need to cast it to float.
For example:
int x = 5;
int y = 3;
printf("%f \n", (float)x/y);
The output will be 1.666667, a float value, because x is converted to float before the division. (float)x is called a cast. You can also cast a float value to an integer with (int)some_var, etc.
Note: the types of x and y remain int! They only behave like float in this expression.
However, in your program that won't help, because as soon as int b = 1.1; is executed, b already holds 1.
Related
I have an expression which does the same calculation. When I try to do the whole calculation in a single expression and store it in the variable a, the expression evaluates to 0. When I split the calculation into two parts and then divide, it gives the answer -0.332087. Obviously, -0.332087 is the correct answer. Can anybody explain why this program misbehaves like this?
#include<stdio.h>
void main() {
    double a, b, c;
    int n = 0, sumY = 0, sumX = 0, sumX2 = 0, sumY2 = 0, sumXY = 0;
    n = 85;
    sumX = 4276;
    sumY = 15907;
    sumX2 = 288130;
    sumY2 = 3379721;
    sumXY = 775966;
    a = ((n*sumXY) - (sumX*sumY)) / ((n*sumX2) - (sumX*sumX));
    b = ((n*sumXY) - (sumX*sumY));
    c = ((n*sumX2) - (sumX*sumX));
    printf("%lf\n", a);
    printf("%lf\n", b/c);
}
Output:
0.000000
-0.332087
In your program
a = ((n*sumXY) - (sumX*sumY)) / ((n*sumX2)-(sumX*sumX));
all the variables on the right-hand side are of type int, so the division is performed in integer arithmetic and produces a result of type int. The true quotient -0.332087 is not an int value, so it is truncated toward zero, giving 0, and this 0 is converted to double and assigned to the variable a.
But when you do
b = ((n*sumXY) - (sumX*sumY));
c = ((n*sumX2) - (sumX*sumX));
printf("%lf\n", b/c);
The variables b and c are of type double, so the expression b/c produces a double value, and the true answer -0.332087 is representable as a double. Thus this part of your code gives the right result.
In the first expression, i.e. a = ((n*sumXY) - (sumX*sumY)) / ((n*sumX2)-(sumX*sumX)), both the numerator and the denominator give integer results, and since integer/integer is integer, the value stored in a is also an integer (here 0). In the second and third expressions, because you compute them separately, b and c are stored as double, and double/double results in a double, i.e. a value with a fractional part.
This problem can be solved by using type casting - or better still using float for the variables.
Cast the numerator of your calculation to double, so the division itself is carried out in floating point rather than integer arithmetic:
a = (double)((n*sumXY) - (sumX*sumY)) / ((n*sumX2)-(sumX*sumX));
First of all, (INTEGER)/(INTEGER) is always an INTEGER, so you must cast one operand before the division, e.g. a = (double)((n*sumXY) - (sumX*sumY)) / ((n*sumX2)-(sumX*sumX)); (casting the finished division, as in (double)(x/y), would still give 0.0).
OR
We know that any number multiplied by 1.0 stays the same. So your code could be:
a = ((n*sumXY*1.0L) - (sumX*sumY)) / ((n*sumX2)-(sumX*sumX));
Multiplying by 1.0L promotes the whole arithmetic expression to long double (use 1.0 for double, or 1.0f for float).
Now the program prints the correct value, -0.332087, to stdout.
Non-standard Code
Your code is:
void main()
{
//// YOUR CODE
}
which is non-standard C. Instead, your code should look like:
int main(int argc, char **argv)
{
//// YOUR CODE
return 0;
}
Change all your INTEGERS to DOUBLES. That should solve the problem.
The type of 1st expression is int whereas in 2nd expression it is double.
This question already has answers here:
strange output in comparison of float with float literal
(8 answers)
Closed 4 years ago.
I am new to C language. Here is my question and code:
I have a constant and a variable with the same value and I try to compare them to check if they are equal. It seems to me that since they are assigned the same value, they should be equal but that is not the case.
#include <stdio.h>
#define mypi1 3.14
int main()
{
    int ans;
    float mypi2 = 3.14;
    if (mypi1 == mypi2)
    {
        ans = 1;
    }
    else
    {
        ans = 0;
    }
    printf("%i", ans);
    return 0;
}
My output is 0, which indicates they are not equal. What is the reasoning behind this? This is a really simple question, but I could not find the answer anywhere. Please help, and thanks in advance.
#define mypi1 3.14
float mypi2 = 3.14;
The first of those is a double type, the second is a double coerced into a float.
The expression mypi1==mypi2 will first convert the float back to a double before comparing (the idea is that, if one type is of lesser range/precision than the other, it's converted so that both types are identical).
Hence, if the if statement is failing, it's likely that you lose information in the double -> float -> double round-trip(a).
To be honest, unless you're using a great many floating point values (and storage space is a concern), you should probably just use double everywhere. If you do need float types, use that for both values:
#define mypi1 3.14f
float mypi2 = 3.14f;
Comparing two float variables will not involve any conversions.
(a) See, for example, the following complete program:
#include <stdio.h>
#define x 3.14
int main(void) {
    float y = 3.14;   // double -> float
    double z = y;     // -> double
    printf("%.50f\n", x);
    printf("%.50f\n", z);
}
In this, x is a double and z is a double that's gone through the round-trip conversion discussed above. The output shows the difference that can happen:
3.14000000000000012434497875801753252744674682617188
3.14000010490417480468750000000000000000000000000000
#include<stdio.h>
int main(void) {
    int m = 2, a = 5, b = 4;
    float c = 3.0, d = 4.0;
    printf("%.2f,%.2f\n", (a/b)*m, (a/d)*m);
    printf("%.2f,%.2f\n", (a/d)*m, (a/b)*m);
    return 0;
}
The result is:
2.50,0.00
2.50,-5487459522906928958771870404376799406808566324353377030104786519743796498661129086808599726405487030183023928761546165866809436788166721199470577627133198744209879004896284033606071946689658593354711574682628407789000148729336462084532657713450945423953627239707603534923756075420253339949731915621203968.00
I want to know what causes this difference.
However, if I change int to float, the answer is what I expect.
The result is:
2.50,2.50
2.50,2.50
You are using the wrong format specifiers; try this:
#include<stdio.h>
int main(void)
{
    int m = 2, a = 5, b = 4;
    float fm = 2, fa = 5, fb = 4;
    float c = 3.0, d = 4.0;
    // First expression in this printf is int, the second is float due to d
    printf("%d , %.2f\n\n", (a/b)*m, (a/d)*m);
    // Second expression in this printf is int, the first is float due to d
    printf("%.2f , %d\n\n", (a/d)*m, (a/b)*m);
    printf("%.2f , %.2f\n\n", (fa/b)*fm, (fa/d)*fm);
    printf("%.2f , %.2f\n\n", (fa/d)*fm, (fa/b)*fm);
    return 0;
}
Output:
2 , 2.50
2.50 , 2
2.50 , 2.50
2.50 , 2.50
Section 7.19.6.1 p9 of the C99 standard says:
If any argument is not the correct type for the corresponding conversion specification, the behavior is undefined.
Note that a/b is an int if both operands are ints, and a float if at least one is a float, and similarly for the other arithmetic operators. Thus in a/b, if both are ints then 5/4 = 1; if at least one is a float, then 5/4.0 = 5.0/4.0 = 1.25, because the compiler automatically converts an int into a float before doing arithmetic with another float. So your results were expected to be different.
But in your case you used the %.2f format even when you output ints. So printf takes the bytes that hold your int and tries to decode them as if they encoded a float. Float numbers are encoded in memory very differently from ints; it's like taking a Hungarian text and trying to interpret it as if it were written in English, even if the letters are the same: the resulting "interpretation" will be just garbage.
What you need to do: output any int with %d and any float with %f or a similar format.
If you want the results to be floats, cast the int operand before the division (casting after the integer division, as in (float)(a/b), would still lose the fractional part):
#include <stdio.h>
int main(void) {
    int m = 2, a = 5, b = 4;
    float c = 3.0, d = 4.0;
    printf("%.2f,%.2f\n", ((float)a/b)*m, (a/d)*m);
    printf("%.2f,%.2f\n", (a/d)*m, ((float)a/b)*m);
    return 0;
}
Hope this helps.. :)
You are trying to print integer values with a float format specifier. The lines should be:
printf("%d,%.2f\n", (a/b)*m, (a/d)*m);
printf("%.2f , %d\n\n", (a/d)*m, (a/b)*m);
To print integer values you should use %d; using the wrong format specifier leads to undefined behavior.
I was trying out a few examples on the do's and don'ts of typecasting. I could not understand why the following code snippets fail to output the correct result.
/* int to float */
#include<stdio.h>
int main() {
    int i = 37;
    float f = *(float*)&i;
    printf("\n %f \n", f);
    return 0;
}
This prints 0.000000
/* float to short */
#include<stdio.h>
int main() {
    float f = 7.0;
    short s = *(float*)&f;
    printf("\n s: %d \n", s);
    return 0;
}
This prints 7
/* From double to char */
#include<stdio.h>
int main() {
    double d = 3.14;
    char ch = *(char*)&d;
    printf("\n ch : %c \n", ch);
    return 0;
}
This prints garbage
/* From short to double */
#include<stdio.h>
int main() {
    short s = 45;
    double d = *(double*)&s;
    printf("\n d : %f \n", d);
    return 0;
}
This prints 0.000000
Why does the float-to-short snippet give the correct result while all the other conversions give wrong results when the cast goes through a pointer?
I can't clearly understand why the pointer cast (float*) is used here instead of a plain (float) cast.
int i = 10;
float f = (float) i; // gives the correct op as : 10.000
But,
int i = 10;
float f = *(float*)&i; // gives a 0.0000
What is the difference between the above two type casts?
Why can't we use
float f = (float**)&i;
instead of
float f = *(float*)&i; ?
In this example:
char ch = *(char*)&d;
You are not casting from double to a char. You are casting from a double* to a char*; that is, you are casting from a double pointer to a char pointer.
C will convert floating-point types to integer types when casting the values, but since you are casting pointers to those values instead, no conversion is done. You get garbage because floating-point numbers are stored very differently from integers.
Read about how floating-point numbers are represented on your system; it is not the way you're expecting. The pointer cast (float *) in your first snippet makes the bit pattern of the int 37 be read as an IEEE-754 float. That pattern, 0x00000025, has a zero exponent field, so it denotes a tiny denormal value, which printf's %f shows as 0.000000.
If you need to convert an int to a float, the conversion is straightforward, because of the promotion rules of C.
So, it is enough to write:
int i = 37;
float f = i;
This gives the result f == 37.0.
However, in the cast (float *)(&i), the result is an object of type "pointer to float".
In this case, the address held by the "pointer to int" &i is the same as that of the "pointer to float" (float *)(&i). However, the object pointed to by the latter is a float whose bits are the same as those of the object i, which is an int.
Now, the main point in this discussion is that the bit-representation of objects in memory is very different for integers and for floats.
A positive integer is represented in explicit form, as its binary mathematical expression dictates.
However, the floating point numbers have other representation, consisting of mantissa and exponent.
So, the bits of an object, when interpreted as an integer, have one meaning, but the same bits, interpreted as a float, have another very different meaning.
The better question is, why does it EVER work? You see, when you do
typedef int T;//replace with whatever
typedef double J;//replace with whatever
T s = 45;
J d = *(J*)(&s);
You are basically telling the compiler: get the T* address of s, reinterpret what it points to as a J, and then read that value. No casting of the value (changing of the bytes) actually happens. Sometimes, by luck, the result looks right, but often it will be garbage, and worse, if the sizes are not the same (like reading a double through the address of a 2-byte short) you read memory beyond the object, which is undefined behavior.
I'm currently trying to use AngelScript with a simple script, following the official website's examples.
But when I try to initialize a double variable like this in my script:
double x=1/2;
the variable x is initialized with the value 0.
It only works when I write
double x=1/2.0; or double x=1.0/2;
Is there a way to make AngelScript work in double precision when I type double x=1/2, without adding any more code to the script?
Thank you,
Using some macro chicanery:
#include <stdio.h>
#define DIV * 1.0 /
int main(void)
{
    double x = 1 DIV 2;
    printf("%f\n", x);
    return 0;
}
DIV can also be defined as:
#define DIV / (double)
When you divide an int by an int, the result is an int: the quotient is kept and the remainder is discarded. Here, 1 / 2 yields a quotient of 0 and a remainder of 1. If you need a double, try 1.0 / 2.
No, there is no way to get a double by dividing two ints without casting the result.
That's because 1 and 2 are integers, and:
int x = 1/2;
Would be 0. If x is actually a double, you get an implicit cast conversion:
double x = (double)(1/2);
Which means 1/2 = 0 becomes 0.0. Notice that is not the same as:
double x = (double)1/2;
Which will do what you want.
Numbers with decimals are doubles, and dividing an int by a double produces a double. You can also cast each number:
double x = (double)1/(double)2;
This is handy if 1 and 2 are actually int variables: by casting this way, their values are converted to double before the division, so the result is a double.