How to divide 2 ints in C?

I want to divide 2 numbers and get the result like this:
5 / 2 = 2.50
But it only outputs 2.
I don't know what I'm doing wrong.
Here is my code:
#include <stdio.h>

int main(void) {
    int a;
    int b;
    int c;
    printf("First num\n");
    scanf("%d", &a);
    printf("Second num\n");
    scanf("%d", &b);
    c = a / b;  // integer division: 5 / 2 yields 2
    printf("%d", c);
    return 0;
}

You need a double variable to store the result; int stores only integers. You also have to cast at least one of the operands before performing the division, otherwise the division itself is still done in integer arithmetic.
Do something like this:
double c;
.
.
.
c = (double)a / (double)b;
printf("%f", c);
NOTE:
You do not need the & in printf() statements.
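Putting it together, a minimal complete program might look like this (a sketch, reusing the prompts from the question):

#include <stdio.h>

int main(void) {
    int a, b;
    double c;

    printf("First num\n");
    scanf("%d", &a);
    printf("Second num\n");
    scanf("%d", &b);

    // Cast one operand so the division happens in floating point.
    c = (double)a / (double)b;
    printf("%.2f\n", c);  // e.g. inputs 5 and 2 print 2.50
    return 0;
}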

To avoid the cast altogether, you can read the values as float directly by using scanf with the %f conversion specifier.
#include <stdio.h>

int main(void) {
    float a;
    float b;
    float c;
    printf("First number\n");
    scanf("%f", &a);
    printf("Second number\n");
    scanf("%f", &b);
    c = a / b;  // both operands are float, so this is floating-point division
    printf("%f", c);
    return 0;
}

The / sign is for division. In C, whenever you divide an integer by an integer and store the result in an integer, the output is an integer. For example:
int a = 3, b = 2, c = 0;
c = a/b; // That is c = 3/2;
printf("%d", c);
The output received is: 1
The reason is the type of the variables you have used, i.e. integer (int).
Whenever both operands are integers, the division is performed in integer arithmetic, and storing the result in an int keeps only the whole part, not the decimal value.
For storing decimal results, C provides float, double and long double.
Whenever you perform an operation and want a decimal output, use one of the above-mentioned data types for the result variable and make at least one operand floating-point. For example:
int a = 3, b = 2;
float c = 0.0;
c = (float)a/b; // That is c = 3/2;
printf("%.1f", c);
The output received: 1.5
So, I think this will help you understand the concept.
Remember: when you are using float, the conversion specifier is %f. You need to convert one operand to float, just as I did, and then the fractional part will show up in the answer.

You have to use float or double variables, not int (integer) ones. Also note that a division between two integers yields an integer result, whereas a division between a float/double and an integer yields a floating-point result. That's because C implicitly converts the integer operand to the floating-point type.
For example:
5/2 = 2
5/2.0f = 2.5f
Note the .0f: it means we are dividing by a float.
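For instance, a quick sketch that shows both behaviors side by side:

#include <stdio.h>

int main(void) {
    printf("%d\n", 5 / 2);     // integer division: prints 2
    printf("%f\n", 5 / 2.0f);  // one float operand: prints 2.500000
    return 0;
}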

5/2 is a division of two int operands, so C performs it in integer arithmetic and discards the fractional part before anything is displayed. To get 2.50 you must make at least one operand a floating-point type.

How to set the level of precision for the decimal expansion of a rational number

I'm trying to print out the decimal expansion of a rational number in C. The problem I have is that when I divide the numerator by the denominator I lose precision: C rounds the repeating part when I don't want it to.
For example, 1562/4995 = 0.3127127127... but in my program I get 1562/4995 = 0.312713. As you can see, a part of the number that I need has been lost.
Is there a way to tell C to preserve a higher level of decimal precision?
I have tried declaring the result as a double, long double and float. I also tried to split the expansion into 2 integers separated by a '.'.
However, neither method has been successful.
#include <stdio.h>

int main() {
    int numerator, denominator;
    numerator = 1562;
    denominator = 4995;
    double result;
    result = (double) numerator / (double) denominator;
    printf("%f\n", result);
    return 0;
}
I expected the output to be 1562/4995 = 0.3127127127... but the actual output is 1562/4995 = 0.312713
The %f format specifier to printf shows 6 digits after the decimal point by default. If you want to show more digits, use a precision specifier:
printf("%.10f\n", result);
Also, the double type can only accurately store roughly 16 decimal digits of precision.
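For instance, with the values from the question (a quick sketch):

#include <stdio.h>

int main(void) {
    double result = 1562.0 / 4995.0;
    printf("%.10f\n", result);  // prints 0.3127127127
    return 0;
}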
You need to change your output format, like this:
printf("%.10lf\n", result);
Note two things:
The value after the . specifies the decimal precision required (10 decimal places, here).
Note that I have added an l before the f to explicitly state that the argument is a double rather than a (single-precision) float.
EDIT: Note that, for the printf function, it is not strictly necessary to include the l modifier. However, when you come to use the corresponding scanf function for input, its absence will generate a warning and (probably) undefined behaviour:
scanf("%f", &result); // Not correct
scanf("%lf", &result); // Correct
If printing is all you want to do, you can do this with the same long division you were taught in elementary school:
#include <stdio.h>

/* Print the decimal representation of N/D with up to
   P digits after the decimal point.
*/
#define P 60

static void PrintDecimal(unsigned N, unsigned D)
{
    // Print the integer portion.
    printf("%u.", N/D);

    // Take the remainder.
    N %= D;

    for (int i = 0; i < P && N; ++i)
    {
        // Move to next digit position and print next digit.
        N *= 10;
        printf("%u", N/D);

        // Take the remainder.
        N %= D;
    }
}

int main(void)
{
    PrintDecimal(1562, 4995);
    putchar('\n');
}

Why does implicit casting from float to double return a nonsense number in my program?

I'm working on a Lab assignment for my introduction to C programming class and we're learning about casting.
As part of an exercise I had to write this program and explain the casting that happens in each exercise:
#include <stdio.h>

int main(void)
{
    int a = 2, b = 3;
    float f = 2.5;
    double d = -1.2;
    int int_result;
    float real_result;

    // exercise 1
    int_result = a * f;
    printf("%d\n", int_result);

    // exercise 2
    real_result = a * f;
    printf("%f\n", real_result);

    // exercise 3
    real_result = (float) a * b;
    printf("%f\n", real_result);

    // exercise 4
    d = a + b / a * f;
    printf("%d\n", d);

    // exercise 5
    d = f * b / a + a;
    printf("%d\n", d);

    return 0;
}
I get the following output:
5
5.000000
6.000000
1074921472
1075249152
For the last two outputs, the mathematical operations that are conducted result in float values. Since the variable they're being stored in is of the type double, the cast from float to double shouldn't affect the values, should it? But when I print out the value of d, I get garbage numbers as shown in the output.
Could someone please explain?
But when I print out the value of d, I get garbage numbers as shown in the output.
You are using %d as the format instead of %f or %lf. When the format specifier and the argument type don't match, you get undefined behavior.
%d takes an int (and prints it in decimal format).
%f takes a double.
%lf is either an error (C89) or equivalent to %f (since C99).
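So the fix for the last two exercises is only in the format string. A minimal sketch with just those two exercises, assuming everything else stays as in the question:

#include <stdio.h>

int main(void)
{
    int a = 2, b = 3;
    float f = 2.5;
    double d;

    // exercise 4, corrected specifier
    d = a + b / a * f;  // b / a is integer division (1), then 1 * 2.5f
    printf("%f\n", d);  // prints 4.500000

    // exercise 5, corrected specifier
    d = f * b / a + a;  // f * b is 7.5f, so the rest is floating point
    printf("%f\n", d);  // prints 5.750000

    return 0;
}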

Typecasting from int, float, char, double

I was trying out a few examples on the do's and don'ts of typecasting. I could not understand why the following code snippets fail to output the correct result.
/* int to float */
#include <stdio.h>
int main() {
    int i = 37;
    float f = *(float*)&i;
    printf("\n %f \n", f);
    return 0;
}
This prints 0.000000
/* float to short */
#include <stdio.h>
int main() {
    float f = 7.0;
    short s = *(float*)&f;
    printf("\n s: %d \n", s);
    return 0;
}
This prints 7
/* From double to char */
#include <stdio.h>
int main() {
    double d = 3.14;
    char ch = *(char*)&d;
    printf("\n ch : %c \n", ch);
    return 0;
}
This prints garbage
/* From short to double */
#include <stdio.h>
int main() {
    short s = 45;
    double d = *(double*)&s;
    printf("\n d : %f \n", d);
    return 0;
}
This prints 0.000000
Why does the cast from float to short give the correct result while all the other conversions give wrong results when the type is cast explicitly?
I couldn't clearly understand why this typecast of (float*) is needed instead of float:
int i = 10;
float f = (float) i; // gives the correct output: 10.000000
But,
int i = 10;
float f = *(float*)&i; // gives 0.000000
What is the difference between the above two type casts?
Why can't we use:
float f = (float**)&i;
float f = *(float*)&i;
In this example:
char ch = *(char*)&d;
You are not casting from double to a char. You are casting from a double* to a char*; that is, you are casting from a double pointer to a char pointer.
C will convert floating-point types to integer types when casting the values, but since you are casting pointers to those values instead, no value conversion is done. You get garbage because floating-point numbers are stored very differently from integers.
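The difference is easiest to see side by side (a small sketch; the variable names are mine):

#include <stdio.h>

int main(void) {
    int i = 37;

    float converted = (float)i;    // value conversion: 37 becomes 37.0f
    float punned = *(float *)&i;   // bit reinterpretation: undefined behavior

    printf("%f\n", converted);     // 37.000000
    printf("%f\n", punned);        // garbage; 0.000000 on typical IEEE-754 machines
    return 0;
}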
Read about how floating-point numbers are represented in memory; it is not the way you're expecting it to be. The cast through (float *) in your first snippet does not convert the value: it reinterprets the int's bit pattern as a float. For a small int such as 37, those bits correspond to a tiny subnormal float, which %f prints as 0.000000.
If you need to convert an int to a float, the conversion is straightforward, because of the conversion rules of C.
So, it is enough to write:
int i = 37;
float f = i;
This gives the result f == 37.0.
However, in the cast (float *)(&i), the result is an object of type "pointer to float".
In this case, the address held by the "pointer to int" &i is the same as that of the "pointer to float" (float *)(&i). However, the object pointed to by this last pointer is a float whose bits are the same as those of the object i, which is an int.
Now, the main point in this discussion is that the bit representation of objects in memory is very different for integers and for floats.
A positive integer is represented in explicit form, as its binary mathematical expression dictates.
However, floating-point numbers have a different representation, consisting of a sign, a mantissa and an exponent.
So, the bits of an object, when interpreted as an integer, have one meaning, but the same bits, interpreted as a float, have another, very different meaning.
The better question is: why does it EVER work? You see, when you do

typedef int T;    // replace with whatever
typedef double J; // replace with whatever
T s = 45;
J d = *(J*)(&s);

you are basically telling the compiler: take the T* address of s, reinterpret what it points to as a J, and then read that value. No conversion of the value (no changing of the bytes) actually happens. Sometimes, by luck, the result looks plausible (the all-zero bit pattern is 0 both as an integer and as a float), but usually you get garbage. Worse, if the sizes are not the same (like reading a double through the address of something smaller), you can read past the object into memory you don't own.
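If you really do want to reinterpret the bytes of one type as another, the well-defined way in C is memcpy rather than a pointer cast. A sketch, assuming float and unsigned int have the same size:

#include <stdio.h>
#include <string.h>

int main(void) {
    float f = 7.0f;
    unsigned int bits;

    // Copy the bytes instead of aliasing through an incompatible pointer.
    memcpy(&bits, &f, sizeof bits);  // assumes sizeof(float) == sizeof(unsigned int)
    printf("0x%08X\n", bits);        // 0x40E00000 on IEEE-754 systems
    return 0;
}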

Why dividing two integers doesn't get a float? [duplicate]

This question already has answers here:
Dividing 1/n always returns 0.0 [duplicate]
(3 answers)
Closed 9 years ago.
Can anyone explain why b gets rounded off here when I divide it by an integer although it's a float?
#include <stdio.h>

int main() {
    int a;
    float b, c, d;
    a = 750;
    b = a / 350;
    c = 750;
    d = c / 350;
    printf("%.2f %.2f", b, d);
    // output: 2.00 2.14
}
http://codepad.org/j1pckw0y
This is because of implicit conversion. The variables b, c, d are of float type, but the / operator sees two integer operands and therefore performs an integer division; the integer result is then implicitly converted to float on assignment. If you want float division, make at least one of the operands of / a float, as follows.
#include <stdio.h>

int main() {
    int a;
    float b, c, d;
    a = 750;
    b = a / 350.0f;
    c = 750;
    d = c / 350;
    printf("%.2f %.2f", b, d);
    // output: 2.14 2.14
    return 0;
}
Use casting of types:

#include <stdio.h>

int main() {
    int a;
    float b, c, d;
    a = 750;
    b = a / (float)350;
    c = 750;
    d = c / (float)350;
    printf("%.2f %.2f", b, d);
    // output: 2.14 2.14
}
This is another way to solve it:

#include <stdio.h>

int main() {
    int a;
    float b, c, d;
    a = 750;
    b = a / 350.0; // if you used 'a / 350' here,
                   // it would be a division of integers,
                   // so the result would be an integer
    c = 750;
    d = c / 350;
    printf("%.2f %.2f", b, d);
    // output: 2.14 2.14
}
However, in both cases you are telling the compiler that 350 is a float, and not an integer. Consequently, the result of the division will be a float, and not an integer.
"a" is an integer, when divided with integer it gives you an integer. Then it is assigned to "b" as an integer and becomes a float.
You should do it like this
b = a / 350.0;
Specifically, this is not rounding your result, it's truncating toward zero. So if you divide -3/2, you'll get -1 and not -2. Welcome to integral math! Back before CPUs could do floating-point operations, and before the advent of math co-processors, we did everything with integral math. Even though there were libraries for floating-point math, they were too expensive (in CPU instructions) for general purposes, so we used a 16-bit value for the whole portion of a number and another 16-bit value for the fraction.
EDIT: my answer makes me think of the classic old man saying "when I was your age..."
Chapter and verse:

6.5.5 Multiplicative operators
...
6 When integers are divided, the result of the / operator is the algebraic quotient with any fractional part discarded. [105] If the quotient a/b is representable, the expression (a/b)*b + a%b shall equal a; otherwise, the behavior of both a/b and a%b is undefined.

[105] This is often called "truncation toward zero".
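The identity from the standard is easy to check (a small demo):

#include <stdio.h>

int main(void) {
    int a = -3, b = 2;
    printf("%d %d %d\n", a / b, a % b, (a / b) * b + a % b);
    // prints: -1 -1 -3  (truncation toward zero, and the identity holds)
    return 0;
}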
Dividing an integer by an integer gives an integer result. 1/2 yields 0; assigning this result to a floating-point variable gives 0.0. To get a floating-point result, at least one of the operands must be a floating-point type. b = a / 350.0f; should give you the result you want.
Probably the best reason is that floating-point types cannot represent large integers exactly: if / always produced a float, something like 0xfffffffffffffff/15 would give you a horribly wrong answer...
Dividing two integers will result in an integer (whole number) result.
You need to cast one number to float, or add a decimal point to one of the numbers, like a/350.0.
