I'm new to the C language and I'm having trouble with this simple arithmetic operation to convert ounces to metric tons. I don't know how to fix it; it always gives me the wrong result.
#include<math.h>
#define oz 35273.92

main()
{
    int ounces;
    float mton;
    clrscr();
    printf("Enter ounces: ");
    scanf("%d",&ounces);
    mton = ounces/oz;
    printf("The metric ton is %f.", mton);
    getch();
    return(0);
}
I tried entering 70547.84 but the result is wrong.
Enter ounces: 70547.84
The metric ton is 0.014026
If I enter a number lower than oz it gives me -0.000000
Sorry, but I can't reproduce this with my compiler (GCC 6.3.0). The result I get is 1.999976, which is fairly reasonable. I'm also not getting -0.000000 with an input lower than oz.
I suggest you use a floating-point type for ounces, since you're entering a decimal number. With int, scanf can't read past the decimal point: you'll get 70547 in ounces, with .84 left in the input stream.
#include <stdio.h>

#define oz 35273.92

int main(void)
{
    float ounces;
    float mton;

    printf("Enter ounces: ");
    scanf("%f", &ounces);
    mton = ounces / oz;
    printf("The metric ton is %f.\n", mton);
    return 0;
}
This should give you the desired result.
Related
I have made a simple addition function, but I'm getting wrong summation values. Sorry if this is a naive thing to ask, but I really want to know why it is happening, and I'd like an alternative solution. Thanks in advance!
I am using Notepad++ to write the code, and the gcc compiler in the Windows command prompt to run it.
#include <stdlib.h>
#include <stdio.h>

float addit(float num1, float num2)
{
    float sumit;
    sumit = num1 + num2;
    return sumit;
}

int main()
{
    float num1, num2;
    float answer = 0.000000;
    printf("Enter first number: ");
    scanf("%f", &num1);
    printf("Enter second number: ");
    scanf("%f", &num2);
    answer = addit(num1, num2);
    printf("The summation is: %f", answer);
    return 0;
}
The expected output of the addition of 2345.34 and 432.666 is 2778.006.
But, after execution it shows 2778.006104.
Windows cmd window showing execution result as 2778.006104
Welcome to the wonderful world of floating-point values! Consider this: between 0 and 1 there are infinitely many numbers. There's 0.1, and there's 0.01, and there's 0.001, and so on. But computers do not have infinite space, so how are we supposed to store numbers? The solution is to limit the precision of the numbers we store. C's float type only has 6 or 7 significant digits of accuracy (in base 10). Because of this, you cannot guarantee that the result of a mathematical operation on floats will be 100% accurate; you can only be sure that the first 6-7 significant digits of the result will be.
This question already has answers here:
Is floating point math broken?
(31 answers)
Closed 5 years ago.
#include <stdio.h>

#define s scanf
#define p printf

void main (){
    int P,I,A;
    float T,R;
    clrscr();
    p("PRINCIPAL: ");
    s("%i",&P);
    p("RATE: ");
    s("%f",&R);
    p("TERM: ");
    s("%f",&T);
    R = R/100;
    I = P*R*T;
    A = P+I;
    p("I: %i\nMA: %i",I,A);
    getch();
}
This really bugs me. If I input PRINCIPAL: 1200, RATE: 3, TERM: 1, I get I: 35 and MA: 1235, but if you compute it manually the answer should be I: 36 and MA: 1236; the result is off by 1. Why is this happening? Why does the answer differ between the computer and manual computation?
You cast a float to an int, which causes data loss: just as you cannot store a big object in a small bag, the fractional part of a float cannot fit in an int, so it is discarded.
#include <stdio.h>

#define s scanf
#define p printf

int main (){
    int P;
    float T,R,I,A;
    p("PRINCIPAL: ");
    s("%i",&P);
    p("RATE: ");
    s("%f",&R);
    p("TERM: ");
    s("%f",&T);
    R = R/100;
    I = P*R*T;
    A = P+I;
    p("\nI: %f\nMA: %f",I,A);
    return 0;
}
Your problem is with type conversion; please read up on this subject, as it is a bit much to explain fully in a short answer here.
But I can tell you that you are probably losing information when casting from float to int, because the fractional part of the float is simply discarded.
You can see this by changing the int variables to float and running your program again, like this:
#include <stdio.h>

#define s scanf
#define p printf

int main (){
    float P,I,A;
    float T,R;
    p("PRINCIPAL: ");
    s("%f",&P);
    p("RATE: ");
    s("%f",&R);
    p("TERM: ");
    s("%f",&T);
    R = R/100;
    I = P*R*T;
    A = P+I;
    p("I: %f\nMA: %f",I,A);
    return 0;
}
This will produce your desired output, just in float.
Your problem is in the conversion of float to int. If you write your program with everything typed as float, you get the expected results:
#include <stdio.h>

#define s scanf
#define p printf

int main (){
    float P,I,A;
    float T,R;
    p("PRINCIPAL: ");
    s("%f",&P);
    p("RATE: ");
    s("%f",&R);
    p("TERM: ");
    s("%f",&T);
    R = R/100;
    I = P*R*T;
    A = P+I;
    p("I: %f\nMA: %f",I,A);
    return 0;
}
outputs:
PRINCIPAL: 1200
RATE: 3
TERM: 1
I: 36.000000
MA: 1236.000000
However, when you convert your float values to int, you just take the integer part; everything to the right of the decimal point gets deleted. So, even though it's printing as 36.000000 when I do it, it's possible that the value of I may be coming out to something like 35.9999999, due to imprecision in floating-point math, and simply displaying as 36.000000 due to rounding in the display process. In this case, you'll just get the 35, and lose everything else.
To solve your problem, either leave everything as a float, or convert your floats to ints by rounding them—for example, by using lroundf in math.h—instead of just casting them.
This question already has answers here:
How do I restrict a float value to only two places after the decimal point in C?
(17 answers)
Closed 6 years ago.
I've just started a class in C programming, and while I have some background knowledge in Java, I'm trying to transition to this language. I have a project where I have to round the user's input from something like 1.3333 to only two decimal places.
What I have so far is this:
#include <stdio.h>

int main (void)
{
    //v is my variable for the value which the user will input
    //Declaring variable as floating
    float v;
    printf("Enter your value: \n");
    scanf("%.2f", &v);
    v = 0;
    printf("The rounded version is: %.2f");
    return 0;
}
This is what I have so far, based on what I've read in my book and this link: Rounding Number to 2 Decimal Places in C, which my question differs from because it involves user input. My professor says I can't use a library function and need to use simple type casts to calculate it. That makes me feel that what I have might be wrong. Would #include <stdio.h> be considered a library function? Given this information, is my thought process on the right track? If not, would I do something like divide the variable by 100? Maybe %e for scientific notation?
Thanks ahead of time! Only asking for specific information, not coding or anything. Really want to understand the "hows" and "whys".
First of all, #include is a preprocessor directive that you need in order to use the functions C provides; for scanf, for example, you need to include the stdio.h header.
To round the number to two decimals without using %.2f in scanf you could write:
int x = (v * 1000);                  /* keep three decimal digits       */
if (x % 10 >= 5) x = x / 10 + 1;     /* round the third digit half up   */
else x = x / 10;
printf("%d.%02d", x / 100, x % 100); /* %02d keeps a leading zero       */
I think your professor is aiming not so much at user input as at understanding what happens when converting between basic data types. Rounding, or at least cutting off digits, without library functions could look as follows:
#include <stdio.h>

int main (void)
{
    //v is my variable for the value which the user will input
    //Declaring variable as floating
    float v;
    printf("Enter your value: \n");
    scanf("%f", &v);
    v = (float)((int)(v * 100)) / 100;  // truncates rather than rounds
    printf("The rounded version is: %f", v);
    return 0;
}
Input/Output:
Enter your value:
1.3333333
The rounded version is: 1.330000
Here is a working example that rounds properly without using any library calls other than stdio.
#include <stdio.h>

int main (void)
{
    float v;
    printf("Enter your value: \n");
    scanf("%f", &v);
    v = (float)((int)(v * 100 + .5) / 100.0);
    printf("The rounded version is: %f\n",v);
    return 0;
}
Output:
jnorton@mint18 ~ $ ./a.out
Enter your value:
3.456
The rounded version is: 3.460000
This is my code:
#include <stdio.h>

float aveg(a,b){
    float result;
    result=(a+b)/2;
    return result;
}

int main(void) {
    float a,b,avg;
    printf("first no: ");
    scanf("%f",&a);
    printf(" %f\n",a);
    printf("second no: ");
    scanf("%f",&b);
    printf(" %f\n",b);
    avg=(a+b)/2;
    printf("average is: ");
    printf("%.2f", avg);
    avg=aveg(a,b);
    printf("\n average from function is: ");
    printf("%.2f",avg);
}
This is my output...
first no: 3
3.000000
second no: 5
5.000000
average is: 4.00
average from function is: 537133056.00
Can someone explain why I get such a different number from the float function?
I tried declaring the function:
float avg(float a, float b);
But the compiler just gave me errors...
Any idea?
The quickest change is to add a single dot (will explain in a sec):
result=(a+b)/2.;
Both a and b are declared as int (by default, since you didn't specify any type), so (a+b) results in an integer which is then divided by another integer.
To make the result of your division a float you should make sure at least one operand is of float type, which is why the dot in 2. helps (it's just a short version of 2.0). Note this alone won't fix the garbage value, though: that comes from passing float arguments to parameters that default to int.
The real fix, of course, is to properly declare the data types of your function parameters:
float aveg(float a, float b){
This will work if you have no other syntax errors in your code.
Here is the code as written in Visual Studio:
#include <stdio.h>

void main()
{
    int n,i,num,s;
    float av;
    printf("How Many numbers?");
    scanf("%d",&n);
    s=0;
    for(i=1;i<=n;i++){
        printf("enter number #%d : ",i);
        scanf("%d", &num);
        s=s+num;
    }
    av=s/n;
    printf("The Average is %f",av);
    getchar();
}
I really don't know why it isn't displaying the right average :/
The problem is here: av=s/n; you are storing the result of an integer division into a float, so the fractional part is lost before the assignment ever happens. A simple solution: use a typecast:
av=(float)s/n;
or
av=s/(float)n;
Another alternative: make either s or n a float.
av=s/n; Look up "integer division". You probably want to use av=(float)s/n;
Division of two integer values doesn't automatically convert to a float value, unless you use a cast.