Don't get the right result in C - c

I need the result of this variable in a program, but I don't understand why I can't get the right result.
double r = pow((3/2), 2) * 0.0001;
printf("%f", r);

The problem is integer division, where the fractional part (remainder) is discarded.
Try:
double r = pow((3.0/2.0), 2) * 0.0001;
The first argument of pow() expects a double. Because the ratio 3/2 uses integer operands, the value passed to the function is 1. By changing to floating-point literals, the division retains its fractional part, and the argument becomes 1.5, the form expected by the function.

(3/2) involves two integers, so it's integer division, with the result 1. What you want is floating-point (double) division, so coerce the division to use doubles by writing it as (3.0/2.0).
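Putting it together, a minimal sketch of the corrected program (pow needs <math.h>):
#include <stdio.h>
#include <math.h>

int main(void)
{
    double r = pow(3.0 / 2.0, 2) * 0.0001;   // 1.5 squared, times 0.0001
    printf("%f\n", r);                        // prints 0.000225
    return 0;
}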

Related

How to round a float by casting as an int in C

So I am a second-semester freshman in college. My teacher wants us to write a function that rounds a floating-point number to the nearest hundredth. He said that we need to convert the floating-point value into an integer data type and then convert it back to a floating-point value. That's all he said. I have spent at least 5 hours trying different ways to do this.
This is my code so far:
#include <stdio.h>

int rounding(int roundedNum);

int main()
{
    float userNum,
          rounded;

    printf("\nThis program will round a number to the nearest hundredths\n");
    printf("\nPlease enter the number you want rounded\n>");
    scanf("%f", &userNum);

    rounded = rounding(userNum);

    printf("%f rounded is %f\n", userNum, rounded);
    return 0;
}

int rounding(int roundedNum)
{
    return roundedNum;
}
Your instructor may be thinking:
float RoundHundredth(float x)
{
    // Scale the hundredths place to the integer place.
    float y = x * 100;

    // Add .5 to cause rounding when converting to an integer.
    y += .5f;

    // Convert to an integer, which truncates.
    int n = y;

    // Convert back to float, undo scaling, and return.
    return n / 100.f;
}
This is a flawed solution because:
Most C implementations use binary floating point. In binary floating-point, it is impossible to store any fractions that are not multiples of a negative power of two (½, ¼, ⅛, 1/16, 1/32, 1/64,…). So 1/100 cannot be exactly represented. Therefore, no matter what calculations you do, it is impossible to return exactly .01 or .79. The best you can do is get close.
When you perform arithmetic on floating-point numbers, the results are rounded to the nearest representable value. This means that, in x * 100, the result is, in general, not exactly 100 times x. There is a small error due to rounding. This error can push the value across the point where rounding changes from one direction to another, so it can make the answer wrong. There are techniques for avoiding this sort of error, but they are too complicated for introductory classes.
There is no need to convert to an integer to get truncation; C has a truncation function for floating-point built-in: trunc for double and truncf for float.
Additionally, the use of truncation in converting to integer compelled us to add ½ to get rounding instead. But, once we are no longer using a conversion to an integer type to get an integer value, we can use the built-in function for rounding floating-point values to integer values: round for double and roundf for float.
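For example, a minimal sketch of that approach (the function name is my own; it requires <math.h> and, on many systems, linking with -lm):
#include <math.h>

// Round to the nearest hundredth using the standard roundf, which rounds
// halfway cases away from zero; no conversion to an integer type is needed.
float RoundHundredthStd(float x)
{
    return roundf(x * 100) / 100.f;
}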
If your C implementation has good formatted input/output routines, then an easy way to find the value of a floating-point number rounded to the nearest hundredth is to format it (as with snprintf) using the conversion specifier %.2f. A proper C implementation will convert the number to decimal, with two digits after the decimal point, using correct rounding that avoids the arithmetic rounding errors mentioned above. However, then you will have the number in string form.
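A minimal sketch of that technique; converting back with strtof is my own addition, and the returned value is again only the nearest representable float:
#include <stdio.h>
#include <stdlib.h>

// Format with two digits after the decimal point (correctly rounded decimal),
// then convert the string back to a float if a numeric value is needed.
float RoundHundredthViaFormat(float x)
{
    char buf[64];                           // assumed large enough for %f output of a float
    snprintf(buf, sizeof buf, "%.2f", x);
    return strtof(buf, NULL);
}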
Here are some hints:
Multiply the float by "some power of 10" so that the digits you need are shifted to the left of the decimal point
Cast the new value to a new int variable so the unwanted fractional digits are discarded
Divide the int by the same power of 10, but use a float form of it (e.g. 10.0) so the integer gets converted back to float and the new value is the correct one
To test, use printf with the precision specifier (%.2f)
The two most common methods of rounding are "Away From Zero" and "Banker's Rounding (To Even)".
Pseudo-code for Rounding Away From Zero
EDIT Even though this is pseudo-code, I should have included the accounting for precision, since we are dealing with floating-point values here.
// this code is fixed for 2 decimal places (n = 2) and
// an expected precision limit of 0.001 (m = 3)
// for any values of n and m, the first multiplicand is 10^(n+1)
// the first divisor is 10^(m + 1), and
// the final divisor is 10^(n)
double roundAwayFromZero(double value) {
boolean check to see if value is a negative number
add precision bumper of (1.0 / 10000) to "value" (subtract it instead if "value" is negative, so the bump is away from zero) // 10000.0 is 10^4
multiply "value" by 1000.0 and cast to (int) // 1000.0 is 10^3
if boolean check is true, negate the integer to positive
add 5 to integer result, and divide by 10
if boolean check is true, negate the integer again
divide the integer by 100.0 and return as double // 100.0 is 10^2
ex: -123.456
true
-123.456 + (1.0 / 10000.0) => -123.4561
-123.4561 * 1000.0 => -123456.1 => -123456 as integer
true, so => -(-123456) => 123456
(123456 + 5) / 10 => 123461 / 10 => 12346
true, so => -(12346) => -12346
-12346 / 100.0 => -123.46 ===> return value
}
In your initial question, you expressed a desire for direction only, not the explicit answer in code. This is as vague as I can manage to make it while still making any sense. I'll leave the "Banker's Rounding" version for you to implement as an exercise.
OK, so I figured it out! Thank y'all for your answers.
//function
float rounding(float roundedNumber)
{
    roundedNumber = roundedNumber * 100.0f + 0.5f;
    roundedNumber = (int) roundedNumber * 0.01f;
    return roundedNumber;
}
So pretty much, if I entered 56.12567 as roundedNumber, it would multiply by 100, yielding 5612.567. From there it would add .5, which determines whether it rounds up; in this case it does, and the number changes to 5613.067.
Then you truncate it by converting it to an int and multiply by .01 to move the decimal point back. From there it returns the value to main and prints out the rounded number. Pretty odd way of rounding, but I guess that's how you do it in C without using a rounding function.
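For reference, a complete minimal program built around that function; note the prototype is changed from the question's original int version to float, otherwise the fraction would be lost before the function even runs:
#include <stdio.h>

float rounding(float roundedNumber);   // prototype takes and returns float, not int

int main(void)
{
    printf("%f\n", rounding(56.12567f));   // prints roughly 56.130000 (subject to float rounding)
    return 0;
}

float rounding(float roundedNumber)
{
    roundedNumber = roundedNumber * 100.0f + 0.5f;
    roundedNumber = (int) roundedNumber * 0.01f;
    return roundedNumber;
}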
Well, let's think about it. One thing that's helpful to know is that we can turn a float into an integer by casting:
float x = 5.4;
int y = (int) x;
//y is now equal to 5
When we cast, the float is truncated, meaning that whatever comes after the decimal point is dropped, regardless of its value (i.e. It always rounds towards 0).
So if you think about that and the fact that you care about the hundredths place, you could maybe imagine an approach that consists of manipulating your floating-point number in some way such that when you cast it to an int you only truncate information you don't care about (i.e. digits past the hundredths place). Multiplying might be useful here.

Trying to print the answer to an equation and getting zero in C.

printf("Percent decrease: ");
printf("%.2f", (float)((orgChar-codeChar)/orgChar));
I'm using this statement to print some results to my command console; however, I end up with zero. Putting the equation into another variable doesn't work either.
With orgChar = 91 and codeChar = 13, how do I print out this equation?
Integer division leads to a result of 0 here, and you are casting the result to float only afterwards, so you end up with 0.
Make any one of the operands float before the division:
(orgChar-codeChar)/(float)orgChar
As others have mentioned, the subtraction and division are done using integer math before the cast to (float). By that point, the integer division has a truncated result of 0. Instead:
// (float)((orgChar-codeChar)/orgChar)
((float) orgChar - codeChar)/orgChar
// or
(orgChar - codeChar)/ (float) orgChar
Since a float argument gets converted to double as part of the default argument promotions applied to a variadic function like printf(), you might as well do
printf("%.2f", (orgChar-codeChar)/ (double) orgChar);
Casting, in general, should be avoided, because some casts unintentionally narrow the operation. In the example below, if unsigned is 32-bit and a1 is uint64_t, then the cast narrows a1 before the shift and unexpected results may occur. If a1 is a char, it is converted to unsigned without trouble.
The second method, multiplying by 1u, does not narrow: it ensures a2*1u is at least the width of an unsigned.
unsigned sh1 = (unsigned) a1 >> b1; // avoid
unsigned sh2 = a2*1u >> b2; // better
So I recommend, rather than casting to (float) or (double), using the idiom of multiplying by 1.0:
printf("%.2f", (orgChar - codeChar) * 1.0 / orgChar);
You don't need to cast the whole expression. You can simply cast either the numerator or the denominator to get a float result with a precision of 2 decimal places.
For example, defining the result variable as float doesn't guarantee the result will be float; to get the precise result you need to cast either the numerator or the denominator.
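To illustrate, a small sketch with hypothetical values (num, den, and the values 7 and 2 are mine, not from the original code):
#include <stdio.h>

int main(void)
{
    int num = 7, den = 2;                    // hypothetical values
    printf("%.2f\n", (float) num / den);     // 3.50: numerator cast, float division
    printf("%.2f\n", num / (float) den);     // 3.50: denominator cast, float division
    printf("%.2f\n", (float)(num / den));    // 3.00: integer division happens first
    return 0;
}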
You shouldn't need to cast to float at all. Simply make sure both variables are of type float or double before attempting to print them as floats. This means either declaring the variables as floats, or using the correct conversion function, such as atof(), when converting the data to floats (normally this is done when you get the data from the command line or a file).
This should work...
#include <stdio.h>

int main(void)
{
    float orgChar = 91;
    float codeChar = 13;

    printf("%.2f\n", (orgChar - codeChar) / orgChar);
    return 0;
}

Why do we separately cast to "float" in an integer division?

For example:
int number1 = 1, number2= 2;
float variable = (float)number1/(float)number2;
Instead of this, Why can't we use "float" only once? For example:
int number1 = 1, number2= 2;
float variable = (float)(number1/number2);
The objective is to avoid the truncation that comes with integer division. This requires that at least one of the operands of the division be a floating point number. Thus you only need one cast to float, but in the right place. For example,
float variable = number1/(float)number2; // denominator is float
or
float variable = ((float)number1)/number2; // numerator is float
Note that in the second example, one extra set of parentheses has been added for clarity, but due to precedence rules, it is the same as
float variable = (float)number1/number2; // numerator is float, same as above
Also note that in your second example,
float variable = (float)(number1/number2);
the cast to float is applied after the integer division, so this does not avoid truncation. Since the result of the expression is assigned to a float anyway, it is exactly equivalent to
float variable = number1/number2;
You can write either expression, but you get different results.
With float variable = (float)(number1 / number2); the value in variable is 0, because the division is done as integer division, and 1/2 is 0, and the result is converted.
With float variable = (float)number1 / (float)number2;, the value in variable is 0.5, because the division is done as floating point division.
Either one of the casts in float variable = (float)number1 / (float)number2; can be omitted and the result is the same; the other operand is converted from int to float before the division occurs.
Since number1 and number2 are ints, the division performed will be integral division. Thus, number1/number2 will evaluate to the int 0. To do floating point arithmetic instead, you need to cast them. Note that simply casting one will suffice, since the other one will be implicitly promoted. So, you can just say ((float)number1)/number2.
In the second form, where the cast comes after the division,
1/2 results in 0.
You can use float only once, but it has to be applied to one of the numbers before the division:
1.0/2.0, 1.0/2, and 1/2.0 all result in 0.5.
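To see both behaviors side by side, a minimal sketch:
#include <stdio.h>

int main(void)
{
    int number1 = 1, number2 = 2;
    float after  = (float)(number1 / number2);       // 0.000000: integer division happens first
    float before = (float)number1 / (float)number2;  // 0.500000: floating-point division
    printf("%f %f\n", after, before);
    return 0;
}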

Average of an array displays correctly only if cast to float

So I have a pretty noobish question. Although I declare the average as a float, when I calculate it with avg = sum / counter;, where counter is the number of elements bigger than 0 in an array, and then print it, I get only 0s after the decimal point.
However if I calculate it by casting to a float, avg = (float) sum/counter;, the average is printed out correctly.
Shouldn't the first one be correct? If I declare a variable as a float, why should I cast it later to a float again?
When you declare
int sum;
int counter;
...
then sum / counter performs an integer division, resulting in an integer value. You can still assign that result to a float variable, but the value will remain the integer part only.
To solve this, you need to cast either sum or counter to a float - only then do you get a floating-point result:
float result = (float) sum / counter;
This is, by the way, the same as ((float) sum) / counter, which means the cast as you wrote it applies to sum.
The cast as you wrote it applies to sum, not to avg. The result you obtain is perfectly normal if sum is an integer type.
The operator / applied to two integers performs integer division. The operator / applied to two floating point values performs floating point division. If one value is floating point, the other is promoted to floating point. But if both are integer, integer division is performed.
Later, the result of this operation, which already exists in memory, is assigned to another variable. But that is another story. You need to get the correct result in the first place, and then you can assign it to a variable which will store it.
The problem is that sum is of type int, so everything after the decimal point is discarded; it is not automatically converted. So either declare sum as float or manually cast one of the operands to float.
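A minimal sketch with hypothetical values for sum and counter (the original array code is not shown in the question):
#include <stdio.h>

int main(void)
{
    int sum = 7, counter = 2;            // hypothetical values, e.g. accumulated from an array
    float avg1 = sum / counter;          // 3.000000: integer division, then conversion to float
    float avg2 = (float) sum / counter;  // 3.500000: sum converted first, floating-point division
    printf("%f %f\n", avg1, avg2);
    return 0;
}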

Can't figure out the way the compiler does the calculation

Given is a C program in which I am trying to figure out how the calculation order goes; I thought it should be 3/2 first and then *5, or the other way round. But it gives an unexpected output of
5.000000
#include <stdio.h>

int main(void) {
    // your code goes here
    float a = 3/2*5;
    printf("%f", a);
    return 0;
}
This is expected.
It calculates 3/2 first (as an integer division), which truncates down to 1. Then it multiplies by 5.
Try casting the numbers to (float) in your calculation - then you'll get the expected answer.
As suggested by damienfrancois, you can also get the compiler to treat them as floating point numbers as follows:
float a = 3.0/2.0*5;
In general, if you don't give any indication otherwise (such as the .0, or a cast), the compiler will treat the numbers as integers.
The line
float a = 3/2*5;
computes a as the integer division of 3 by 2, which is 1, then multiplies it by 5 and converts the result to float.
Replace it with
double a = 3.0/2.0*5;
or
float a = 3.0f/2.0f*5;
and you'll get 7.500000
It divides the integer 3 by the integer 2, then multiplies by the integer 5, and then converts to a float.
Try float a = 3.f/2*5;
The multiplication and division operators have equal precedence. Since both operators are left-to-right associative, the integer division (3/2) is performed first, resulting in 1, followed by the multiplication by 5. Read up on operator associativity in the C language:
http://en.wikipedia.org/wiki/Operators_in_C_and_C%2B%2B#Operator_precedence
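A minimal sketch comparing the two forms:
#include <stdio.h>

int main(void)
{
    float a = 3 / 2 * 5;      // (3 / 2) * 5 = 1 * 5, then converted to float: 5.000000
    float b = 3.0f / 2 * 5;   // (3.0f / 2) * 5 = 1.5f * 5: 7.500000
    printf("%f %f\n", a, b);
    return 0;
}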
