C - pow function unexpected behavior

This is a program to find the number of digits in a number.
Here is my code:
#include <stdio.h>
#include <stdlib.h>
#include <math.h>

int main() {
    int i = 0, n;
    printf("Enter number: ");
    scanf("%d", &n);
    while ((n / pow(10, i)) != 0) {
        i++;
    }
    printf("%d", i);
}
This program gives 309 as the output (the value of i) on any input. However, if I store the value of pow(10, i) in another variable and use that in the while loop, I get the correct output. Please help!

C (like C++) uses the most precise type involved when evaluating a mixed-type expression, and here you are effectively mixing a double with an int: pow returns a double, so n / pow(10, i) is computed in double. You are dividing the user's input by an ever larger power of ten.
In exact arithmetic that quotient would never be zero, no matter how big the divisor gets, so in theory the loop would never end. However, due to the limits of a double (which is what pow returns), the divisor eventually overflows to infinity and the quotient finally compares equal to zero.
If you store the return value of pow in an integer variable, you are no longer mixing types, and integer division kicks in: any integer divided by a larger integer is zero. That is what effectively works out the number of digits, i.e. the approximate base-10 logarithm.

Change:
    while ((n / pow(10, i)) != 0) {
To:
    while ((n / pow(10, i)) >= 1) {
pow returns a double, so the whole expression is evaluated as a double, and the fractional quotient stays non-zero until pow(10, i) runs out of exponent range. The largest finite double is about 1.8 × 10^308 (the binary exponent tops out at 1024), so at i = 309 pow(10, i) overflows to infinity, the quotient becomes zero, and the loop stops. That is why you always see 309.


Determining the number of decimal digits in a floating number

I am trying to write a program that outputs the number of the digits in the decimal portion of a given number (0.128).
I made the following program:
#include <stdio.h>
#include <math.h>

int main() {
    float result = 0;
    int count = 0;
    int exp = 0;
    for (exp = 0; (int)(1 + result) % 10 != 0; exp++)
    {
        result = 0.128 * pow(10, exp);
        count++;
    }
    printf("%d \n", count);
    printf("%f \n", result);
    return 0;
}
What I had in mind was that exp keeps being incremented until (int)(1 + result) % 10 outputs 0. So, for example, when result = 0.128 * pow(10, 4) = 1280, result mod 10 ((int)(1 + result) % 10) will output 0 and the loop will stop.
I know that on a bigger scale this method is still inefficient since if result was a given input like 1.1208 the program would basically stop at one digit short of the desired value; however, I am trying to first find out the reason why I'm facing the current issue.
My Issue: The loop won't just stop at 1280; it keeps looping until its value reaches 128000000.000000.
Here is the output when I run the program:
10
128000000.000000
Apologies if my description is vague, any given help is very much appreciated.
I am trying to write a program that outputs the number of the digits in the decimal portion of a given number (0.128).
This task is basically impossible, because on a conventional (binary) machine the goal is not meaningful.
If I write
float f = 0.128;
printf("%f\n", f);
I see
0.128000
and I might conclude that 0.128 has three digits. (Never mind about the three 0's.)
But if I then write
printf("%.15f\n", f);
I see
0.128000006079674
Wait a minute! What's going on? Now how many digits does it have?
It's customary to say that floating-point numbers are "not accurate" or that they suffer from "roundoff error". But in fact, floating-point numbers are, in their own way, perfectly accurate — it's just that they're accurate in base two, not the base 10 we're used to thinking about.
The surprising fact is that most decimal (base 10) fractions do not exist as finite binary fractions. This is similar to the way that the number 1/3 does not even exist as a finite decimal fraction. You can approximate 1/3 as 0.333 or 0.3333333333 or 0.33333333333333333333, but without an infinite number of 3's it's only an approximation. Similarly, you can approximate 1/10 in base 2 as 0b0.00011 or 0b0.000110011 or 0b0.000110011001100110011001100110011, but without an infinite number of 0011's it, too, is only an approximation. (That last rendition, with 33 bits past the binary point, works out to about 0.0999999999767.)
And it's the same with most decimal fractions you can think of, including 0.128. So when I wrote
float f = 0.128;
what I actually got in f was the binary number 0b0.00100000110001001001101111, which in decimal is exactly 0.12800000607967376708984375.
Once a number has been stored as a float (or a double, for that matter) it is what it is: there is no way to rediscover that it was initially initialized from a "nice, round" decimal fraction like 0.128. And if you try to "count the number of decimal digits", and if your code does a really precise job, you're liable to get an answer of 26 (that is, corresponding to the digits "12800000607967376708984375"), not 3.
P.S. If you were working with computer hardware that implemented decimal floating point, this problem's goal would be meaningful, possible, and tractable. And implementations of decimal floating point do exist. But the ordinary float and double values any of us is likely to use on today's common, mass-market computers are invariably going to be binary (specifically, conforming to IEEE-754).
P.P.S. Above I wrote, "what I actually got in f was the binary number 0b0.00100000110001001001101111". And if you count the number of significant bits there — 100000110001001001101111 — you get 24, which is no coincidence at all. You can read at single precision floating-point format that the significand portion of a float has 24 bits (with 23 explicitly stored), and here, you're seeing that in action.
float vs. code
A binary float cannot encode 0.128 exactly as it is not a dyadic rational.
Instead, it takes on a nearby value: 0.12800000607967376708984375. 26 digits.
Rounding errors
OP's approach incurs rounding errors in result = 0.128 * pow(10, exp);.
Extended math needed
The goal is difficult. Example: FLT_TRUE_MIN takes about 149 digits.
We could use double or long double to get us somewhat there.
Simply multiply the fraction by 10.0 in each step.
d *= 10.0; still incurs rounding errors, but less so than OP's approach.
#include <stdio.h>
#include <math.h>

int main() {
    int count = 0;
    float f = 0.128f;
    double d = f - trunc(f);
    printf("%.30f\n", d);
    while (d) {
        d *= 10.0;
        double ipart = trunc(d);
        printf("%.0f", ipart);
        d -= ipart;
        count++;
    }
    printf("\n");
    printf("%d \n", count);
    return 0;
}
Output
0.128000006079673767089843750000
12800000607967376708984375
26
Usefulness
Past FLT_DECIMAL_DIG (9) or so significant decimal places, OP's goal is usually not that useful.
As others have said, the number of decimal digits is meaningless when using binary floating-point.
But you also have a flawed termination condition. The loop test is (int)(1 + result) % 10 != 0, meaning that it will stop whenever we reach an integer whose last digit is 9.
That means that 0.9, 0.99 and 0.9999 all give a result of 2.
We also lose precision by truncating the double value we start with by storing into a float.
The most useful thing we could do is terminate when the remaining fractional part is less than the precision of the type used.
Suggested working code:
#include <math.h>
#include <float.h>
#include <stdio.h>

int main(void)
{
    double val = 0.128;
    double prec = DBL_EPSILON;
    double result;
    int count = 0;
    while (fabs(modf(val, &result)) > prec) {
        ++count;
        val *= 10;
        prec *= 10;
    }
    printf("%d digit(s): %0*.0f\n", count, count, result);
}
Results:
3 digit(s): 128

Why is pow() function in C giving wrong answer when it is odd exponential of 10 in a loop? [duplicate]

This question already has answers here:
Why pow(10,5) = 9,999 in C++
(8 answers)
Closed 2 years ago.
#include <stdio.h>
#include <math.h>

int main()
{
    int loop, place_value = 0, c = 5;
    for (loop = 0; loop < c; loop++)
    {
        place_value = 0;
        place_value = pow(10, loop);
        printf("%d \n", place_value);
    }
    return 0;
}
This code gives
1
10
99
1000
9999
Why are 99 and 9999 there on the 3rd and 5th lines instead of 100 and 10000?
When asking for the power directly, it gives the right answer.
#include <stdio.h>
#include <math.h>

int main()
{
    printf("%d", (int) pow(10, 3));
    return 0;
}
1000
pow is a difficult routine to implement, and not all implementations give good results. Roughly speaking, the core algorithm for pow(x, y) computes a logarithm from (a part of) x, multiplies it by y, and computes an exponential function on the product. Doing this in floating-point introduces rounding errors that are hard to control.
The result is that the computed result for pow(10, 4) may be something near 10,000 but slightly less or greater. If it is less, then converting it to an integer yields 9999.
When you use arguments hard-coded in source code, the compiler may compute the answer during compilation, possibly using a different algorithm. For example, when y is three, it may simply multiply the first argument by itself, as in x*x*x, rather than using the logarithm-exponent algorithm.
As for why the low result happens with the odd numbers you have tested, consider what happens when we multiply 5.45454545 by various powers of 10 and round to an integer. 5.45454545 rounds down to 5. 54.5454545 rounds up to 55. 545.454545 rounds down to 545. The rounding up or down is a consequence of what fraction happens to land beyond the decimal point. For your cases with pow(10, loop), the bits of the logarithm of 10 may just happen to give this pattern with the few odd numbers you tried.
The pow(x, y) function translates more or less to exp(log(x) * y), which gives a result that is not quite the same as x^y.
In order to solve this issue you can round the result:
round(pow(x, y))
The rule of thumb: never use floating-point functions (especially complicated ones like pow or log) with integer numbers.
Simply implement an integer power of 10:
unsigned intpow(unsigned x)
{
    unsigned result = 1;
    while (x--) result *= 10;
    return result;
}
It will be much faster. Or, fastest of all, use a lookup table:
int intpow1(unsigned x)
{
    static const unsigned vals[] = {1, 10, 100, 1000, 10000, 100000,
                                    1000000, 10000000, 100000000, /* ... */};
#if defined(CHECK_OVERFLOW)
    if (x >= sizeof(vals) / sizeof(vals[0])) return -1;
#endif
    return vals[x];
}

I've made a program in C that takes two inputs, x and n, and raises x to the power of n. 10^10 doesn't work, what happened?

#include <cs50.h>
#include <stdio.h>
#include <stdlib.h>
#include <math.h>

float isEven(int n)
{
    return n % 2 == 0;
}

float isOdd(int n)
{
    return !isEven(n);
}

float power(int x, int n)
{
    // base case
    if (n == 0)
    {
        return 1;
    }
    // recursive case: n is negative
    else if (n < 0)
    {
        return (1 / power(x, -n));
    }
    // recursive case: n is odd
    else if (isOdd(n))
    {
        return x * power(x, n - 1);
    }
    // recursive case: n is positive and even
    else if (isEven(n))
    {
        int y = power(x, n / 2);
        return y * y;
    }
    return true;
}

int displayPower(int x, int n)
{
    printf("%d to the %d is %f", x, n, power(x, n));
    return true;
}

int main(void)
{
    int x = 0;
    printf("What will be the base number?");
    scanf("%d", &x);
    int n = 0;
    printf("What will be the exponent?");
    scanf("%d", &n);
    displayPower(x, n);
}
For example, here is a pair of inputs that works:
./exponentRecursion
What will be the base number?10
What will be the exponent?9
10 to the 9 is 1000000000.000000
But this is what I get for 10^10:
./exponentRecursion
What will be the base number?10
What will be the exponent?10
10 to the 10 is 1410065408.000000
Why does this write such a weird number?
BTW, 10^11 returns 14100654080.000000, exactly ten times the above.
Perhaps it may be that there is some "Limit" to the data type that I am using? I am not sure.
Your variable x is an int type, as is the intermediate int y inside power. The most common internal representation of an int is 32 bits. That is a signed binary number, so only 31 bits are available for representing a magnitude, with the usual maximum positive int value being 2^31 - 1 = 2,147,483,647. Anything larger than that will overflow, giving a smaller magnitude and possibly a negative sign.
For a greater range, you can change those types to long long (usually 64 bits--about 19 digits) or double (usually 64 bits, with a 53-bit significand for about 15 digits).
(Warning: Many implementations use the same representation for int and long, so using long might not be an improvement.)
A float only has enough precision for about 7 decimal digits. Any number with more digits than that will only be an approximation.
If you switch to double you'll get about 16 digits of precision.
When you start handling large numbers with the basic data types in C, you can run into trouble.
Integral types have a limited range of values (such as about 4×10^9 for a 32-bit unsigned integer). Floating-point types have a much larger range (though not infinite) but limited precision. For example, IEEE 754 double precision can give you about 16 decimal digits of precision in the range ±10^308.
To recover both of these aspects, you'll need to use a bignum library of some sort, such as MPIR.
If you mix different data types in a C program, the compiler performs several implicit conversions. Since there are strict rules for how the compiler works, one can figure out exactly what happens to your program and why.
As I do not know all of these conversion rules by heart, I did the following: estimate the maximum precision needed for the biggest result, then explicitly cast every variable and function in the expression to that precision, even where it is not strictly necessary. Normally this works as a workaround.

How to get total numbers of digits used in an integer in c language?

We all know that we can get the number of characters in a string using the strlen() function, but if I want to get the number of digits in an integer, how can I do it?
For example, if 1000 is stored in an integer variable a, I want to get the length of the number, which is 4.
If you know the number is positive, or you want the minus sign to be included in the count:
snprintf(NULL, 0, "%d", a);
If you want the number of digits in the absolute value of the number:
snprintf(NULL, 0, "%+d", a) - 1;
The key to both these solutions is that snprintf returns the number of bytes it would have written had the buffer been large enough to write into. Since we're not giving it a buffer, it needs to return the number of bytes which it would have written, which is the size of the formatted number.
This may not be the fastest solution, but it does not suffer from inaccuracies in log10 (if you use log10, remember to round rather than truncate to an integer, and beware of large integers which cannot be accurately converted to a double because they would require more than 53 mantissa bits), and it doesn't require writing out a loop.
What you're asking for is the number of decimal digits. Integers are stored in a binary format, so there's no built-in way of determining that. What you can do to count the decimal digits is count how many times you can divide by 10.
int count = 1;
while (a >= 10) {
    a /= 10;
    count++;
}
Not really sure if I got your question right, but given your example, as far as I can tell you want to know how many digits a given number has (for example: 1 has one digit, 10 has two digits, 100 has three digits, and so on). You can achieve that with something like this:
#include <stdio.h>

int main(void) {
    int numberOfDigits = 0, value;
    scanf("%d", &value); // you scan a value
    if (value == 0)
        numberOfDigits = 1;
    else {
        while (value != 0) {
            numberOfDigits++;
            value = value / 10;
        }
    }
    printf("Number of digits: %d\n", numberOfDigits);
    return 0;
}

Is there a maximum length of integers I can enter for C?

For the code below, I am trying to sum up the digits of an integer, e.g. if I enter 1234, I get the answer 1 + 2 + 3 + 4 = 10.
This works for integers up to 10 digits long. After that, if I enter an integer with 11 digits like 12345678912, it returns a negative answer.
Could anyone help explain why this is so? And is there any way I can get around it?
Thank you!
#include <stdio.h>

int main(void)
{
    int number, single_digit, sum;
    printf("What number would you like to sum:\n");
    scanf("%i", &number);
    sum = 0;
    while (number != 0)
    {
        single_digit = number % 10;
        sum += single_digit;
        number = number / 10;
    }
    printf("The sum of the number is %i.\n", sum);
    return 0;
}
Yes, the maximum value an integer can hold is INT_MAX (whose value depends on your platform).
An unsigned int can hold larger (positive) numbers, up to UINT_MAX.
You may be able to fit more in unsigned long or unsigned long long - again, the details are platform-specific. After that, you're looking for a bignum library.
NB. since you just want to sum the digits, using something like haccks' approach is much simpler, and less likely to overflow. It's still possible, though.
The maximum limit for an int is INT_MAX. You are getting a negative value because 12345678912 doesn't fit in the range of int and causes integer overflow.
Better to change your main's body to:
sum = 0;
int ch;
printf("Enter the number you would like to sum:\n");
while ((ch = getchar()) != '\n' && ch != EOF)
{
    sum += ch - '0';
}
printf("The sum of the number is %i.\n", sum);
Since getchar reads a single character at a time, you will get the desired output by adding each digit character's value to sum.
This is happening because an int is only 4 bytes (32 bits). What that means is that any number larger than 2^31 - 1 will cause the integer to overflow. A more detailed explanation can be found here: http://en.wikipedia.org/wiki/Integer_overflow. If you want to get around it, use an unsigned int instead; it will let you go up to 2^32 - 1, but it will not let you have any negative numbers.
The int type is (usually) a signed 32-bit integer (you can see the size on your platform by printing sizeof(int) * 8 to get the number of bits).
An unsigned 32-bit integer could hold values up to 2^32 - 1, but because int is signed, the positive range is actually half that: 2^31 - 1.
In C a specific type of integer is stored in a fixed amount of memory. On most current architectures an int is stored in 32 bits. Since int carries a sign, the most significant bit is assigned to the sign. This means that the biggest integer you can store in an int is 2^31 - 1. You are seeing a negative number because your int is overflowing into the sign bit and making it negative.
Number types in C are limited. You can find the limits of the integer types in limits.h.
You should read the input as a string (char array) and process each character to allow arbitrary-length* numbers.
*The sum still needs to be less than the maximum int, and the input string must be big enough to contain what the user writes.
An integer with 11 digits like 12345678912 is too large to fit in number, which is an int on your platform.
In C, an int has of range of at least -32767 to 32767. On your platform it apparently has the range -2147483648 to +2147483647.
If code is to read the entire integer at once, the maximum number of digits is limited by the range of the various available integer types.
C provides an integer type called intmax_t and its unsigned partner uintmax_t which typically has the maximum range available on a given platform.
#include <inttypes.h>
#include <errno.h>
#include <stdio.h>
#include <stdlib.h>
#include <math.h>

void MaximalSumOFDigits(void) {
    uintmax_t number;
    double widthf = log10(UINTMAX_MAX);
    int widthi = ((int) floor(widthf)) - 1;
    char buf[widthi + 2];
    printf("What number would you like to sum: (up to %d digits)\n", widthi);
    fgets(buf, sizeof buf, stdin);
    char *endptr;
    errno = 0;
    number = strtoumax(buf, &endptr, 10);
    if (*endptr != '\n')
        Handle_UnexpectedInput();
    if (errno) // This should not easily happen as the buffer has limited length
        Handle_TooBigANumber();
    int sum = 0;
    while (number > 0) { // avoiding ASCII dependence
        sum += number % 10;
        number /= 10;
    }
    printf("The sum of the number is %d.\n", sum);
}
With a 64-bit uintmax_t, this allows numbers up to
18446744073709551615 (any 19-digit number and some 20-digit numbers)
The above suggests to the user an input limit of
_9999999999999999999 (any 19-digit number)
