Referring to the question below,
How to concatenate two integers in C
unsigned concatenate(unsigned x, unsigned y) {
    unsigned pow = 10;
    while (y >= pow)
        pow *= 10;
    return x * pow + y;
}
How do I get the original values for x and y without using arrays? I need less processing overhead.
How do I get the original values for x and y without using arrays?
Information is lost. To reconstruct x and y, additional information is needed.
For example, 1234 could have been formed by:
x       y
123     4
12      34
1       234
0       1234
Aside: an improved answer to the original concatenation problem.
You pass two arguments to concatenate().
If you also pass two pieces of information to unconcatenate() (the concatenated value and an upper limit for the second number), you can get your numbers back:
void unconcatenate(unsigned *first, unsigned *second, unsigned concatenated, unsigned limit) {
    unsigned pow = 10;
    while (pow < limit) pow *= 10;
    *first = concatenated / pow;
    *second = concatenated % pow;
}
unsigned x, y, foo = concatenate(1234, 56);
unconcatenate(&x, &y, foo, 100);
printf("%u unconcatenated to %u and %u.\n", foo, x, y);
see https://ideone.com/bg7qMd
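For illustration, here is a small demo (assuming the concatenate() and unconcatenate() definitions above are in the same file) showing how the same concatenated value decodes differently depending on the limit you supply:

#include <stdio.h>

// Demo: the same concatenated value decodes differently for different limits,
// which is exactly the "additional information" discussed above.
// Assumes concatenate() and unconcatenate() from above are defined in this file.
int main(void) {
    unsigned x, y;
    unsigned foo = concatenate(12, 34);      // 1234

    unconcatenate(&x, &y, foo, 100);         // second number known to be < 100
    printf("%u -> %u and %u\n", foo, x, y);  // 1234 -> 12 and 34

    unconcatenate(&x, &y, foo, 1000);        // second number known to be < 1000
    printf("%u -> %u and %u\n", foo, x, y);  // 1234 -> 1 and 234

    return 0;
}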
I haven't commented on it yet, but that original question makes no sense to me.
In my experience, at least, the goal is to concatenate numbers with a known, fixed number of digits. For example, if I have
int year = 2021;
int month = 10;
int day = 2;
(that is, a date next month), and I want to "concatenate" them, the desired result is of course 20211002. It would be quite wrong to output 2021102 — because nobody could tell whether it was supposed to be October 2, or maybe January 2!
At the other question, there's a comment that "100 * x + y fails when y == 0", as if concatenating 23 and 0 should yield 230, or maybe 23, but that sounds crazy to me.
Since the question here is about getting the original values back, the answer is obviously to know how many digits each original number had. For example, if you know you have two 2-digit numbers x and y, then z = 100 * x + y is absolutely the right way to concatenate them. And then, having done so, getting the original values back is equally straightforward:
x = z / 100;
y = z % 100;
The key is that we picked that magic number 100, that effectively sets y as 2 digits, in advance, and baked it into the code. We did not, as some of the answers at the other question (and the code fragment in this question here) suggest, dynamically and empirically discover a scaling factor at run time by doing successive multiplications by 10. (And by not doing those successive multiplications to compute pow every time, we'll have less processing overhead, too.)
Summary: while(y >= pow) pow *= 10; is the wrong way to concatenate numbers in C. The right way is
z = 100 * x + y;
and once you've done it that way, you get the original numbers back by doing
x = z / 100;
y = z % 100;
If y might have more digits, pick an appropriate multiplier greater than 100, and use that instead. And do be mindful that if you're not careful, your concatenated number can end up bigger than an ordinary int will hold.
P.S. I said "if you know you have two 2-digit numbers x and y", but that's not quite right. If you're using 100 * x + y, then y has to be two digits (or less), but it's okay for x to be more than 2 digits (within limits).
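A minimal sketch of that fixed-width approach (the 4-digit slot for y and the variable names are just illustrative), using unsigned long long so the concatenated value has room:

#include <stdio.h>

int main(void) {
    unsigned x = 2021, y = 1002;                 // e.g. a year and an MMDD value
    unsigned long long z = 10000ULL * x + y;     // concatenate: y gets a fixed 4-digit slot

    unsigned back_x = (unsigned)(z / 10000);     // recover x
    unsigned back_y = (unsigned)(z % 10000);     // recover y

    printf("%llu -> %u and %04u\n", z, back_x, back_y);  // 20211002 -> 2021 and 1002
    return 0;
}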
This is what I've found so far online,
#include <stdio.h>

int main(void)
{
    long a = 12345;
    int b = 10;
    int remain = a - (a / b) * b;

    printf("%i\n", remain);
}
First, I wonder how the formula works. Maybe I can't do math, but the priority of operations here seems a bit odd. If I run this code, the expected answer of 5 is printed. But I don't get how (a / b) * b doesn't cancel out to 'a', leading to a - a = 0.
Now, this only works for int and long; as soon as doubles are involved it doesn't work anymore. Can anyone tell me why? Is there an alternative to modulo that works for double?
Also, I'm not sure I understand up to what value a long can go. I found online that the upper limit is 2147483647, but when I input bigger numbers such as the one in 'a', the code runs without any issue up to a certain point...
Thanks for your help! I'm new to coding and trying to learn.
Given two double finite numbers x and y, with y not equal to zero, fmod(x, y) produces the remainder of x when divided by y. Specifically, it returns x − ny, where n is an integer chosen so that x − ny has the same sign as x and is smaller in magnitude than |y|. (So, if x is positive, 0 ≤ fmod(x, y) < |y|, and, if x is negative, −|y| < fmod(x, y) ≤ 0.)
fmod is declared in <math.h>.
A properly implemented fmod returns an exact result; there is no floating-point error, since the specified result is always representable.
The C standard also specifies two related functions: remainder, which computes the remainder with a variation on the definition (n is chosen so that x − ny is as close to zero as possible), and remquo, which returns that same remainder along with some low bits (at least three) of the quotient n. It also specifies float and long double variants of these functions.
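A quick illustration, using the asker's numbers (fmod plays the same role for doubles that a - (a / b) * b plays for integers):

#include <stdio.h>
#include <math.h>

int main(void) {
    double a = 12345.0, b = 10.0;
    printf("%f\n", fmod(a, b));           // 5.000000
    printf("%f\n", fmod(12345.67, 10.0)); // 5.670000 (to printf's 6 decimal places)
    return 0;
}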
A naive implementation (of the fractional part, i.e. essentially fmod(x, 1.0)): limited range, and it adds additional floating-point imprecision as it does some arithmetic:
#include <stdio.h>

double naivemod(double x)
{
    return x - (long long)x;
}

int main(void)
{
    printf("%.50f\n", naivemod(345345.567567756));
    printf("%.50f\n", naivemod(.0));
    printf("%.50f\n", naivemod(10.5));
    printf("%.50f\n", naivemod(-10.0/3));
}
I am trying to take the result of the div function in C, cast it to an int, and then add that int to a greater int value. I get the error in the title all the time, and I cannot understand why.
out = div(n, 10);
r = (int) out;
a = a + r;
The compiler flags the second line, and out specifically, as the error.
Thank you in advance!
A div_t, as returned by div(), is a structure containing two numbers, the quotient and the remainder.
typedef struct {
    int quot;
    int rem;
} div_t;
If you've used the div() function then you want either r = out.rem or r = out.quot, not clear which from your example.
If all you want is the quotient, though, r = n / 10 is simpler. And if all you want is the remainder, r = n % 10 (for non-negative n). div() is useful in the case where you need both values - the actual divide instruction on many machines can deliver both results from one instruction.
The div(x, y) function does both x / y and x % y in one operation. It returns a structure whose rem member holds the result of x % y and whose quot member holds the result of x / y. In your case you would access these values as out.quot and out.rem, and both members are already values of type int. Casting a structure containing two integers to an integer does not make any sense.
On many processors there is a single division opcode that always calculates both, so if you need both, then div(x, y) is giving the other one for free. One common instance is converting a number into a decimal string which requires repeatedly taking quotient and remainder with 10; here you can use div efficiently for positive numbers:
res = div(n, 10);
next_digit = res.rem;
// place next_digit into the string
n = res.quot;
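For illustration, a minimal self-contained version of that loop (the buffer size and variable names here are just placeholders):

#include <stdio.h>
#include <stdlib.h>

int main(void) {
    int n = 94237;      // non-negative number to convert
    char buf[16];       // enough for an int's digits here
    int len = 0;

    do {
        div_t res = div(n, 10);            // quotient and remainder in one call
        buf[len++] = (char)('0' + res.rem);
        n = res.quot;
    } while (n != 0);

    while (len > 0)                        // digits were produced least significant first
        putchar(buf[--len]);
    putchar('\n');                         // prints 94237
    return 0;
}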
I want to read digit by digit the decimals of the sqrt of 5 in C.
The square root of 5 is 2,23606797749979..., so this'd be the expected output:
2
3
6
0
6
7
9
7
7
...
I've found the following code:
#include <stdio.h>

int main(void)
{
    int number;
    float temp, sqrt;

    printf("Provide the number: \n");
    scanf("%d", &number);

    // store the half of the given number e.g. from 256 => 128
    sqrt = number / 2;
    temp = 0;

    // Iterate while sqrt differs from temp (temp is updated in the loop)
    while (sqrt != temp) {
        // initially 0, is updated with the initial value of 128
        // (on second iteration = 65)
        // and so on
        temp = sqrt;
        // Then, replace values: (256 / 128 + 128) / 2 = 65
        // (on second iteration 34.46923076923077)
        // and so on
        sqrt = (number / temp + temp) / 2;
    }

    printf("The square root of '%d' is '%f'", number, sqrt);
    return 0;
}
But this approach stores the result in a float variable, and I don't want to depend on the limits of the float types, as I would like to extract 10,000 digits, for instance. I also tried using the native sqrt() function and casting the result to a string using this method, but I faced the same issue.
What you've asked about is a very hard problem, and whether it's even possible to do "one by one" (i.e. without a working-space requirement that scales with how far out you want to go) depends on both the particular irrational number and the base you want it represented in. For example, when a formula for pi was discovered in 1995 that allows computing the nth binary digit in O(1) space, this was a really big deal. It was not something people expected to be possible.
If you're willing to accept O(n) space, then some cases like the one you mentioned are fairly easy. For example, if you have the first n digits of the square root of a number as a decimal string, you can simply try appending each digit 0 to 9, then squaring the string with long multiplication (same as you learned in grade school), and choosing the last one that doesn't overshoot. Of course this is very slow, but it's simple. The easy way to make it a lot faster (but still asymptotically just as bad) is using an arbitrary-precision math library in place of strings. Doing significantly better requires more advanced approaches and in general may not be possible.
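For a rough, limited-precision sketch of that idea (using unsigned long long in place of strings or a bignum library, so it only produces a handful of digits before running out of range; the digit count and names are illustrative):

#include <stdio.h>

int main(void) {
    const unsigned long long n = 5;   // compute digits of sqrt(5)
    unsigned long long scaled = n;    // n shifted left by 2*k decimal digits
    unsigned long long root = 0;      // digits of the square root found so far

    for (int k = 0; k < 9; ++k) {     // 9 digits keeps everything inside 64 bits
        root *= 10;
        // pick the largest next digit d such that (root + d)^2 <= scaled
        while ((root + 1) * (root + 1) <= scaled)
            ++root;
        printf("%llu\n", root % 10);  // print the digit just found (2,2,3,6,0,6,7,9,7)
        scaled *= 100;                // bring two more decimal places into play
    }
    return 0;
}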
As already noted, you need to change the algorithm into a digit-by-digit one (there are some examples on the Wikipedia page about the methods of computing square roots) and use an arbitrary-precision arithmetic library to perform the calculations (for instance, GMP).
In the following snippet I implemented the aforementioned algorithm using GMP (but not the square root function that the library provides). Instead of calculating one decimal digit at a time, this implementation uses a larger base: a big power of 10 that fits inside an unsigned long, so that it can produce 9 or 18 decimal digits at every iteration.
It also uses an adapted Newton method to find the actual "digit".
#include <stdio.h>
#include <stdlib.h>
#include <time.h>
#include <gmp.h>
unsigned long max_ul(unsigned long a, unsigned long b)
{
return a < b ? b : a;
}
int main(int argc, char *argv[])
{
// The GMP functions accept 'unsigned long int' values as parameters.
// The algorithm implemented here can work with bases other than 10,
// so that it can evaluate more than one decimal digit at a time.
const unsigned long base = sizeof(unsigned long) > 4
? 1000000000000000000
: 1000000000;
const unsigned long decimals_per_digit = sizeof(unsigned long) > 4 ? 18 : 9;
// Extract the number to be square rooted and the desired number of decimal
// digits from the command line arguments. Fallback to 0 in case of errors.
const unsigned long number = argc > 1 ? atoi(argv[1]) : 0;
const unsigned long n_digits = argc > 2 ? atoi(argv[2]) : 0;
// All the variables used by GMP need to be properly initialized before use.
// 'c' is basically the remainder, initially set to the original number
mpz_t c;
mpz_init_set_ui(c, number);
// At every iteration, the algorithm "moves" the remainder two "digits" to the left,
// so it multiplies it by base^2.
mpz_t base_squared;
mpz_init_set_ui(base_squared, base);
mpz_mul(base_squared, base_squared, base_squared);
// 'p' stores the digits of the root found so far. The others are helper variables
mpz_t p;
mpz_init_set_ui(p, 0UL);
mpz_t y;
mpz_init(y);
mpz_t yy;
mpz_init(yy);
mpz_t dy;
mpz_init(dy);
mpz_t dx;
mpz_init(dx);
mpz_t pp;
mpz_init(pp);
// Timing, for testing purposes
clock_t start = clock(), diff;
unsigned long x_max = number;
// Each "digit" corresponds to some decimal digits
for (unsigned long i = 0,
last = (n_digits + decimals_per_digit) / decimals_per_digit + 1UL;
i < last; ++i)
{
// Find the greatest x such that: x * (2 * base * p + x) <= c
// where x is in [0, base), using a specialized Newton method
// pp = 2 * base * p
mpz_mul_ui(pp, p, 2UL * base);
unsigned long x = x_max;
for (;;)
{
// y = x * (pp + x)
mpz_add_ui(yy, pp, x);
mpz_mul_ui(y, yy, x);
// dy = y - c
mpz_sub(dy, y, c);
// If y <= c we have found the correct x
if ( mpz_sgn(dy) <= 0 )
break;
// Newton's step: dx = dy/y' where y' = 2 * x + pp
mpz_add_ui(yy, yy, x);
mpz_tdiv_q(dx, dy, yy);
// Update x even if dx == 0 (last iteration)
x -= max_ul(mpz_get_si(dx), 1);
}
x_max = base - 1;
// The actual format of the printed "digits" is up to you
if (i == 0)
printf("%lu.\n", x);
else
{
printf("%0*lu", (int)decimals_per_digit, x);
if (i % 4 == 0)
putchar('\n');
}
// p = base * p + x
mpz_mul_ui(p, p, base);
mpz_add_ui(p, p, x);
// c = (c - y) * base^2
mpz_sub(c, c, y);
mpz_mul(c, c, base_squared);
}
diff = clock() - start;
long int msec = diff * 1000L / CLOCKS_PER_SEC;
printf("\n\nTime taken: %ld.%03ld s\n", msec / 1000, msec % 1000);
// Final cleanup
mpz_clear(c);
mpz_clear(base_squared);
mpz_clear(p);
mpz_clear(pp);
mpz_clear(dx);
mpz_clear(y);
mpz_clear(dy);
mpz_clear(yy);
}
You can see the outputted digits here.
Your title says:
How to compute the digits of an irrational number one by one?
Irrational numbers are not limited to (most) square roots. They also include numbers of the form log(x), exp(z), sin(y), etc. (transcendental numbers). However, there are some important factors that determine whether, or how fast, a given irrational number's digits can be computed one by one (that is, from left to right).
Not all irrational numbers are computable; that is, for some of them there is no way at all to approximate them to any desired accuracy (whether by a closed-form expression, a series, or otherwise).
There are many ways numbers can be expressed, such as by their binary or decimal expansions, as continued fractions, as series, etc. And there are different algorithms to compute a given number's digits depending on the representation.
Some formulas compute a given number's digits in a particular base (such as base 2), not in an arbitrary base.
For example, besides the first formula to extract the digits of π without computing the previous digits, there are other formulas of this type (known as BBP-type formulas) that extract the digits of certain irrational numbers. However, these formulas only work for a particular base, not all BBP-type formulas have a formal proof, and most importantly, not all irrational numbers have a BBP-type formula (essentially, only certain log and arctan constants do, not numbers of the form exp(x) or sqrt(x)).
On the other hand, if you can express an irrational number as a continued fraction (which all real numbers have), you can extract its digits from left to right, and in any base desired, using a specific algorithm. What is more, this algorithm works for any real number constant, including square roots, exponentials (e and exp(x)), logarithms, etc., as long as you know how to express it as a continued fraction. For an implementation see "Digits of pi and Python generators". See also Code to Generate e one Digit at a Time.
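For a concrete taste of that approach in C, here is a small sketch that streams the decimal digits of sqrt(2) from its continued fraction [1; 2, 2, 2, ...]. The 2x2 integer-transform bookkeeping used here is one standard formulation of such a spigot, not code taken from the linked articles, and long long limits it to roughly a dozen digits before the coefficients overflow (an arbitrary-precision integer type would remove that limit):

#include <stdio.h>

int main(void) {
    // z(t) = (p*t + q) / (r*t + s), where t is the value of the not-yet-consumed
    // continued-fraction tail (always > 1). Start with the identity transform.
    long long p = 1, q = 0, r = 0, s = 1;
    int first_term = 1;
    int digits = 0;

    while (digits < 12) {
        // A digit is safe to emit once z(1) and z(infinity) have the same floor.
        if (r != 0 && r + s != 0 && p / r == (p + q) / (r + s)) {
            long long d = p / r;
            printf("%lld", d);
            if (digits == 0)
                putchar('.');
            ++digits;
            // Emit d: replace z with 10 * (z - d).
            p = 10 * (p - d * r);
            q = 10 * (q - d * s);
        } else {
            // Consume the next continued-fraction term a (1, then 2, 2, 2, ...):
            // t = a + 1/t'  =>  (p, q, r, s) <- (p*a + q, p, r*a + s, r).
            long long a = first_term ? 1 : 2;
            first_term = 0;
            long long np = p * a + q, nr = r * a + s;
            q = p;
            s = r;
            p = np;
            r = nr;
        }
    }
    putchar('\n');   // prints 1.41421356237
    return 0;
}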
I'm working through a basic 'Programming in C' book.
I have written the following code based off of it in order to calculate the square root of a number:
#include <stdio.h>

float absoluteValue (float x)
{
    if (x < 0)
        x = -x;
    return (x);
}

float squareRoot (float x, float epsilon)
{
    float guess = 1.0;

    while (absoluteValue(guess * guess - x) >= epsilon)
    {
        guess = (x/guess + guess) / 2.0;
    }

    return guess;
}

int main (void)
{
    printf("SquareRoot(2.0) = %f\n", squareRoot(2.0, .00001));
    printf("SquareRoot(144.0) = %f\n", squareRoot(144.0, .00001));
    printf("SquareRoot(17.5) = %f\n", squareRoot(17.5, .00001));

    return 0;
}
An exercise in the book has said that the current criteria used for termination of the loop in squareRoot() is not suitable for use when computing the square root of a very large or a very small number.
Instead of comparing the difference between the value of x and the value of guess^2, the program should compare the ratio of the two values to 1. The closer this ratio gets to 1, the more accurate the approximation of the square root.
If the ratio is just guess^2/x, shouldn't my code inside of the while loop:
guess = (x/guess + guess) / 2.0;
be replaced by:
guess = ((guess * guess) / x ) / 1 ; ?
This compiles but nothing is printed out into the terminal. Surely I'm doing exactly what the exercise is asking?
To calculate the ratio, just do (guess * guess / x); it could be either higher or lower than 1 depending on your implementation. Similarly, your margin of error (in percent) would be absoluteValue((guess * guess / x) - 1) * 100.
All they want you to check is how close the square root is. By squaring the number you get and dividing it by the number you took the square root of you are just checking how close you were to the original number.
Example:
sqrt(4) = 2
2 * 2 / 4 = 1 (this is exact, so we get exactly 1, since 2 * 2 = 4)
margin of error = (1 - 1) * 100 = 0% margin of error
Another example:
sqrt(4) = 1.999 (let's just say you got this)
1.999 * 1.999 = 3.996
3.996/4 = .999 (so we are close but not exact)
To check margin of error:
.999 - 1 = -.001
absoluteValue(-.001) = .001
.001 * 100 = .1% margin of error
How about applying a little algebra? Your current criterion is:
|guess^2 - x| >= epsilon
You are elsewhere assuming that guess is nonzero, so it is algebraically safe to convert that to
|1 - x / guess^2| >= epsilon / guess^2
epsilon is just a parameter governing how close the match needs to be, and the above reformulation shows that it must be expressed in terms of the floating-point spacing near guess^2 to yield equivalent precision for all evaluations. But of course that's not possible because epsilon is a constant. This is, in fact, exactly why the original criterion gets less effective as x diverges from 1.
Let us instead write the alternative expression
|1 - x / guess^2| >= delta
Here, delta expresses the desired precision in terms of the spacing of floating point values in the vicinity of 1, which is related to a fixed quantity sometimes called the "machine epsilon". You can directly select the required precision via your choice of delta, and you will get the same precision for all x, provided that no arithmetic operations overflow.
Now just convert that back into code.
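A minimal sketch of that conversion, reusing the absoluteValue() helper from the question (the name squareRootRel and the test values are illustrative, not from the original answer):

#include <stdio.h>

float absoluteValue(float x)
{
    return x < 0 ? -x : x;
}

float squareRootRel(float x, float delta)
{
    float guess = 1.0f;
    while (absoluteValue(1.0f - x / (guess * guess)) >= delta)
        guess = (x / guess + guess) / 2.0f;
    return guess;
}

int main(void)
{
    printf("%f\n", squareRootRel(2.0f, 1e-5f));
    printf("%f\n", squareRootRel(1.5e20f, 1e-5f));   // a large x also terminates now
    return 0;
}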
Suggest a different point of view.
With this method, guess_next = (x/guess + guess) / 2.0;, once the initial approximation is in the neighborhood, the number of bits of accuracy doubles on each iteration. Example: log2(FLT_EPSILON) is about -23, so 6 iterations are needed. (Think 23, 12, 6, 3, 2, 1.)
The trouble with using guess * guess is that it may underflow to 0.0 or overflow to infinity for a non-zero x.
To form a quality initial guess:
assert(x > 0.0f);
int expo;
float signif = frexpf(x, &expo);
float guess = ldexpf(signif, expo/2);
Now iterate N times (e.g. 6), with N based on FLT_EPSILON, FLT_DECIMAL_DIG or FLT_DIG.
for (int i = 0; i < N; i++) {
    guess = (x/guess + guess) / 2.0f;
}
The cost of perhaps an extra iteration is saved by avoiding an expensive termination condition calculation.
If code wants to test how near a/b is to 1.0f, simply use some epsilon factor like 1 or 2.
float a = guess;
float b = x/guess;
assert(b);
float q = a/b;
#define FACTOR (1.0f /* some value 1.0f to maybe 2, 3 or 4 */)
if (q >= 1.0f - FLT_EPSILON*FACTOR && q <= 1.0f + FLT_EPSILON*FACTOR) {
    close_enough();
}
First lesson in numerical analysis: for floating-point numbers, x+y has the potential for large relative errors, especially when the sum is near zero, but x*y has very limited relative errors.
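A tiny illustration of that point (the values are chosen only for demonstration):

#include <stdio.h>

int main(void) {
    float a = 1.0000001f;   // stored as roughly 1.00000012
    float b = 1.0000000f;

    // The mathematically intended difference is 1e-7, but the computed one
    // is about 1.19e-7: the inputs' tiny representation errors dominate.
    printf("%.9g\n", a - b);

    // The product, by contrast, keeps a tiny relative error.
    printf("%.9g\n", a * b);
    return 0;
}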
I wrote a function to find the cube root of a number a using the Newton-Raphson method to find the root of the function f(x) = x^3 - a.
#include <stdio.h>
#include <math.h>

double cube_root(double a)
{
    double x = a;
    double y;
    int equality = 0;

    if (x == 0)
    {
        return (x);
    }
    else
    {
        while (equality == 0)
        {
            y = (2 * x * x * x + a) / (3 * x * x);
            if (y == x)
            {
                equality = 1;
            }
            x = y;
        }
        return (x);
    }
}
f(x) for a = 20 (blue) and a = -20 (red) http://graphsketch.com/?eqn1_color=1&eqn1_eqn=x*x*x%20-%2020&eqn2_color=2&eqn2_eqn=x*x*x%20%2B%2020&eqn3_color=3&eqn3_eqn=&eqn4_color=4&eqn4_eqn=&eqn5_color=5&eqn5_eqn=&eqn6_color=6&eqn6_eqn=&x_min=-8&x_max=8&y_min=-75&y_max=75&x_tick=1&y_tick=1&x_label_freq=5&y_label_freq=5&do_grid=0&bold_labeled_lines=0&line_width=4&image_w=850&image_h=525
The code seemed to be working well; for example, it calculates the cube root of 338947578237847893823789474.324623784 just fine, but it weirdly fails for some numbers, for example 4783748237482394. The code just seems to go into an infinite loop and must be manually terminated.
Can anyone explain why the code should fail on this number? I've included the graph to show that, using the starting value of a, this method should always keep providing closer and closer estimates until the two values are equal to working precision. So I don't really get what's special about this number.
Apart from posting an incorrect formula...
You are performing floating point arithmetic, and floating point arithmetic has rounding errors. Even with the rounding errors, you will get very very close to a cube root, but you won't get exactly there (usually cube roots are irrational, and floating point numbers are rational).
Once your x is very close to the cube root, when you calculate y, you should get the same result as x, but because of rounding errors, you may get something very close to x but slightly different instead. So x != y. Then you do the same calculation starting with y, and you may get x as the result. So your result will forever switch between two values.
You can do the same thing with three numbers x, y and z and quit when either z == y or z == x. This is much more likely to stop, and with a bit of mathematics you might even be able to prove that it will always stop.
Better to calculate the change in x, and determine whether that change is small enough so that the next step will not change x except for rounding errors.
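A minimal sketch of the three-value termination described above (the name cube_root3 is illustrative; the iteration formula is the one from the question):

#include <stdio.h>

double cube_root3(double a)
{
    if (a == 0)
        return a;

    double x = a;                                   // iterate n-2
    double y = (2 * x * x * x + a) / (3 * x * x);   // iterate n-1

    for (;;)
    {
        double z = (2 * y * y * y + a) / (3 * y * y);   // iterate n
        if (z == y || z == x)      // converged, or stuck in a two-value cycle
            return z;
        x = y;
        y = z;
    }
}

int main(void)
{
    printf("%.17g\n", cube_root3(4783748237482394.0));   // the problematic input
    return 0;
}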
shouldn't it be:
y = x - (2 * x * x * x + a) / (3 * x * x);
?