Recursion to Iteration - Scheme to C

Can someone help me convert this Scheme function:
#lang racket
(define (powmod2 x e m)
  (define xsqrm (remainder (* x x) m))
  (cond [(= e 0) 1]
        [(even? e) (powmod2 xsqrm (/ e 2) m)]
        [(odd? e) (remainder (* x (powmod2 xsqrm (/ (sub1 e) 2) m)) m)]))
into a function in C, without using recursion, i.e. using iteration?
I'm out of ideas. The part that is bothering me is when e is odd and the recursive call sits inside the remainder expression; I don't know how to translate that into a while loop. Any tips or suggestions?
This is what I have so far:
int powmod2(int x, int e, int m) {
    int i = x;
    int xsqrm = ((x * x) % m);
    while (e != 0) {
        if (e % 2 == 0) {
            x = xsqrm;
            e = (e / 2);
            xsqrm = ((x * x) % m);
        }
        else {
            i = x;
            x = xsqrm;
            e = (e - 1) / 2;
            xsqrm = ((x * x) % m);
        }
    }
    e = 1;
    return (i * e) % m;
}

The even version is straightforward because that branch is written tail-recursively, so the call to (powmod2 xsqrm (/ e 2) m) can be expressed iteratively by replacing e with half of e and x with its square modulo m:
int powmod2(int x, int e, int m) { /* version for when e is a power of 2 */
    while ((e /= 2) != 0)
        x = (x * x) % m;
    return x;
}
However, the odd version has not been written tail-recursively. One approach is to create a helper function that uses an accumulator. This helper can be written tail-recursively for both even and odd exponents, and you can then transform it into an iteration.

You are having trouble doing the conversion because the original Scheme code is not tail-recursive. Try adding an extra parameter to powmod2 so that you do not need to do the multiplication and remainder in the odd case after the recursive call returns.
To illustrate, it's hard to loopify the following function:
int fac(int n) {
    if (n == 0) {
        return 1;
    } else {
        return n * fac(n - 1);
    }
}
But it is easy to loopify the version with an accumulation parameter:
int fac(int n, int res) {
    if (n == 0) {
        return res;
    } else {
        return fac(n - 1, res * n);
    }
}

int real_fac(int n) { return fac(n, 1); }
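That accumulator version maps directly onto a loop. A minimal sketch of the final conversion step (the name fac_iter is mine):

int fac_iter(int n) {
    int res = 1;
    while (n != 0) {     /* same test as the base case above */
        res = res * n;   /* corresponds to the res*n argument of the tail call */
        n = n - 1;       /* corresponds to the n-1 argument */
    }
    return res;
}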

Perhaps running the algorithm with some values to see how the result is calculated can help you figure out how to convert it. Let's trace a single run for x=5, e=5 and m=7:
1. x=5, e=5, m=7
   xsqrm=4
   e:odd => res = 5*...%7
2. x=4, e=2, m=7
   xsqrm=2
   e:even => res = ...%7
3. x=2, e=1, m=7
   xsqrm=4
   e:odd => res = 2*...%7
4. x=4, e=0, m=7
   e==0 => res = 1
res = 5*2%7 = 3
At step 1, we get a partial calculation for the result: it is 5 times the result of the next step, mod 7. At step 2, since the exponent is even, the result is the same as the result of the next step. At step 3, we've got something similar to step 1: the result we'll feed upstairs is the next result multiplied by 2 (mod 7 again). And at termination, we've got our result to feed upstairs: 1. Now, as we go back up, we know how to calculate res: 2*1%7 = 2 for step 3, 2 for step 2, and 5*2%7 = 3 for step 1.
One way to implement it is to use a stack. At every partial result, if the exponent is odd, push the multiplication factor onto the stack; once we terminate, we just multiply them all together (mod m). This is the naive/cheating method of conversion.
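A rough sketch of that stack-based version (my own illustration, with the name powmod2_stack; it uses a small array as the stack and assumes int arithmetic does not overflow):

int powmod2_stack(int x, int e, int m) {
    int stack[32];              /* enough for the bits of an int exponent */
    int top = 0;
    while (e != 0) {
        if (e % 2 != 0)
            stack[top++] = x;   /* remember the factor from the odd case */
        x = (x * x) % m;        /* the xsqrm of the next step */
        e /= 2;                 /* both branches halve the exponent */
    }
    int res = 1;                /* the (= e 0) base case */
    while (top > 0)
        res = (res * stack[--top]) % m;
    return res;
}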
There is a more efficient way that you should be able to see when you look at the steps above. The other answers about making everything tail-recursive are also a very good hint.

The easiest way is to reason about what the original function is trying to compute: the value of x to the power e, modulo m. If you express e in binary, you get e = e0 * 1 + e1 * 2 + e2 * 4 + e3 * 8 + ..., where each en is either 0 or 1. Then x^e = (x^1)^e0 * (x^2)^e1 * (x^4)^e2 * (x^8)^e3 * ....
By using the mathematical properties of the modulo operator, i.e. (a + b) % m = ((a % m) + (b % m)) % m and (a * b) % m = ((a % m) * (b % m)) % m, we can rewrite the function as:
int powmod2(int x, int e, int m) {
    // This corresponds to (= e 0)
    int r = 1;
    while (e != 0) {
        if (e % 2) {
            // This corresponds to (odd? e)
            r = (r * x) % m;
        }
        // This corresponds to the recursive call,
        // which is done whatever the parity of e.
        x = (x * x) % m;
        e /= 2;
    }
    return r;
}
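As a quick sanity check (assuming the function above is compiled together with this small driver), it reproduces the hand trace shown earlier:

#include <stdio.h>

int main(void) {
    printf("%d\n", powmod2(5, 5, 7));  /* prints 3, i.e. 5^5 mod 7 */
    return 0;
}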

The first step would be writing the original Scheme procedure as a tail recursion. Notice that this rewrite works because of the properties of modular arithmetic:
(define (powmod2 x e m)
  (define (iter x e acc)
    (let ((xsqrm (remainder (* x x) m)))
      (cond ((zero? e) acc)
            ((even? e) (iter xsqrm (/ e 2) acc))
            (else (iter xsqrm (/ (sub1 e) 2) (remainder (* x acc) m))))))
  (iter x e 1))
The key element of the above procedure is that the answer is passed in the acc parameter. Now that we have a tail recursion, the conversion to a fully iterative solution is pretty straightforward:
int powmod2(int x, int e, int m) {
    int acc = 1;
    int xsqrm = 0;
    while (e != 0) {
        xsqrm = (x * x) % m;
        if (e % 2 == 0) {
            x = xsqrm;
            e = e / 2;
        }
        else {
            acc = (x * acc) % m;
            x = xsqrm;
            e = (e - 1) / 2;
        }
    }
    return acc;
}
It can be optimized further, like this:
int powmod2(int x, int e, int m) {
    int acc = 1;
    while (e) {
        if (e & 1) {
            e--;
            acc = (x * acc) % m;
        }
        x = (x * x) % m;
        e >>= 1;
    }
    return acc;
}

Related

Writing a function that calculates the sum of squares within a range in one line in C

My try
double sum_squares_from(double x, double n){
    return n<=0 ? 0 : x*x + sum_squares_from((x+n-1)*(x+n-1),n-1);
}
Instead of using loops my professor wants us to write functions like this...
What the exercise asks for is a function sum_squares_from() with double x being the starting number and n the count of numbers. For example, if you do x = 2 and n = 4 you get 2*2+3*3+4*4+5*5. It returns zero if n == 0.
My thinking was that in my example what I have is basically x*x + (x+1)(x+1) + (x+1+1)(x+1+1) + (x+1+1+1)(x+1+1+1) = (x+0)(x+0) + (x+1)(x+1) + (x+2)(x+2) + (x+3)(x+3), i.e. (x+n-1)^2 repeated n times, where n gets decremented by one each time until it becomes zero, and then you sum everything.
Did I do it right?
(if my professor seems a bit demanding... he somehow does this sort of thing all in his head without auxiliary calculations. Scary guy)
It's not recursive, but it's one line:
int
sum_squares(int x, int n) {
    return ((x + n - 1) * (x + n) * (2 * (x + n - 1) + 1) / 6) - ((x - 1) * x * (2 * (x - 1) + 1) / 6);
}
Sum of squares (of integers) has a closed-form solution for 1 .. n. This code calculates the sum of squares from 1 .. (x+n) and then subtracts the sum of squares from 1 .. (x-1).
The original version of this answer used ASCII art.
So,
Sum(i=0..n) i = n(n+1)/2
Sum(i=0..n) i^2 = n(n+1)(2n+1)/6
We note that,
Sum(i=0..n) (x+i)^2
= Sum(i=0..n) (x^2 + 2xi + i^2)
= (n+1)x^2 + 2x * Sum(i=0..n) i + Sum(i=0..n) i^2
= (n+1)x^2 + n(n+1)x + n(n+1)(2n+1)/6
Thus, your sum has the closed form:
double sum_squares_from(double x, int n) {
    return ((n-- > 0)
            ? (n + 1) * x * x
              + x * n * (n + 1)
              + n * (n + 1) * (2 * n + 1) / 6.
            : 0);
}
If I apply some obfuscation, the one-line version becomes:
double sum_squares_from(double x, int n) {
    return (n-->0) ? (n+1)*(x*x + x*n + n*(2*n+1)/6.) : 0;
}
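As a quick check against the example from the question (x = 2, n = 4), either version should print 54 when compiled with a small driver like this (my addition):

#include <stdio.h>

int main(void) {
    printf("%.0f\n", sum_squares_from(2, 4));  /* 2*2 + 3*3 + 4*4 + 5*5 = 54 */
    return 0;
}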
If you want the summation to effectively run as a loop, use tail recursion: tail recursion can be mechanically replaced with a loop, and many compilers implement this optimization.
static double sum_squares_from_loop(double x, int n, double s) {
    return (n <= 0) ? s : sum_squares_from_loop(x+1, n-1, s+x*x);
}

double sum_squares_from(double x, int n) {
    return sum_squares_from_loop(x, n, 0);
}
As an illustration, if you observe the generated assembly in GCC at a sufficient optimization level (-Os, -O2, or -O3), you will notice that the recursive call is eliminated (and sum_squares_from_loop is inlined to boot).
As mentioned in my original comment, n should not be of type double, but instead of type int, to avoid floating-point comparison problems with n <= 0. Making that change and simplifying the multiplication and recursive call, you get:
double sum_squares_from(double x, int n)
{
    return n <= 0 ? 0 : x * x + sum_squares_from (x + 1, n - 1);
}
If you think about starting with x * x and increasing x by 1, n times, then the simple x * x + sum_squares_from (x + 1, n - 1) is quite easy to understand.
Maybe this?
double sum_squares_from(double x, double n) {
    return n <= 0 ? 0 : (x + n - 1) * (x + n - 1) + sum_squares_from(x, n - 1);
}

Modular exponentiation function generates incorrect result for big input in C

I tried two functions for modular exponentiation; for a big base they return wrong results.
One of the functions is:
uint64_t modular_exponentiation(uint64_t x, uint64_t y, uint64_t p)
{
    uint64_t res = 1;  // Initialize result
    x = x % p;         // Update x if it is more than or
                       // equal to p
    while (y > 0)
    {
        // If y is odd, multiply x with result
        if (y & 1)
            res = (res*x) % p;
        // y must be even now
        y = y>>1;      // y = y/2
        x = (x*x) % p;
    }
    return res;
}
For input x = 1103362698, y = 137911680, p = 1217409241131113809,
it returns the value (x^y mod p): 749298230523009574 (incorrect).
The correct value is: 152166603192600961.
The other function I tried gave the same result. What is wrong with these functions?
The other one is:
long int exponentMod(long int A, long int B, long int C)
{
    // Base cases
    if (A == 0)
        return 0;
    if (B == 0)
        return 1;
    // If B is even
    long int y;
    if (B % 2 == 0) {
        y = exponentMod(A, B / 2, C);
        y = (y * y) % C;
    }
    // If B is odd
    else {
        y = A % C;
        y = (y * exponentMod(A, B - 1, C) % C) % C;
    }
    return (long int)((y + C) % C);
}
With p = 1217409241131113809, this value as well as the intermediate values of res and x can be larger than 32 bits. This means that multiplying two of these numbers can produce a value larger than 64 bits, which overflows the datatype you're using.
If you restrict the parameters to 32-bit datatypes and use 64-bit datatypes for intermediate values, then the function will work. Otherwise you'll need a wider intermediate type or a big-number library to get correct output.
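For example, on compilers that provide the non-standard unsigned __int128 type (GCC and Clang on 64-bit targets), the first function can be kept as-is except for doing the two multiplications in 128 bits. A minimal sketch (the _128 suffix is mine):

#include <stdint.h>

uint64_t modular_exponentiation_128(uint64_t x, uint64_t y, uint64_t p)
{
    uint64_t res = 1;
    x = x % p;
    while (y > 0)
    {
        if (y & 1)  /* multiply in 128 bits, then reduce mod p */
            res = (uint64_t)(((unsigned __int128)res * x) % p);
        y = y >> 1;
        x = (uint64_t)(((unsigned __int128)x * x) % p);
    }
    return res;
}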

How can I reduce the time complexity of the following loop?

for(i=0;i<N-2;i++)
    count=(count*10)%M;
Here, N can be up to 10^18 and M is (10^9 + 7). Since this loop takes O(N) time to execute, I get TLE in my code. Is there any way to reduce the time complexity?
The question is basically:
(count*a^b)%mod = ((count%mod)*((a^b)%mod))%mod
where a = 10 and b = N - 2 (up to about 10^18).
You can find ((a^b)%mod) using:
long long power(long long x, long long y, long long p)
{
    long long res = 1;  // Initialize result
    x = x % p;          // Update x if it is more than or
                        // equal to p
    while (y > 0)
    {
        // If y is odd, multiply x with result
        if (y & 1)
            res = (res*x) % p;
        // y must be even now
        y = y>>1;       // y = y/2
        x = (x*x) % p;
    }
    return res;
}
Time Complexity of the power function is O(log y).
In your case count is a one-digit number, so we can simply multiply ((a^b)%mod) by (count%mod) and finally take the mod of the result. If count were a big number too, and the multiplication could overflow, then we could use:
long long mulmod(long long a, long long b, long long mod)
{
    long long res = 0;  // Initialize result
    a = a % mod;
    while (b > 0)
    {
        // If b is odd, add 'a' to result
        if (b % 2 == 1)
            res = (res + a) % mod;
        // Multiply 'a' with 2
        a = (a * 2) % mod;
        // Divide b by 2
        b /= 2;
    }
    // Return result
    return res % mod;
}
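Putting the two together for the loop in the question (a sketch only; it assumes N >= 2, that power and mulmod above are in the same file, and the function name fast_count is mine):

long long fast_count(long long count, long long N, long long M)
{
    /* Equivalent to: for (i = 0; i < N-2; i++) count = (count*10) % M; */
    long long pow10 = power(10, N - 2, M);  /* (10^(N-2)) % M in O(log N) */
    return mulmod(count, pow10, M);         /* ((count % M) * pow10) % M  */
}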

Program for finding the nth root of a number without any external library or header like math.h

Is there any way to find the nth root of a number without any external library in C? I'm working on bare-metal code, so there is no OS. Also, there is no complete C library available.
You can write a program like this for the nth root. This one is for the square root:
int floorSqrt(int x)
{
    // Base cases
    if (x == 0 || x == 1)
        return x;
    // Starting from 1, try all numbers until
    // i*i is greater than x.
    int i = 1, result = 1;
    while (result <= x)
    {
        if (result == x)
            return i;
        i++;
        result = i * i;
    }
    return i - 1;
}
You can use the same approach for the nth root.
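Generalized to the nth root it might look like this (a sketch only; the name floorNthRoot, the long long accumulator, and the lack of overflow protection are my own choices, with i^n computed by plain repeated multiplication):

int floorNthRoot(int x, int n)
{
    if (n <= 0)
        return 0;                 // not defined for n <= 0
    if (x == 0 || x == 1 || n == 1)
        return x;
    int i = 1;
    long long result = 1;
    // Starting from 1, try all numbers until i^n is greater than x.
    while (result <= x)
    {
        i++;
        result = 1;
        for (int k = 0; k < n; k++)
            result *= i;          // compute i^n
    }
    return i - 1;
}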
Here is a C implementation of the nth root algorithm you can find in Wikipedia. It needs an exponentiation algorithm, so I also include an implementation of a basic method for exponentiation by squaring, which you can also find in Wikipedia.
double npower(double const base, int const n)
{
    if (n < 0) return npower(1/base, -n);
    else if (n == 0) return 1.0;
    else if (n == 1) return base;
    else if (n % 2) return base*npower(base*base, n/2);
    else return npower(base*base, n/2);
}
double nroot(double const base, int const n)
{
    if (n == 1) return base;
    else if (n <= 0 || base < 0) return NAN;
    else {
        double delta, x = base/n;
        do {
            delta = (base/npower(x,n-1)-x)/n;
            x += delta;
        } while (fabs(delta) >= 1e-8);
        return x;
    }
}
Some comments on this:
The nth root algorithm in Wikipedia leaves freedom for the initial guess. In this example I set it to base/n, but this was just a guess.
The macro NAN is usually defined in <math.h>, so you would need to define it to suit your needs; the same goes for fabs, which is easy to re-implement.
Both functions are implemented in a very rough and simple way, and their performance can be greatly improved with careful thought.
The tolerance in this example is set to 1e-8 and should be changed to something different. It should probably be proportional to the value of the base.
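A quick usage check (assuming the two functions above are in the same file and, for testing on a hosted system, that NAN and fabs come from <math.h>):

#include <math.h>
#include <stdio.h>

int main(void)
{
    printf("%f\n", nroot(2.0, 2));   /* ~1.414214 */
    printf("%f\n", nroot(27.0, 3));  /* ~3.000000 */
    return 0;
}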
You can try the nth_root C function:
// return a number that, when multiplied by itself nth times, makes N.
unsigned nth_root(const unsigned n, const unsigned nth) {
    unsigned a = n, b, c, r = nth ? n + (n > 1) : n == 1;
    for (; a < r; b = a + (nth - 1) * r, a = b / nth)
        for (r = a, a = n, c = nth - 1; c && (a /= r); --c);
    return r;
}

Algorithm to find nth root of a number

I am looking for an efficient algorithm to find the nth root of a number. The answer must be an integer. I have found that Newton's method and the bisection method are popular. Are there any efficient and simple methods for integer output?
#include <math.h>

inline int root(int input, int n)
{
    return round(pow(input, 1./n));
}
This works for pretty much the whole integer range (as IEEE 754 8-byte doubles can represent the whole 32-bit int range exactly, and those are the representations and sizes used on pretty much every system), and I doubt any integer-based algorithm is faster on non-ancient hardware, including ARM. Embedded controllers (the microwave/washing-machine kind) might not have floating-point hardware, though. But that part of the question was underspecified.
I know this thread is probably dead, but I don't see any answers I like and that bugs me...
int root(int a, int n) {
    int v = 1, bit, tp, t;
    if (n == 0) return 0;  // error: zeroth root is indeterminate!
    if (n == 1) return a;
    tp = iPow(v,n);
    while (tp < a) {       // first power of two such that v**n >= a
        v <<= 1;
        tp = iPow(v,n);
    }
    if (tp == a) return v; // answer is a power of two
    v >>= 1;
    bit = v >> 1;
    tp = iPow(v, n);       // v is highest power of two such that v**n < a
    while (a > tp) {
        v += bit;          // add bit to value
        t = iPow(v, n);
        if (t > a) v -= bit; // did we add too much?
        else tp = t;
        if ((bit >>= 1) == 0) break;
    }
    return v;              // closest integer such that v**n <= a
}
// used by root function...
int iPow(int a, int e) {
    int r = 1;
    if (e == 0) return r;
    while (e != 0) {
        if ((e & 1) == 1) r *= a;
        e >>= 1;
        a *= a;
    }
    return r;
}
This method will also work with arbitrary precision fixed point math in case you want to compute something like sqrt(2) to 100 decimal places...
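A quick check (assuming root and iPow above are compiled together, with iPow declared before root or forward-declared):

#include <stdio.h>

int main(void) {
    printf("%d\n", root(1000000, 3));  /* 100, since 100^3 == 1000000        */
    printf("%d\n", root(1000001, 3));  /* still 100: largest v with v^3 <= a */
    return 0;
}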
I question your use of "algorithm" when speaking of C programs. Programs and algorithms are not the same (an algorithm is mathematical; a C program is expected to be implementing some algorithm).
But on current processors (like in recent x86-64 laptops or desktops) the FPU does fairly well. I guess (but did not benchmark) that a fast way of computing the n-th root could be:
inline unsigned root(unsigned x, unsigned n) {
    switch (n) {
        case 0: return 1;
        case 1: return x;
        case 2: return (unsigned)sqrt((double)x);
        case 3: return (unsigned)cbrt((double)x);
        default: return (unsigned)pow(x, 1.0/n);
    }
}
(I made a switch because many processors have hardware to compute sqrt and some have hardware to compute cbrt ..., so you should prefer these when relevant...).
I am not sure that n-th root of a negative number makes sense in general. So my root function takes some unsigned x and returns some unsigned number.  
Here is an efficient general implementation in C, using a simplified version of the "shifting nth root algorithm" to compute the floor of the nth root of x:
uint64_t iroot(const uint64_t x, const unsigned n)
{
    if ((x == 0) || (n == 0)) return 0;
    if (n == 1) return x;
    uint64_t r = 1;
    for (int s = ((ilog2(x) / n) * n) - n; s >= 0; s -= n)
    {
        r <<= 1;
        r |= (ipow(r|1, n) <= (x >> s));
    }
    return r;
}
It needs this function to compute the nth power of x (using the method of exponentiation by squaring):
uint64_t ipow(uint64_t x, unsigned n)
{
    if (x <= 1) return x;
    uint64_t y = 1;
    for (; n != 0; n >>= 1, x *= x)
        if (n & 1)
            y *= x;
    return y;
}
and this function to compute the floor of base-2 logarithm of x:
int ilog2(uint64_t x)
{
#if __has_builtin(__builtin_clzll)
    return 63 - ((x != 0) * (int)__builtin_clzll(x)) - ((x == 0) * 64);
#else
    int y = -(x == 0);
    for (unsigned k = 64 / 2; k != 0; k /= 2)
        if ((x >> k) != 0)
            { x >>= k; y += k; }
    return y;
#endif
}
Note: This assumes that your compiler understands GCC's __has_builtin test and that your compiler's uint64_t type is the same size as an unsigned long long.
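A couple of quick checks (assuming iroot, ipow, and ilog2 above are in the same file, declared in the right order):

#include <stdio.h>
#include <stdint.h>
#include <inttypes.h>

int main(void)
{
    printf("%" PRIu64 "\n", iroot(15625, 3));  /* 25, since 25^3 == 15625 */
    printf("%" PRIu64 "\n", iroot(2, 2));      /* 1, the floor of sqrt(2) */
    return 0;
}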
You can try this C function to get the nth root of an unsigned integer:
unsigned initial_guess_nth_root(unsigned n, unsigned nth){
    unsigned res = 1;
    for(; n >>= 1; ++res);
    return nth ? 1 << (res + nth - 1) / nth : 0;
}

// return a number that, when multiplied by itself nth times, makes N.
unsigned nth_root(const unsigned n, const unsigned nth) {
    unsigned a = initial_guess_nth_root(n, nth), b, c, r = nth ? a + (n > 0) : n == 1;
    for (; a < r; b = a + (nth - 1) * r, a = b / nth)
        for (r = a, a = n, c = nth - 1; c && (a /= r); --c);
    return r;
}
Example of output:
24 == (int) pow(15625, 1.0/3)
25 == nth_root(15625, 3)
0 == nth_root(0, 0)
1 == nth_root(1, 0)
4 == nth_root(4096, 6)
13 == nth_root(18446744073709551614, 17) // 64-bit 20 digits
11 == nth_root(340282366920938463463374607431768211454, 37) // 128-bit 39 digits
