Confused as to what this code does
for (L=0; L < levels; L++, N_half>>=1){
    func( y, N_half);
} // end: levels for loop
In particular, this part: N_half >>= 1
Thanks
It advances the loop by dividing N_half by two at every iteration. It is equivalent to:
for (L=0; L<levels; ++L, N_half=N_half / 2) {
...
}
N_half>>=1 performs a 1-place bitwise shift-right on N_half, which (for non-negative numbers) divides it by 2.
>>= is to >> as += is to +.
The >>= operator shifts a number's digits k positions to the right.
Examples:
Binary form (base-2 arithmetic):
N = 101010111
N >>= 1; // "division" by 2
N: 010101011
By analogy, in decimal form (base-10 arithmetic) it would look like:
N = 123456
N >>= 2; // "division" by 10^2
N: 001234
As usual, the numbers in memory are in binary form, so >>= 1 is equivalent to division by 2.
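A tiny self-contained demonstration of the effect (the value 0x157 is just an arbitrary example):
#include <stdio.h>

int main(void)
{
    unsigned N = 0x157;        /* binary 101010111, decimal 343 */
    printf("before: %u\n", N); /* prints 343 */
    N >>= 1;                   /* shift right by one bit */
    printf("after:  %u\n", N); /* prints 171, i.e. binary 010101011 = 343 / 2 */
    return 0;
}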
If N_half is a positive or unsigned integer, it halves it.
It right shifts N_half by 1 (i.e. divides it by two) and stores the result back in N_half
This seems to be the same as
for (L=0; L < levels; L++)
{
    func(y, N_half);
    N_half /= 2;
}
The question has been rephrased since I answered it, so this is no longer valid, but I add it for completeness: if nothing else is done within the loop, it is equivalent to:
N_half >>= levels;
Caveats:
if N_half < 2^levels (the value is 0 for the last several iterations of the loop)
if N_half < 0 (the first right shift may, depending on the implementation, turn it into a positive number)
Related
I have a tricky requirement in a project asking to write a function which returns 1 (0 otherwise) if a given integer is representable as 2^(2n) + 1, where n is any non-negative integer.
int find_pow_2n_1(int M);
For example: return 1 when M = 5, since 5 = 2^(2*1) + 1, i.e. the output for n = 1.
I tried to evaluate the equation directly, but it results in a log function, and I was not able to find any kind of hint while browsing Google either.
Solution
int find_pow_2n_1(int M)
{
return 1 < M && !(M-1 & M-2) && M % 3;
}
Explanation
First, we discard values less than two, as we know the first matching number is two.
Then M-1 & M-2 tests whether there is more than one bit set in M-1:
M-1 cannot have zero bits set, since M is greater than one, so M-1 is not zero.
If M-1 has one bit set, then that bit is zero in M-2 and all lower bits are set, so M-1 and M-2 have no set bits in common, so M-1 & M-2 is zero.
If M-1 has more than one bit set, then M-2 has the lowest set bit cleared, but higher set bits remain set. So M-1 and M-2 have set bits in common, so M-1 & M-2 is non-zero.
So, if the test !(M-1 & M-2) passes, we know M-1 is a power of two. So M is one more than a power of two.
Our remaining concern is whether that is an even power of two. We can see that when M is an even power of two plus one, its remainder modulo three is two, whereas when M is an odd power of two plus one, its remainder modulo three is zero:
Remainder of 2^0 + 1 = 2 modulo 3 is 2.
Remainder of 2^1 + 1 = 3 modulo 3 is 0.
Remainder of 2^2 + 1 = 5 modulo 3 is 2.
Remainder of 2^3 + 1 = 9 modulo 3 is 0.
Remainder of 2^4 + 1 = 17 modulo 3 is 2.
Remainder of 2^5 + 1 = 33 modulo 3 is 0.
…
Therefore, M % 3, which tests whether the remainder of M modulo three is non-zero, tests whether M-1 is an even power of two.
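As a quick sanity check of the solution above (the sample values are just hand-picked small numbers; the expected output is 0 1 0 1 0 1 0 1 1):
#include <stdio.h>

/* the function from the solution above */
int find_pow_2n_1(int M)
{
    return 1 < M && !(M-1 & M-2) && M % 3;
}

int main(void)
{
    int samples[] = { 1, 2, 3, 5, 9, 17, 33, 65, 257 };
    int i;
    for (i = 0; i < (int)(sizeof samples / sizeof samples[0]); i++)
        printf("find_pow_2n_1(%d) = %d\n", samples[i], find_pow_2n_1(samples[i]));
    return 0;
}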
There are only a few numbers with that property: make a table lookup array :-)
$ bc
for(n=0;n<33;n++)2^(2*n)+1
2
5
17
65
257
1025
4097
16385
65537
262145
1048577
4194305
16777217
67108865
268435457
1073741825
4294967297
17179869185
68719476737
274877906945
1099511627777
4398046511105
17592186044417
70368744177665
281474976710657
1125899906842625
4503599627370497
18014398509481985
72057594037927937
288230376151711745
1152921504606846977
4611686018427387905
18446744073709551617
The last number above is 2^64 + 1, which probably will not fit in an int in your implementation.
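A minimal sketch of that table idea, assuming 32-bit unsigned values (so only n = 0..15 fit, the table ending at 2^30 + 1; the function name is just illustrative):
#include <stdio.h>

static int find_pow_2n_1_table(unsigned M)
{
    static const unsigned table[] = {
        2u, 5u, 17u, 65u, 257u, 1025u, 4097u, 16385u,
        65537u, 262145u, 1048577u, 4194305u, 16777217u,
        67108865u, 268435457u, 1073741825u
    };
    unsigned i;
    for (i = 0; i < sizeof table / sizeof table[0]; i++)
        if (M == table[i])
            return 1;
    return 0;
}

int main(void)
{
    /* expected output: 1 0 1 */
    printf("%d %d %d\n", find_pow_2n_1_table(5), find_pow_2n_1_table(9), find_pow_2n_1_table(65537));
    return 0;
}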
All proposed solutions are way too complicated or perform badly. Try this simpler one:
#include <stdio.h>

static int is_power_of_2(unsigned long n)
{
    return (n != 0 && ((n & (n - 1)) == 0));
}

static int is_power_of_2n(unsigned long n)
{
    return is_power_of_2(n) && (__builtin_ffsl(n) & 1);
}

int main(void)
{
    int x;
    for (x = -3; x < 20; x++)
        printf("Is %d = 2^2n + 1? %s\n", x, is_power_of_2n(x - 1) ? "Yes" : "no");
    return 0;
}
If you are using an ancient compiler, I leave implementing __builtin_ffsl() as homework (it can be done without tables or divisions).
Example: https://wandbox.org/permlink/gMrzZqhuP4onF8ku
While commenting on @Lundin's comment I realized that you may want to read a very nice set of bit twiddling hacks from Stanford University.
UPDATE. As @grenix noticed, the initial question was about checking the value directly; this can be done with the above code by introducing an additional wrapper, so nothing basically changes:
...
static int is_power_of_2n_plus_1(unsigned long n)
{
    return is_power_of_2n(n - 1);
}

int main(void)
{
    int x;
    for (x = -3; x < 20; x++)
        printf("Is %d = 2^2n + 1? %s\n", x, is_power_of_2n_plus_1(x) ? "Yes" : "no");
    return 0;
}
Here I am leaving you pseudocode (or rather, code that I haven't tested) which I think could help you think of a way to handle your problem :)
#include <math.h>
#include <stdlib.h>

#define EPSILON 0.000001

int find_pow_2n_1(int M) {
    if (M < 2) return 0;                             // smallest candidate is 2^0 + 1 = 2
    M--;                                             // M is now supposed to be 2^(2n)
    double val = log2(M);                            // gives us 2n (an integer only if M is a power of 2)
    val /= 2;                                        // now we have n
    if (fabs(val - round(val)) <= EPSILON) return 1; // check whether n is (close to) an integer
    else return 0;
}
For an assignment we are required to write a division algorithm in order to complete a certain question using just addition and recursion. I found that, without using tail recursion, the naive repeated subtraction implementation can easily result in a stack overflow. A quick analysis of this method (and correct me if I'm wrong) shows that if you divide A by B, with n and m binary digits respectively, the running time should be exponential in n-m. I actually get
O( (n-m)*2^(n-m) )
since you need to subtract an m-binary-digit number from an n-binary-digit number 2^(n-m) times in order to drop the n-digit number to an (n-1)-digit number, and you need to do this n-m times to get a number with at most m digits; so the runtime should be as stated. Again, I may very well be wrong, so please correct me if I am. This assumes O(1) addition, since I'm working with fixed-size integers. I suppose with fixed-size integers one could argue the algorithm is O(1).
Back to my main question. I developed a different method to perform integer division which works much better, even when using it recursively, based on the idea that for
P = 2^(k_i) + ... + 2^(k_0)
we have
A/B = (A - B*P)/B + P
The algorithm goes as follows to calculate A/B:
input:
A, B
i) Set Q = 0
ii) Find the largest K such that B * 2^K <= A < B * 2^(K + 1)
iii) Q -> Q + 2^K
iv) A -> A - B * 2^K
v) Repeat steps ii) through iv) until A < B
vi) Return Q (and A if you want the remainder)
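For a concrete trace of these steps, take A = 100 and B = 7: the largest K with 7 * 2^K <= 100 is K = 3 (7 * 8 = 56), so Q = 8 and A becomes 44; next K = 2 (7 * 4 = 28), so Q = 12 and A = 16; next K = 1 (7 * 2 = 14), so Q = 14 and A = 2 < 7, giving quotient 14 with remainder 2.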
With the restriction of using only addition, I simply add B to itself on each recursive call; however, here is my code without recursion, using shifts instead of addition.
int div( unsigned int m, unsigned int n )
{
    // q is a temporary n, sum is the quotient
    unsigned int q, sum = 0;
    int i;
    while( m >= n )  // >= so that exact multiples (e.g. 9/3) are handled too
    {
        i = 0;
        q = n;
        // double q until it's larger than m and record the exponent
        while( q <= m )
        {
            q <<= 1;
            ++i;
        }
        i--;
        q >>= 1;        // q is one factor of 2 too large
        sum += (1<<i);  // add one bit of the quotient
        m -= q;         // new numerator
    }
    return sum;
}
I feel that sum |= (1<<i) may be more appropriate in order to emphasize that I'm dealing with a binary representation, but it didn't seem to give any performance boost and may make the code harder to understand. So, if M and N are the number of bits in m and n respectively, an analysis suggests the inner loop is performed M - N times, and each time the outer loop completes, m loses one bit; the outer loop must also run M - N times to reach the exit condition m < n, so I get that it's O( (M - N)^2 ).
So after all of that, I am asking if I am correct about the runtime of the algorithm and whether it can be improved upon?
Your algorithm is pretty good and your analysis of the running time is correct, but you don't need to do the inner loop every time:
unsigned div(unsigned num, unsigned den)
{
    //TODO check for divide by zero
    unsigned place=1;
    unsigned ret=0;
    while((num>>1) >= den) //overflow-safe check
    {
        place<<=1;
        den<<=1;
    }
    for( ;place>0; place>>=1,den>>=1)
    {
        if (num>=den)
        {
            num-=den;
            ret+=place;
        }
    }
    return ret;
}
That makes it O(M-N)
I'm working on a cryptographic exercise, and I'm trying to calculate (2^n - 1) mod p where p is a prime number.
What would be the best approach to do this? I'm working with C, so 2^n - 1 becomes too large to hold when n is large.
I came across the identity (a*b) mod p = (a*(b mod p)) mod p, but I'm not sure it applies in this case, as 2^n - 1 may be prime (and I'm not sure how to factorise it otherwise).
Help much appreciated.
A couple tips to help you come up with a better way:
Don't use (a*b) mod p = (a*(b mod p)) mod p to compute 2^n - 1 mod p; use it to compute 2^n mod p and then subtract one afterward.
Fermat's little theorem can be useful here. That way, the exponent you actually have to deal with won't exceed p.
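As a worked instance of both tips (with made-up numbers): to find (2^100 - 1) mod 7, Fermat's little theorem gives 2^6 mod 7 = 1, so 2^100 mod 7 = 2^(100 mod 6) mod 7 = 2^4 mod 7 = 2, and therefore (2^100 - 1) mod 7 = 1.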
You mention in the comments that n and p are 9 or 10 digits, or something. If you restrict them to 32 bit (unsigned long) values, you can find 2^n mod p with a simple (binary) modular exponentiation:
unsigned long long u = 1, w = 2;
while (n != 0)
{
    if ((n & 0x1) != 0)
        u = (u * w) % p; /* (mul-rdx) */
    if ((n >>= 1) != 0)
        w = (w * w) % p; /* (sqr-rdx) */
}
r = (unsigned long) u;
And, since (2^n - 1) mod p = (r - 1) mod p:
r = (r == 0) ? (p - 1) : (r - 1);
If 2^n mod p = 0 (which doesn't actually occur if p > 2 is prime, but we might as well consider the general case), then (2^n - 1) mod p = -1 mod p.
Since the 'common residue' or 'remainder' (mod p) is in [0, p - 1], we add some multiple of p so that the result is in this range.
Otherwise, the result of 2^n mod p was in [1, p - 1], and subtracting 1 leaves it in this range already. It's probably better expressed as:
if (r == 0)
r = p - 1; /* -1 mod p */
else
r = r - 1;
To take the modulus you must somehow have 2^n - 1, or you move in a different direction of algorithms (interesting, but a separate direction). So I recommend you use a big-integer concept, as it will be easy: make a structure and represent a big value with smaller values, e.g.
struct bigint{
    int lowerbits;
    int upperbits;
};
Decomposing the expression is also a possible solution, e.g. writing 2^n - 1 as (2^(n-4) * 2^4) - 1 and reducing the pieces mod p separately; that approach is more algorithmic.
To compute 2^n - 1 mod p, you can use exponentiation by squaring after first removing any multiple of (p - 1) from n (since a^{p-1} = 1 mod p). In pseudo-code:
n = n % (p - 1)
result = 1
pow = 2
while n {
    if n % 2 {
        result = (result * pow) % p
    }
    pow = (pow * pow) % p
    n /= 2
}
result = (result + p - 1) % p
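A direct C translation of that pseudocode might look like this (a sketch, assuming 64-bit unsigned types, an odd prime p, and p < 2^32 so the intermediate products cannot overflow; the function name is just illustrative):
#include <stdint.h>

uint64_t pow2_minus1_mod_p(uint64_t n, uint64_t p)
{
    uint64_t result = 1;
    uint64_t pw = 2 % p;          /* current power of two, reduced mod p */
    n %= p - 1;                   /* Fermat: 2^(p-1) = 1 (mod p) for prime p */
    while (n) {
        if (n % 2)
            result = (result * pw) % p;
        pw = (pw * pw) % p;
        n /= 2;
    }
    return (result + p - 1) % p;  /* subtract 1 while staying in [0, p-1] */
}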
I came across the answer that I am posting here when solving one of the mathematical problems on HackerRank, and it has worked for all the test cases there.
If you restrict n and p to 64 bit (unsigned long) values, then here is the mathematical approach :
2^n - 1 can be written as 1*[ (2^n - 1)/(2 - 1) ]
If you look at this carefully, this is the sum of the GP 1 + 2 + 4 + .. + 2^(n-1)
And voila, we know that (a+b)%m = ( (a%m) + (b%m) )%m
If you are unsure whether the above relation is true for addition, you can google it or check this link: http://www.inf.ed.ac.uk/teaching/courses/dmmr/slides/13-14/Ch4.pdf
So, now we can apply the above mentioned relation to our GP, and you would have your answer!!
That is,
(2^n - 1)%p is equivalent to ( 1 + 2 + 4 + .. + 2^(n-1) )%p and now apply the given relation.
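A minimal sketch of that geometric-series idea (it performs n iterations, so it only suits modest n; assumes 64-bit unsigned values with p < 2^63 so the doubling cannot overflow, and the function name is just illustrative):
#include <stdint.h>

uint64_t gp_sum_mod_p(uint64_t n, uint64_t p)
{
    uint64_t sum = 0;             /* running value of (1 + 2 + 4 + ...) mod p */
    uint64_t term = 1 % p;        /* current power of two, reduced mod p */
    uint64_t i;
    for (i = 0; i < n; i++) {
        sum = (sum + term) % p;   /* (a + b) % m == ((a % m) + (b % m)) % m */
        term = (term * 2) % p;
    }
    return sum;                   /* equals (2^n - 1) mod p */
}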
First, focus on 2^n mod p, because you can always subtract one at the end.
Consider the powers of two. This is a sequence of numbers produced by repeatedly multiplying by two.
Consider the modulo operation. If the number is written in base p, you're just grabbing the last digit. Higher digits can be thrown away.
So at some point(s) in the sequence, you get a two-digit number (a 1 in the p's place), and your task is really just to get rid of the first digit (subtract p) when that happens.
Stopping here conceptually, the brute-force approach would be something like this:
#include <stdint.h>

uint64_t exp2modp( uint64_t n, uint64_t p ) {
    uint64_t ret = 1;
    n %= p - 1; // Fermat's little theorem: 2^(p-1) = 1 (mod p) for prime p > 2.
    while ( n -- ) {
        if ( ret >= p - ret ) {  // doubling would reach or exceed p
            ret *= 2;            // any unsigned wraparound cancels in the next line
            ret -= p;
        } else {
            ret *= 2;
        }
    }
    return ret;
}
Unfortunately, this still takes forever for large n and p, and I can't think of any better number theory offhand.
If you have a multiplication facility which can compute (p-1)^2 without overflow, then you can use an analogous algorithm based on repeated squaring, taking a modulo after each squaring, and then take the product of the required squared residues, again with a modulo after each multiplication.
Step 1: x = (1 shifted left n times) - 1.
Step 2: result = bitwise AND of x and p.
Optimized way to handle the value of n^n (1 ≤ n ≤ 10^9)
I used long long int but it's not good enough, as the value might be as large as 1000^1000.
I searched and found the GMP library http://gmplib.org/ and a BigInt class, but I don't want to use them. I am looking for some numerical method to handle this.
I need to print the first and last k (1 ≤ k ≤ 9) digits of n^n.
For the first k digits I am getting them as shown below (it's a bit of an ugly way of doing it):
num = pow(n, n);         // note: only works while n^n fits; pow() returns a double and loses precision above 2^53
while(num){              // store the decimal digits of num, least significant first
    arr[i++] = num % 10;
    num /= 10;
    digit++;
}
while(digit > 0){        // print the first (most significant) k digits
    j = digit;
    j--;
    if(count < k){
        printf("%lld", arr[j]);
        count++;
    }
    digit--;
}
and for the last k digits I am using num % 10^k, like below:
findk=pow(10,k);
lastDigits = num % findk;
The maximum value of k is 9, so I need only 18 digits at most.
I am thinking of getting those 18 digits without actually evaluating the complete n^n expression.
Any idea/suggestion??
// note: scope of use is limited (the products a * a and result * a must fit in long long)
#include <stdio.h>
long long powerMod(long long a, long long d, long long n){
    // a ^ d mod n
    long long result = 1;
    while(d > 0){
        if(d & 1)
            result = result * a % n;
        a = (a * a) % n;
        d >>= 1;
    }
    return result;
}

int main(void){
    long long result = powerMod(999, 999, 1000000000); //999^999 mod 10^9
    printf("%lld\n", result); //499998999
    return 0;
}
Finding the least significant digits (the last k digits) is easy because of the property of modular arithmetic which says (n*n) % m == ((n%m) * (n%m)) % m, so the code shown by BLUEPIXY, which follows the exponentiation-by-squaring method, will work well for finding the k LSDs.
Now, the most significant digits (the first k digits) of N^N can be found this way:
We know,
N^N = 10^(N log10 N)
So if you calculate N log10(N), you will get a number of the form xxxx.yyyy. We then need to use this number as a power of 10. The integer part xxxx only contributes a factor of 10^xxxx (it appends xxxx zeros), which does not matter for the leading digits. That means if you calculate 10^(0.yyyy), you get the significant digits you are looking for.
So the solution will be something like this:
double R = N * log10(N);
R = R - (long long) R;                     // keep only the fractional part
double V = pow(10, R);                     // V is in [1, 10): the leading digits as d.ddddd...
int powerK = 1;
for (int i = 1; i < k; i++) powerK *= 10;  // 10^(k-1)
V *= powerK;                               // now V has exactly k digits before the decimal point
printf("%d\n", (int) V);                   // print the first k digits
Why don't you want to use bigint libraries?
bignum arithmetic is very hard to do right and efficiently. You could still get a PhD by working on that subject.
First, bigint arithmetic has non-trivial algorithmics.
Then, bigint implementations usually need some machine instructions (like add with carry) which are not easily accessible in plain C.
For your specific problem (the first and last few digits of N^N), you had better also reason on paper (using arithmetic theorems) to lower the complexity. I am not an expert, but I guess it still remains intractable, perhaps with a complexity worse than O(N).
This is what I need to do:
int lg(int v)
{
    int r = 0;
    while (v >>= 1) // unroll for more speed...
    {
        r++;
    }
    return r;
}
I found the above solution at: http://graphics.stanford.edu/~seander/bithacks.html#IntegerLog
This works, but I need to do it without loops, control structures, or constants bigger than 0xFF (255), which has proven very hard for me. I've been trying to figure something out using conditionals in the form
( x ? y : z ) = (((~(!!x) + 1)) & y) | ((~(~(!!x) + 1)) & z)
but I can't get it to work. Thanks for your time.
Without any control structure, not even the ?: operator, you can simulate your own algo
int r = 0;
x >>= 1;
r += (x != 0);
x >>= 1;
r += (x != 0);
...
provided that, in C:
x is assumed to be non-negative (otherwise, with int x = -1; for instance, x >>= 1 applied n times always leaves x != 0);
a condition like x != 0 evaluates to 0 (false) or 1 (true).
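A minimal sketch of how the fully unrolled version might look for a 32-bit non-negative int, using a helper macro (my own naming) only to avoid pasting the statement pair 31 times by hand:
#include <stdio.h>

static int lg32(int v) /* v assumed non-negative */
{
    int r = 0;
#define LG_STEP v >>= 1; r += (v != 0);
    LG_STEP LG_STEP LG_STEP LG_STEP LG_STEP LG_STEP LG_STEP LG_STEP
    LG_STEP LG_STEP LG_STEP LG_STEP LG_STEP LG_STEP LG_STEP LG_STEP
    LG_STEP LG_STEP LG_STEP LG_STEP LG_STEP LG_STEP LG_STEP LG_STEP
    LG_STEP LG_STEP LG_STEP LG_STEP LG_STEP LG_STEP LG_STEP
#undef LG_STEP
    return r; /* floor(log2(v)) for v > 0, and 0 for v == 0 */
}

int main(void)
{
    printf("%d %d %d\n", lg32(1), lg32(16), lg32(1000)); /* prints 0 4 9 */
    return 0;
}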
That sounds like homework. Well, if you can't use control structures, a good alternative is to precalculate what you can: divide and conquer. Solve for a smaller part (one byte, one nibble, your choice), and apply it to the parts of your integer.
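A minimal sketch of that divide-and-conquer idea, done here with comparison results instead of a precalculated table so that no constant exceeds 0xFF (the function name and the exact step sizes are just one possible variant):
static int lg_dc(unsigned v) /* floor(log2(v)) for v > 0 */
{
    int r, s;
    s = ((v >> 16) != 0) << 4; v >>= s; r  = s; /* is the answer 16 or more? */
    s = ((v >>  8) != 0) << 3; v >>= s; r |= s; /* narrow down within 16 bits */
    s = ((v >>  4) != 0) << 2; v >>= s; r |= s; /* ... within 8 bits */
    s = ((v >>  2) != 0) << 1; v >>= s; r |= s; /* ... within 4 bits */
    r |= (int)(v >> 1);                         /* final bit of the answer */
    return r;
}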