Inequality with XOR - xor

The inverse of XOR is XOR itself, and we consider only non-negative integers.
Given three integers a, low and high, we want to find all integers b such that:
low <= a^b <= high
Let low=2, high=6, a=1, b is variable.
2 <= 1^b <= 6
Since XOR is its own inverse, XOR each side of the inequality with 1:
1^2 <= 1^1^b <= 1^6
which simplifies to
3 <= b <= 7
This is incorrect.
According to this, the minimum value of b for which 2 <= 1^b <= 6 would be 3, but the actual minimum is 2 (1^2 = 3).
Also, b = 6 lies inside 3..7, but 1^6 = 7, which does not fall in the range 2 to 6.
What am I doing wrong?

You are wrong in assuming that XOR is monotonic, i.e. that these two inequalities are equivalent:
2 <= 1^b <= 6 ............... 1^2 <= 1^1^b <= 1^6
This property is not true at all; by XORing different numbers you might change their order. XOR with 1 simply toggles the lowest bit, so 2 will become 3, but 3 will become 2.
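A quick brute-force check makes this concrete (a small Python sketch):
# Which b actually satisfy 2 <= 1 ^ b <= 6?
low, high, a = 2, 6, 1
valid = [b for b in range(16) if low <= (a ^ b) <= high]
print(valid)                    # [2, 3, 4, 5, 7] -- not the contiguous range 3..7
print([a ^ b for b in valid])   # [3, 2, 5, 4, 6] -- XOR shuffles the order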

Related

Special Pairs in N natural number sequence

You are given a natural number N which represents the sequence [1,2,...,N]. We have to determine the number of pairs (x,y) from this sequence that satisfy the given conditions:
1 <= x <= y <= N
sum of first x-1 numbers (i.e sum of [1,2,3..x-1]) = sum of numbers from x+1 to y (i.e sum of [x+1...y])
Example:-
If N = 3 there is only one pair, (x=1, y=1), for which (sum of the first x-1 numbers) = 0 = (sum from x+1 to y).
Any other pair, like (1,2), (1,3) or (2,3), does not satisfy the conditions, so the answer is 1.
Another Example:-
If N=10, for the pair (6,8) the sum of the first x-1 numbers, i.e. [1,2,3,4,5], is 15, which equals the sum of the numbers from x+1 to y, i.e. [7,8]. Another such pair is (1,1). No other such pair exists, so the answer in this case is 2.
How can we approach and solve such problems to find the number of pairs in such a sequence?
Other things I have been able to deduce so far:-
Condition           Answer  Pairs
1 <= N <= 7         1       {(1,1)}
8 <= N <= 48        2       {(1,1),(6,8)}
49 <= N <= 287      3       {(1,1),(6,8),(35,49)}
288 <= N <= 1680    4       -
I tried but am unable to find any pattern or any such thing in these numbers.
Also, 1<=N<=10^16
--edit--
Courtesy of OEIS (link in comments): you can find the k'th value of y using this formula: ( (0.25) * (3.0+2.0*(2**0.5))**k ).floor
This gives us the k'th value in O(log k). First few results:
1
8
49
288
1681
9800
57121
332928
1940449
11309768
65918161
384199200
2239277041
13051463048
76069501249
443365544448
2584123765441
15061377048200
87784138523761
511643454094368
2982076586042447
17380816062160312
101302819786919424
590436102659356160
3441313796169217536
20057446674355949568
116903366249966469120
681362750825442836480
3971273138702690287616
23146276081390697054208
134906383349641499377664
786292024016458181771264
4582845760749107960348672
26710782540478185822224384
155681849482119992477483008
907380314352241764747706368
Notice that the ratio of successive numbers quickly approaches 5.828427124746. Given a value of n, take the log of n base 5.828427124746. The answer will be an integer close to this log.
E.g., say n = 1,000,000,000. Then log(n, 5.8284271247461) = 11.8. The answer is probably 12, but we can check the neighbors to be sure.
11: 65,918,161
12: 384,199,200
13: 2,239,277,041
Confirmed.
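For concreteness, here is a rough Python sketch of that lookup (helper names are arbitrary); it uses the quoted formula for the k'th value plus the neighbor check described above:
from math import log, floor

RATIO = 3 + 2 * 2 ** 0.5          # ~5.828427124746, the limiting ratio noted above

def kth_y(k):
    # k'th value of y via the quoted formula; floating point, so only trust it
    # together with the neighbor check below
    return floor(0.25 * RATIO ** k)

def count_pairs(n):
    # number of pairs for a given N = number of k >= 1 with kth_y(k) <= N
    k = max(1, round(log(n, RATIO)))
    while kth_y(k) <= n:
        k += 1
    while k > 1 and kth_y(k - 1) > n:
        k -= 1
    return k - 1

print(count_pairs(10))               # 2  -> {(1,1), (6,8)}
print(count_pairs(1_000_000_000))    # 12, matching the worked example above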
-- end edit --
Here's some Ruby code to do this. The idea is to have two pointers and increment x or y as appropriate. I'm using s(n) to calculate the sums, though this could be done without multiplication by just keeping a running total.
def s(n)
  # triangular number: 1 + 2 + ... + n
  return n*(n+1)/2
end

def f(n)
  count = 0
  x = 1
  y = 1
  while y <= n do
    if s(x-1) == s(y) - s(x)
      count += 1
      puts "(#{x}, #{y})"
    end
    if s(x-1) <= s(y) - s(x)
      x += 1
    else
      y += 1
    end
  end
  count
end
Here are the first few pairs:
(1, 1)
(6, 8)
(35, 49)
(204, 288)
(1189, 1681)
(6930, 9800)
(40391, 57121)
(235416, 332928)
(1372105, 1940449)
(7997214, 11309768)
(46611179, 65918161)

Fastest way to compute sum of first set bit over consecutive integers?

Edit: I wish SO let me accept 2 answers because neither is complete without the other. I suggest reading both!
I am trying to come up with a fast implementation of a function that, given an unsigned 32-bit integer x, returns the sum of 2^trailing_zeros(i) for i=1..x-1, where trailing_zeros is the count-trailing-zeros operation, defined as the number of 0 bits below the least significant 1 bit. This seems like the kind of problem that should lend itself to a clever bit-manipulation implementation that takes the same number of instructions regardless of the input, but I haven't been able to derive it.
Mathematically, 2^trailing_zeros(i) is equivalent to the largest factor of 2 that exactly divides i. So we are summing those largest factors for 1..x-1.
i | 1 2 3 4 5 6 7 8 9 10
-----------------------------------------------------------------------
2^trailing_zeroes(i) | 1 2 1 4 1 2 1 8 1 2
-----------------------------------------------------------------------
Sum (desired value) | 0 1 3 4 8 9 11 12 20 21
It is a little easier to see the structure of 2^trailing_zeroes(i) if we 'plot' the values -- horizontal position increasing from left to right corresponding to i and vertical position increasing from top to bottom corresponding to trailing_zeroes(i).
1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1
2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2
4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4
8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8
16 16 16 16 16 16 16 16
32 32 32 32
64 64
Here it is easier to see the pattern: the 2's are always 4 apart, the 8's are always 16 apart, etc. However, each pattern starts at a different point -- the 8's don't begin until i=8, the 16's don't begin until i=16, etc. If you don't take into account that the patterns don't start right away, you can come up with formulas that don't work -- for example, to count how many 8's go into the total you might just compute floor(x/16), but x=25 is already far enough to the right to include both of the first two 8's (at i=8 and i=24).
The best solution I have come up with so far is:
Set n = floor(log2(x)). This can be computed quickly using the count leading zeros operation. This tells us the highest power of two that is going to be involved in the sum.
Set sum = 0
for i = 0..n
sum += floor((x - 2^i) / 2^(i+1))*2^i + 2^i
The way this works is that for each power it calculates the horizontal distance on the plot between x and the first appearance of that power, e.g. the distance between x and the first 8 is (x-8), and then divides by the distance between repeating instances of that power, e.g. floor((x-8)/16), which gives us how many additional times that power appeared; multiplying gives the sum for that power, e.g. floor((x-8)/16)*8. Then we add one instance of the given power because that calculation excludes the very first time that power appears.
In practice this implementation should be pretty fast because the division/floor can be done by right bit shift and powers of two can be done with 1 bit-shifted to the left. However it seems like it should still be possible to do better. This implementation will loop more for larger inputs, up to 32 times (it's O(log2(n)), ideally we want O(1) without a gigantic lookup table using up all the CPU cache). I've been eyeing the BMI/BMI2 intrinsics but I don't see an obvious way to apply them.
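Written with only integer shifts, that loop looks like this (a quick sketch; the helper name is arbitrary):
def cumulative_first_set_bit(x):
    # Sum of 2^trailing_zeros(j) for j = 1..x, using only shifts.
    # (Call it with x-1 to get the 1..x-1 sum the question asks for.)
    if x <= 0:
        return 0
    n = x.bit_length() - 1                 # floor(log2(x))
    total = 0
    for i in range(n + 1):
        total += (((x - (1 << i)) >> (i + 1)) << i) + (1 << i)
    return total

# matches the table above: the desired value for x = 10 is 21 = sum over 1..9
assert cumulative_first_set_bit(9) == 21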
Although my goal is to implement this in a compiled language like C++ or Rust with real bit shifting and intrinsics, I've been prototyping in Python. Included below is my script that includes the implementation I described, z(x), and the code for generating the plot, tower(x).
#!/usr/bin/env python
# -*- coding: utf-8 -*-
from math import pow, floor, log, ceil

def leading_zeros(x):
    # note: despite the name, this returns the number of trailing zeros of x
    return len(bin(x).split('b')[-1].split('1')[-1])

def f(x):
    # brute force: sum of 2^trailing_zeros(i) for i = 1..x-1
    s = 0
    for c, i in enumerate(range(1,x)):
        a = pow(2, len(bin(i).split('b')[-1].split('1')[-1]))
        s += a
    return s

def g(x): return sum([pow(2,i)*floor((x+pow(2,i)-1)/pow(2,i+1)) for i in range(0,32)])

def h(x):
    s = 0
    extra = 0
    extra_s = 0
    for i in range(0,32):
        num = (x+pow(2,i)-1)
        den = pow(2,i+1)
        fraction = num/den
        floored = floor(num/den)
        power = pow(2,i)
        product = power*floored
        if product == 0:
            break
        s += product
        extra += (fraction - floored)
        extra_s += power*fraction
        #print(f"i={i} s={s} num={num} den={den} fraction={fraction} floored={floored} power={power} product={product} extra={extra} extra_s={extra_s}")
    return s

def z(x):
    # the implementation described above; note it sums 1..x, hence the z(i-1) call below
    upper_bound = floor(log(x,2)) if x > 0 else 0
    s = 0
    for i in range(upper_bound+1):
        num = (x - pow(2,i))
        den = pow(2,i+1)
        fraction = num/den
        floored = floor(fraction)
        added = pow(2,i)
        s += floored * added
        s += added
        print(f"i={i} s={s} upper_bound={upper_bound} num={num} den={den} floored={floored} added={added}")
    return s
    # return sum([floor((x - pow(2,i))/pow(2,i+1) + pow(2,i)) for i in range(floor(log(x, 2)))])

def tower(x):
    table = [[" " for i in range(x)] for j in range(ceil(log(x,2)))]
    for i in range(1,x):
        p = leading_zeros(i)
        table[p][i] = 2**p
    for row in table:
        for col in row:
            print(col,end='')
        print()

# h(9000)
for i in range(1,16):
    tower(i)
    print((i, f(i), g(i), h(i), z(i-1)))
Based on the method of Eric Postpischil, here is a way to do it without a loop.
Note that every bit is being multiplied by its position, and the results are summed (sort of -- there is also a factor of 0.5 in it, let's put that aside for now). Let's call the values that are being added up "the partial products", just to call them something; it's not really accurate, but I can't come up with anything better. If we transpose that a little bit, it's built up like this: the lowest bit of every partial product is the lowest bit of the position of that bit, multiplied by the bit itself. Single-bit products are bitwise AND, and the lowest bits of the positions are 0,1,0,1 etc., so this works out to x & 0xAAAAAAAA. The second bit of every partial product is x & 0xCCCCCCCC (and has a "weight" of 2, so it must be multiplied by 2), etc.
Then the whole thing needs to be shifted right by 1, to account for the factor of 0.5
So in total:
unsigned CountCumulativeTrailingZeros(unsigned x)
{
    --x;
    unsigned sum = x;
    sum += (x >> 1) & 0x55555555;
    sum += x & 0xCCCCCCCC;
    sum += (x & 0xF0F0F0F0) << 1;
    sum += (x & 0xFF00FF00) << 2;
    sum += (x & 0xFFFF0000) << 3;
    return sum;
}
For an additional explanation, here is a more visual example. Let's temporarily drop the factor of 0.5 again, it doesn't fundamentally change the algorithm but adds some complication.
First I write above every bit of v (some example value), the position of that bit in binary (p0 is the least significant bit of the position, p1 the second bit etc). Read the ps vertically, every column is a number:
p0: 10101010101010101010101010101010
p1: 11001100110011001100110011001100
p2: 11110000111100001111000011110000
p3: 11111111000000001111111100000000
p4: 11111111111111110000000000000000
v : 00000000100001000000001000000000
So for example bit 9 is set, and it has (reading from bottom to top) 01001 above it (9 in binary).
What we want to do (why this works has been explained by Eric's answer), is take the indexes of the bits that are set, shift them to their corresponding positions, and add them. In this case, they are already at their own positions (by construction, the numbers were written at their own positions), so there is no shift, but they still need to be filtered so only the numbers that correspond to set bits survive. This is what I meant by the "single bit products": take a bit of v and multiply it by the corresponding bits of p0, p1, etc.
You can look at that as multiplying the bit value by its index as well so 2^bit * bit as mentioned in the comments. That is not how it is done here, but that is effectively what is done.
Back to the example, applying bitwise-AND results in these partial products:
pp0: 00000000100000000000001000000000
pp1: 00000000100001000000000000000000
pp2: 00000000100000000000000000000000
pp3: 00000000000000000000001000000000
pp4: 00000000100001000000000000000000
v : 00000000100001000000001000000000
The only values that are left are 01001, 10010, 10111, and they are at their corresponding positions (so, already shifted to where they need to go).
Those values must be added, while keeping them at their positions. They don't need to be extracted from the strange form which they are in; addition is freely reorderable (associative and commutative), so it's OK to add all the least significant bits of the partial products to the sum first, then all the second bits, and so on. But they have to be added with the right "weight": after all, a set bit in pp0 corresponds to a 1 at that position, but a set bit in pp1 really corresponds to a 2 at that position (since it's the second bit of the number that it is part of). So pp0 is used directly, but pp1 is shifted left by 1, pp2 is shifted left by 2, etc.
The factor of 0.5 must still be accounted for, which I did mostly by shifting the bits of the partial products over by one less than what their weight would imply. pp0 was shifted left by zero, so it must be shifted right by 1 now. This could be done with less complication by just putting return sum >> 1; at the end, but that would reduce the range of values that the function can handle before running into integer wrapping modulo 2^32 (also it would cost an extra operation, and doing it the weird way does not).
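To sanity-check the bit-twiddling, here is a rough Python transcription of the routine above tested against the brute-force definition (a quick sketch, not production code):
def count_cumulative_trailing_zeros(x):
    # direct transcription of the C routine above, with 32-bit wrap-around
    x = (x - 1) & 0xFFFFFFFF
    s = x
    s += (x >> 1) & 0x55555555
    s += x & 0xCCCCCCCC
    s += (x & 0xF0F0F0F0) << 1
    s += (x & 0xFF00FF00) << 2
    s += (x & 0xFFFF0000) << 3
    return s & 0xFFFFFFFF

for x in range(1, 2000):
    expected = sum(i & -i for i in range(1, x))   # i & -i == 2^trailing_zeros(i)
    assert count_cumulative_trailing_zeros(x) == expected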
Observe that if we count from 1 to x instead of to x−1, we have a pattern:
 x    sum    sum/x
 1      1    1
 2      3    1.5
 4      8    2
 8     20    2.5
16     48    3
So we can easily calculate the sum for any power of two p as p • (1 + ½b), where b is the power (equivalently, the number of the bit that is set, or the log2 of the power). We can see this by induction: if the sum from 1 to 2^b is 2^b • (1 + ½b) (which it is for b = 0), then the sum from 1 to 2^(b+1) reprises the individual term contributions twice, except that the last term adds 2^(b+1) instead of 2^b, so the sum is 2 • 2^b • (1 + ½b) − 2^b + 2^(b+1) = 2^(b+1) • (1 + ½b) + ½ • 2^(b+1) = 2^(b+1) • (1 + ½(b+1)).
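A quick numeric check of this closed form (with p • (1 + ½b) written in integer arithmetic as p*(2+b)/2):
# sum of 2^trailing_zeros(i) for i = 1..2^b should equal 2^b * (1 + b/2)
for b in range(11):
    p = 1 << b
    assert sum(i & -i for i in range(1, p + 1)) == p * (2 + b) // 2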
Further, between any two powers of two, the lower bits reprise the previous partial sums. Thus, for any x, we can compute the cumulative number of trailing zeros by summing the sums for the set bits in it. Recalling that this provides the sum for the numbers from 1 to x, we adjust to get the desired sum from 1 to x−1 by subtracting one from x before the computation:
unsigned CountCumulative(unsigned x)
{
    --x;
    unsigned sum = 0;
    for (unsigned bit = 0; bit < sizeof x * CHAR_BIT; ++bit)
        sum += (x & 1u << bit) * (1 + bit * .5);
    return sum;
}
We can terminate the loop when x is exhausted:
unsigned CountCumulative(unsigned x)
{
    --x;
    unsigned sum = 0;
    for (unsigned bit = 0; x; ++bit, x >>= 1)
        sum += ((x & 1) << bit) * (1 + bit * .5);
    return sum;
}
As harold points out, we can factor out the 1, as summing the value of each bit of x equals x:
unsigned CountCumulative(unsigned x)
{
    --x;
    unsigned sum = x;
    for (unsigned bit = 0; x; ++bit, x >>= 1)
        sum += ((x & 1) << bit) * bit * .5;
    return sum;
}
Then eliminate the floating-point:
unsigned CountCumulative(unsigned x)
{
    unsigned sum = --x;
    for (unsigned bit = 0; x; ++bit, x >>= 1)
        sum += ((x & 1) << bit) / 2 * bit;
    return sum;
}
Note that when bit is zero, ((x & 1) << bit) / 2 will lose the fraction, but this is irrelevant as * bit makes the contribution zero anyway. For all other values of bit, (x & 1) << bit is even, so the division does not lose anything.
This will overflow unsigned at some point, so one might want to use a wider type for the calculations.
More Code Golf
Another way to add half the values of the bits of x repeatedly depending on their bit position is to shift x (to halve its bit values) and then add that repeatedly while removing successive bits from low to high:
unsigned CountCumulative(unsigned x)
{
    unsigned sum = --x;
    for (unsigned bit = 0; x >>= 1; ++bit)
        sum += x << bit;
    return sum;
}

Check subset sum for special array equation

I was trying to solve the following problem.
We are given N and A[0]
N <= 5000
A[0] <= 10^6 and even
if i is odd then
A[i] >= 3 * A[i-1]
if i is even then
A[i] = 2 * A[i-1] + 3 * A[i-2]
An element at an odd index must be odd, and an element at an even index must be even.
We need to minimize the sum of the array.
We are then given Q queries:
Q <= 1000
X <= 10^18
For each query we need to determine whether it is possible to obtain a subset sum equal to X from our array.
What I have tried,
Creating a minimum sum array is easy. Just follow the equations and constraints.
The approach that I know for subset sum is dynamic programming, which has time complexity O(sum * array size), but since the sum can be as large as 10^18 that approach won't work.
Is there some relation between the elements that I am missing?
We can make it with a bit of math (sorry for the LaTeX, I am not sure it is possible on Stack Overflow).
Let X_n be the sequence (the same as defined by your A). I assume X_0 is positive.
The sequence is then strictly increasing, and minimization occurs when X_{2n+1} = 3*X_{2n}.
We can compute the general term of X_{2n} and X_{2n+1}. Write

v_n = ( X_n     )
      ( X_{n+1} )

The relation between v_0 and v_1 is v_1 = M_a v_0 with

M_a = ( 0 1 )
      ( 3 2 )

the relation between v_1 and v_2 is v_2 = M_b v_1 with

M_b = ( 0 1 )
      ( 0 3 )

hence the relation between v_2 and v_0 is given by

M = M_b M_a = ( 3 2 )
              ( 9 6 )

and we deduce

v_{2n} = ( X_{2n}   ) = M^n v_0
         ( X_{2n+1} )
Follow the classical diagonalization (M has eigenvalues 0 and 9, so M^n = 9^{n-1} M for n >= 1)... and we (unless mistaken) get
X_{2n} = (9^n/3) X_0 + 2*9^{n-1} X_1
X_{2n+1} = 9^n X_0 + 6*9^{n-1} X_1
Recall that X_1 = 3*X_0, thus
X_{2n} = 9^n X_0
X_{2n+1} = 3*9^n X_0
Now if we represent the sum we want to check in base 9, the digit in the 9^n place corresponds to the pair (X_{2n}, X_{2n+1}):

place:    ...  9^{n+1}   9^n     ...  9^0
element:  ...  X_{2n+2}  X_{2n}  ...  X_0
In the X_{2n} place we can put a 1 or a 0 (a 1 means we take the 2n-th element of A).
We may also put a 3 in the X_{2n} place, which means we selected the (2n+1)-th element of the array.
So we just have to write the queried number in base 9 (after dividing out X_0) and check whether all its digits are either 0, 1 or 3 (and also that its leading digit does not fall out of bounds of our array...).
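A rough Python sketch of that digit test (helper names are arbitrary; it uses the idealized minimal array X_{2n} = 9^n X_0, X_{2n+1} = 3*9^n X_0 derived above, and it also accepts a digit 4 for the case where both elements of a pair are taken, which is not discussed above):
def can_make(x, a0, n_elems):
    # x: queried sum, a0: A[0], n_elems: length of the array
    if x % a0:
        return False                      # every element is a multiple of A[0]
    q, pair = x // a0, 0
    while q:
        q, digit = divmod(q, 9)
        # 1 -> take A[2*pair], 3 -> take A[2*pair+1], 4 -> take both, 0 -> neither
        if digit not in (0, 1, 3, 4):
            return False
        needed = 2 * pair if digit == 1 else 2 * pair + 1
        if digit and needed >= n_elems:
            return False                  # would need an element beyond the array
        pair += 1
    return True

print(can_make(20, 2, 4))   # True: 20 = 2 + 18 with the idealized A = [2, 6, 18, 54]
print(can_make(10, 2, 4))   # False: base-9 digit 5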

Rank and unrank fibonacci bitsequence with k ones

For positive integers n and k, let a "k-fibonacci-bitsequence of n" be a bitsequence with k ones, where a 1 at index i represents not Math.pow(2,i) but Fibonacci(i); the represented Fibonacci numbers add up to n. Let the "rank" of a given k-fibonacci-bitsequence of n be its position in the sorted list of all of these fibonacci-bitsequences in lexicographic order, starting at 0.
For example, for the number 39 we have the following valid k-fibonacci-bitsequences with k <= 4. The Fibonacci numbers behind the bit positions in this example are:
34 21 13 8 5 3 2 1
10001000 k = 2 rank = 0
01101000 k = 3 rank = 0
10000110 k = 3 rank = 1
01100110 k = 4 rank = 0
So, I want to be able to do two things:
Given n, k, and a k-fibonacci-bitsequence of n, I want to find the rank of that k-fibonacci-bitsequence of n.
Given n, k, and a rank, I want to find the k-fibonacci-bitsequence of n with that rank.
Can I do this without having to compute all the k-fibonacci-bitsequences of n that come before the one of interest?
Preliminaries
For brevity lets say »k-fbs of n« instead of »k-fibonacci-bitsequences of n«.
Question
Can I do this without having to compute all the k-fbs of n that come before the one of interest?
I'm not sure. So far I still have to compute some of the fbs. However, you might have thought we had to start from 00…0 and count up – this is not the case. We can do it the other way around, starting from the highest fbs and working our way down very efficiently.
This is not a complete answer. However, there are some observations that could help you:
Zeckendorf
In the following pseudo-code we use the data-type fbs which is basically an array of bools. We can read and write individual bits using mySeq[i] where bit i represents the Fibonacci number fib(i). Just as in your question, the bits myFbs[0] and myFbs[1] do not exist. All bits are initialized to 0 by default. An fbs can be used without [] to read the represented number (n). The helper function #(fbs) returns the number of set bits (k) inside an fbs. Example for n = 7:
fbs meaning representation helper functions
1 0 1 0
| | | `— 0·fib(2) = 0·1 ——— myFbs[2] = 0 #(myFbs) == 2
| | `——— 1·fib(3) = 1·2 ——— myFbs[3] = 1 myFbs == 7
| `————— 0·fib(4) = 0·3 ——— myFbs[4] = 0
`——————— 1·fib(5) = 1·5 ——— myFbs[5] = 1
For any given n we can easily compute the lexicographically maximal (across all k) fbs of n, as this fbs happens to be the Zeckendorf representation of n.
function zeckendorf(int n) returns (fbs z):
1 int i := any (ideally the smallest) number such that fib(i) > n
2 while n-z > 0
3 | if fib(i) <= n-z
4 | | z[i] := 1
5 | i := i - 1
zeckendorf(n) is unique and the only fbs of n with k=#(zeckendorf(n)). Therefore zeckendorf(n) has rank=0. Also, there exists no k'-fbs of n with k'<#(zeckendorf(n)).
Transformation
Any k-fbs of n can be transformed into a (k+1)-fbs of n by replacing the bit sequence 100 by 011 anywhere inside the fbs. This works because fib(i)=fib(i-1)+fib(i-2).
If our input k-fbs of n has rank=0 and we replace the right-most 100 then our resulting (k+1)-fbs of n also has rank=0. If we replace the second-right-most 100 our resulting (k+1)-fbs has rank=1 and so on.
You should be able to answer both of your questions using repeated transformations starting at zeckendorf(n). For the first question it might even be sufficient to only look at the k-stable transformations 011…100→100…011 and 100…011→011…100 of the given fbs (think about what these transformations do to the rank).
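To make the Zeckendorf starting point and the 100 → 011 rewrite concrete, here is a rough Python sketch (helper names are arbitrary); applied to n = 39 it reproduces both k = 3 sequences from the question:
def fib_upto(n):
    # fib(0), fib(1), fib(2), ... up to the first value exceeding n
    fibs = [0, 1, 1]
    while fibs[-1] <= n:
        fibs.append(fibs[-1] + fibs[-2])
    return fibs

def zeckendorf(n):
    # greedy Zeckendorf: the set of indices i (i >= 2) whose fib(i) sum to n
    fibs, bits, rest = fib_upto(n), set(), n
    for i in range(len(fibs) - 1, 1, -1):
        if fibs[i] <= rest:
            bits.add(i)
            rest -= fibs[i]
    return bits

def expansions(bits):
    # every (k+1)-fbs reachable by one 100 -> 011 rewrite, using fib(i) = fib(i-1) + fib(i-2)
    return [(bits - {i}) | {i - 1, i - 2}
            for i in bits
            if i - 1 not in bits and i - 2 not in bits and i - 2 >= 2]

z = zeckendorf(39)
print(sorted(z, reverse=True))           # [9, 5] -> 34 + 5, the k = 2 sequence
for e in expansions(z):
    print(sorted(e, reverse=True))       # [8, 7, 5] -> 21+13+5 and [9, 4, 3] -> 34+3+2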

How does modulus of a smaller dividend and larger divisor work?

7 % 3 = 1 (remainder 1)
how does
3 % 7 (remainder ?)
work?
The remainder of 3/7 is 3, since 7 goes into 3 zero times with 3 left over, so 3 % 7 = 3.
7 goes into 3? zero times with 3 left over.
quotient is zero. Remainder (modulus) is 3.
Conceptually, I think of it this way. By definition, your dividend must be equal to (quotient * divisor) + modulus
Or, solving for modulus: modulus = dividend - (quotient * divisor)
Whenever the dividend is less than the divisor, the quotient is always zero which results in the modulus simply being equal to the dividend.
To illustrate with OP's values:
modulus of 3 and 7 = 3 - (0 * 7) = 3
To illustrate with other values:
1 % 3:
1 - (0 * 3) = 1
2 % 3:
2 - (0 * 3) = 2
The same way. The quotient is 0 (3 / 7 with fractional part discarded). The remainder then satisfies:
(a / b) * b + (a % b) = a
(3 / 7) * 7 + (3 % 7) = 3
0 * 7 + (3 % 7) = 3
(3 % 7) = 3
This is defined in C99 §6.5.5, Multiplicative operators.
7 divided by 3 is 2 with a remainder of 1
3 divided by 7 is 0 with a remainder of 3
As long as they're both positive (and the dividend is smaller than the divisor), the remainder will be equal to the dividend. If one or both is negative, then you get reminded that % is really the remainder operator, not the modulus operator. A modulus will always be non-negative, but a remainder can be negative.
(7 * 0) + 3 = 3; therefore, the remainder is 3.
a % q = r means there is a x so that q * x + r = a.
So, 7 % 3 = 1 because 3 * 2 + 1 = 7,
and 3 % 7 = 3 because 7 * 0 + 3 = 3
It seems you forgot to mention the surprising case, where the dividend is smaller and negative:
-3 % 7
result: 4
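Which value you get depends on the language: Python's % follows the sign of the divisor, while a truncating remainder such as C's gives -3. A quick sketch:
import math
print(-3 % 7)            # 4    -- Python: result has the sign of the divisor
print(3 % 7)             # 3
print(math.fmod(-3, 7))  # -3.0 -- truncating remainder, like C's %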
For my brain to understand this kind of question, I always convert it to a real-world object. For example, take your 3 % 7: I represent the 3 as a 3-inch wide metal hole and the 7 as a 7-inch metal screw. Can you insert a 7-inch screw into a 3-inch hole? Of course not, so the answer is the 3-inch hole. It doesn't matter even if you have a 1000-inch or a million-inch screw, the result is still 3, because how many times can you insert a 1000-inch screw into a 3-inch hole? Zero times, right?
The simplest and most effective rule to remember would be:
Whenever the dividend is less than the divisor, the modulus is just that dividend.
Let's formulate this:
if x < y, then x % y = x (for non-negative x)

Resources