binomial coefficient for very high numbers in c - c

So the task I have to solve is to calculate the binomial coefficient for 100 >= n > k >= 1 and then say how many of the results for n and k are above and below a barrier of 123456789.
I have no problem with my formula for calculating the binomial coefficient, but for high numbers n, k -> 100 the data types of C get too small to calculate it.
Do you have any suggestions how I can bypass this problem of overflowing the data types?
I thought about dividing by the barrier straight away so the numbers don't get too big in the first place, so I'd only have to check whether the result is >= 1, but I couldn't make it work.

Say your task is to determine how many binomial coefficients C(n, k) for 1 ≤ k < n ≤ 8 exceed a limit of m = 18. You can do this by using the recurrence C(n, k) = C(n − 1, k) + C(n − 1, k − 1), which can be visualized in Pascal's triangle.
1
1 1
1 2 1
1 3 3 1
1 4 6 4 1
1 5 10 10 5 1
1 6 15 (20) 15 6 1
1 7 (21 35 35 21) 7 1
1 8 (28 56 70 56 28) 8 1
Start at the top and work your way down. Up to n = 5, everything is below the limit of 18. On the next line, the 20 exceeds the limit. From now on, more and more coefficients are beyond 18.
The triangle is symmetric and strictly increasing in the first half of each row. You only need to find the first element that exceeds the limit on each line in order to know how many items to count.
You don't have to store the whole triangle. It is enough to keep the last and current line. Alternatively, you can use the algorithm detailed in this article to work your way from left to right on each row. Since you just want to count the coefficients that exceed a limit and don't care about their values, the regular integer types should be sufficient.
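A minimal sketch in C of that two-row idea, collapsed into a single row that is updated right to left. One detail that is my own addition rather than part of the answer above: since we only need to know whether an entry exceeds the limit, each entry is clamped at LIMIT + 1, which keeps every value small enough for a plain long even at n = 100:

#include <stdio.h>

#define MAXN 100
#define LIMIT 123456789L

int main(void) {
    long row[MAXN + 1] = {1};   /* row n of Pascal's triangle, built in place */
    long over = 0, under = 0;

    for (int n = 1; n <= MAXN; n++) {
        /* update right to left so row[k-1] still holds the previous row */
        for (int k = n; k >= 1; k--) {
            row[k] += row[k - 1];
            if (row[k] > LIMIT)
                row[k] = LIMIT + 1;     /* clamp: only "exceeds" matters */
            if (k < n) {                /* count pairs with 1 <= k < n */
                if (row[k] > LIMIT) over++; else under++;
            }
        }
    }
    printf("over: %ld, under: %ld\n", over, under);
    return 0;
}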

First, you'll need a type that can hold the result. The largest number you need to handle is C(100, 50) = 100,891,344,545,564,193,334,812,497,256. This number requires 97 bits of precision, so your normal data types won't do the trick. A quad-precision IEEE float would work if your environment provides it. Otherwise, you'll need some form of high/arbitrary-precision library.
Then, to keep the numbers within this size, you'll want to cancel common factors in the numerator and the denominator. And you'll want to calculate the result as ( a / c ) * ( b / d ) * ... instead of ( a * b * ... ) / ( c * d * ... ).
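For example, the multiplicative formula C(n, k) = ((n-k+1)/1) * ((n-k+2)/2) * ... * (n/k), evaluated left to right, needs no explicit cancelling: after step i the running product equals C(n-k+i, i), which is an integer, so every division is exact. A sketch assuming the non-standard unsigned __int128 extension of GCC/Clang, whose 128 bits comfortably cover the 97 needed:

/* A sketch; unsigned __int128 is a GCC/Clang extension, not standard C. */
static unsigned __int128 binomial(int n, int k) {
    if (k > n - k)
        k = n - k;                     /* symmetry: C(n,k) = C(n,n-k) */
    unsigned __int128 result = 1;
    for (int i = 1; i <= k; i++) {
        /* after this step, result == C(n-k+i, i), so the division is exact */
        result = result * (n - k + i) / i;
    }
    return result;
}

/* e.g. count (n, k) as "over" when binomial(n, k) > 123456789 */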

Related

Rank and unrank fibonacci bitsequence with k ones

For positive integers n and k, let a "k-fibonacci-bitsequence of n" be a bitsequence with k ones where a 1 at index i describes not Math.pow(2,i) but Fibonacci(i). The selected Fibonacci numbers add up to n. Let the "rank" of a given k-fibonacci-bitsequence of n be its position in the sorted list of all of these fibonacci-bitsequences in lexicographic order, starting at 0.
For example, for the number 39 we have the following valid k-fibonacci-bitsequences with k <= 4. The Fibonacci numbers behind the bit positions in this example are the following:
34 21 13 8 5 3 2 1
10001000 k = 2 rank = 0
01101000 k = 3 rank = 0
10000110 k = 3 rank = 1
01100110 k = 4 rank = 0
So, I want to be able to do two things:
Given n, k, and a k-fibonacci-bitsequence of n, I want to find the rank of that k-fibonacci-bitsequence of n.
Given n, k, and a rank, I want to find the k-fibonacci-bitsequence of n with that rank.
Can I do this without having to compute all the k-fibonacci-bitsequences of n that come before the one of interest?
Preliminaries
For brevity let's say »k-fbs of n« instead of »k-fibonacci-bitsequences of n«.
Question
Can I do this without having to compute all the k-fbs of n that come before the one of interest?
I'm not sure. So far I still have to compute some of the fbs. However, you might have thought we had to start from 00…0 and count up – this is not the case. We can do it the other way around: start from the highest fbs and work our way down very efficiently.
This is not a complete answer. However, there are some observations that could help you:
Zeckendorf
In the following pseudo-code we use the data type fbs, which is basically an array of bools. We can read and write individual bits using myFbs[i], where bit i represents the Fibonacci number fib(i). Just as in your question, the bits myFbs[0] and myFbs[1] do not exist. All bits are initialized to 0 by default. An fbs can be used without [] to read the represented number (n). The helper function #(fbs) returns the number of set bits (k) inside an fbs. Example for n = 7:
fbs meaning representation helper functions
1 0 1 0
| | | `— 0·fib(2) = 0·1 ——— myFbs[2] = 0 #(myFbs) == 2
| | `——— 1·fib(3) = 1·2 ——— myFbs[3] = 1 myFbs == 7
| `————— 0·fib(4) = 0·3 ——— myFbs[4] = 0
`——————— 1·fib(5) = 1·5 ——— myFbs[5] = 1
For any given n we can easily compute the lexicographically maximal (across all k) fbs of n, as this fbs happens to be the Zeckendorf representation of n.
function zeckendorf(int n) returns (fbs z):
1 int i := any (ideally the smallest) number such that fib(i) > n
2 while n - z > 0
3 | if fib(i) <= n - z
4 | | z[i] := 1
5 | i := i - 1
zeckendorf(n) is unique and the only fbs of n with k=#(zeckendorf(n)). Therefore zeckendorf(n) has rank=0. Also, there exists no k'-fbs of n with k'<#(zeckendorf(n)).
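A runnable sketch of this in C, with the fbs as a plain bool array indexed by Fibonacci index (MAXBITS and the helper fib are stand-ins of mine for the pseudo-types above; the caller zero-initializes z, matching the pseudo-code):

#include <stdbool.h>

#define MAXBITS 64

static long fib(int i) {              /* fib(0)=0, fib(1)=1, fib(2)=1, ... */
    long a = 0, b = 1;
    while (i-- > 0) { long t = a + b; a = b; b = t; }
    return a;
}

static void zeckendorf(long n, bool z[MAXBITS]) {
    int i = 2;
    while (fib(i) <= n) i++;          /* smallest i such that fib(i) > n */
    long rest = n;                    /* rest plays the role of n - z */
    while (rest > 0) {
        if (fib(i) <= rest) {         /* greedy: largest fib that still fits */
            z[i] = true;
            rest -= fib(i);
        }
        i--;
    }
}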
Transformation
Any k-fbs of n can be transformed into a (k+1)-fbs of n by replacing the bit sequence 100 by 011 anywhere inside the fbs. This works because fib(i)=fib(i-1)+fib(i-2).
If our input k-fbs of n has rank = 0 and we replace the 100 at the highest bit positions (the left-most occurrence in the written bitsequence – this is what produced 01101000 from 10001000 above), then our resulting (k+1)-fbs of n also has rank = 0. If we replace the occurrence at the next-highest positions, our resulting (k+1)-fbs has rank = 1, and so on.
You should be able to answer both of your questions using repeated transformations starting at zeckendorf(n). For the first question it might even be sufficient to only look at the k-stable transformations 011…100→100…011 and 100…011→011…100 of the given fbs (think about what these transformations do to the rank).
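Continuing the C sketch from above, the splitting step might look like this; it performs the 100 → 011 replacement at the highest possible bit positions, which is the rank-preserving choice in the worked example:

/* Turn a k-fbs of n into a (k+1)-fbs of n by splitting the 100 at the
   highest bit positions. Returns false if no 100 pattern exists. */
static bool split_highest(bool z[MAXBITS]) {
    for (int i = MAXBITS - 1; i >= 4; i--) {  /* highest fib index first */
        if (z[i] && !z[i - 1] && !z[i - 2]) {
            z[i] = false;                     /* fib(i) = fib(i-1) + fib(i-2) */
            z[i - 1] = true;
            z[i - 2] = true;
            return true;
        }
    }
    return false;
}

Applied to zeckendorf(39) = 10001000 this yields 01101000, the rank-0 3-fbs of 39 from the question.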

Determine the adjacency of two Fibonacci numbers

I have many Fibonacci numbers. If I want to determine whether two Fibonacci numbers are adjacent or not, one basic approach is as follows:
Get the index of the first fibonacci number, say i1
Get the index of the second fibonacci number, say i2
Get the absolute value of i1-i2, that is |i1-i2|
If the value is 1, then return true.
else return false.
In the first step and the second step, it may need many comparisons to get the correct index by searching through an array.
In the third step, it needs one subtraction and one absolute-value operation.
I want to know whether there exists another approach to quickly determine the adjacency of two Fibonacci numbers.
I don't care whether this question could be solved mathematically or by any hacking techniques.
If anyone have some idea, please let me know. Thanks a lot!
No need to find the index of both numbers.
Given that the two numbers belong to the Fibonacci series: if their difference is greater than the smaller of the two, then they are not adjacent. Otherwise they are.
This is because the Fibonacci series follows this rule:
F(n) = F(n-1) + F(n-2), where F(n) > F(n-1) > F(n-2).
So F(n) - F(n-1) = F(n-2),
=> Diff(n, n-1) = F(n-2) < F(n-1) = min(F(n), F(n-1))
The difference between two adjacent Fibonacci numbers will always be less than the smaller of the two.
NOTE : This will only hold if numbers belong to Fibonacci series.
Simply calculate the difference between them. If it is smaller than the smaller of the 2 numbers, they are adjacent; if it is bigger, they are not.
Each triplet in the Fibonacci sequence a, b, c conforms to the rule
c = a + b
So for every pair of adjacent Fibonaccis (x, y), the difference between them (y-x) is equal to the value of the previous Fibonacci, which of course must be less than x.
If 2 Fibonaccis, say (x, z) are not adjacent, then their difference must be greater than the smaller of the two. At minimum, (if they are one Fibonacci apart) the difference would be equal to the Fibonacci between them, (which is of course greater than the smaller of the two numbers).
Since for (a, b, c, d)
c = a + b
and d = b + c,
we get d - b = (b + c) - b = c.
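A sketch of this rule in C (assuming, as noted, that both inputs really are Fibonacci numbers):

#include <stdbool.h>

/* Two Fibonacci numbers are adjacent exactly when their difference is
   smaller than the smaller of the two. The smallest pair (1, 2), i.e.
   F(2) and F(3), is a boundary case where the difference equals the
   smaller number, so it is handled explicitly. */
static bool adjacent_fibs(long x, long y) {
    long lo = x < y ? x : y;
    long hi = x < y ? y : x;
    if (lo == 1 && hi == 2) return true;
    return hi - lo < lo;
}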
By Binet's formula, the nth Fibonacci number is approximately phi**n/sqrt(5), where phi is the golden ratio. You can use base-phi logarithms to recover the index easily:
from math import log, sqrt
def fibs(n):
    nums = [1, 1]
    for i in range(n - 2):
        nums.append(sum(nums[-2:]))
    return nums

phi = (1 + sqrt(5))/2

def fibIndex(f):
    return round(log(sqrt(5)*f, phi))
To test this:
for f in fibs(20): print(fibIndex(f),f)
Output:
2 1
2 1
3 2
4 3
5 5
6 8
7 13
8 21
9 34
10 55
11 89
12 144
13 233
14 377
15 610
16 987
17 1597
18 2584
19 4181
20 6765
Of course,
def adjacentFibs(f, g):
    return abs(fibIndex(f) - fibIndex(g)) == 1
This fails with 1,1 -- but there is little point in adding special-case logic for such an edge case. Add it in if you want.
At some stage, floating-point round-off error will become an issue. For that, you would need to replace math.log by an integer log algorithm (e.g. one which involves binary search).
On Edit:
I concentrated on the question of how to recover the index (and I will keep that answer, since it is an interesting problem in its own right). But as @LeandroCaniglia points out in their excellent comment, this is overkill if all you want to do is check whether two Fibonacci numbers are adjacent: another consequence of Binet's formula is that sufficiently large adjacent Fibonacci numbers have a ratio which differs from phi by a negligible amount. You could do something like:
def adjFibs(f, g):
    f, g = min(f, g), max(f, g)
    if g <= 34:
        return adjacentFibs(f, g)
    else:
        return abs(g/f - phi) < 0.01
This assumes that they are indeed Fibonacci numbers. The index-based approach can be used to verify that they are (calculate the index and then use the full-fledged Binet's formula with that index).

Why does array size have to be 3^k+1 for cycle leader iteration algorithm to work?

The cycle leader iteration algorithm is an algorithm for shuffling an array by moving all even-numbered entries to the front and all odd-numbered entries to the back while preserving their relative order. For example, given this input:
a 1 b 2 c 3 d 4 e 5
the output would be
a b c d e 1 2 3 4 5
This algorithm runs in O(n) time and uses only O(1) space.
One unusual detail of the algorithm is that it works by splitting the array up into blocks of size 3^k + 1. Apparently this is critical for the algorithm to work correctly, but I have no idea why this is.
Why is the choice of 3^k + 1 necessary in the algorithm?
Thanks!
This is going to be a long answer. The answer to your question isn't simple and requires some number theory to fully answer. I've spent about half a day working through the algorithm and I now have a good answer, but I'm not sure I can describe it succinctly.
The short version:
Breaking the input into blocks of size 3^k + 1 essentially breaks the input apart into blocks of size 3^k - 1 surrounded by two elements that do not end up moving.
The remaining 3^k - 1 elements in the block move according to an interesting pattern: each element moves to the position given by dividing the index by two modulo 3^k.
This particular motion pattern is connected to a concept from number theory and group theory called primitive roots.
Because the number two is a primitive root modulo 3^k, beginning with the numbers 1, 3, 9, 27, etc. and running the pattern is guaranteed to cycle through all the elements of the array exactly once and put them into the proper place.
This pattern is highly dependent on the fact that 2 is a primitive root of 3^k for any k ≥ 1. Changing the size of the array to another value will almost certainly break this, because the needed property then no longer holds.
The Long Version
To present this answer, I'm going to proceed in steps. First, I'm going to introduce cycle decompositions as a motivation for an algorithm that will efficiently shuffle the elements around in the right order, subject to an important caveat. Next, I'm going to point out an interesting property of how the elements happen to move around in the array when you apply this permutation. Then, I'll connect this to a number-theoretic concept called primitive roots to explain the challenges involved in implementing this algorithm correctly. Finally, I'll explain why this leads to the choice of 3^k + 1 as the block size.
Cycle Decompositions
Let's suppose that you have an array A and a permutation of the elements of that array. Following the standard mathematical notation, we'll denote the permutation of that array as σ(A). We can line the initial array A up on top of the permuted array σ(A) to get a sense for where every element ended up. For example, here's an array and one of its permutations:
A 0 1 2 3 4
σ(A) 2 3 0 4 1
One way that we can describe a permutation is just to list off the new elements inside that permutation. However, from an algorithmic perspective, it's often more helpful to represent the permutation as a cycle decomposition, a way of writing out a permutation by showing how to form that permutation by beginning with the initial array and then cyclically permuting some of its elements.
Take a look at the above permutation. First, look at where the 0 ended up. In σ(A), the element 0 ended up taking the place of where the element 2 used to be. In turn, the element 2 ended up taking the place of where the element 0 used to be. We denote this by writing (0 2), indicating that 0 should go where 2 used to be, and 2 should go where 0 used to be.
Now, look at the element 1. The element 1 ended up where 4 used to be. The number 4 then ended up where 3 used to be, and the element 3 ended up where 1 used to be. We denote this by writing (1 4 3), that 1 should go where 4 used to be, that 4 should go where 3 used to be, and that 3 should go where 1 used to be.
Combining these together, we can represent the overall permutation of the above elements as (0 2)(1 4 3) - we should swap 0 and 2, then cyclically permute 1, 4, and 3. If we do that starting with the initial array, we'll end up at the permuted array that we want.
Cycle decompositions are extremely useful for permuting arrays in place because it's possible to permute any individual cycle in O(C) time and O(1) auxiliary space, where C is the number of elements in the cycle. For example, suppose that you have a cycle (1 6 8 4 2). You can permute the elements in the cycle with code like this:
int[] cycle = {1, 6, 8, 4, 2};
int temp = array[cycle[0]];
for (int i = 1; i < cycle.length; i++) {
    // exchange temp with the element at the next position of the cycle
    int next = array[cycle[i]];
    array[cycle[i]] = temp;
    temp = next;
}
array[cycle[0]] = temp;
This works by just swapping everything around until everything comes to rest. Aside from the space usage required to store the cycle itself, it only needs O(1) auxiliary storage space.
In general, if you want to design an algorithm that applies a particular permutation to an array of elements, you can usually do so by using cycle decompositions. The general algorithm is the following:
for (each cycle in the cycle decomposition) {
    apply the above algorithm to cycle those elements;
}
The overall time and space complexity for this algorithm depends on the following:
How quickly can we determine the cycle decomposition we want?
How efficiently can we store that cycle decomposition in memory?
To get an O(n)-time, O(1)-space algorithm for the problem at hand, we're going to show that there's a way to determine the cycle decomposition in O(1) time and space. Since everything will get moved exactly once, the overall runtime will be O(n) and the overall space complexity will be O(1). It's not easy to get there, as you'll see, but then again, it's not awful either.
The Permutation Structure
The overarching goal of this problem is to take an array of 2n elements and shuffle it so that even-positioned elements end up at the front of the array and odd-positioned elements end up at the end of the array. Let's suppose for now that we have 14 elements, like this:
0 1 2 3 4 5 6 7 8 9 10 11 12 13
We want to shuffle the elements so that they come out like this:
0 2 4 6 8 10 12 1 3 5 7 9 11 13
There are a couple of useful observations we can have about the way that this permutation arises. First, notice that the first element does not move in this permutation, because even-indexed elements are supposed to show up in the front of the array and it's the first even-indexed element. Next, notice that the last element does not move in this permutation, because odd-indexed elements are supposed to end up at the back of the array and it's the last odd-indexed element.
These two observations, put together, means that if we want to permute the elements of the array in the desired fashion, we actually only need to permute the subarray consisting of the overall array with the first and last elements dropped off. Therefore, going forward, we are purely going to focus on the problem of permuting the middle elements. If we can solve that problem, then we've solved the overall problem.
Now, let's look at just the middle elements of the array. From our above example, that means that we're going to start with an array like this one:
Element 1 2 3 4 5 6 7 8 9 10 11 12
Index 1 2 3 4 5 6 7 8 9 10 11 12
We want to get the array to look like this:
Element 2 4 6 8 10 12 1 3 5 7 9 11
Index 1 2 3 4 5 6 7 8 9 10 11 12
Because this array was formed by taking a 0-indexed array and chopping off the very first and very last element, we can treat this as a one-indexed array. That's going to be critically important going forward, so be sure to keep that in mind.
So how exactly can we go about generating this permutation? Well, for starters, it doesn't hurt to take a look at each element and to try to figure out where it began and where it ended up. If we do so, we can write things out like this:
The element at position 1 ended up at position 7.
The element at position 2 ended up at position 1.
The element at position 3 ended up at position 8.
The element at position 4 ended up at position 2.
The element at position 5 ended up at position 9.
The element at position 6 ended up at position 3.
The element at position 7 ended up at position 10.
The element at position 8 ended up at position 4.
The element at position 9 ended up at position 11.
The element at position 10 ended up at position 5.
The element at position 11 ended up at position 12.
The element at position 12 ended up at position 6.
If you look at this list, you can spot a few patterns. First, notice that the final index of all the even-numbered elements is always half the position of that element. For example, the element at position 4 ended up at position 2, the element at position 12 ended up at position 6, etc. This makes sense - we pushed all the even elements to the front of the array, so half of the elements that came before them will have been displaced and moved out of the way.
Now, what about the odd-numbered elements? Well, there are 12 total elements. Each odd-numbered element gets pushed to the second half, so an odd-numbered element at position 2k+1 will get pushed to at least position 7. Its position within the second half is given by the value of k. Therefore, the element at an odd position 2k+1 gets mapped to position 7 + k.
We can take a minute to generalize this idea. Suppose that the array we're permuting has length 2n. An element at position 2x will be mapped to position x (again, even positions get halved), and an element at position 2x+1 will be mapped to position n + 1 + x. Restating this:
The final position of an element at position p is determined as follows:
If p = 2x for some integer x, then 2x ↦ x
If p = 2x+1 for some integer x, then 2x+1 ↦ n + 1 + x
And now we're going to do something that's entirely crazy and unexpected. Right now, we have a piecewise rule for determining where each element ends up: we either divide by two, or we do something weird involving n + 1. However, from a number-theoretic perspective, there is a single, unified rule explaining where all elements are supposed to end up.
The insight we need is that in both cases, it seems like, in some way, we're dividing the index by two. For the even case, the new index really is formed by just dividing by two. For the odd case, the new index kinda looks like it's formed by dividing by two (notice that 2x+1 went to x + (n + 1)), but there's an extra term in there. In a number-theoretic sense, though, both of these really correspond to division by two. Here's why.
Rather than taking the source index and dividing by two to get the destination index, what if we take the destination index and multiply by two? If we do that, an interesting pattern emerges.
Suppose our original number was 2x. The destination is then x, and if we double the destination index to get back 2x, we end up with the source index.
Now suppose that our original number was 2x+1. The destination is then n + 1 + x. Now, what happens if we double the destination index? If we do that, we get back 2n + 2 + 2x. If we rearrange this, we can alternatively rewrite this as (2x+1) + (2n+1). In other words, we've gotten back the original index, plus an extra (2n+1) term.
Now for the kicker: what if all of our arithmetic is done modulo 2n + 1? In that case, if our original number was 2x + 1, then twice the destination index is (2x+1) + (2n+1) = 2x + 1 (modulo 2n+1). In other words, the destination index really is half of the source index, just done modulo 2n+1!
This leads us to a very, very interesting insight: the ultimate destination of each of the elements in a 2n-element array is given by dividing that number by two, modulo 2n+1. This means that there really is a nice, unified rule for determining where everything goes. We just need to be able to divide by two modulo 2n+1. It just happens to work out that in the even case, this is normal integer division, and in the odd case, it works out to taking the form n + 1 + x.
Consequently, we can reframe our problem in the following way: given a 1-indexed array of 2n elements, how do we permute the elements so that each element that was originally at index x ends up at position x/2 mod (2n+1)?
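As a small sanity check, here is that rule as a C sketch; the piecewise form and the "halve modulo 2n+1" form agree on every position:

/* Destination of the element at 1-based position x in a 2n-element array. */
static long destination(long x, long n) {
    if (x % 2 == 0)
        return x / 2;                 /* even 2y -> y */
    return n + 1 + x / 2;             /* odd 2y+1 -> n + 1 + y */
}

/* for every 1 <= x <= 2n: (2 * destination(x, n)) % (2 * n + 1) == x */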
Cycle Decompositions Revisited
At this point, we've made quite a lot of progress. Given any element, we know where that element should end up. If we can figure out a nice way to get a cycle decomposition of the overall permutation, we're done.
This is, unfortunately, where things get complicated. Suppose, for example, that our array has 10 elements. In that case, we want to transform the array like this:
Initial: 1 2 3 4 5 6 7 8 9 10
Final: 2 4 6 8 10 1 3 5 7 9
The cycle decomposition of this permutation is (1 6 3 7 9 10 5 8 4 2). If our array has 12 elements, we want to transform it like this:
Initial: 1 2 3 4 5 6 7 8 9 10 11 12
Final: 2 4 6 8 10 12 1 3 5 7 9 11
This has cycle decomposition (1 7 10 5 9 11 12 6 3 8 4 2). If our array has 14 elements, we want to transform it like this:
Initial: 1 2 3 4 5 6 7 8 9 10 11 12 13 14
Final: 2 4 6 8 10 12 14 1 3 5 7 9 11 13
This has cycle decomposition (1 8 4 2)(3 9 12 6)(5 10)(7 11 13 14). If our array has 16 elements, we want to transform it like this:
Initial: 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16
Final: 2 4 6 8 10 12 14 16 1 3 5 7 9 11 13 15
This has cycle decomposition (1 9 13 15 16 8 4 2)(3 10 5 11 14 7 12 6).
The problem here is that these cycles don't seem to follow any predictable patterns. This is a real problem if we're going to try to solve this problem in O(1) space and O(n) time. Even though given any individual element we can figure out what cycle contains it and we can efficiently shuffle that cycle, it's not clear how we figure out what elements belong to what cycles, how many different cycles there are, etc.
Primitive Roots
This is where number theory comes in. Remember that each element's new position is formed by dividing that number by two, modulo 2n+1. Thinking about this backwards, we can figure out which number will take the place of each number by multiplying by two modulo 2n+1. Therefore, we can think of this problem by finding the cycle decomposition in reverse: we pick a number, keep multiplying it by two and modding by 2n+1, and repeat until we're done with the cycle.
This gives rise to a well-studied problem. Suppose that we start with the number k and think about the sequence k, 2k, 2^2 k, 2^3 k, 2^4 k, etc., all done modulo 2n+1. Doing this gives different patterns depending on what odd number 2n+1 you're modding by. This explains why the above cycle patterns seem somewhat arbitrary.
I have no idea how anyone figured this out, but it turns out that there's a beautiful result from number theory that talks about what happens if you take this pattern mod 3^k for some number k:
Theorem: Consider the sequence 3^s, 3^s · 2, 3^s · 2^2, 3^s · 2^3, 3^s · 2^4, etc., all modulo 3^k, for some k ≥ s. This sequence cycles through every number between 1 and 3^k, inclusive, that is divisible by 3^s but not divisible by 3^(s+1).
We can try this out on a few examples. Let's work modulo 27 = 3^3. The theorem says that if we look at 3, 3 · 2, 3 · 4, etc., all modulo 27, then we should see all the numbers less than 27 that are divisible by 3 and not divisible by 9. Well, let's see what we get:
3 · 2^0 = 3 · 1 = 3 = 3 mod 27
3 · 2^1 = 3 · 2 = 6 = 6 mod 27
3 · 2^2 = 3 · 4 = 12 = 12 mod 27
3 · 2^3 = 3 · 8 = 24 = 24 mod 27
3 · 2^4 = 3 · 16 = 48 = 21 mod 27
3 · 2^5 = 3 · 32 = 96 = 15 mod 27
3 · 2^6 = 3 · 64 = 192 = 3 mod 27
We ended up seeing 3, 6, 12, 15, 21, and 24 (though not in that order), which are indeed all the numbers less than 27 that are divisible by 3 but not divisible by 9.
We can also try this working mod 27 and considering 1, 2, 2^2, 2^3, 2^4, etc. mod 27, and we should see all the numbers less than 27 that are divisible by 1 and not divisible by 3. In other words, this should give back all the numbers less than 27 that aren't divisible by 3. Let's see if that's true:
2^0 = 1 = 1 mod 27
2^1 = 2 = 2 mod 27
2^2 = 4 = 4 mod 27
2^3 = 8 = 8 mod 27
2^4 = 16 = 16 mod 27
2^5 = 32 = 5 mod 27
2^6 = 64 = 10 mod 27
2^7 = 128 = 20 mod 27
2^8 = 256 = 13 mod 27
2^9 = 512 = 26 mod 27
2^10 = 1024 = 25 mod 27
2^11 = 2048 = 23 mod 27
2^12 = 4096 = 19 mod 27
2^13 = 8192 = 11 mod 27
2^14 = 16384 = 22 mod 27
2^15 = 32768 = 17 mod 27
2^16 = 65536 = 7 mod 27
2^17 = 131072 = 14 mod 27
2^18 = 262144 = 1 mod 27
Sorting these, we got back the numbers 1, 2, 4, 5, 7, 8, 10, 11, 13, 14, 16, 17, 19, 20, 22, 23, 25, 26 (though not in that order). These are exactly the numbers between 1 and 26 that aren't multiples of three!
This theorem is crucial to the algorithm for the following reason: if 2n+1 = 3^k for some number k, then if we process the cycle containing 1, it will properly shuffle all numbers that aren't multiples of three. If we then start the cycle at 3, it will properly shuffle all numbers that are divisible by 3 but not by 9. If we then start the cycle at 9, it will properly shuffle all numbers that are divisible by 9 but not by 27. More generally, if we use the cycle shuffle algorithm on the numbers 1, 3, 9, 27, 81, etc., then we will properly reposition all the elements in the array exactly once and will not have to worry that we missed anything.
So how does this connect to 3^k + 1? Well, we need to have that 2n + 1 = 3^k, so we need to have that 2n = 3^k - 1. But remember - we dropped the very first and very last element of the array when we did this! Adding those back in tells us that we need blocks of size 3^k + 1 for this procedure to work correctly. If the blocks are this size, then we know for certain that the cycle decomposition will consist of a cycle containing 1, a nonoverlapping cycle containing 3, a nonoverlapping cycle containing 9, etc. and that these cycles will contain all the elements of the array. Consequently, we can just start cycling 1, 3, 9, 27, etc. and be absolutely guaranteed that everything gets shuffled around correctly. That's amazing!
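Here is a sketch in C of the resulting procedure for a single block of size 3^k + 1 (splitting a larger array into such blocks is not shown). Positions 0 and m = 3^k stay put; each remaining position y receives the element from position 2y mod m, and the cycle leaders 1, 3, 9, ... cover everything exactly once:

#include <stdio.h>

static void shuffle_block(int a[], long m) {      /* block is a[0..m], m = 3^k */
    for (long leader = 1; leader < m; leader *= 3) {
        long cur = leader;
        int saved = a[cur];                       /* first value we overwrite */
        long src = (2 * cur) % m;
        while (src != leader) {
            a[cur] = a[src];          /* element at src belongs at src/2 mod m */
            cur = src;
            src = (2 * cur) % m;
        }
        a[cur] = saved;                           /* close the cycle */
    }
}

int main(void) {
    /* block of size 3^2 + 1 = 10; evens should end up in front */
    int a[10] = {0, 100, 1, 101, 2, 102, 3, 103, 4, 104};
    shuffle_block(a, 9);
    for (int i = 0; i < 10; i++) printf("%d ", a[i]);
    printf("\n");                     /* 0 1 2 3 4 100 101 102 103 104 */
    return 0;
}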
And why is this theorem true? It turns out that a number g for which 1, g, g^2, g^3, etc. mod p^n cycles through all the numbers between 1 and p^n that aren't multiples of p (assuming p is prime) is called a primitive root of the number p^n. There's a theorem that says that 2 is a primitive root of 3^k for all numbers k, which is why this trick works. If I have time, I'd like to come back and edit this answer to include a proof of this result, though unfortunately my number theory isn't at a level where I know how to do this.
Summary
This problem was tons of fun to work on. It involves cute tricks with dividing by two modulo an odd number, cycle decompositions, primitive roots, and powers of three. I'm indebted to this arXiv paper, which described a similar (though quite different) algorithm and gave me a sense for the key trick behind the technique, which then let me work out the details for the algorithm you described.
Hope this helps!
Here is most of the mathematical argument missing from templatetypedef’s
answer. (The rest is comparatively boring.)
Lemma: for all integers k >= 1, we have
2^(2*3^(k-1)) = 1 + 3^k mod 3^(k+1).
Proof: by induction on k.
Base case (k = 1): we have 2^(2*3^(1-1)) = 4 = 1 + 3^1 mod 3^(1+1).
Inductive case (k >= 2): if 2^(2*3^(k-2)) = 1 + 3^(k-1) mod 3^k,
then q = (2^(2*3^(k-2)) - (1 + 3^(k-1)))/3^k.
2^(2*3^(k-1)) = (2^(2*3^(k-2)))^3
= (1 + 3^(k-1) + 3^k*q)^3
= 1 + 3*(3^(k-1)) + 3*(3^(k-1))^2 + (3^(k-1))^3
+ 3*(1+3^(k-1))^2*(3^k*q) + 3*(1+3^(k-1))*(3^k*q)^2 + (3^k*q)^3
= 1 + 3^k mod 3^(k+1).
Theorem: for all integers i >= 0 and k >= 1, we have
2^i = 1 mod 3^k if and only if i = 0 mod 2*3^(k-1).
Proof: the “if” direction follows from the Lemma. If
i = 0 mod 2*3^(k-1), then
2^i = (2^(2*3^(k-1)))^(i/(2*3^(k-1)))
= (1+3^k)^(i/(2*3^(k-1))) mod 3^(k+1)
= 1 mod 3^k.
The “only if” direction is by induction on k.
Base case (k = 1): if i != 0 mod 2, then i = 1 mod 2, and
2^i = (2^2)^((i-1)/2)*2
= 4^((i-1)/2)*2
= 2 mod 3
!= 1 mod 3.
Inductive case (k >= 2): if 2^i = 1 mod 3^k, then
2^i = 1 mod 3^(k-1), and the inductive hypothesis implies that
i = 0 mod 2*3^(k-2). Let j = i/(2*3^(k-2)). By the Lemma,
1 = 2^i mod 3^k
= (1+3^(k-1))^j mod 3^k
= 1 + j*3^(k-1) mod 3^k,
where the dropped terms are divisible by (3^(k-1))^2, so
j = 0 mod 3, and i = 0 mod 2*3^(k-1).
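For the skeptical, here is a quick numeric check of the Lemma in C, using modular exponentiation (my own addition, not part of the proof):

#include <stdio.h>

static long long pow_mod(long long base, long long exp, long long mod) {
    long long r = 1 % mod;
    base %= mod;
    while (exp > 0) {
        if (exp & 1) r = r * base % mod;
        base = base * base % mod;
        exp >>= 1;
    }
    return r;
}

int main(void) {
    long long p3 = 1;                            /* 3^(k-1) */
    for (int k = 1; k <= 10; k++) {
        long long mod = p3 * 9;                  /* 3^(k+1) */
        long long lhs = pow_mod(2, 2 * p3, mod); /* 2^(2*3^(k-1)) mod 3^(k+1) */
        long long rhs = (1 + 3 * p3) % mod;      /* 1 + 3^k */
        printf("k = %2d: %s\n", k, lhs == rhs ? "ok" : "FAIL");
        p3 *= 3;
    }
    return 0;
}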

How can I find the lowest possible number of squares needed to fill a given square

Let's suppose I have a square of 7x7. I can fill the square with other squares (i.e. squares of dimensions 1x1, 2x2, ..., 6x6). How can I fill the square with the least possible number of smaller squares? Please help me.
Consider a square with dimensions s x s. Cutting a smaller square of dimensions m x m out will result in a square of m x m, a square of n x n, and two rectangles of dimensions m x n, where m + n = s.
When s is even, the square can be divided such that m = n, in which case the rectangles will also be squares, resulting in an answer of 4.
However, when s is odd, values of m and n must be chosen such that the resulting rectangle can be filled with the least number of squares possible. There doesn't seem to be an immediately obvious way to figure out the best configuration, so I would suggest coming up with an algorithm to figure out the least number of squares that can be used to fill a rectangle of size m x n (this is a slightly simpler problem and I believe it can be solved with a recursive algorithm). The total number of squares needed will then be equal to 2 x ([number of squares in m x n rectangle] + 1). You can use a loop to check all the sizes of m between 1 and s/2.
Hope that gets you started.
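For what it's worth, here is a sketch in C of that rectangle subproblem as a memoized recursion over straight (guillotine) cuts. Be warned that restricting to straight cuts is a simplifying assumption of this sketch and is known not to be optimal for every rectangle, but it matches the recursive idea suggested above:

#include <stdio.h>

#define MAXD 64
static int memo[MAXD][MAXD];

/* Minimum number of squares to tile a w x h rectangle using straight cuts. */
static int min_squares(int w, int h) {
    if (w == h) return 1;
    if (memo[w][h]) return memo[w][h];
    int best = w * h;                       /* worst case: all 1x1 squares */
    for (int i = 1; i <= w / 2; i++) {      /* vertical cuts */
        int c = min_squares(i, h) + min_squares(w - i, h);
        if (c < best) best = c;
    }
    for (int j = 1; j <= h / 2; j++) {      /* horizontal cuts */
        int c = min_squares(w, j) + min_squares(w, h - j);
        if (c < best) best = c;
    }
    return memo[w][h] = best;
}

int main(void) {
    int s = 7;                              /* the 7x7 example */
    for (int m = 1; m <= s / 2; m++)        /* the loop suggested above */
        printf("m = %d: 2 * (%d + 1) squares\n", m, min_squares(m, s - m));
    return 0;
}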
Consider a square with dimensions s x s.
Factorise s into primes. Then solve the problem for each prime factor sp. The answer will be the same for sp x sp as for s x s. It is probable that the smallest prime will give the lowest result. I have no proof of this, but I have checked by hand up to 17 x 17.
This is a generalisation of Otaia's notion of an even s resulting in an answer of 4.
Placing algorithm:
You need to loop from n = (s+1)/2, rounded down, to n = s-1.
Put the n x n square in a corner.
Let m = s - n.
Place m x m squares in the adjacent corners and keep placing them until they (almost) reach the end of the n x n square.
The remaining space will be m x m (if you are lucky), or up to 2m-1 x 2m-1 with a corner piece missing.
Fill the remaining space with a similar algorithm. Start with placing an n2 x n2 square in the corner opposite to the missing corner piece.
Working by hand I have obtained the following results:
s minimum number of squares:
2 4
3 6
5 8
7 9
11 10
13 11
17 12
First check if n is even. If n is even, then the answer is four, since there isn't a way to fit 3 squares or 2 squares together to make another square. That solves it for half of all possible cases.
BEFORE YOU PROCEED: This approach is incomplete and this may be the WRONG approach
I just intend to throw out an out-of-the-box idea just because I feel like this may help and, hopefully, advance the problem. I feel like it may have some correlation with Goldbach's weak conjecture. The algorithm may be too long to compute for larger values, and I'm not sure how much optimization is happening.
Now my idea would be to try to enumerate all triples (n1,n2,n3) where n1 + n2 + n3 = n AND n1, n2, n3 are all prime (which are >= 2) AND n >= 7 AND n1 <= n2 <= n3
Now let me literally depict my algorithm:
Now my idea is to find all possible triples (n1, n2, n3) that fit the definition stated above. Next set n_s = n1 + n2. IF n_s > n3, follow the depiction above; else flip n_s and n3.
Now the problem is the white rectangles left over (that should be congruent to each other).
Let n4 x n3 denote the rectangles where:
n4 = n - 2 * n3 (if following the depicted example)
Enumerate all possible triples (n41, n42, n43) (treating n as n = n4, so n4 >= 7) and (n31, n32, n33) (treating n as n = n3, so n3 >= 7). Next find the value where n_s3 == n_s4 and both are the greatest they could be. For example:
Let's suppose n3 = 17 and n4 = 13
Enumeration of n3 = 17:
2 + 2 + 13
3 + 3 + 11
3 + 7 + 7
5 + 5 + 7
Enumeration of n_s3:
4 = 2 + 2
6 = 3 + 3
10 = 3 + 7 = 5 + 5
12 = 5 + 7
14 = 3 + 11 = 7 + 7
15 = 2 + 13
Enumeration of n4 = 13:
3 + 3 + 7
3 + 5 + 5
Enumeration of n_s4:
6 = 3 + 3
8 = 3 + 5
10 = 3 + 7 = 5 + 5
Since 10 is the largest value shared between 13 and 17, you fit a 10 by 10 square into both rectangles, and now you have a non-rectangular region which gets further and further more difficult to fill, but this may be (I feel) a step in the right direction.
All feedback appreciated.

Finding the maximum area in given binary data

I have a problem with describing an algorithm for finding the maximum rectangular area of binary data in which 1 occurs k times more often than 0. The data is always n^2 bits. For example, the data for n = 4 looks like:
1 0 1 0
0 0 1 1
0 1 1 1
1 1 0 1
The value of k can be 1 .. j (k = 1 means that the numbers of 0's and 1's are equal).
For the above example data and for k = 1 the solution is:
1 0 1 0 <- 4 x '0' and 4 x '1'
0 0 1 1
0 1 1 1
1 1 0 1
But in this example:
1 1 1 0
0 1 0 0
0 0 0 0
0 1 1 1
Solution would be:
1 1 1 0
0 1 0 0
0 0 0 0
0 1 1 1
I tried a few brute force algorithms, but for n > 20 it is getting too slow. Can you advise me how I should solve this problem?
As RBerteig proposed, the problem can also be described like this: "In a given square bitmap with cells set to 1 or 0 by some arbitrary process, find the largest rectangular area where the 1's and 0's occur in a specified ratio, k."
Brute force should do just fine here for n < 100, if properly implemented: the solution below has O(n^4) time and O(n^2) memory complexity. 10^8 operations should be well under 1 second on a modern PC (especially considering that each operation is very cheap: a few additions and subtractions).
Some observations
There're O(n^4) sub-rectangles to consider and each of them can be a solution.
If we can find number of 1's and 0's in each sub-rectangle in O(1) (constant time), we'll solve problem in O(n^4) time.
If we know number of 1's in some sub-rectangle, we can find number of zeroes (through area).
So, the problem is reduced to the following: create a data structure that allows finding the number of 1's in any sub-rectangle in constant time.
Now, imagine we have a sub-rectangle [i0..i1]x[j0..j1]. I.e., it occupies rows between i0 and i1 and columns between j0 and j1. And let count_ones be the function that counts the number of 1's in a sub-rectangle.
This is the main observation:
count_ones([i0..i1]x[j0..j1]) = count_ones([0..i1]x[0..j1]) - count_ones([0..i0 - 1]x[0..j1]) - count_ones([0..i1]x[0..j0 - 1]) + count_ones([0..i0 - 1]x[0..j0 - 1])
Same observation with practical example:
AAAABBB
AAAABBB
CCCCDDD
CCCCDDD
CCCCDDD
CCCCDDD
If we need to find number of 1's in D sub-rectangle (3x4), we can do it by taking number of 1's in the whole rectangle (A + B + C + D), subtracting number of 1's in (A + B) rectangle, subtracting number of 1's in (A + C) rectangle, and adding number of 1's in (A) rectangle. (A + B + C + D) - (A + B) - (A + C) + (A) = D
Thus, we need a table sums, where the entry for each i and j contains the number of 1's in the sub-rectangle [0..i]x[0..j].
You can create this table in O(n^2), but even the direct way to fill it (for each i and j iterate all elements of [0..i][0..j] area) will be O(n^4).
Having this table,
count_ones([i0..i1]x[j0..j1]) = sums[i1][j1] - sums[i0 - 1][j1] - sums[i1][j0 - 1] + sums[i0 - 1][j0 - 1]
Therefore, the O(n^4) time complexity is reached.
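A sketch of the whole approach in C; bits, sums, count_ones and the driver follow the notation above, with the table shifted by one so the i0 - 1 edge cases disappear:

#include <stdio.h>

#define N 100
static int bits[N][N];                 /* the input bitmap */
static int sums[N + 1][N + 1];         /* sums[i][j] = 1's in [0..i-1]x[0..j-1] */

static void build(int n) {             /* O(n^2) */
    for (int i = 0; i < n; i++)
        for (int j = 0; j < n; j++)
            sums[i + 1][j + 1] = bits[i][j]
                + sums[i][j + 1] + sums[i + 1][j] - sums[i][j];
}

/* Number of 1's in rows i0..i1, columns j0..j1, inclusive, in O(1). */
static int count_ones(int i0, int i1, int j0, int j1) {
    return sums[i1 + 1][j1 + 1] - sums[i0][j1 + 1]
         - sums[i1 + 1][j0] + sums[i0][j0];
}

/* O(n^4) driver: largest area with ones == k * zeros,
   i.e. ones * (k + 1) == k * area. */
static int largest(int n, int k) {
    int best = 0;
    for (int i0 = 0; i0 < n; i0++)
        for (int i1 = i0; i1 < n; i1++)
            for (int j0 = 0; j0 < n; j0++)
                for (int j1 = j0; j1 < n; j1++) {
                    int area = (i1 - i0 + 1) * (j1 - j0 + 1);
                    int ones = count_ones(i0, i1, j0, j1);
                    if (ones * (k + 1) == k * area && area > best)
                        best = area;
                }
    return best;
}

int main(void) {
    int n = 4, k = 1;
    int example[4][4] = {{1,0,1,0}, {0,0,1,1}, {0,1,1,1}, {1,1,0,1}};
    for (int i = 0; i < n; i++)
        for (int j = 0; j < n; j++)
            bits[i][j] = example[i][j];
    build(n);
    printf("largest area: %d\n", largest(n, k));   /* 8 for the example above */
    return 0;
}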
This is still brute force, but something you should note is that you don't have to recompute everything from scratch for a new i*j rectangle. Instead, for each possible rectangle size, you can move the rectangle across the n*n grid one step at a time, decrementing the counts for the bits no longer within the rectangle and incrementing the counts for the bits that newly entered the rectangle. You could potentially combine this with varying the rectangle size, and try to find an optimal pattern for moving and resizing the rectangle.
Just some hints..
You could impose better restrictions on the values. The requirement leads to condition
N1*(k+1) == S*k, where N1 is the number of ones in an area, and S = dx*dy is its surface.
It can be rewritten in better form:
N1/k == S/(k+1).
Because the greatest common divisor of the numbers k and k+1 is always 1, N1 has to be a multiple of k and dx*dy has to be a multiple of k+1. This greatly reduces the possible space of solutions, and the larger k is, the better (for the dx*dy case you'll need to play with the prime divisors of k+1).
Now, because you need just the surface of the largest area with this property, it would be wise to start from the largest areas and move to smaller ones. By trying dx*dy from n^2 down to k+1 such that the divisor and bounding conditions are satisfied, you'll find the solution quite fast, much faster than O(n^4), for a special reason: except for cases when the array was specially constructed, if we assume a random input, the probability that some of the (n-dx+1)*(n-dy+1) areas of surface S contain exactly N1 ones grows constantly as S decreases (large values of k will make the probability smaller, but at the same time they will make the filter for dx and dy pairs stronger).
Also, this problem: http://ioinformatics.org/locations/ioi99/contest/land/land.shtml looks somewhat similar; maybe you'll find some ideas in their solution.
