Algorithm to find a number's position in a snail (spiral) 2D array

I have a square 2D array filled in a snail (spiral) pattern, such as:

(3x3)            (4x4)
1 2 3            1  2  3  4
8 9 4     or     12 13 14 5
7 6 5            11 16 15 6
                 10 9  8  7
I am trying to find a solution that, given a value and the array size, returns the (Y, X) position of that value in the 2D array.
Example:
>> find_y_x_in_snail(3, 4)
1, 2
# in a 3x3 array, search for the value 4
# returns y=1, x=2
The only idea I have is to create the snail in a 2D array and look the position up, which is not that great.
I found the opposite algorithm here (first example).
Any ideas?

You could use this function:

def find_y_x_in_snail(n, v):
    r = 0                # number of corners taken so far
    span = n             # length of the current straight segment
    while v > span:
        v -= span
        r += 1
        span -= r % 2    # the segment shrinks after every second corner
    d, m = divmod(r, 4)
    c = n - 1 - d
    return [d, d+v, c, c-v][m], [d+v-1, c, c-v, d][m]  # y, x
Explanation
r is the number of corners the "snake" needs to take to get to the targeted value.
span is the number of values in the current, straight segment of the snake: it starts at n, decreases after the next corner, and again after every second corner from then on.
d is the distance the snake has from the nearest side of the matrix, i.e. the "winding" level.
m indicates which of the 4 sides the segment containing the targeted value is on:
0: up
1: right
2: down
3: left
Depending on m, a value is taken from a list of 4 expressions, each one tailored to the corresponding side: it defines the y coordinate. A similar method is applied (but with different expressions) for x.
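As a quick check, here is a small test harness (my own addition, not part of the original answer) that builds the snail explicitly and compares every lookup against find_y_x_in_snail:

def make_snail(n):
    # build the n x n snail by walking right, down, left, up, ...
    grid = [[0] * n for _ in range(n)]
    y, x = 0, 0
    dy, dx = 0, 1
    for v in range(1, n * n + 1):
        grid[y][x] = v
        ny, nx = y + dy, x + dx
        if not (0 <= ny < n and 0 <= nx < n) or grid[ny][nx]:
            dy, dx = dx, -dy          # turn clockwise
            ny, nx = y + dy, x + dx
        y, x = ny, nx
    return grid

for n in (3, 4, 5):
    grid = make_snail(n)
    for y in range(n):
        for x in range(n):
            assert find_y_x_in_snail(n, grid[y][x]) == (y, x)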

Related

Find a duplicate in an array of integers

This was an interview question.
I was given an array of n+1 integers from the range [1,n]. The property of the array is that it has k (k >= 1) duplicates, and each duplicate can appear more than twice. The task was to find an element of the array that occurs more than once in the best possible time and space complexity.
After significant struggling, I proudly came up with an O(n log n) solution that takes O(1) space. My idea was to divide the range [1,n] into two halves and determine which of the two halves contains more elements from the input array (I was using the pigeonhole principle). The algorithm continues recursively until it reaches an interval [X,X] where X occurs twice; that is a duplicate.
The interviewer was satisfied, but then he told me that there exists an O(n) solution with constant space. He generously offered a few hints (something related to permutations?), but I had no idea how to come up with such a solution. Assuming that he wasn't lying, can anyone offer guidelines? I have searched SO and found a few (easier) variations of this problem, but not this specific one. Thank you.
EDIT: In order to make things even more complicated, interviewer mentioned that the input array should not be modified.
1. Take the very last element (x).
2. Save the element at position x (y).
3. If x == y you found a duplicate.
4. Overwrite position x with x.
5. Assign x = y and continue with step 2.
You are basically sorting the array; this is possible because you know where each element has to be inserted. O(1) extra space and O(n) time complexity. You just have to be careful with the indices; for simplicity I assumed the first index is 1 here (not 0), so we don't have to do +1 or -1.
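A minimal Python sketch of these steps (my own, mapping the 1-based positions of the description onto a 0-based list):

def find_duplicate_destructive(a):
    # a holds n+1 values from the range 1..n; "position x" is a[x - 1]
    x = a[-1]             # step 1: take the very last element
    while True:
        y = a[x - 1]      # step 2: save the element at position x
        if x == y:        # step 3: x == y means we found a duplicate
            return x
        a[x - 1] = x      # step 4: overwrite position x with x
        x = y             # step 5: assign x = y and continue with step 2

print(find_duplicate_destructive([2, 3, 4, 1, 4]))  # -> 4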
Edit: without modifying the input array
This algorithm is based on the idea that if we find the entry point of the permutation cycle, we have also found a duplicate (again a 1-based array for simplicity):
Example:
Array:             2 3 4 1 5 4 6 7 8
Entry path:        8 7 6
Permutation cycle: 4 1 2 3
As we can see, the duplicate (4) is the first number of the cycle.
Finding the permutation cycle
1. x = last element
2. x = element at position x
3. Repeat step 2 n times (in total); this guarantees that we have entered the cycle.
Measuring the cycle length
1. a = last x from above, b = last x from above, counter c = 0
2. a = element at position a; b = element at position b, then b = element at position b again; c++ (so b makes 2 steps forward while a makes 1 step forward along the cycle)
3. If a == b the cycle length is c, otherwise continue with step 2.
Finding the entry point to the cycle
1. x = last element
2. x = element at position x
3. Repeat step 2 c times (in total).
4. y = last element
5. If x == y then x is a solution (x made one full cycle and y is just about to enter the cycle).
6. x = element at position x, y = element at position y
7. Repeat steps 5 and 6 until a solution is found.
The 3 major steps are all O(n) and sequential, therefore the overall complexity is also O(n) and the space complexity is O(1).
Example from above:
x takes the following values: 8 7 6 4 1 2 3 4 1 2
a takes the following values: 2 3 4 1 2
b takes the following values: 2 4 2 4 2
therefore c = 4 (yes there are 5 numbers but c is only increased when making steps, not initially)
x takes the following values: 8 7 6 4 | 1 2 3 4
y takes the following values: | 8 7 6 4
x == y == 4 in the end and this is a solution!
Example 2 as requested in the comments: 3 1 4 6 1 2 5
Entering cycle: 5 1 3 4 6 2 1 3
Measuring cycle length:
a: 3 4 6 2 1 3
b: 3 6 1 4 2 3
c = 5
Finding the entry point:
x: 5 1 3 4 6 | 2 1
y: | 5 1
x == y == 1 is a solution
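Putting the three phases together in Python (my own sketch, not part of the original answer; "element at position x" becomes a[x - 1] on a 0-based list):

def find_duplicate(a):
    n = len(a) - 1                # a holds n+1 values from the range 1..n

    # Phase 1: enter the cycle (n steps are always enough).
    x = a[-1]
    for _ in range(n):
        x = a[x - 1]

    # Phase 2: measure the cycle length c (p makes 1 step, q makes 2).
    p = q = x
    c = 0
    while True:
        p = a[p - 1]
        q = a[a[q - 1] - 1]
        c += 1
        if p == q:
            break

    # Phase 3: restart, give x a head start of c steps, then walk together.
    x = a[-1]
    for _ in range(c):
        x = a[x - 1]
    y = a[-1]
    while x != y:
        x = a[x - 1]
        y = a[y - 1]
    return x

print(find_duplicate([2, 3, 4, 1, 5, 4, 6, 7, 8]))  # -> 4
print(find_duplicate([3, 1, 4, 6, 1, 2, 5]))        # -> 1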
Here is a possible implementation:

function checkDuplicate(arr) {
    console.log(arr.join(", "));
    let len = arr.length,
        pos = 0,
        done = 0,
        cur = arr[0];
    while (done < len) {
        if (pos === cur) {
            cur = arr[++pos];
        } else {
            pos = cur;
            if (arr[pos] === cur) {
                console.log(`> duplicate is ${cur}`);
                return cur;
            }
            cur = arr[pos];
        }
        done++;
    }
    console.log("> no duplicate");
    return -1;
}

for (const t of [
    [0, 1, 2, 3],
    [0, 1, 2, 1],
    [1, 0, 2, 3],
    [1, 1, 0, 2, 4],
]) checkDuplicate(t);
It is basically the solution proposed by @maraca (typed too slowly!). It has constant space requirements (for the local variables), but apart from that only uses the original array for its storage. It should be O(n) in the worst case, because as soon as a duplicate is found, the process terminates.
If you are allowed to non-destructively modify the input vector, then it is pretty easy. Suppose we can "flag" an element in the input by negating it (which is obviously reversible). In that case, we can proceed as follows:
Note: the following assumes that the vector is indexed starting at 1. Since it is probably indexed starting at 0 (in most languages), you can implement "flag the item at index i" as "negate the item at index i-1".
1. Set i to 0 and do the following loop:
2. Increment i until item i is unflagged.
3. Set j to i and do the following inner loop:
   a. Set j to vector[j].
   b. If the item at j is flagged, j is a duplicate. Terminate both loops.
   c. Flag the item at j.
   d. If j != i, continue the inner loop.
4. Traverse the vector setting each element to its absolute value (i.e. unflag everything to restore the vector).
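A Python sketch of this flagging procedure (my own; the values are 1-based, the list is 0-based, and the input is restored before returning):

def find_duplicate_flagging(v):
    dup = None
    i = 0
    while dup is None:
        while v[i] < 0:          # advance i to the next unflagged item
            i += 1
        j = i
        while True:
            j = abs(v[j]) - 1    # j = vector[j], shifted to 0-based indexing
            if v[j] < 0:         # the item at j is already flagged: duplicate
                dup = j + 1
                break
            v[j] = -v[j]         # flag the item at j
            if j == i:           # the cycle closed without finding a duplicate
                break
    for k in range(len(v)):      # unflag everything to restore the vector
        v[k] = abs(v[k])
    return dup

print(find_duplicate_flagging([2, 3, 4, 1, 4]))  # -> 4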
It depends on what tools you (or your app) can use. Plenty of frameworks/libraries exist. For example, with the C++ standard library you can use std::map<>, as maraca mentioned.
Or, if you have time, you can write your own binary tree implementation, but you need to keep in mind that inserting elements differs from a plain array. In that case you may be able to optimize the duplicate search for your particular case.
Binary tree reference:
https://www.wikiwand.com/en/Binary_tree

Smallest "n" sums from n arrays

I was trying to do my friend's problem set from a few years ago to sharpen up my knowledge about data structures etc. I came across this problem, and I'm not really sure where to start. Hopefully someone can help me out!
We are given n unsorted arrays, each array has n elements. Ex.
3 1 2
7 6 9
4 9 12
Now, say we take one element from each array and add them up. Let's just call the sum of these elements an "n-sum".
I need to devise an algorithm that gives us the n smallest "n-sums" (duplicates are allowed).
In our example above, the answer would be:
11, 12, 12
# 11 comes from: 1 (first array) + 6 (second array) + 4 (third array)
# 12 comes from: 2 (first array) + 6 (second array) + 4 (third array)
# 12 comes from: 1 (first array) + 7 (second array) + 4 (third array)
One of the suggestions given was to use a priority queue.
Thanks!
The time is at least O(n^2): you must visit all array elements, because if all elements were equal to 1000 except one element in each row being 0, you would have to find those n elements equal to 0, or you couldn't determine the smallest sum.
Sort each row, taking O(n^2 log n) steps. In each row, subtract the first element from all elements in the row, so the first element in each row is 0; after you have found the smallest sums you can compensate for that. Your example turns into:
3 1 2  -> 1 2 3  -> 0 1 2
7 6 9  -> 6 7 9  -> 0 1 3
4 9 12 -> 4 9 12 -> 0 5 7
Now finding all sums ≤ K can be done in m steps if there are m sums: In the first row, pick all values in turn as long as they are ≤ K. In the second row, pick all values in turn as long as the sum from two rows is ≤ K and so on. Since each row starts with 0, no time is wasted.
For example, sums ≤ 5 are: 0+0+0, 0+0+5, 0+1+0, 0+3+0, 1+0+0, 1+1+0, 1+3+0, 2+0+0, 2+1+0, 2+3+0. Many more than the three that we needed. If we stop after finding 3 sums ≤ 5, we know very quickly "there are at least 3 sums ≤ 5". We need to have an early stop, because in the general case there could be n^n possible sums.
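A small Python sketch of this enumeration with the early stop (my own addition; `rows` holds the normalized, sorted rows):

def count_sums_up_to(rows, K, limit):
    # count sums <= K, but stop as soon as `limit` of them have been found
    found = 0
    def rec(i, remaining):
        nonlocal found
        if i == len(rows):
            found += 1
            return
        for v in rows[i]:
            if v > remaining or found >= limit:
                break          # rows are sorted: no further value fits
            rec(i + 1, remaining - v)
    rec(0, K)
    return found

rows = [[0, 1, 2], [0, 1, 3], [0, 5, 7]]
print(count_sums_up_to(rows, 1, 3))  # -> 3: 0+0+0, 0+1+0, 1+0+0
print(count_sums_up_to(rows, 0, 3))  # -> 1: only 0+0+0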
If you pick K = "largest element in the second column", then you know there are at least n+1 sums with a value ≤ K, because you can pick all 0's, or all 0's except one value from the second column. In your example, K = 5 (we know that worked). Let X be the value where there are at least n sums ≤ X but fewer than n sums ≤ X - 1. We find X with binary search between 0 and K, and then we find the sums. Example:
K = 5 is known to be big enough. We try K = 2, and find 4 sums (actually we stop counting after the 3rd sum). Too many. We try K = 1, and there are exactly three solutions: 0+0+0, 0+1+0 and 1+0+0. We try K = 0, but there is only one solution.
This part goes very quickly, so we should try to reduce the time used for sorting. We notice that in this case looking at the first two columns was enough. We can find the two smallest items in each row, and in this case that would be enough. If the two smallest items are not enough to determine the n smallest sums, find the third smallest item etc. where needed. For example, since the 2nd smallest item of the last row is 5, we wouldn't need the third item of that row, because even the 5 is not part of any sum if K ≤ 4.
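The priority-queue suggestion from the question can also be turned into a solution directly: fold in one row at a time, keeping only the n smallest partial sums, with a heap merging the candidates. A sketch (my own, not part of the answer above):

import heapq

def n_smallest_sums(arrays):
    # fold rows in one at a time; `best` always holds the n smallest
    # partial sums so far, in ascending order
    n = len(arrays)
    best = [0]
    for row in arrays:
        row = sorted(row)
        heap = [(best[0] + row[0], 0, 0)]   # (sum, index into best, index into row)
        seen = {(0, 0)}
        nxt = []
        while heap and len(nxt) < n:
            s, i, j = heapq.heappop(heap)
            nxt.append(s)
            for a, b in ((i + 1, j), (i, j + 1)):
                if a < len(best) and b < len(row) and (a, b) not in seen:
                    seen.add((a, b))
                    heapq.heappush(heap, (best[a] + row[b], a, b))
        best = nxt
    return best

print(n_smallest_sums([[3, 1, 2], [7, 6, 9], [4, 9, 12]]))  # -> [11, 12, 12]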

I don't understand what's required (MATLAB)

I'm at the beginning of a MATLAB course and trying to do some homework; for the following problem I don't understand what's required. Any help?
Write a function called bottom_left that takes two inputs: a matrix N and a scalar n, in that order, where each dimension of N is greater than or equal to n. The function returns the n-by-n square array at the bottom left corner of N.
This is quite simple even for me.
You have a matrix: [1 2 3 4;5 6 7 8;9 10 11 12;13 14 15 16]
and you have a scalar: 2
1 2 3 4
5 6 7 8
9 10 11 12
13 14 15 16
"The function returns the n-by-n square array at the bottom left corner of N"
n = 2, therefore the output is the 2-by-2 array at the bottom left corner:
9 10
13 14
That's it. The additional info "where each dimension of N is greater than or equal to n" just confuses things a bit, since the input matrix is given and does not need to be created. Now, this being homework, you can work out how to obtain such an array for any given matrix.
First you need to test whether one of the dimensions of your matrix N (I'm going to assume it's not necessarily a square matrix) is smaller than the scalar n. If both of the dimensions are at least n, then we simply put the bottom-left block of the matrix N into the variable out. The extraction is done by matrix indexing; for more information you can visit this link.
function [ out ] = bottom_left( N, n )
    [m, b] = size(N);    % m is the number of rows, b is the number of columns
    if (min(m, b) >= n)  % test that neither dimension is smaller than the scalar n
        out = N((m-n+1):m, 1:n);  % extract the bottom-left n-by-n block by indexing
    end
end
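For comparison, the same indexing idea in Python/numpy (my own sketch, not part of the answer):

import numpy as np

def bottom_left(N, n):
    # return the n-by-n block at the bottom-left corner of N
    m, b = N.shape
    assert min(m, b) >= n, "each dimension of N must be >= n"
    return N[m - n:, :n]

N = np.arange(1, 17).reshape(4, 4)   # the matrix from the example
print(bottom_left(N, 2))             # [[ 9 10] [13 14]]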

Why does array size have to be 3^k+1 for cycle leader iteration algorithm to work?

The cycle leader iteration algorithm is an algorithm for shuffling an array by moving all even-numbered entries to the front and all odd-numbered entries to the back while preserving their relative order. For example, given this input:
a 1 b 2 c 3 d 4 e 5
the output would be
a b c d e 1 2 3 4 5
This algorithm runs in O(n) time and uses only O(1) space.
One unusual detail of the algorithm is that it works by splitting the array up into blocks of size 3^k + 1. Apparently this is critical for the algorithm to work correctly, but I have no idea why this is.
Why is the choice of 3^k + 1 necessary in the algorithm?
Thanks!
This is going to be a long answer. The answer to your question isn't simple and requires some number theory to fully answer. I've spent about half a day working through the algorithm and I now have a good answer, but I'm not sure I can describe it succinctly.
The short version:
Breaking the input into blocks of size 3^k + 1 essentially breaks the input apart into blocks of size 3^k - 1 surrounded by two elements that do not end up moving.
The remaining 3^k - 1 elements in the block move according to an interesting pattern: each element moves to the position given by dividing the index by two modulo 3^k.
This particular motion pattern is connected to a concept from number theory and group theory called primitive roots.
Because the number two is a primitive root modulo 3^k, beginning with the numbers 1, 3, 9, 27, etc. and running the pattern is guaranteed to cycle through all the elements of the array exactly once and put them into the proper place.
This pattern is highly dependent on the fact that 2 is a primitive root of 3^k for any k ≥ 1. Changing the size of the array to another value will almost certainly break this, because the needed property no longer holds.
The Long Version
To present this answer, I'm going to proceed in steps. First, I'm going to introduce cycle decompositions as a motivation for an algorithm that will efficiently shuffle the elements around in the right order, subject to an important caveat. Next, I'm going to point out an interesting property of how the elements happen to move around in the array when you apply this permutation. Then, I'll connect this to a number-theoretic concept called primitive roots to explain the challenges involved in implementing this algorithm correctly. Finally, I'll explain why this leads to the choice of 3k + 1 as the block size.
Cycle Decompositions
Let's suppose that you have an array A and a permutation of the elements of that array. Following the standard mathematical notation, we'll denote the permutation of that array as σ(A). We can line the initial array A up on top of the permuted array σ(A) to get a sense for where every element ended up. For example, here's an array and one of its permutations:
A      0 1 2 3 4
σ(A)   2 3 0 4 1
One way that we can describe a permutation is just to list off the new elements inside that permutation. However, from an algorithmic perspective, it's often more helpful to represent the permutation as a cycle decomposition, a way of writing out a permutation by showing how to form that permutation by beginning with the initial array and then cyclically permuting some of its elements.
Take a look at the above permutation. First, look at where the 0 ended up. In σ(A), the element 0 ended up taking the place of where the element 2 used to be. In turn, the element 2 ended up taking the place of where the element 0 used to be. We denote this by writing (0 2), indicating that 0 should go where 2 used to be, and 2 should go where 0 used to be.
Now, look at the element 1. The element 1 ended up where 4 used to be. The number 4 then ended up where 3 used to be, and the element 3 ended up where 1 used to be. We denote this by writing (1 4 3), that 1 should go where 4 used to be, that 4 should go where 3 used to be, and that 3 should go where 1 used to be.
Combining these together, we can represent the overall permutation of the above elements as (0 2)(1 4 3) - we should swap 0 and 2, then cyclically permute 1, 4, and 3. If we do that starting with the initial array, we'll end up at the permuted array that we want.
Cycle decompositions are extremely useful for permuting arrays in place because it's possible to permute any individual cycle in O(C) time and O(1) auxiliary space, where C is the number of elements in the cycle. For example, suppose that you have a cycle (1 6 8 4 2). You can permute the elements in the cycle with code like this:
int[] cycle = {1, 6, 8, 4, 2};
int temp = array[cycle[0]];
for (int i = 1; i < cycle.length; i++) {
    swap(temp, array[cycle[i]]);
}
array[cycle[0]] = temp;
This works by just swapping everything around until everything comes to rest. Aside from the space usage required to store the cycle itself, it only needs O(1) auxiliary storage space.
In general, if you want to design an algorithm that applies a particular permutation to an array of elements, you can usually do so by using cycle decompositions. The general algorithm is the following:
for (each cycle in the cycle decomposition) {
    apply the above algorithm to cycle those elements;
}
The overall time and space complexity for this algorithm depends on the following:
How quickly can we determine the cycle decomposition we want?
How efficiently can we store that cycle decomposition in memory?
To get an O(n)-time, O(1)-space algorithm for the problem at hand, we're going to show that there's a way to determine the cycle decomposition in O(1) time and space. Since everything will get moved exactly once, the overall runtime will be O(n) and the overall space complexity will be O(1). It's not easy to get there, as you'll see, but then again, it's not awful either.
The Permutation Structure
The overarching goal of this problem is to take an array of 2n elements and shuffle it so that even-positioned elements end up at the front of the array and odd-positioned elements end up at the end of the array. Let's suppose for now that we have 14 elements, like this:
0 1 2 3 4 5 6 7 8 9 10 11 12 13
We want to shuffle the elements so that they come out like this:
0 2 4 6 8 10 12 1 3 5 7 9 11 13
There are a couple of useful observations we can have about the way that this permutation arises. First, notice that the first element does not move in this permutation, because even-indexed elements are supposed to show up in the front of the array and it's the first even-indexed element. Next, notice that the last element does not move in this permutation, because odd-indexed elements are supposed to end up at the back of the array and it's the last odd-indexed element.
These two observations, put together, means that if we want to permute the elements of the array in the desired fashion, we actually only need to permute the subarray consisting of the overall array with the first and last elements dropped off. Therefore, going forward, we are purely going to focus on the problem of permuting the middle elements. If we can solve that problem, then we've solved the overall problem.
Now, let's look at just the middle elements of the array. From our above example, that means that we're going to start with an array like this one:
Element   1  2  3  4  5  6  7  8  9 10 11 12
Index     1  2  3  4  5  6  7  8  9 10 11 12
We want to get the array to look like this:
Element   2  4  6  8 10 12  1  3  5  7  9 11
Index     1  2  3  4  5  6  7  8  9 10 11 12
Because this array was formed by taking a 0-indexed array and chopping off the very first and very last element, we can treat this as a one-indexed array. That's going to be critically important going forward, so be sure to keep that in mind.
So how exactly can we go about generating this permutation? Well, for starters, it doesn't hurt to take a look at each element and to try to figure out where it began and where it ended up. If we do so, we can write things out like this:
The element at position 1 ended up at position 7.
The element at position 2 ended up at position 1.
The element at position 3 ended up at position 8.
The element at position 4 ended up at position 2.
The element at position 5 ended up at position 9.
The element at position 6 ended up at position 3.
The element at position 7 ended up at position 10.
The element at position 8 ended up at position 4.
The element at position 9 ended up at position 11.
The element at position 10 ended up at position 5.
The element at position 11 ended up at position 12.
The element at position 12 ended up at position 6.
If you look at this list, you can spot a few patterns. First, notice that the final index of all the even-numbered elements is always half the position of that element. For example, the element at position 4 ended up at position 2, the element at position 12 ended up at position 6, etc. This makes sense - we pushed all the even elements to the front of the array, so half of the elements that came before them will have been displaced and moved out of the way.
Now, what about the odd-numbered elements? Well, there are 12 total elements. Each odd-numbered element gets pushed to the second half, so an odd-numbered element at position 2k+1 will get pushed to at least position 7. Its position within the second half is given by the value of k. Therefore, the elements at an odd position 2k+1 gets mapped to position 7 + k.
We can take a minute to generalize this idea. Suppose that the array we're permuting has length 2n. An element at position 2x will be mapped to position x (again, even numbers get halved), and an element at position 2x+1 will be mapped to position n + 1 + x. Restating this:
The final position of an element at position p is determined as follows:
If p = 2x for some integer x, then 2x ↦ x
If p = 2x+1 for some integer x, then 2x+1 ↦ n + 1 + x
And now we're going to do something that's entirely crazy and unexpected. Right now, we have a piecewise rule for determining where each element ends up: we either divide by two, or we do something weird involving n + 1. However, from a number-theoretic perspective, there is a single, unified rule explaining where all elements are supposed to end up.
The insight we need is that in both cases, it seems like, in some way, we're dividing the index by two. For the even case, the new index really is formed by just dividing by two. For the odd case, the new index kinda looks like it's formed by dividing by two (notice that 2x+1 went to x + (n + 1)), but there's an extra term in there. In a number-theoretic sense, though, both of these really correspond to division by two. Here's why.
Rather than taking the source index and dividing by two to get the destination index, what if we take the destination index and multiply by two? If we do that, an interesting pattern emerges.
Suppose our original number was 2x. The destination is then x, and if we double the destination index to get back 2x, we end up with the source index.
Now suppose that our original number was 2x+1. The destination is then n + 1 + x. Now, what happens if we double the destination index? If we do that, we get back 2n + 2 + 2x. If we rearrange this, we can alternatively rewrite this as (2x+1) + (2n+1). In other words, we've gotten back the original index, plus an extra (2n+1) term.
Now for the kicker: what if all of our arithmetic is done modulo 2n + 1? In that case, if our original number was 2x + 1, then twice the destination index is (2x+1) + (2n+1) = 2x + 1 (modulo 2n+1). In other words, the destination index really is half of the source index, just done modulo 2n+1!
This leads us to a very, very interesting insight: the ultimate destination of each of the elements in a 2n-element array is given by dividing that number by two, modulo 2n+1. This means that there really is a nice, unified rule for determining where everything goes. We just need to be able to divide by two modulo 2n+1. It just happens to work out that in the even case, this is normal integer division, and in the odd case, it works out to taking the form n + 1 + x.
Consequently, we can reframe our problem in the following way: given a 1-indexed array of 2n elements, how do we permute the elements so that each element that was originally at index x ends up at position x/2 mod (2n+1)?
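As a quick sanity check, here is a tiny Python snippet (my own, not part of the answer) that reproduces the table above by multiplying each index by the inverse of two modulo 2n+1:

n2 = 12                  # 2n, the 12 middle elements from the example
mod = n2 + 1             # 2n + 1 = 13
inv2 = (mod + 1) // 2    # inverse of 2 modulo an odd number: 2*7 = 14 = 1 mod 13
for x in range(1, n2 + 1):
    print(x, "->", x * inv2 % mod)   # 1 -> 7, 2 -> 1, 3 -> 8, 4 -> 2, ...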
Cycle Decompositions Revisited
At this point, we've made quite a lot of progress. Given any element, we know where that element should end up. If we can figure out a nice way to get a cycle decomposition of the overall permutation, we're done.
This is, unfortunately, where things get complicated. Suppose, for example, that our array has 10 elements. In that case, we want to transform the array like this:
Initial: 1 2 3 4 5 6 7 8 9 10
Final: 2 4 6 8 10 1 3 5 7 9
The cycle decomposition of this permutation is (1 6 3 7 9 10 5 8 4 2). If our array has 12 elements, we want to transform it like this:
Initial: 1 2 3 4 5 6 7 8 9 10 11 12
Final: 2 4 6 8 10 12 1 3 5 7 9 11
This has cycle decomposition (1 7 10 5 9 11 12 6 3 8 4 2). If our array has 14 elements, we want to transform it like this:
Initial: 1 2 3 4 5 6 7 8 9 10 11 12 13 14
Final: 2 4 6 8 10 12 14 1 3 5 7 9 11 13
This has cycle decomposition (1 8 4 2)(3 9 12 6)(5 10)(7 11 13 14). If our array has 16 elements, we want to transform it like this:
Initial: 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16
Final: 2 4 6 8 10 12 14 16 1 3 5 7 9 11 13 15
This has cycle decomposition (1 9 13 15 16 8 4 2)(3 10 5 11 14 7 12 6).
The problem here is that these cycles don't seem to follow any predictable patterns. This is a real problem if we're going to try to solve this problem in O(1) space and O(n) time. Even though given any individual element we can figure out what cycle contains it and we can efficiently shuffle that cycle, it's not clear how we figure out what elements belong to what cycles, how many different cycles there are, etc.
Primitive Roots
This is where number theory comes in. Remember that each element's new position is formed by dividing that number by two, modulo 2n+1. Thinking about this backwards, we can figure out which number will take the place of each number by multiplying by two modulo 2n+1. Therefore, we can think of this problem by finding the cycle decomposition in reverse: we pick a number, keep multiplying it by two and modding by 2n+1, and repeat until we're done with the cycle.
This gives rise to a well-studied problem. Suppose that we start with the number k and think about the sequence k, 2k, 2^2 k, 2^3 k, 2^4 k, etc., all done modulo 2n+1. Doing this gives different patterns depending on what odd number 2n+1 you're modding by. This explains why the above cycle patterns seem somewhat arbitrary.
I have no idea how anyone figured this out, but it turns out that there's a beautiful result from number theory that talks about what happens if you take this pattern mod 3^k for some number k:
Theorem: Consider the sequence 3^s, 3^s · 2, 3^s · 2^2, 3^s · 2^3, 3^s · 2^4, etc., all modulo 3^k for some k ≥ s. This sequence cycles through every number between 1 and 3^k, inclusive, that is divisible by 3^s but not divisible by 3^(s+1).
We can try this out on a few examples. Let's work modulo 27 = 3^3. The theorem says that if we look at 3, 3 · 2, 3 · 4, etc., all modulo 27, then we should see all the numbers less than 27 that are divisible by 3 and not divisible by 9. Let's see what we get:
3 · 2^0 = 3 · 1  = 3   = 3 mod 27
3 · 2^1 = 3 · 2  = 6   = 6 mod 27
3 · 2^2 = 3 · 4  = 12  = 12 mod 27
3 · 2^3 = 3 · 8  = 24  = 24 mod 27
3 · 2^4 = 3 · 16 = 48  = 21 mod 27
3 · 2^5 = 3 · 32 = 96  = 15 mod 27
3 · 2^6 = 3 · 64 = 192 = 3 mod 27
We ended up seeing 3, 6, 12, 15, 21, and 24 (though not in that order), which are indeed all the numbers less than 27 that are divisible by 3 but not divisible by 9.
We can also try this working mod 27 and considering 1, 2, 2^2, 2^3, 2^4, etc. mod 27, and we should see all the numbers less than 27 that are divisible by 1 and not divisible by 3. In other words, this should give back all the numbers less than 27 that aren't divisible by 3. Let's see if that's true:
2^0  = 1      = 1 mod 27
2^1  = 2      = 2 mod 27
2^2  = 4      = 4 mod 27
2^3  = 8      = 8 mod 27
2^4  = 16     = 16 mod 27
2^5  = 32     = 5 mod 27
2^6  = 64     = 10 mod 27
2^7  = 128    = 20 mod 27
2^8  = 256    = 13 mod 27
2^9  = 512    = 26 mod 27
2^10 = 1024   = 25 mod 27
2^11 = 2048   = 23 mod 27
2^12 = 4096   = 19 mod 27
2^13 = 8192   = 11 mod 27
2^14 = 16384  = 22 mod 27
2^15 = 32768  = 17 mod 27
2^16 = 65536  = 7 mod 27
2^17 = 131072 = 14 mod 27
2^18 = 262144 = 1 mod 27
Sorting these, we got back the numbers 1, 2, 4, 5, 7, 8, 10, 11, 13, 14, 16, 17, 19, 20, 22, 23, 25, 26 (though not in that order). These are exactly the numbers between 1 and 26 that aren't multiples of three!
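The theorem is also easy to check programmatically; here is a small snippet (my own addition) that prints the orbit of each starting point under doubling mod 27:

def orbit(start, mod):
    # repeatedly multiply by two modulo `mod` until the value repeats
    seen, x = [], start
    while x not in seen:
        seen.append(x)
        x = 2 * x % mod
    return seen

print(sorted(orbit(1, 27)))   # 1..26 without the multiples of 3
print(sorted(orbit(3, 27)))   # [3, 6, 12, 15, 21, 24]
print(sorted(orbit(9, 27)))   # [9, 18]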
This theorem is crucial to the algorithm for the following reason: if 2n+1 = 3^k for some number k, then if we process the cycle containing 1, it will properly shuffle all numbers that aren't multiples of three. If we then start the cycle at 3, it will properly shuffle all numbers that are divisible by 3 but not by 9. If we then start the cycle at 9, it will properly shuffle all numbers that are divisible by 9 but not by 27. More generally, if we use the cycle shuffle algorithm on the numbers 1, 3, 9, 27, 81, etc., then we will properly reposition all the elements in the array exactly once and will not have to worry that we missed anything.
So how does this connect to 3^k + 1? Well, we need to have that 2n + 1 = 3^k, so we need to have that 2n = 3^k - 1. But remember - we dropped the very first and very last element of the array when we did this! Adding those back in tells us that we need blocks of size 3^k + 1 for this procedure to work correctly. If the blocks are this size, then we know for certain that the cycle decomposition will consist of a cycle containing 1, a nonoverlapping cycle containing 3, a nonoverlapping cycle containing 9, etc., and that these cycles will contain all the elements of the array. Consequently, we can just start cycling 1, 3, 9, 27, etc. and be absolutely guaranteed that everything gets shuffled around correctly. That's amazing!
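Putting the pieces together for a single block, here is a Python sketch (my own, not from the answer) that shuffles a list of length 3^k + 1 in place, walking the cycles whose leaders are 1, 3, 9, ...:

def shuffle_block(a):
    # a has 3^k + 1 elements; a[0] and a[-1] stay put, and the middle
    # element at 1-based position p moves to p/2 modulo m, where m = 3^k
    m = len(a) - 1
    def dest(p):                      # divide by two modulo m (m is odd)
        return p // 2 if p % 2 == 0 else (p + m) // 2
    leader = 1
    while leader < m:                 # leaders 1, 3, 9, ..., 3^(k-1)
        pos, val = leader, a[leader]
        while True:                   # push val forward around its cycle
            pos = dest(pos)
            a[pos], val = val, a[pos]
            if pos == leader:
                break
        leader *= 3
    return a

print(shuffle_block(['a', 1, 'b', 2, 'c', 3, 'd', 4, 'e', 5]))
# -> ['a', 'b', 'c', 'd', 'e', 1, 2, 3, 4, 5]   (3^2 + 1 = 10 elements)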
And why is this theorem true? It turns out that a number g whose powers 1, g, g^2, g^3, etc., taken mod p^n, cycle through all the numbers that aren't multiples of p (assuming p is prime) is called a primitive root of the number p^n. There's a theorem that says that 2 is a primitive root of 3^k for all numbers k, which is why this trick works. If I have time, I'd like to come back and edit this answer to include a proof of this result, though unfortunately my number theory isn't at a level where I know how to do this.
Summary
This problem was tons of fun to work on. It involves cute tricks with dividing by two modulo an odd number, cycle decompositions, primitive roots, and powers of three. I'm indebted to this arXiv paper, which described a similar (though quite different) algorithm and gave me a sense for the key trick behind the technique, which then let me work out the details for the algorithm you described.
Hope this helps!
Here is most of the mathematical argument missing from templatetypedef's answer. (The rest is comparatively boring.)
Lemma: for all integers k >= 1, we have
2^(2*3^(k-1)) = 1 + 3^k mod 3^(k+1).
Proof: by induction on k.
Base case (k = 1): we have 2^(2*3^(1-1)) = 4 = 1 + 3^1 mod 3^(1+1).
Inductive case (k >= 2): if 2^(2*3^(k-2)) = 1 + 3^(k-1) mod 3^k,
then q = (2^(2*3^(k-2)) - (1 + 3^(k-1)))/3^k is an integer, and
2^(2*3^(k-1)) = (2^(2*3^(k-2)))^3
              = (1 + 3^(k-1) + 3^k*q)^3
              = 1 + 3*(3^(k-1)) + 3*(3^(k-1))^2 + (3^(k-1))^3
                + 3*(1+3^(k-1))^2*(3^k*q) + 3*(1+3^(k-1))*(3^k*q)^2 + (3^k*q)^3
              = 1 + 3^k mod 3^(k+1).
Theorem: for all integers i >= 0 and k >= 1, we have
2^i = 1 mod 3^k if and only if i = 0 mod 2*3^(k-1).
Proof: the “if” direction follows from the Lemma. If
i = 0 mod 2*3^(k-1), then
2^i = (2^(2*3^(k-1)))^(i/(2*3^(k-1)))
= (1+3^k)^(i/(2*3^(k-1))) mod 3^(k+1)
= 1 mod 3^k.
The “only if” direction is by induction on k.
Base case (k = 1): if i != 0 mod 2, then i = 1 mod 2, and
2^i = (2^2)^((i-1)/2)*2
= 4^((i-1)/2)*2
= 2 mod 3
!= 1 mod 3.
Inductive case (k >= 2): if 2^i = 1 mod 3^k, then
2^i = 1 mod 3^(k-1), and the inductive hypothesis implies that
i = 0 mod 2*3^(k-2). Let j = i/(2*3^(k-2)). By the Lemma,
1 = 2^i mod 3^k
= (1+3^(k-1))^j mod 3^k
= 1 + j*3^(k-1) mod 3^k,
where the dropped terms are divisible by (3^(k-1))^2, so
j = 0 mod 3, and i = 0 mod 2*3^(k-1).

Dynamic programming - Largest square block

I need to find the largest square of 1's in a giant file full of 1's and 0's. I know I have to use dynamic programming. I am storing it in a 2D array. Any help with the algorithm to find the largest square would be great, thanks!
example input:
1 0 1 0 1 0
1 0 1 1 1 1
0 1 1 1 1 1
0 0 1 1 1 1
1 1 1 1 1 1
answer:
1 1 1 1
1 1 1 1
1 1 1 1
1 1 1 1
My code so far:

int Square(int Sq[5][6], int x, int y) {
    if (Sq[x][y] == 0) {
        return 0;
    } else {
        return 1 + MIN(Square(Sq, x-1, y), Square(Sq, x, y-1), Square(Sq, x-1, y-1));
    }
}

(assuming values already entered into the array)

int main() {
    int Sq[5][6];               /* [4][5] is the bottom right corner */
    int X = Square(Sq, 4, 5);
}
How do I go on from there?
Here is a sketch of the solution:
For each of the cells we will keep a counter of how big a square can be made using that cell as the top left. Clearly all cells with 0 will have 0 as the count.
Start iterating from the bottom right cell, go to the bottom left, then go one row up and repeat.
At each scan do this:
1. If the cell has 0, assign count = 0.
2. If the cell has 1 and is an edge cell (bottom or right edge only), assign count = 1.
3. For all other cells, check the counts of the cells to its right, below-right, and below. Take the min of them, add 1, and assign that to the count. Keep a global max_count variable to keep track of the max count so far.
At the end of traversing the matrix, max_count will have the desired value.
The complexity is no more than the cost of traversing the matrix.
This is how the matrix will look after the traversal. Values in parentheses are the counts, i.e. the biggest square that can be made using that cell as the top left.
1(1) 0(0) 1(1) 0(0) 1(1) 0(0)
1(1) 0(0) 1(4) 1(3) 1(2) 1(1)
0(0) 1(1) 1(3) 1(3) 1(2) 1(1)
0(0) 0(0) 1(2) 1(2) 1(2) 1(1)
1(1) 1(1) 1(1) 1(1) 1(1) 1(1)
Implementation in Python

def max_size(mat, ZERO=0):
    """Find the largest square of non-ZERO values in the matrix `mat`."""
    nrows, ncols = len(mat), (len(mat[0]) if mat else 0)
    if not (nrows and ncols):
        return 0  # empty matrix or rows
    counts = [[0] * ncols for _ in xrange(nrows)]
    for i in reversed(xrange(nrows)):      # for each row
        assert len(mat[i]) == ncols        # matrix must be rectangular
        for j in reversed(xrange(ncols)):  # for each element in the row
            if mat[i][j] != ZERO:
                counts[i][j] = (1 + min(
                    counts[i][j+1],        # east
                    counts[i+1][j],        # south
                    counts[i+1][j+1]       # south-east
                )) if i < (nrows - 1) and j < (ncols - 1) else 1  # edges
    return max(c for row in counts for c in row)
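A quick usage check with the matrix from the question (added by me; `print` is wrapped in parentheses so it runs under Python 2 or 3):

mat = [[1, 0, 1, 0, 1, 0],
       [1, 0, 1, 1, 1, 1],
       [0, 1, 1, 1, 1, 1],
       [0, 0, 1, 1, 1, 1],
       [1, 1, 1, 1, 1, 1]]
print(max_size(mat))  # -> 4, the side of the 4x4 square of 1's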
LSBRA(X,Y) means "Largest Square with Bottom-Right At X,Y".
Pseudocode:

LSBRA(X,Y):
    if cell (X,Y) == 0:
        0
    else:
        1 + MIN( LSBRA(X-1,Y), LSBRA(X,Y-1), LSBRA(X-1,Y-1) )

(For edge cells, you can skip the MIN part and just return 1 if (X,Y) is not 0.)
Work diagonally through the grid in "waves", like the following:
0 1 2 3 4
+----------
0 | 1 2 3 4 5
1 | 2 3 4 5 6
2 | 3 4 5 6 7
3 | 4 5 6 7 8
or alternatively, work through left-to-right, top-to-bottom, as long as you fill in edge cells.
0 1 2 3 4
+----------
0 | 1 2 3 4 5
1 | 6 7 8 9 .
2 | . . . . .
3 | . . . . .
That way you'll never run into a computation where you haven't previously computed the necessary data - so all of the LSBRA() "calls" are actually just table lookups of your previous computation results (hence the dynamic programming aspect).
Why it works
In order to have a square with a bottom-right at X,Y - it must contain the overlapping squares of one less dimension that touch each of the other 3 corners. In other words, to have
XXXX
XXXX
XXXX
XXXX
you must also have...
XXX. .XXX .... ....
XXX. .XXX XXX. ....
XXX. .XXX XXX. ....
.... .... XXX. ...X
As long as you have those 3 (each of the LSBRA checks) N-size squares plus the current square is also "occupied", you will have an (N+1)-size square.
The first algorithm that comes to my mind is:
1. '&&' column/row 1 with column/row 2, that is to say, do an '&&' operation between each entry and its corresponding entry in the other column/row.
2. Check the resulting column: if there is any run of two 1's, we hit a 2x2 square.
3. '&&' the next column with the result of the first two. If there is any run of three 1's, we have hit a 3x3 square.
4. Repeat until all columns have been used.
5. Repeat steps 1-4 starting at column 2.
I won't show you the implementation, as it's quite straightforward and your problem sounds like homework. Additionally, there are likely much more efficient ways to do this, as this will become slow if the input is very large.
Let the input matrix be M: n x m.
T[i][j] is the DP matrix which contains the largest square side whose bottom-right corner is at (i,j).
General rule to fill the table:

if (M[i][j] == 1) {
    int v = min(T[i][j-1], T[i-1][j]);
    v = min(v, T[i-1][j-1]);
    T[i][j] = v + 1;
} else {
    T[i][j] = 0;
}

The resulting square size is the max value in T.
Filling T[i][0] and T[0][j] is trivial.
I am not sure if this algorithm can be used for your huge file, but note that you don't need to store the entire matrix T, only the current and previous rows (see the sketch below).
The following notes can help to understand the general idea:
All squares with bottom-right corners at (i-1, j), (i, j-1), (i-1, j-1) and size s lie inside the square with bottom-right corner (i, j) and size s+1.
If there is a square of size s+1 with bottom-right corner at (i, j), then the size of the maximal square with bottom-right corner at any of (i-1, j), (i, j-1), (i-1, j-1) is at least s.
The opposite is also true: if the size of at least one square with bottom-right corner at (i-1, j), (i, j-1), (i-1, j-1) is less than s, then the square with bottom-right corner at (i, j) cannot be larger than s+1.
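A sketch of that two-row idea in Python (my own addition): `rows` can be any iterable of equal-length rows, e.g. lines parsed from the huge file one at a time.

def largest_square_streaming(rows, one=1):
    prev, best = None, 0
    for row in rows:
        cur = [0] * len(row)
        for j, v in enumerate(row):
            if v == one:
                if j == 0 or prev is None:
                    cur[j] = 1                            # edge cell
                else:
                    cur[j] = 1 + min(cur[j-1], prev[j], prev[j-1])
                best = max(best, cur[j])
        prev = cur                                        # keep only two rows
    return best

mat = [[1, 0, 1, 0, 1, 0],
       [1, 0, 1, 1, 1, 1],
       [0, 1, 1, 1, 1, 1],
       [0, 0, 1, 1, 1, 1],
       [1, 1, 1, 1, 1, 1]]
print(largest_square_streaming(mat))  # -> 4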
OK, the most inefficient but simple way would be:
1. Select the first item. Check if it is 1; if so, you have a 1x1 square.
2. Check one below and one to the right; if both are 1, then check row 2 col 2 - if 1, you have a 2x2 square.
3. Check row 3 col 1, col 2 and col 3, plus row 1 col 3 and row 2 col 3 - if all 1, it's 3x3.
So basically you keep expanding the row and column together and check all the cells inside their boundaries. As soon as you hit a 0, it's broken, so you move along 1 point in the row and start again.
At the end of a row, move to the next row.
Continue until the end.
You can probably see how these fit into while loops etc., and how &&s can be used to check for the 0s; as you look at it, you'll perhaps also notice how it can be sped up. But as the other answer just mentioned, it does sound a little like homework, so we'll leave the actual code up to you.
Good luck!
The key here is that you can keep track of the root of the area instead of the actual area, using dynamic programming.
The algorithm is as follows:
Store a 2D array of ints called max-square, where the element at index (i,j) represents the size of the square it's in, with (i,j) being the bottom right corner. (If max-square[i,j] = 2, it means that index (i,j) is the bottom right corner of a square of size 2^2 = 4.)
For each index (i,j):
If the element at (i,j) is 0, then set max-square[i,j] to 0.
Otherwise, find the minimum of max-square[i-1, j], max-square[i, j-1] and max-square[i-1, j-1], and set max-square[i, j] to 1 + that minimum. Inductively, you'll end up filling in the whole max-square array. Find (or keep track of) the maximum value in the process, and return that value squared.
Take a look at these solutions people have proposed:
https://leetcode.com/discuss/questions/oj/maximal-square?sort=votes
Let N be the number of cells in the 2D array. There exists a very efficient algorithm to list all the maximal empty rectangles. The largest empty square is inside one of these empty rectangles, and finding it is trivial once the list of the maximal empty rectangles has been computed. A paper presenting an O(N) algorithm to create such a list can be found at www.ulg.ac.be/telecom/rectangles, as well as source code (not optimized). Note that a proof exists (see the paper) that the number of maximal empty rectangles is bounded by N. Therefore, selecting the largest empty square can be done in O(N), and the overall method is also O(N). In practice, this method is very fast. The implementation is very easy to do, since the whole code should not be more than 40 lines of C (the algorithm to list all the maximal empty rectangles takes about 30 lines of C).
