Google App Engine Geohashing - google-app-engine

I am writing a web application using GWT and App Engine. My application will need to post and query items based on their latitude and longitude.
As a result of Google's distributed database design you can't simply query on a set of inequalities. Instead they suggest doing geohashing. The method is described on this page:
http://code.google.com/appengine/articles/geosearch.html
Essentially you precompute a bounding box so that you can query for items that have been tagged with that bounding box.
There is one part of the process that I don't understand. What does the "slice" attribute mean?
Thanks for your help!

For a complete Java port of GeoModel, please see http://code.google.com/p/javageomodel/.
There is a demo class that shows how to use it.

Rather than implementing the geohash yourself, you might be interested in the GeoModel open source project, which implements a geohash-like system on Google App Engine. Instead of understanding all the details, you can just import this library and make calls like proximity_fetch() and bounding_box_fetch().
This more recent article describes how it works and provides an example that uses it.

Instead of defining a bounding box with 4 coordinates (min and max latitude, min and max longitude), you can define it with the coordinates of the north-west corner of the box and two parameters: resolution and slice.
The resolution defines the scale of the box; it is expressed as the number of digits after the decimal point.
The slice is the width and height of the box, using the least significant digit as its unit.
The comments in geobox.py explain this in more detail, with good examples:
To query for members of a bounding box, we start with some input coordinates
like lat=37.78452 long=-122.39532 (both resolution 5). We then round these
coordinates up and down to the nearest "slice" to generate a geobox. A "slice"
is how finely to divide each level of resolution in the geobox. The minimum
slice size is 1, the maximum does not have a limit, since larger slices will
just spill over into lower resolutions (hopefully the examples will explain).
Some examples:
resolution=5, slice=2, and lat=37.78452 long=-122.39532:
"37.78452|-122.39532|37.78450|-122.39530"
resolution=5, slice=10, and lat=37.78452 long=-122.39532:
"37.78460|-122.39540|37.78450|-122.39530"
resolution=5, slice=25, and lat=37.78452 long=-122.39532:
"37.78475|-122.39550|37.78450|-122.39525"

I'm working on a GWT/GAE project and had the same problem. My solution was to use a Geohash class that I modified slightly to be GWT-friendly. It works great for my proximity-search needs.
If you've never seen Geohashes in action, check out Dave Troy's JS demo page.
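If you want to roll your own instead, here is a minimal sketch of the standard geohash encoding algorithm (my own illustration, not the exact class I modified): it interleaves longitude and latitude bits and maps each group of 5 bits to a base-32 character.
public final class GeohashSketch {
    private static final String BASE32 = "0123456789bcdefghjkmnpqrstuvwxyz";

    public static String encode(double lat, double lon, int precision) {
        double[] latRange = { -90, 90 };
        double[] lonRange = { -180, 180 };
        StringBuilder hash = new StringBuilder();
        boolean evenBit = true; // even bits encode longitude, odd bits latitude
        int bit = 0, ch = 0;
        while (hash.length() < precision) {
            double[] range = evenBit ? lonRange : latRange;
            double value = evenBit ? lon : lat;
            double mid = (range[0] + range[1]) / 2;
            ch <<= 1;
            if (value >= mid) {
                ch |= 1;
                range[0] = mid;
            } else {
                range[1] = mid;
            }
            evenBit = !evenBit;
            if (++bit == 5) { // 5 bits per base-32 character
                hash.append(BASE32.charAt(ch));
                bit = 0;
                ch = 0;
            }
        }
        return hash.toString();
    }

    public static void main(String[] args) {
        // the classic example: should print "u4pruydqqvj"
        System.out.println(encode(57.64911, 10.40744, 11));
    }
}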

An alternative for doing geospatial searches in App Engine is the Search API. You won't need to worry about geohashing or implementation details, and you'll be able to search for elements close to a geo point.
https://developers.google.com/appengine/docs/python/search/overview#Performing_Location-Based_Searches
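For illustration, a rough Java sketch of a location-based search with the Search API (the index and field names here are made up; see the linked docs for the authoritative details):
import com.google.appengine.api.search.Document;
import com.google.appengine.api.search.Field;
import com.google.appengine.api.search.GeoPoint;
import com.google.appengine.api.search.Index;
import com.google.appengine.api.search.IndexSpec;
import com.google.appengine.api.search.Results;
import com.google.appengine.api.search.ScoredDocument;
import com.google.appengine.api.search.SearchServiceFactory;

public class GeoSearchExample {
    public static void main(String[] args) {
        Index index = SearchServiceFactory.getSearchService()
                .getIndex(IndexSpec.newBuilder().setName("places").build());

        // index a document with a geo point field
        Document doc = Document.newBuilder()
                .setId("place-1")
                .addField(Field.newBuilder().setName("location")
                        .setGeoPoint(new GeoPoint(37.78452, -122.39532)))
                .build();
        index.put(doc);

        // find documents within roughly 1000 meters of a point
        Results<ScoredDocument> results = index.search(
                "distance(location, geopoint(37.78452, -122.39532)) < 1000");
        for (ScoredDocument d : results) {
            System.out.println(d.getId());
        }
    }
}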

I was working on a GAE project with geohashing and this python library did the trick for me: http://mappinghacks.com/code/geohash.py.txt

I was also in need of a Java version of GeoModel. I had been working with geohashes before, which allowed me to fetch locations in a given bounding box. But there are considerable limitations to this when it comes to sorting: in order to get BigTable to accept a filter like "geohash > '" + bottomLeft + "' && geohash < '" + topRight + "'", you have to order the results by geohash as well, which makes it impossible to sort them by other criteria (especially if you want to use pagination). At the same time, I can't think of a way to sort the results by distance (from a given user position, i.e. the center of the bounding box) other than in Java code. Again, this will not work if you need pagination.
Because of these problems I had to use a different approach, and GeoModel/Geoboxes seemed to be the way. So, I ported the Python-code to Java and it's just working fine! Here is the result:
import java.util.ArrayList;
import java.util.List;

public class Geobox {

    // Round a coordinate up (towards the north/east edge) to the nearest slice.
    private static double roundSlicedown(double coord, double slice) {
        double remainder = coord % slice;
        if (Double.isNaN(remainder)) { // was: remainder == Double.NaN, which is always false
            return coord;
        }
        if (coord > 0) {
            return coord - remainder + slice;
        } else {
            return coord - remainder;
        }
    }

    // Returns { north, west, south, east } of the geobox containing lat/lng.
    private static double[] computeTuple(double lat, double lng,
            int resolution, double slice) {
        slice = slice * Math.pow(10, -resolution);
        double adjustedLat = roundSlicedown(lat, slice);
        double adjustedLng = roundSlicedown(lng, slice);
        return new double[] { adjustedLat, adjustedLng - slice,
                adjustedLat - slice, adjustedLng };
    }

    // Formats the tuple as "north|west|south|east" with the given resolution.
    private static String formatTuple(double[] values, int resolution) {
        StringBuilder s = new StringBuilder();
        String format = String.format("%%.%df", resolution);
        for (int i = 0; i < values.length; i++) {
            // replace(',', '.') guards against locales that format decimals with a comma
            s.append(String.format(format, values[i]).replace(',', '.'));
            if (i < values.length - 1) {
                s.append("|");
            }
        }
        return s.toString();
    }

    public static String compute(double lat, double lng, int resolution,
            int slice) {
        return formatTuple(computeTuple(lat, lng, resolution, slice),
                resolution);
    }

    // Computes the geobox of the point plus its 8 neighbours (a 3x3 set).
    public static List<String> computeSet(double lat, double lng,
            int resolution, double slice) {
        double[] primaryBox = computeTuple(lat, lng, resolution, slice);
        slice = slice * Math.pow(10, -resolution);
        List<String> set = new ArrayList<String>();
        for (int i = -1; i < 2; i++) {
            double latDelta = slice * i;
            for (int j = -1; j < 2; j++) {
                double lngDelta = slice * j;
                double[] adjustedBox = new double[] { primaryBox[0] + latDelta,
                        primaryBox[1] + lngDelta, primaryBox[2] + latDelta,
                        primaryBox[3] + lngDelta };
                set.add(formatTuple(adjustedBox, resolution));
            }
        }
        return set;
    }
}
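For example, with the coordinates from the geobox.py comments quoted earlier, this port should produce the same strings (modulo floating-point rounding):
// resolution=5, slice=25, lat=37.78452, lng=-122.39532
String box = Geobox.compute(37.78452, -122.39532, 5, 25);
// expected: "37.78475|-122.39550|37.78450|-122.39525"
List<String> boxes = Geobox.computeSet(37.78452, -122.39532, 5, 25);
// the same box plus its 8 neighbouring boxes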

Sorry for the late answer, but I didn't return to this page for some time. A GeoDao implementation using the geobox approach could look like this:
public class GeoDaoImpl<T extends GeoModel> extends DaoImpl<T> {

    // geobox configs are: resolution, slice, use set (1 = true)
    private final static int[][] GEOBOX_CONFIGS =
            { { 4, 5, 1 },
              { 3, 2, 1 },
              { 3, 8, 0 },
              { 3, 16, 0 },
              { 2, 5, 0 } };

    public GeoDaoImpl(Class<T> persistentClass) {
        super(persistentClass);
    }

    public List<T> findInGeobox(double lat, double lng, int predefinedBox, String filter, String ordering, int offset, int limit) {
        return findInGeobox(lat, lng, GEOBOX_CONFIGS[predefinedBox][0], GEOBOX_CONFIGS[predefinedBox][1], filter, ordering, offset, limit);
    }

    public List<T> findInGeobox(double lat, double lng, int resolution, int slice, String filter, String ordering, int offset, int limit) {
        String box = Geobox.compute(lat, lng, resolution, slice);
        if (filter == null) {
            filter = "";
        } else {
            filter += " && ";
        }
        // equality filter on the precomputed geobox strings -- no inequality filters needed
        filter += "geoboxes=='" + box + "'";
        return super.find(persistentClass, filter, ordering, offset, limit);
    }

    public List<T> findNearest(final double lat, final double lng, String filter, String ordering, int offset, int limit) {
        LinkedHashMap<String, T> uniqueList = new LinkedHashMap<String, T>();
        int length = offset + limit;
        // query the geoboxes from finest to coarsest until enough results are collected
        for (int i = 0; i < GEOBOX_CONFIGS.length; i++) {
            List<T> subList = findInGeobox(lat, lng, i, filter, ordering, 0, limit);
            for (T model : subList) {
                uniqueList.put(model.getId(), model);
            }
            if (uniqueList.size() >= length) {
                break;
            }
        }
        // apply offset/limit to the de-duplicated results
        List<T> list = new ArrayList<T>();
        int i = 0;
        for (String key : uniqueList.keySet()) {
            if (i >= offset && i < length) { // was i <= length, which returned one element too many
                list.add(uniqueList.get(key));
            }
            i++;
        }
        // sort the page by distance from the requested position
        Collections.sort(list, new Comparator<T>() {
            public int compare(T model1, T model2) {
                double distance1 = Geoutils.distFrom(model1.getLatitude(), model1.getLongitude(), lat, lng);
                double distance2 = Geoutils.distFrom(model2.getLatitude(), model2.getLongitude(), lat, lng);
                return Double.compare(distance1, distance2);
            }
        });
        return list;
    }

    @Override
    public void save(T model) {
        preStore(model);
        super.save(model);
    }

    // geoboxes are needed to find the nearest entities and sort them by distance
    private void preStore(T model) {
        List<String> geoboxes = new ArrayList<String>();
        for (int[] geobox : GEOBOX_CONFIGS) {
            if (geobox[2] == 1) { // use the 3x3 set for the finer resolutions
                geoboxes.addAll(Geobox.computeSet(model.getLatitude(), model.getLongitude(), geobox[0], geobox[1]));
            } else {
                geoboxes.add(Geobox.compute(model.getLatitude(), model.getLongitude(), geobox[0], geobox[1]));
            }
        }
        model.setGeoboxes(geoboxes);
    }
}
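Hypothetical usage, assuming an entity class Place that extends GeoModel (DaoImpl, GeoModel and Geoutils are my own base classes and are not shown here):
GeoDaoImpl<Place> placeDao = new GeoDaoImpl<Place>(Place.class);
// the geobox strings are computed in preStore() when the entity is saved
placeDao.save(somePlace);
// fetch the places closest to a position, sorted by distance, first page of 20
List<Place> nearby = placeDao.findNearest(37.78452, -122.39532, null, null, 0, 20);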

Related

How to calculate a text size?

Apparently the com.codename1.ui.plaf.LookAndFeel.getTextAreaSize(TextArea, boolean) doesn't return an exact text size.
I would like to determine the size of a speech bubble, or the size of a possibly multi-line text in a speech bubble.
How would I do that? Is there a utility method for that somewhere in CN1?
This returns the preferred size. With multiline text we size based on the number of rows/columns; since the text is modifiable and scrollable, its content isn't fully taken into consideration for this calculation. Even when it is taken into consideration, this can't be 100% accurate, as even a 1 pixel difference in width can cause a line break which will reflow everything. There are many special cases involved in text sizing, so you shouldn't rely on accuracy. Try using setActAsLabel or SpanLabel.
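A minimal sketch of those two suggestions (assuming the standard Codename One TextArea and SpanLabel APIs; treat this as illustration rather than a drop-in solution):
// Option 1: let the TextArea size itself like a label
TextArea bubbleText = new TextArea("Some speech bubble text");
bubbleText.setActAsLabel(true);
bubbleText.setEditable(false);
Dimension pref = bubbleText.getPreferredSize();

// Option 2: SpanLabel wraps multi-line text and sizes to its content
SpanLabel bubble = new SpanLabel("Some speech bubble text");
Dimension bubblePref = bubble.getPreferredSize();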
Apparently there is currently no text area component in Codename One that can be used to size to its content.
Here is a custom one that does that:
private class Hint extends Component {
    final String text;
    final int rowsGap = new TextArea().getRowsGap();

    private Hint(String aText) {
        text = aText;
    }

    @Override
    protected Dimension calcPreferredSize() {
        int prefW = 0;
        int prefH = 0;
        Style style = getStyle();
        Font font = style.getFont();
        List<String> lines = StringUtil.tokenize(text, "\n");
        int tally = 0;
        for (String line : lines) {
            tally++;
            // take the widest line (the original assignment kept only the last line's width)
            prefW = Math.max(prefW, font.stringWidth(line));
        }
        prefH = tally * (font.getHeight() + rowsGap);
        prefW += getUnselectedStyle().getHorizontalPadding() + 5;
        prefH += style.getPaddingTop() + style.getPaddingBottom();
        if (style.getBorder() != null) {
            prefW = Math.max(style.getBorder().getMinimumWidth(), prefW);
            prefH = Math.max(style.getBorder().getMinimumHeight(), prefH);
        }
        if (getUIManager().getLookAndFeel().isBackgroundImageDetermineSize() && style.getBgImage() != null) {
            prefW = Math.max(style.getBgImage().getWidth(), prefW);
            prefH = Math.max(style.getBgImage().getHeight(), prefH);
        }
        return new Dimension(prefW, prefH);
    }

    @Override
    public void paint(Graphics aGraphics) {
        Style style = getStyle();
        int xLeft = style.getPaddingLeft(false), yTop = style.getPaddingTop();
        List<String> lines = StringUtil.tokenize(text, "\n");
        int y = yTop;
        aGraphics.setColor(0x000000);
        for (String line : lines) {
            aGraphics.drawString(line, getX() + xLeft, getY() + y);
            y += style.getFont().getHeight() + rowsGap;
        }
    }
}

Algorithm to iterate N-dimensional array in pseudo random order

I have an array that I would like to iterate in random order. That is, I would like my iteration to visit each element only once in a seemingly random order.
Would it be possible to implement an iterator that would iterate elements like this without storing the order or other data in a lookup table first?
Would it be possible to do it for N-dimensional arrays where N>1?
UPDATE: Some of the answers mention how to do this by storing indices. A major point of this question is how to do it without storing indices or other data.
I decided to solve this because it annoyed me to death not remembering the name of the solution I had heard of before. I did, however, remember it in the end; more on that at the bottom of this post.
My solution depends on the mathematical properties of some cleverly calculated numbers:
range = array size
prime = closestPrimeAfter(range)
root = closestPrimitiveRootTo(range/2)
state = root
With this setup we can calculate the following repeatedly, and it will iterate all elements of the array exactly once in a seemingly random order, after which it will loop and traverse the array in the same exact order again.
state = (state * root) % prime
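As a small worked example (my own numbers, not taken from the code below): with range = 10 we can pick prime = 11 and root = 2, which is a primitive root modulo 11. Starting from state = 2 and repeatedly applying state = (state * 2) % 11 produces 4, 8, 5, 10, 9, 7, 3, 6, 1, 2, that is, every value from 1 to 10 exactly once before the cycle repeats.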
I implemented and tested this in Java, so I decided to paste my code here for future reference.
import java.math.BigInteger;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Random;
public class PseudoRandomSequence {
private long state;
private final long range;
private final long root;
private final long prime;
//Debugging counter
private int dropped = 0;
public PseudoRandomSequence(int r) {
range = r;
prime = closestPrimeAfter(range);
root = modPow(generator(prime), closestPrimeTo(prime / 2), prime);
reset();
System.out.println("-- r:" + range);
System.out.println(" p:" + prime);
System.out.println(" k:" + root);
System.out.println(" s:" + state);
}
// https://en.wikipedia.org/wiki/Primitive_root_modulo_n
private static long modPow(long base, long exp, long mod) {
return BigInteger.valueOf(base).modPow(BigInteger.valueOf(exp), BigInteger.valueOf(mod)).intValue();
}
//http://e-maxx-eng.github.io/algebra/primitive-root.html
private static long generator(long p) {
ArrayList<Long> fact = new ArrayList<Long>();
long phi = p - 1, n = phi;
for (long i = 2; i * i <= n; ++i) {
if (n % i == 0) {
fact.add(i);
while (n % i == 0) {
n /= i;
}
}
}
if (n > 1) fact.add(n);
for (long res = 2; res <= p; ++res) {
boolean ok = true;
for (long i = 0; i < fact.size() && ok; ++i) {
ok &= modPow(res, phi / fact.get((int) i), p) != 1;
}
if (ok) {
return res;
}
}
return -1;
}
public long get() {
return state - 1;
}
public void advance() {
//This loop simply skips all results that overshoot the range, which should never happen if range is a prime number.
dropped--;
do {
state = (state * root) % prime;
dropped++;
} while (state > range);
}
public void reset() {
state = root;
dropped = 0;
}
private static boolean isPrime(long num) {
if (num == 2) return true;
if (num % 2 == 0) return false;
for (int i = 3; i * i <= num; i += 2) {
if (num % i == 0) return false;
}
return true;
}
private static long closestPrimeAfter(long n) {
long up;
for (up = n + 1; !isPrime(up); ++up)
;
return up;
}
private static long closestPrimeBefore(long n) {
long dn;
for (dn = n - 1; !isPrime(dn); --dn)
;
return dn;
}
private static long closestPrimeTo(long n) {
final long dn = closestPrimeBefore(n);
final long up = closestPrimeAfter(n);
return (n - dn) > (up - n) ? up : dn;
}
private static boolean test(int r, int loops) {
final int array[] = new int[r];
Arrays.fill(array, 0);
System.out.println("TESTING: array size: " + r + ", loops: " + loops + "\n");
PseudoRandomSequence prs = new PseudoRandomSequence(r);
final long ct = loops * r;
//Iterate the array 'loops' times, incrementing the value for each cell for every visit.
for (int i = 0; i < ct; ++i) {
prs.advance();
final long index = prs.get();
array[(int) index]++;
}
//Verify that each cell was visited exactly 'loops' times, confirming the validity of the sequence
for (int i = 0; i < r; ++i) {
final int c = array[i];
if (loops != c) {
System.err.println("ERROR: array element #" + i + " was " + c + " instead of " + loops + " as expected\n");
return false;
}
}
//TODO: Verify the "randomness" of the sequence
System.out.println("OK: Sequence checked out with " + prs.dropped + " drops (" + prs.dropped / loops + " per loop vs. diff " + (prs.prime - r) + ") \n");
return true;
}
//Run lots of random tests
public static void main(String[] args) {
Random r = new Random();
r.setSeed(1337);
for (int i = 0; i < 100; ++i) {
PseudoRandomSequence.test(r.nextInt(1000000) + 1, r.nextInt(9) + 1);
}
}
}
As stated at the top, about 10 minutes after spending a good part of my night actually getting a result, I DID remember where I had read about the original way of doing this. It was in a small C implementation of a 2D graphics "dissolve" effect as described in Graphics Gems vol. 1, which in turn is a 2D adaptation, with some optimizations, of a mechanism called an "LFSR" (wikipedia article here, original dissolve.c source code here).
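For reference, a minimal sketch of the LFSR idea (my own illustration, not the dissolve.c code): a maximal-length LFSR visits every non-zero n-bit value exactly once per cycle, and indices beyond the array length are simply skipped.
public class LfsrIteration {
    public static void main(String[] args) {
        int[] array = new int[200]; // any length below 256 works with an 8-bit LFSR
        int lfsr = 1;               // any non-zero seed
        do {
            // Galois LFSR step; taps 8,6,5,4 (mask 0xB8) give a maximal 255-value cycle
            lfsr = (lfsr >>> 1) ^ (-(lfsr & 1) & 0xB8);
            if (lfsr <= array.length) {
                array[lfsr - 1]++;  // "visit" index lfsr - 1
            }
        } while (lfsr != 1);
        // at this point every cell has been visited exactly once
    }
}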
You could collect all possible indices in a list and then remove a random index to visit. I know this is sort of like a lookup table, but I don't see any other option than this.
Here is an example for a one-dimensional array (adaption to multiple dimensions should be trivial):
import java.util.ArrayList;
import java.util.List;

class RandomIterator<T> {
    T[] array;
    List<Integer> remainingIndeces;

    public RandomIterator(T[] array) {
        this.array = array;
        this.remainingIndeces = new ArrayList<>();
        for (int i = 0; i < array.length; ++i)
            remainingIndeces.add(i);
    }

    public T next() {
        // remove a random index so it can never be visited twice
        return array[remainingIndeces.remove((int) (Math.random() * remainingIndeces.size()))];
    }

    public boolean hasNext() {
        return !remainingIndeces.isEmpty();
    }
}
On a side note: if this code is performance-relevant, this method would perform far worse, as the random removal from the list triggers copies if you use a list backed by an array (a linked list won't help either, as indexed access is O(n)). I would suggest a lookup structure (e.g. a HashSet in Java) that stores all visited indices to circumvent this problem (though that's exactly what you did not want to use).
EDIT: Another approach is to copy the array and use a library function to shuffle it, then traverse it in linear order. If your array isn't that big, this seems like the most readable and performant option.
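A quick sketch of that shuffle approach, shuffling a copied index list rather than the array itself so the original data stays untouched:
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class ShuffledIteration {
    public static void main(String[] args) {
        int[] array = { 10, 20, 30, 40, 50 };
        List<Integer> order = new ArrayList<Integer>();
        for (int i = 0; i < array.length; i++) {
            order.add(i);
        }
        Collections.shuffle(order);             // Fisher-Yates under the hood
        for (int index : order) {
            System.out.println(array[index]);   // each element visited exactly once
        }
    }
}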
You would need to create a pseudo-random number generator that generates values from 0 to X-1 and takes X iterations before repeating the cycle, where X is the product of all the dimension sizes. I don't know if there is a generic solution for doing this. Wiki article for one type of random number generator:
http://en.wikipedia.org/wiki/Linear_congruential_generator
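A sketch of how an LCG could be used here (my own example, using the well-known Numerical Recipes constants): with parameters satisfying the Hull-Dobell theorem the generator has full period m, so picking m as the next power of two at least X and skipping outputs of X or more still yields each index in 0..X-1 exactly once.
public class LcgIteration {
    public static void main(String[] args) {
        int size = 10;                                                    // X = total number of elements
        int m = Math.max(size, Integer.highestOneBit(size - 1) << 1);     // next power of two >= size
        long a = 1664525, c = 1013904223;   // a % 4 == 1 and c odd: full period for power-of-two m
        long state = 0;
        for (int produced = 0; produced < size; ) {
            state = (a * state + c) % m;
            if (state < size) {             // skip values outside the array ("cycle walking")
                System.out.println(state);  // visit index 'state'
                produced++;
            }
        }
    }
}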
Yes, it is possible. Imagine a 3D array (you are not likely to use anything with more dimensions than that). This is like a cube, and where all 3 lines connect is a cell. You can enumerate your cells 1 to N using a dictionary; you can do this initialization in loops and create a list of cells to use for the random draw.
Initialization
totalCells = ... (xMax * yMax * zMax)
i = 0
For (x = 0; x < xMax ; x++)
{
    For (y = 0; y < yMax ; y++)
    {
        For (z = 0; z < zMax ; z++)
        {
            dict.Add(i, new Cell(x, y, z))
            lst.Add(i)
            i++
        }
    }
}
Now, all you have to do is iterate randomly:
Do While (lst.Count > 0)
{
    indexToVisit = rand.Next(0, lst.Count - 1)
    currentCell = dict[lst[indexToVisit]]
    lst.RemoveAt(indexToVisit)
    // Do something with the current cell here
    . . . . . .
}
This is pseudo-code, since you didn't mention the language you work in.
Another way is to randomize 3 lists (or however many dimensions you have) and then just loop through them in nested fashion - this will be random in the end.

apache commons math - NotStrictlyPositiveException when only 1 value exists in bin

I am trying to use Apache Commons Math for kernel density estimation over a group of values. One bin happens to have only one value, and when I try to call cumulativeProbability() I get a NotStrictlyPositiveException. Is there any way to prevent this? I can't be sure that all the bins will have at least one value.
Thanks.
Given that this bug is still there, I wrote my own implementation of the EmpiricalDistribution class, following their guidelines.
I only re-implemented the functionality that I needed, i.e. computing the entropy of a distribution, but you can easily extend it to your needs.
public class EmpiricalDistribution {
    private double[] values;
    private int[] binCountArray;
    private double maxValue, minValue;
    private double mean, stDev;

    public EmpiricalDistribution(double[] values) {
        this.values = values;
        int binCount = NumberUtil.roundToClosestInt(values.length / 10.0);
        binCountArray = new int[binCount];
        maxValue = Double.NEGATIVE_INFINITY;
        minValue = Double.POSITIVE_INFINITY;
        for (double value : values) {
            if (value > maxValue) maxValue = value;
            if (value < minValue) minValue = value;
        }
        double binRange = (maxValue - minValue) / binCount;
        for (double value : values) {
            int bin = (int) ((value - minValue) / binRange);
            bin = Math.min(binCountArray.length - 1, bin);
            binCountArray[bin]++;
        }
        mean = (new Mean()).evaluate(values);
        stDev = (new StandardDeviation()).evaluate(values, mean);
    }

    public double getEntropy() {
        double entropy = 0;
        for (int valuesInBin : binCountArray) {
            if (valuesInBin == 0) continue;
            double binProbability = valuesInBin / (double) values.length;
            entropy -= binProbability * FastMath.log(2, binProbability);
        }
        return entropy;
    }

    public double getMean() {
        return mean;
    }

    public double getStandardDeviation() {
        return stDev;
    }
}
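Hypothetical usage of the class above (NumberUtil.roundToClosestInt is my own helper; Mean, StandardDeviation and FastMath come from Commons Math; loadSamples() stands in for however you obtain your values):
double[] samples = loadSamples();      // your data; the bin count becomes samples.length / 10
EmpiricalDistribution dist = new EmpiricalDistribution(samples);
double entropy = dist.getEntropy();    // Shannon entropy of the binned distribution, in bits
double mean = dist.getMean();
double stDev = dist.getStandardDeviation();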
I get the same error with one of my distributions.
The Javadoc of this class says the following:
USAGE NOTES:
The binCount is set by default to 1000. A good rule of thumb
is to set the bin count to approximately the length of the input
file divided by 10.
I've initialised my EmpiricalDistribution with a binCount equal to 10% of my initial data length, and now everything is working OK:
double[] baseLine = getBaseLineValues();
...
// Initialise binCount
distribution = new EmpiricalDistribution(baseLine.length/10);
// Load base line data
distribution.load(baseLine);
// Now you can obtain random values based on this distribution
double randomValue = distribution.getNextValue();

Getting delta of array values from constantly updating array

I'm having a problem getting something to work that should be rather simple.
I am constantly updating an array with new values, and as I do so I need to get the delta, or difference, between the lowest and highest values. The length of the array should remain constant at 10.
The problem is that only the 1st and last values of my delta array seem to change. What am I missing?
Although this is in AS3, it should be almost identical in Java or JavaScript:
private var _deltaArray:Array = new Array();

private function update(myVal:int):void {
    if (_deltaArray.length < 10) {
        _deltaArray.push(myVal);
    }
    if (_deltaArray.length >= 10) {
        _deltaArray.push(myVal);
        var delta:int = getDelta(_deltaArray);
        _deltaArray.shift();
    }
}//end func

private function getDelta(a:Array):int {
    var total:Number = 0;
    var L:int = a.length;
    if (L > 1) {
        a.sort(Array.NUMERIC);
        var delta:int = int(a[0]) - int(a[L - 1]);
        trace('getDelta delta= ' + delta);
    }
    return delta;
}//end func
This is just a suggestion, but why not keep a running min and max and derive the delta from them? I can only code this in pseudo-code, but:
private double max = Double.MIN;
private double min = Double.MAX;

private void update(integer value) {
    array.push(value);
    max = value > max ? value : max;
    min = value < min ? value : min;
    if (array.length > 10) {
        array.shift();
    }
}

private int delta() { return max - min; }
Found the answer.
I needed to clone my array BEFORE sorting it to retrieve the delta value.
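For reference, here is a sketch of the same idea in Java (my own illustration, not the original AS3 fix): keep the last 10 values in a queue and compute max - min directly, so the insertion order is never disturbed by sorting.
import java.util.ArrayDeque;
import java.util.Collections;
import java.util.Deque;

public class RollingDelta {
    private final Deque<Integer> window = new ArrayDeque<Integer>();

    // add a new value and return the current delta (max - min) of the last 10 values
    public int update(int value) {
        window.addLast(value);
        if (window.size() > 10) {
            window.removeFirst();
        }
        return Collections.max(window) - Collections.min(window);
    }
}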

Algorithm for determining if strided arrays overlap?

In the library I'm working on, we have data sets (which may be subsets of other data sets) that are distributed in memory in three-dimensional rectangular strided arrays. That is, an array A can be subscripted as A(i,j,k), where each index ranges from zero to some upper bound, and the location of each element in memory is given by:
A(i,j,k) = A0 + i * A_stride_i + j * A_stride_j + k * A_stride_k
where A0 is a base pointer, and A_stride_i et al are dimensional strides.
Now, because these data sets may be subsets of other data sets rather than each occupying their own independent malloc'ed block of memory, it's entirely possible that they may overlap (where overlap means that A(i,j,k) < B(m,n,p) is neither always true nor always false), and if they overlap they may interleave with each other or they may collide with each other (where collide means that A(i,j,k) == B(m,n,p) for some sextet of indices).
Therein lies the question. Some operations on two data sets (for example, a copy) are only valid if the arrays do not collide with each other, but are still valid if they overlap in an interleaved, non-colliding fashion. I'd like to add a function that determines whether two data sets collide or not.
Is there an existing algorithm for doing this in a reasonably efficient and straightforward way?
It's fairly easy to check whether the data sets overlap or not, so the key question is: Given two data sets of this form that overlap, what is an efficient algorithm to determine if they interleave or collide?
Example:
As a simple example, suppose we have memory locations from 0 to F (in hex):
0 1 2 3 4 5 6 7 8 9 A B C D E F
I'll also consider only 2D arrays here, for simplicity. Suppose we have one of size 2,3 (that is, 0 <= i < 2 and 0 <= j < 3), with a stride_i = 1 and stride_j = 4, at a base address of 2. This will occupy (with occupied locations denoted by their i,j pair):
0 1 2 3 4 5 6 7 8 9 A B C D E F
* * * * * *
Likewise, if we have another array of the same sizes and strides, starting at a base address of 4, that will look like this:
0 1 2 3 4 5 6 7 8 9 A B C D E F
o o o o o o
In the terminology that I was using in describing the problem, these arrays "overlap", but they do not collide.
Restrictions and Assumptions:
We can assume that the strides are positive and, if desired, that they are in increasing order. Neither of these things is true in the actual library, but it is reasonably simple to rearrange the array definition to get to this point.
We can assume that arrays do not self-interleave. This is also not enforced by the library, but would be a pathological case, and can be warned about separately. That is (assuming the strides are in increasing order, and i ranges from zero to max_i and so forth):
stride_j >= max_i * stride_i
stride_k >= max_j * stride_j
Points, of course, for methods that do not require these assumptions, as rearranging the array definition into a canonical order is a bit of work that's ideally avoided.
The two arrays cannot be assumed to have equal sizes or strides.
I don't think there's value in keeping track of things during construction -- there's no information occurring at construction that is not present when doing the test. Also, "construction" may simply be "consider the subset of this larger array with this base pointer, these strides, and these sizes."
Worst Likely Cases
svick's answer reminds me that I should probably add something about the typical "worst" cases that I expect this to see. One of the worst will be when we have an array that represents some very large number of complex values, stored as consecutive (real, imag) pairs, and then have two sub-arrays containing the real and imaginary parts respectively: you've got a few million elements in the array, alternating between the two sub-arrays. As this is not an unlikely case, it should be testable with something other than abysmal performance.
I think the following C# program should work. It uses the branch and bound method and works for arrays of any number of dimensions.
using System;
using System.Collections.Generic;
namespace SO_strides
{
sealed class Dimension
{
public int Min { get; private set; }
public int Max { get; private set; }
public int Stride { get; private set; }
private Dimension() { }
public Dimension(int max, int stride)
{
Min = 0;
Max = max;
Stride = stride;
}
public Dimension[] Halve()
{
if (Max == Min)
throw new InvalidOperationException();
int split = Min + (Max - Min) / 2;
return new Dimension[]
{
new Dimension { Min = Min, Max = split, Stride = Stride },
new Dimension { Min = split + 1, Max = Max, Stride = Stride }
};
}
}
sealed class ArrayPart
{
public int BaseAddr { get; private set; }
public Dimension[] Dimensions { get; private set; }
public int FirstNonconstantIndex { get; private set; }
int? min;
public int Min
{
get
{
if (min == null)
{
int result = BaseAddr;
foreach (Dimension dimension in Dimensions)
result += dimension.Min * dimension.Stride;
min = result;
}
return min.Value;
}
}
int? max;
public int Max
{
get
{
if (max == null)
{
int result = BaseAddr;
foreach (Dimension dimension in Dimensions)
result += dimension.Max * dimension.Stride;
max = result;
}
return max.Value;
}
}
public int Size
{
get
{
return Max - Min + 1;
}
}
public ArrayPart(int baseAddr, Dimension[] dimensions)
: this(baseAddr, dimensions, 0)
{
Array.Sort(dimensions, (d1, d2) => d2.Stride - d1.Stride);
}
private ArrayPart(int baseAddr, Dimension[] dimensions, int fni)
{
BaseAddr = baseAddr;
Dimensions = dimensions;
FirstNonconstantIndex = fni;
}
public bool CanHalve()
{
while (FirstNonconstantIndex < Dimensions.Length
&& Dimensions[FirstNonconstantIndex].Min == Dimensions[FirstNonconstantIndex].Max)
FirstNonconstantIndex++;
return FirstNonconstantIndex < Dimensions.Length;
}
public ArrayPart[] Halve()
{
Dimension[][] result = new Dimension[2][];
Dimension[] halves = Dimensions[FirstNonconstantIndex].Halve();
for (int i = 0; i < 2; i++)
{
result[i] = (Dimension[])Dimensions.Clone();
result[i][FirstNonconstantIndex] = halves[i];
}
return new ArrayPart[]
{
new ArrayPart(BaseAddr, result[0], FirstNonconstantIndex),
new ArrayPart(BaseAddr, result[1], FirstNonconstantIndex)
};
}
}
sealed class CandidateSet
{
public ArrayPart First { get; private set; }
public ArrayPart Second { get; private set; }
public CandidateSet(ArrayPart first, ArrayPart second)
{
First = first;
Second = second;
}
public bool Empty
{
get
{
return First.Min > Second.Max || Second.Min > First.Max;
}
}
public CandidateSet[] Halve()
{
int firstSize = First.Size;
int secondSize = Second.Size;
CandidateSet[] result;
if (firstSize > secondSize && First.CanHalve())
{
ArrayPart[] halves = First.Halve();
result = new CandidateSet[]
{
new CandidateSet(halves[0], Second),
new CandidateSet(halves[1], Second)
};
}
else if (Second.CanHalve())
{
ArrayPart[] halves = Second.Halve();
result = new CandidateSet[]
{
new CandidateSet(First, halves[0]),
new CandidateSet(First, halves[1])
};
}
else
throw new InvalidOperationException();
return result;
}
public static bool HasSolution(ArrayPart first, ArrayPart second)
{
Stack<CandidateSet> stack = new Stack<CandidateSet>();
stack.Push(new CandidateSet(first, second));
bool found = false;
while (!found && stack.Count > 0)
{
CandidateSet candidate = stack.Pop();
if (candidate.First.Size == 1 && candidate.Second.Size == 1)
found = true;
else
{
foreach (CandidateSet half in candidate.Halve())
if (!half.Empty)
stack.Push(half);
}
}
return found;
}
}
static class Program
{
static void Main()
{
Console.WriteLine(
CandidateSet.HasSolution(
new ArrayPart(2, new Dimension[] { new Dimension(1, 1), new Dimension(2, 4) }),
new ArrayPart(4, new Dimension[] { new Dimension(1, 1), new Dimension(2, 4) })
)
);
}
}
}
