How does the hashlife algorithm go on forever in Golly?

In hashlife the field is typically treated as a theoretically infinite grid, with the pattern in question centered near the origin. A quadtree is used to represent the field. Given a square of 2^(2k) cells, 2^k on a side, at the kth level of the tree, the hash table stores the 2^(k-1)-by-2^(k-1) square of cells in the center, 2^(k-2) generations in the future. For example, for a 4x4 square it stores the 2x2 center, 1 generation forward; and for an 8x8 square it stores the 4x4 center, 2 generations forward.
So given an 8x8 initial configuration we get a 4x4 square 1 generation forward, centered w.r.t. the 8x8 square, and a 2x2 square 2 generations forward (1 generation forward w.r.t. the 4x4 square), also centered w.r.t. the 8x8 square. With every new generation our view of the grid shrinks, and in turn we get the next state of the automaton. We cannot go any further after getting the innermost 2x2 square, 2^(k-2) generations forward.
So how does the hashlife in Golly go on forever? Also, its view of the field never seems to shrink; it seems to show the state of the whole automaton after 2^(k-2) generations. What's more, given a starting configuration which expands with time, the algorithm's view seems to grow, zooming out to show the expanding pattern. How does that work?

There's a good article on Dr. Dobb's which goes into detail about how HashLife works. The basic answer is that you don't merely run the algorithm on the existing nodes, you also use new shifted nodes to get the next generation.

To be clear (because your ^ symbols were missing), you are asking:
Given a square of 2^(2k) cells, 2^k on a side, at the kth level of the
tree, the hash table stores the 2^(k-1)-by-2^(k-1) square of cells in
the center, 2^(k-2) generations in the future. [...]
So given an 8x8 initial configuration [...] With every new generation
our view of the grid shrinks, and in turn we get the next state of the
automaton. We cannot go any further after getting the innermost 2x2
square, 2^(k-2) generations forward.
So how does the hashlife in Golly go on forever? Also, its view of the
field never seems to shrink.
Instead of starting with your 8x8 pattern, imagine instead that you start with a bigger pattern that happens to contain your 8x8 pattern inside it. For example, you could start with a 16x16 pattern that has your 8x8 pattern in the center, and a 4-row margin of blank cells all around the edges. Such a pattern is easy to construct, by assembling blank 4x4 nodes with the 4x4 subnodes of your 8x8 start pattern.
Given such a 16x16 pattern, the HashLife algorithm can give you an 8x8 answer, 4 generations in the future.
You want more? Okay, start with a 32x32 pattern that contains mostly blank space, with the 8x8 pattern in the center. With this you can get a 16x16 answer that is 8 generations into the future.
What if your pattern contains moving objects that move fast enough that they go outside that 16x16 area after 8 generations? Simple -- start with a 64x64 start pattern, but instead of trying to run it for a whole 16 generations, just run it for 8 generations.
In general, all cases of arbitrarily large, possibly expanding patterns, over arbitrarily long periods of time, can be handled (and in fact are handled in Golly) by adding as much blank space as needed around the outside of the pattern.
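To see what that padding looks like in code, here is a minimal sketch in Python (hypothetical names, not Golly's actual implementation) of a quadtree node and the expand step that wraps a level-k node in blank space. Repeatedly expanding until the live cells sit well inside the centre means the centred result of the next HashLife step never loses any information:

    class Node:
        """A quadtree node: a level-k node covers a 2^k x 2^k square."""
        def __init__(self, level, nw=None, ne=None, sw=None, se=None, alive=0):
            self.level = level            # k: node is 2^k cells on a side
            self.nw, self.ne = nw, ne     # the four child quadrants
            self.sw, self.se = sw, se
            self.alive = alive            # population, used to test blankness

    def empty_node(level):
        """Build an all-blank node of the given level."""
        if level == 0:
            return Node(0, alive=0)
        sub = empty_node(level - 1)
        return Node(level, sub, sub, sub, sub, alive=0)

    def expand(node):
        """Return a node one level up with `node` centred in blank space.
        Requires node.level >= 1 so its quadrants exist."""
        k, blank = node.level, empty_node(node.level - 1)
        return Node(k + 1,
                    nw=Node(k, blank, blank, blank, node.nw, node.nw.alive),
                    ne=Node(k, blank, blank, node.ne, blank, node.ne.alive),
                    sw=Node(k, blank, node.sw, blank, blank, node.sw.alive),
                    se=Node(k, node.se, blank, blank, blank, node.se.alive))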

The centered squares are only the precomputed stuff. The algorithm indeed keeps the whole universe at all times and updates all parts of it, not just the centers.

Related

Efficient Boundary Approximation

Imagine I have the following structure represented in an array:
The blue cells represent "boundaries" and the red cell represents the structure's origin. I have a function that calculates the distance from each interior cell (a cell which isn't a boundary) to its closest boundary and to the origin.
Currently I do this with a nested for loop which essentially tests every cell position against my current position and selects the smallest-distance cell that is also marked as a boundary cell.
For small data sets this is okay, but when you have a large array of possible points to iterate through, it becomes painfully slow.
I am looking for a solution which trades accuracy for speed. Currently I am able to return the exact closest boundary cell to any given interior cell, but I only really need a close approximation of which cell is closest.
Each cell in the array already has the following information:
Arbitrary position (used for distance calculation)
Is a Boundary Cell
A list of neighbours (any cells which share an edge)
Things to note:
The structure does not necessarily conform to any type of specific polygon shape
The array isn't necessarily ordered in any logical way
The array is flat (i.e 1D)
Possible solutions I have thought of (but have left untested):
An A* approach (since each cell knows its neighbours I could do something like this), but I think it would perform worse than my current brute-force method
A priority queue which sorts from smallest to largest distance from origin (but I'm unsure how to get an approximate closest border from this)
I am assuming that the cells themselves are irrelevant: everything depends only on the distinguished points in the cells. Finding the distance to the origin is a single calculation and cannot be improved. So your problem reduces to: you have blue points and white points (to stick to your color scheme), and you want to find the closest blue point to each white point.
This is a version of nearest-neighbor search. There is extensive literature on this problem, as well as variants such as approximate nearest-neighbor search. Here is one paper that could lead you to others:
Connor, Michael, and Piyush Kumar. "Practical Nearest Neighbor Search in the Plane." SEA 2010. (Springer link.)
The bottom line is that with appropriate data structures, you can achieve O(log n) query time per white point, which is much faster than the naive linear search.
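As a concrete illustration of that reduction (a sketch, not the paper's algorithm): a k-d tree gives near-logarithmic queries in practice, and SciPy's cKDTree even accepts an eps parameter for approximate answers, which matches the accuracy-for-speed trade the question asks for. The coordinates below are made up:

    import numpy as np
    from scipy.spatial import cKDTree

    boundary = np.array([[0.0, 0.0], [0.0, 1.0], [1.0, 0.0], [1.0, 1.0]])  # blue cells
    interior = np.array([[0.4, 0.5], [0.9, 0.2]])                          # white cells
    origin = np.array([0.5, 0.5])                                          # red cell

    tree = cKDTree(boundary)                    # build once, query many times
    dist, nearest_idx = tree.query(interior)    # exact nearest boundary cell
    # Approximate variant: answers may be up to 10% further than the true nearest.
    approx_dist, _ = tree.query(interior, eps=0.1)
    dist_to_origin = np.linalg.norm(interior - origin, axis=1)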

Proper Heuristic Mechanism For Hill Climbing

The following problem is an exam exercise I found from an Artificial Intelligence course.
"Suggest a heuristic mechanism that allows this problem to be solved, using the Hill-Climbing algorithm. (S=Start point, F=Final point/goal). No diagonal movement is allowed."
Since it's obvious that Manhattan distance or Euclidean distance will send the robot to (3,4), and no backtracking is allowed, what is a possible solution (heuristic mechanism) to this problem?
EDIT: To make the problem clearer, I've marked some of the Manhattan distances on the board:
It would be obvious that, using Manhattan distance, the robot's next move would be to (3,4), since that square has a heuristic value of 2; hill climbing will choose it and get stuck forever. The aim is to find a heuristic that never goes down that path.
I thought of the obstructions as being hot, and heat rises. I make the net cost of a cell the Manhattan distance to F plus a heat penalty. Thus there is an attractive force drawing the robot towards F as well as a repelling force pushing it away from the obstructions.
There are two types of heat penalties:
1) It is very bad to touch an obstruction. Look at the two or three neighboring cells in the row immediately below a given cell: add 15 if the cell directly below is an obstruction, and 10 for each diagonal neighbor below that is one.
2) For cells not in direct contact with the obstructions, the heat is more diffuse. I calculate it as 6 times the average number of obstruction blocks below the cell, taken over its own column and its neighboring columns.
The following shows the result of combining this all, as well as the path taken from S to F:
A crucial point is the way the averaging causes the robot to turn left rather than right when it hits the top row: the unheated columns towards the left make that the cooler direction. It is interesting to note how all cells (with the possible exception of the two at the upper-right corner) are drawn to F by this heuristic.
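Here is a short sketch of that cost function, under my reading of the penalty rules; the board below is illustrative only, since the original diagram isn't reproduced here:

    # '#' = obstruction, 'S' = start, 'F' = goal; an illustrative board.
    M = [
        "....F.",
        ".####.",
        ".#....",
        ".#.###",
        ".#..S.",
        "......",
    ]
    ROWS, COLS = len(M), len(M[0])
    goal = next((r, c) for r in range(ROWS) for c in range(COLS) if M[r][c] == "F")

    def blocked(r, c):
        return 0 <= r < ROWS and 0 <= c < COLS and M[r][c] == "#"

    def heat(r, c):
        # Contact penalty: 15 for an obstruction directly below,
        # 10 for each diagonal neighbour in the row below.
        h = 15 * blocked(r + 1, c) + 10 * (blocked(r + 1, c - 1) + blocked(r + 1, c + 1))
        # Diffuse penalty: 6 x the average number of obstruction blocks
        # below the cell, over its own and its neighbouring columns.
        cols = [cc for cc in (c - 1, c, c + 1) if 0 <= cc < COLS]
        below = [sum(blocked(rr, cc) for rr in range(r + 1, ROWS)) for cc in cols]
        return h + 6 * sum(below) / len(below)

    def cost(r, c):
        # Net cost: Manhattan distance to F plus the heat penalty.
        return abs(r - goal[0]) + abs(c - goal[1]) + heat(r, c)

Hill climbing then simply moves to the neighbouring cell with the lowest cost.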

Check that smaller cubes fill bigger cube

Given one large cube (axis-aligned and on integer coordinates) and many smaller cubes (also axis-aligned and on integer coordinates), how can we check that the large cube is perfectly filled by the smaller cubes?
Currently we check that:
Each small cube is fully contained by the large cube.
No small cube intersects any other small cube.
The sum of the volumes of the small cubes equals the volume of the large cube.
This is okay for small numbers of cubes, but we need to support this test for cubes with dimensions greater than 2^32. Even at 2^16, the number of small cubes required to fill the large cube is large enough that step 2 takes a while (it is O(n^2) to check that each cube intersects no other).
Is there a better algorithm?
EDIT:
There seems to be some confusion over this. I am not trying to split a cube into smaller cubes. That's already done. Part of our program splits large OpenCL ranges (axis aligned cubes on integer coordinates) into lots of smaller ranges that fit into a hardware job.
What I'm doing is hooking into this system and checking that the jobs it produces correctly cover the large initial range. My algorithm above works, but it's slow and given the amount of tests we have to run I'd like to keep these tests as fast as possible.
We are talking about 3D right?
For 2D one can do a similar (but simpler) process (with, I believe, an O(n log n) running time algorithm).
The basic idea of the below is the sweep-line algorithm.
Note that cube intersection can be done by checking whether any corner of one cube is contained in any other cube.
You can improve on (2) as follows:
Split each cube into 2 rectangles on the y-z plane (so you'd have 2 rectangles defined by the same set of 4 (y,z) coordinates, but the x coordinates will be different between the rectangles).
Define the rectangle with the smaller x-coordinate as the start of a cube and the other rectangle as the end of a cube.
Sort the rectangles by x-coordinate
Have an initially empty interval tree
(each interval should also store a reference to the rectangle to which it belongs)
For each rectangle:
Look up the y-coordinate of each point of the rectangle in the interval tree.
For each matching interval, look up its rectangle and check whether the point is also contained within the z-coordinates (this is all that's required, because the tree only contains rectangles whose x-range is currently active, and the interval lookup has already checked the y-coordinates).
If it is, we have overlap.
If the rectangle is the start of a cube, insert the 2 y-coordinates of the rectangle as an interval into the interval tree.
Otherwise, remove the interval defined by the 2 y-coordinates from the tree.
The running time is between O(n) (best case) and O(n^2) (worst case), depending on how much overlap there is in the x- and y-coordinates (more overlap is worse).
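Here is a compact sketch of that sweep. To keep it short, the interval tree is replaced by a plain list of active rectangles, so this version is O(n^2) in the worst case; substitute a real interval tree for the behaviour quoted above:

    def overlaps_1d(a0, a1, b0, b1):
        return a0 < b1 and b0 < a1          # strict, so touching faces don't count

    def any_cube_overlap(cubes):
        """cubes: list of ((x0, y0, z0), (x1, y1, z1)) axis-aligned boxes."""
        END, START = 0, 1                   # ends sort before starts at equal x
        events = []
        for lo, hi in cubes:
            events.append((lo[0], START, lo, hi))
            events.append((hi[0], END, lo, hi))
        events.sort()
        active = []                         # stand-in for the interval tree
        for x, kind, lo, hi in events:
            if kind == END:
                active.remove((lo, hi))
            else:
                for alo, ahi in active:     # rectangles open in x right now
                    if overlaps_1d(lo[1], hi[1], alo[1], ahi[1]) and \
                       overlaps_1d(lo[2], hi[2], alo[2], ahi[2]):
                        return True
                active.append((lo, hi))
        return False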
Order your insert cubes by size.
Insert the biggest insert cube in one of the corners of your cube and split the remaining space into subcubes.
Insert the second biggest insert cube into the first of the subcubes it will fit, and add the remaining subcubes of this subcube to the set of subcubes.
Etc.
Another go, again only addressing step 2 in the original question:
Define a space-filling curve with good spatial locality, such as a 3D Hilbert Curve.
For each cube calculate the pair of coordinates on the curve for the points at which the curve both enters and leaves the cube. The space-filling curve will enter and leave some cubes more than once, calculate more than one pair of coordinates for these cases.
You've now got I don't know how many pairs of coordinates, but I'd guess no more than 2^18. These coordinates define intervals along the space-filling curve, so sort them and look for overlaps.
Time complexity is probably dominated by the sort, space complexity is probably quite big.
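For the final step, once the (enter, leave) pairs are sorted by entry coordinate, it suffices to compare each interval with its successor; a minimal sketch:

    def intervals_overlap(intervals):
        """intervals: list of (enter, leave) pairs along the curve."""
        intervals.sort()                     # sort by entry coordinate
        for (s0, e0), (s1, e1) in zip(intervals, intervals[1:]):
            if s1 < e0:                      # next one starts before this one ends
                return True
        return False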

Algorithm for best fit rectangle

I'm looking for an algorithm to do a best fit of an arbitrary rectangle to an unordered set of points. Specifically, I'm looking for a rectangle where the sum of the distances of the points to any one of the rectangle edges is minimised. I've found plenty of best fit line, circle and ellipse algorithms, but none for a rectangle. Ideally, I'd like something in C, C++ or Java, but not really that fussy on the language.
The input data will typically be comprised of most points lying on or close to the rectangle, with a few outliers. The distribution of data will be uneven, and unlikely to include all four corners.
Here are some ideas that might help you.
We can estimate if a point is on an edge or on a corner as follows:
Collect the point's n nearest neighbours
Calculate those points' centroid
Calculate the points' covariance matrix as follows:
Start with Covariance = ((0, 0), (0, 0))
For each point calculate d = point - centroid
Covariance += outer_product(d, d)
Calculate the covariance's eigenvalues. (e.g. with SVD)
Classify point:
if one eigenvalue is large and the other very small, we are probably on an edge
otherwise we should be on a corner
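A sketch of that edge/corner test with NumPy; the neighbourhood size n and the eigenvalue-ratio threshold are made-up illustrative values:

    import numpy as np

    def classify_point(p, points, n=8, ratio=10.0):
        """Return 'edge' or 'corner' for sample point p (2D points array)."""
        d = np.linalg.norm(points - p, axis=1)
        nbrs = points[np.argsort(d)[:n]]       # the n nearest neighbours
        centred = nbrs - nbrs.mean(axis=0)     # d = point - centroid, stacked
        cov = centred.T @ centred              # sum of outer products, 2x2
        w = np.linalg.eigvalsh(cov)            # eigenvalues in ascending order
        return "edge" if w[1] > ratio * max(w[0], 1e-12) else "corner"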
Extract all corner points and do a segmentation. Choose the four segments with the most entries. The centroids of those segments are candidates for the rectangle's corners.
Calculate the normalized direction vectors of two opposite sides and calculate their mean. Calculate the mean of the other two opposite sides. These are the direction vectors of a parallelogram. If you want a rectangle, calculate a vector perpendicular to one of those directions and calculate its mean with the other direction vector. Then the rectangle's directions are that mean vector and a vector perpendicular to it.
In order to calculate the corners, you can project the candidates on their directions and move them so that they form the corners of a rectangle.
The idea of a line of best fit is to compute the vertical distances between your points and the line y=ax+b. Then you can use calculus to find the values of a and b that minimize the sum of the squares of the distances. The reason squaring is chosen over absolute value is because the former is differentiable at 0.
If you were to try the same approach with a rectangle, you would run into the problem that the square of the distance to the side of a rectangle is a piecewise defined function with 8 different pieces and is not differentiable when the pieces meet up inside the rectangle.
In order to proceed, you'll need to decide on a function that measures how far a point is from a rectangle that is everywhere differentiable.
Here's a general idea. Make a grid with smallish cells; calculate the best-fit line for each not-too-empty cell (the calculation is immediate [1], there's no search involved). Join adjacent cells while making sure the standard deviation is improving/not worsening much. Thus we detect the four sides and the four corners, and divide our points into four groups, each belonging to one of the four sides.
Next, we throw away the corner cells, put the true rectangle in place of the four approximate lines, and do a bit of hill climbing (or whatever). The calculation of the best-fit line may be augmented for this case, since the two lines are parallel, and we've already separated our points into the four groups (for a given rectangle, we know the delta-y between the two opposing sides (taking horizontal-ish sides for a moment), so we just add this delta-y to the ys of the lower group of points and make the calculation).
The initial rectangular grid may be replaced with working by stripes (say, vertical). Then, at least half of the stripes will have two pronounced groupings of points (find them by dividing each stripe by horizontal division lines into cells).
[1] For a line Y = a*X + b, minimize the sum of squares of perpendicular distances of data points (xi, yi) to that line. This is directly solvable for a and b. For more vertical lines, flip the Xs and the Ys.
P.S. I interpret the problem as minimizing the sum of squares of perpendicular distances of each point to its nearest side of the rectangle, not to all the rectangle's sides.
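For reference, the "directly solvable" perpendicular fit from the footnote is just the first principal component of the points; a sketch:

    import numpy as np

    def fit_line_tls(points):
        """Perpendicular least-squares line; returns (centroid, unit direction)."""
        c = points.mean(axis=0)
        d = points - c
        _, v = np.linalg.eigh(d.T @ d)   # eigenvectors, ascending eigenvalues
        return c, v[:, -1]               # direction of largest variance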
I am not completely sure, but you might play around with the first 2 (3?) dimensions of a PCA of your points. It will work reasonably fast in most cases.

Spatial Data Structures in C

I do work in theoretical chemistry on a high performance cluster, often involving molecular dynamics simulations. One of the problems my work addresses involves a static field of N-dimensional (typically N = 2-5) hyper-spheres that a test particle may collide with. I'm looking to optimize (read: overhaul) the data structure I use for representing the field of spheres so I can do rapid collision detection. Currently I use a dead simple array of pointers to an N-membered struct (doubles for each coordinate of the center) and a nearest-neighbor list. I've heard of oct- and quad-trees but haven't found a clear explanation of how they work, how to efficiently implement one, or how to then do fast collision detection with one. Given the size of my simulations, memory is (almost) no object, but cycles are.
How best to approach this for your problem depends on several factors that you have not described:
- Will the same hypersphere arrangement be used for many particle collision calculations?
- Are the hyperspheres uniform size?
- What is the movement of the particle (e.g. straight line/curve) and is that movement affected by the spheres?
- Do you consider the particle to have zero volume?
I assume that the particle does not have simple straight-line movement, as that would be the relatively fast calculation of finding the closest distance between a line and a point, which is likely going to be about the same speed as finding which of the boxes the line intersects with (to determine where in the n-tree to examine).
If your hypersphere positions are fixed for a lot of particle collisions, then computing a Voronoi decomposition/Dirichlet tessellation would give you a fast way of later finding exactly which sphere is closest to your particle for any given point in the space.
However, to answer your original question about octrees/quadtrees/2^n-trees: in n dimensions you start with a (hyper)cube that contains the region of space you are interested in. This is subdivided into 2^n hypercubes if you deem the contents too complicated. This continues recursively until you have only simple elements (e.g. one hypersphere centroid) in the leaf nodes.
Now that the n-tree is built, you use it for collision detection by taking the path of your particle and intersecting it with the outer hypercube. The intersection position tells you which hypercube in the next level down of the tree to visit next; you determine the position of intersection with all 2^n hypercubes at that level, following downwards until you reach a leaf node. Once you reach the leaf you can examine interactions between your particle path and the hypersphere stored at that leaf. If you have a collision you have finished; otherwise you find the exit point of the particle path from the current hypercube leaf and determine which hypercube it moves to next. Continue until you find a collision or leave the overall bounding hypercube entirely.
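Here is a minimal sketch of the recursive subdivision just described (illustrative Python rather than C, with hypothetical names and a leaf threshold of one point):

    class NTree:
        """A 2^n-tree over an axis-aligned hypercube, storing sphere centres."""
        def __init__(self, lo, hi, max_points=1):
            self.lo, self.hi = lo, hi          # opposite corners of this cube
            self.points = []                   # centres held at this node
            self.children = None               # 2^n children once subdivided
            self.max_points = max_points

        def insert(self, p):
            if self.children is not None:
                return self.children[self._index(p)].insert(p)
            self.points.append(p)
            if len(self.points) > self.max_points:
                self._subdivide()

        def _index(self, p):
            # Child index: one bit per dimension, set if p is in the upper half.
            mid = [(l + h) / 2 for l, h in zip(self.lo, self.hi)]
            return sum(1 << d for d in range(len(p)) if p[d] >= mid[d])

        def _subdivide(self):
            n = len(self.lo)
            mid = [(l + h) / 2 for l, h in zip(self.lo, self.hi)]
            self.children = []
            for i in range(1 << n):            # build the 2^n sub-hypercubes
                lo = [mid[d] if i >> d & 1 else self.lo[d] for d in range(n)]
                hi = [self.hi[d] if i >> d & 1 else mid[d] for d in range(n)]
                self.children.append(NTree(lo, hi, self.max_points))
            for q in self.points:              # push stored points down a level
                self.children[self._index(q)].insert(q)
            self.points = []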
Efficiently finding the neighbouring hypercube when exiting a hypercube is one of the most challenging parts of this approach. For 2^n trees Samet's approaches {1, 2} can be adapted. For kd-trees (binary trees) an approach is suggested in {3} section 4.3.3.
Efficient implementation can be as simple as storing a list of 8 pointers from each hypercube to its children hypercubes, and marking the hypercube in a special way if it is a leaf (e.g. make all pointers NULL).
A description of dividing space to create a quadtree (which you can generalise to n-tree) can be found in Klinger & Dyer {4}
As others have mentioned, kd-trees may be more suited than 2^n-trees, as extension to an arbitrary number of dimensions is more straightforward; however, they will result in a deeper tree. It is also easier to adapt the split positions to match the geometry of your hyperspheres with a kd-tree. The description above of collision detection in a 2^n-tree is equally applicable to a kd-tree.
{1} Hanan Samet, "Connected Component Labeling Using Quadtrees", Journal of the ACM, Volume 28, Issue 3 (July 1981).
{2} Hanan Samet, "Neighbor Finding in Images Represented by Octrees", Computer Vision, Graphics, and Image Processing, Volume 46, Issue 3 (June 1989).
{3} Dan Pidcock, "Convex Hull Generation, Connected Component Labelling, and Minimum Distance Calculation for Set-Theoretically Defined Models", 2000.
{4} A. Klinger and C.R. Dyer, "Experiments in Picture Representation Using Regular Decomposition", Computer Graphics and Image Processing 5 (1976), 68-105.
It sounds like you'd want to implement a kd-tree, which would allow you to more quickly search the N-dimensional space. There's some more information and links to implementations at the Stony Brook Algorithm Repository.
Since your field is static (by which I assume you mean the hyperspheres don't move), the fastest solution I know of is a kd-tree.
You can either make your own, or use someone else's, like this one:
http://libkdtree.alioth.debian.org/
A quadtree is a 2-dimensional tree, in which at each level a node has 4 children, each of which covers 1/4 of the area of the parent node.
An octree is a 3-dimensional tree, in which at each level a node has 8 children, each of which contains 1/8 of the volume of the parent node. Here is a picture to help you visualize it: http://en.wikipedia.org/wiki/Octree
If you're doing N dimensional intersection tests, you could generalize this to an N tree.
Intersection algorithms work by starting at the top of the tree and recursively traversing into any child nodes that intersect the object being tested, at some point you get to leaf nodes, which contain the actual objects.
An octree will work as long as you can specify the spheres by their centres - it hierarchically bins points into cubic regions with eight children. Working out neighbours in an octree data structure will require you to do sphere-intersecting-cube calculations (to some extent easier than they look) to work out which cubic regions in an octree are within the sphere.
Finding the nearest neighbours means walking back up the tree until you get a node with more than one populated child, including all surrounding nodes (this ensures the query covers all sides).
From memory, this is the (somewhat naive) basic algorithm for sphere-cube intersection:
i. Is the centre within the cube? (This catches the eponymous situation.)
ii. Are any of the corners of the cube within radius r of the centre? (Corners within the sphere.)
iii. For each surface of the cube (you can eliminate some of the surfaces by working out which side of the surface the centre lies on), work out (this is all first-year vector arithmetic):
a. a normal of the surface that goes through the centre of the sphere;
b. the distance from the centre of the sphere to the intersection of the normal with the plane of the surface (the chord intersects the plane of the cube's surface);
c. whether the intersection with the plane lies within the side of the cube (one condition for chord intersection with the cube).
iv. Calculate the size of the chord (the sine of the arccosine of the ratio of the normal length to the radius of the sphere).
v. If the nearest point on the line is closer than the length of the chord, and the point lies between the ends of the line, the chord intersects one of the edges of the cube (the chord intersects the cube surface somewhere along one of the edges).
Slightly dimly remembered, but this is something I did for a situation involving spherical regions using an octree data structure (many years ago). You may also wish to check out kd-trees, as some of the other posters suggest, but your initial question sounds very similar to what I did.
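As an aside, for axis-aligned boxes there is a simpler standard sphere-box test than the chord-based procedure above: clamp the sphere's centre into the box and compare the squared distance against r^2, which works in any number of dimensions. A sketch:

    def sphere_intersects_box(centre, r, lo, hi):
        """centre, lo, hi: same-length coordinate sequences; r: sphere radius."""
        d2 = 0.0
        for c, l, h in zip(centre, lo, hi):
            nearest = min(max(c, l), h)    # clamp the coordinate into the box
            d2 += (c - nearest) ** 2       # squared distance along this axis
        return d2 <= r * r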
