I do work in theoretical chemistry on a high performance cluster, often involving molecular dynamics simulations. One of the problems my work addresses involves a static field of N-dimensional (typically N = 2-5) hyper-spheres that a test particle may collide with. I'm looking to optimize (read: overhaul) the data structure I use for representing the field of spheres so I can do rapid collision detection. Currently I use a dead simple array of pointers to an N-membered struct (doubles for each coordinate of the center) and a nearest-neighbor list. I've heard of oct- and quad-trees but haven't found a clear explanation of how they work, how to efficiently implement one, or how to then do fast collision detection with one. Given the size of my simulations, memory is (almost) no object, but cycles are.
How best to approach this for your problem depends on several factors that you have not described:
- Will the same hypersphere arrangement be used for many particle collision calculations?
- Are the hyperspheres uniform size?
- What is the movement of the particle (e.g. straight line/curve) and is that movement affected by the spheres?
- Do you consider the particle to have zero volume?
I assume that the particle does not have simple straight-line movement, as that case reduces to the relatively fast calculation of the closest distance between a line and a point, which is likely to be about the same speed as finding which of the boxes the line intersects with (to determine where in the n-tree to examine).
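For reference, that straight-line case boils down to a point-to-segment distance test. A minimal Java sketch (the segment representation and names are illustrative, not from the question):

```java
class PointSegment {
    // Distance from sphere centre c to the segment a-b in any dimension:
    // project c onto the line through a and b, clamp to the segment, measure.
    static double pointSegmentDist(double[] c, double[] a, double[] b) {
        double t = 0, len2 = 0;
        for (int i = 0; i < c.length; i++) {
            t += (c[i] - a[i]) * (b[i] - a[i]);
            len2 += (b[i] - a[i]) * (b[i] - a[i]);
        }
        t = Math.max(0, Math.min(1, t / len2)); // clamp; assumes a != b
        double d2 = 0;
        for (int i = 0; i < c.length; i++) {
            double proj = a[i] + t * (b[i] - a[i]);
            d2 += (c[i] - proj) * (c[i] - proj);
        }
        return Math.sqrt(d2);
    }
}
```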
If your hypersphere positions are fixed for a lot of particle collisions, then computing a Voronoi decomposition/Dirichlet tessellation would give you a fast way of later finding exactly which sphere is closest to your particle for any given point in the space.
However to answer your original question about octrees/quadtrees/2^n-trees, in n dimensions you start with a (hyper)-cube that contains the area of space that you are interested in. This will be subdivided into 2^n hypercubes if you deem the contents to be too complicated. This continues recursively until you have only simple elements (e.g. one hypersphere centroid) in the leaf nodes.
Now that the n-tree is built you use it for collision detection by taking the path of your particle and intersecting it with the outer hypercube. The intersection position will tell you which hypercube in the next level down of the tree to visit next, and you determine the position of intersection with all 2^n hypercubes at that level, following downwards until you reach a leaf node. Once you reach the leaf you can examine interactions between your particle path and the hypersphere stored at that leaf. If you have collision you have finished, otherwise you have to find the exit point of the particle path from the current hypercube leaf and determine which hypercube it moves to next. Continue until you find a collision or entirely leave the overall bounding hypercube.
Efficiently finding the neighbouring hypercube when exiting a hypercube is one of the most challenging parts of this approach. For 2^n trees Samet's approaches {1, 2} can be adapted. For kd-trees (binary trees) an approach is suggested in {3} section 4.3.3.
Efficient implementation can be as simple as storing a list of 2^n pointers (8 in three dimensions) from each hypercube to its children hypercubes, and marking a hypercube in a special way if it is a leaf (e.g. make all pointers NULL).
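To make that concrete, here is a minimal Java sketch of such a 2^n-tree node; the dimension, leaf capacity, and all names are illustrative assumptions, not from the answer:

```java
import java.util.ArrayList;
import java.util.List;

class NTreeNode {
    static final int N = 3;          // dimension; each node has 2^N children
    static final int MAX_POINTS = 1; // split when a leaf holds more than this

    final double[] min, max;         // bounds of this hypercube
    NTreeNode[] children;            // null while this node is a leaf
    List<double[]> points = new ArrayList<>(); // sphere centres stored at a leaf

    NTreeNode(double[] min, double[] max) { this.min = min; this.max = max; }

    // Insert a centre; note duplicate points would need a capacity > 1 or a depth cap.
    void insert(double[] p) {
        if (children == null) {
            points.add(p);
            if (points.size() > MAX_POINTS) subdivide();
            return;
        }
        children[childIndex(p)].insert(p);
    }

    // Each of the N bits selects the lower or upper half along one axis.
    int childIndex(double[] p) {
        int idx = 0;
        for (int d = 0; d < N; d++)
            if (p[d] > (min[d] + max[d]) / 2) idx |= 1 << d;
        return idx;
    }

    void subdivide() {
        children = new NTreeNode[1 << N];
        for (int c = 0; c < (1 << N); c++) {
            double[] cMin = new double[N], cMax = new double[N];
            for (int d = 0; d < N; d++) {
                double mid = (min[d] + max[d]) / 2;
                boolean upper = (c & (1 << d)) != 0;
                cMin[d] = upper ? mid : min[d];
                cMax[d] = upper ? max[d] : mid;
            }
            children[c] = new NTreeNode(cMin, cMax);
        }
        for (double[] p : points) children[childIndex(p)].insert(p);
        points = null; // interior nodes hold no points
    }
}
```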
A description of dividing space to create a quadtree (which you can generalise to an n-tree) can be found in Klinger & Dyer {4}.
As others have mentioned, kd-trees may be better suited than 2^n-trees, as the extension to an arbitrary number of dimensions is more straightforward; however, they will result in a deeper tree. It is also easier to adapt the split positions to match the geometry of your hyperspheres with a kd-tree. The description above of collision detection in a 2^n-tree is equally applicable to a kd-tree.
{1} Hanan Samet, "Connected Component Labeling Using Quadtrees", Journal of the ACM, Volume 28, Issue 3 (July 1981).
{2} Hanan Samet, "Neighbor Finding in Images Represented by Octrees", Computer Vision, Graphics, and Image Processing, Volume 46, Issue 3 (June 1989).
{3} Dan Pidcock, "Convex hull generation, connected component labelling, and minimum distance calculation for set-theoretically defined models", 2000.
{4} Klinger, A., and Dyer, C.R., "Experiments in Picture Representation Using Regular Decomposition", Computer Graphics and Image Processing 5 (1976), 68-105.
It sounds like you'd want to implement a kd-tree, which would allow you to more quickly search the N-dimensional space. There's some more information and links to implementations at the Stony Brook Algorithm Repository.
Since your field is static (by which I assume you mean the hyperspheres don't move), the fastest solution I know of is a kd-tree.
You can either make your own, or use someone else's, like this one:
http://libkdtree.alioth.debian.org/
A quad tree is a 2-dimensional tree in which, at each level, a node has 4 children, each of which covers 1/4 of the area of the parent node.
An oct tree is a 3-dimensional tree in which, at each level, a node has 8 children, each of which contains 1/8th of the volume of the parent node. Here is a picture to help you visualize it: http://en.wikipedia.org/wiki/Octree
If you're doing N-dimensional intersection tests, you could generalize this to an N-tree.
Intersection algorithms work by starting at the top of the tree and recursively traversing into any child nodes that intersect the object being tested; at some point you get to leaf nodes, which contain the actual objects.
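As a hedged sketch of that recursion, reusing the hypothetical NTreeNode layout from the earlier answer (the Shape interface stands in for whatever ray, box, or sphere you are testing; all names are illustrative):

```java
import java.util.ArrayList;
import java.util.List;

interface Shape {
    boolean intersectsBox(double[] min, double[] max); // conservative node-overlap test
    boolean intersectsPoint(double[] p);               // exact test against a stored object
}

class NTreeQuery {
    // Collect all stored centres whose leaf regions the shape touches.
    static List<double[]> query(NTreeNode node, Shape s, List<double[]> hits) {
        if (!s.intersectsBox(node.min, node.max)) return hits; // prune whole subtree
        if (node.children == null) {                           // leaf: test stored objects
            for (double[] p : node.points)
                if (s.intersectsPoint(p)) hits.add(p);
            return hits;
        }
        for (NTreeNode child : node.children) query(child, s, hits);
        return hits;
    }
}
```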
An octree will work as long as you can specify the spheres by their centres - it hierarchically bins points into cubic regions with eight children. Working out neighbours in an octree data structure will require you to do sphere-intersecting-cube calculations (to some extent easier than they look) to work out which cubic regions in an octree are within the sphere.
Finding the nearest neighbours means walking back up the tree until you reach a node with more than one populated child and with all surrounding nodes included (this ensures the query covers all sides).
From memory, this is the (somewhat naive) basic algorithm for sphere-cube intersection:
i. Is the centre of the sphere within the cube? (This catches the centre-inside-cube case.)
ii. Are any of the corners of the cube within radius r of the centre? (Corners within the sphere.)
iii. For each face of the cube (you can eliminate some faces by working out which side of the face the centre lies on), work out (this is all first-year vector arithmetic):
a. the normal to the face that passes through the centre of the sphere;
b. the distance from the centre of the sphere to the intersection of that normal with the plane of the face (where the sphere's chord meets the plane of the face);
c. whether that intersection lies within the face of the cube (one condition for the chord intersecting the cube).
iv. Calculate the half-length of the chord: r·sin(cos⁻¹(d/r)) = √(r² − d²), where d is the normal distance from iii.b and r is the sphere radius.
v. If the nearest point on an edge's line is closer than the chord half-length and lies between the ends of that edge, the chord crosses the edge (the sphere intersects the cube surface somewhere along one of its edges).
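For comparison, the whole case analysis above collapses into the standard closest-point (clamp) test, which covers the centre-inside, corner, face, and edge cases uniformly. This is a different, well-known formulation, not the poster's exact derivation:

```java
class SphereBox {
    // Sphere vs axis-aligned box: clamp the centre to the box; the sphere
    // intersects iff the clamped point lies within radius r. Works in any dimension.
    static boolean intersects(double[] centre, double r, double[] min, double[] max) {
        double d2 = 0;
        for (int i = 0; i < centre.length; i++) {
            double closest = Math.max(min[i], Math.min(centre[i], max[i]));
            d2 += (centre[i] - closest) * (centre[i] - closest);
        }
        return d2 <= r * r; // compare squared distances to avoid the square root
    }
}
```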
Slightly dimly remembered, but this is something I did for a situation involving spherical regions in an octree data structure (many years ago). You may also wish to check out kd-trees, as some of the other posters suggest, but your initial question sounds very similar to what I did.
I have multiple faces in 3D space creating cells. All these faces lie within a predefined cube (e.g. of size 100x100x100).
Every face is convex and defined by a set of corner points and a normal vector. Every cell is convex. The cells are the result of a 3D Voronoi tessellation, and I know the initial seed points of the cells.
Now for every integer coordinate I want the smallest distance to any face.
My current solution uses this answer https://math.stackexchange.com/questions/544946/determine-if-projection-of-3d-point-onto-plane-is-within-a-triangle/544947 and calculates, for every point, for every face, and for every possible triple of that face's points, the projection of the point onto the triangle formed by the triple, then checks whether the projection is inside the triangle. If it is, I return the distance between the projection and the original point. If not, I calculate the distance from the point to every possible line segment defined by two points of a face. Then I choose the smallest distance. I repeat this for every point.
This is quite slow and clumsy. I would much rather calculate all points that lie on (or almost on) a face and then, using these, calculate the smallest distance to all neighbouring points, repeating from there.
I have found this Get all points within a Triangle but am not sure how to apply it to 3D space.
Are there any techniques or algorithms to do this efficiently?
Since we're working with a Voronoi tessellation, we can simplify the current algorithm. Given a grid point p, it belongs to the cell of some site q. Take the minimum over each neighboring site r of the distance from p to the plane that is the perpendicular bisector of qr. We don't need to worry whether the closest point s on the plane belongs to the face between q and r; if not, the segment ps intersects some other face of the cell, which is necessarily closer.
Actually it doesn't even matter if we loop r over some sites that are not neighbors. So if you don't have access to a point location subroutine, or it's slow, we can use a fast nearest neighbors algorithm. Given the grid point p, we know that q is the closest site. Find the second closest site r and compute the distance d(p, bisector(qr)) as above. Now we can prune the sites that are too far away from q (for every other site s, we have d(p, bisector(qs)) ≥ d(q, s)/2 − d(p, q), so we can prune s unless d(q, s) ≤ 2 (d(p, bisector(qr)) + d(p, q))) and keep going until we have either considered or pruned every other site. To do pruning in the best possible way requires access to the guts of the nearest neighbor algorithm; I know that it slots right into the best-first depth-first search of a kd-tree or a cover tree.
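A small sketch of the central computation, the distance from a grid point p to the perpendicular bisector plane of sites q and r (3D, plain arrays, purely illustrative):

```java
class Bisector {
    // Distance from p to the plane through the midpoint of q and r
    // whose normal is r - q (i.e. the perpendicular bisector of qr).
    static double distToBisector(double[] p, double[] q, double[] r) {
        double dot = 0, norm2 = 0;
        for (int i = 0; i < 3; i++) {
            double n = r[i] - q[i];       // plane normal component
            double m = (q[i] + r[i]) / 2; // a point on the plane (the midpoint)
            dot += (p[i] - m) * n;
            norm2 += n * n;
        }
        return Math.abs(dot) / Math.sqrt(norm2);
    }
}
```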
Imagine I have the following structure represented in an array:
The blue cells represent "boundaries" and the red cell represents the structure's origin. I have a function that calculates the distance of each interior cell (cells which aren't boundaries) to its closest boundary and to the origin.
Currently I do this with a nested for loop which essentially tests all cell positions against my current position and selects the cell with the smallest distance that is also marked as a boundary cell.
For small data sets this is okay, but with a large array of possible points to iterate through it becomes painfully slow.
I am looking for a solution that is faster at the cost of some accuracy. Currently I am able to return the exact closest boundary cell to any given interior cell, but I only really need a close approximation of which cell is closest.
Each cell in the array already has the following information:
- Arbitrary position (used for distance calculation)
- Whether it is a boundary cell
- A list of neighbours (any cells which share an edge)
Things to note:
- The structure does not necessarily conform to any specific polygon shape
- The array isn't necessarily ordered in any logical way
- The array is flat (i.e. 1D)
Possible solutions I have thought of (but are otherwise untested):
- An A* approach (as each cell knows its neighbours, I could do something like this), but I think it would perform worse than my current brute-force method
- A priority queue which sorts from smallest to largest distance from the origin (but I'm unsure how to achieve an approximate closest border)
I am assuming that the cells are irrelevant. Everything just depends on the distinguished points in the cells. Finding the distance to the origin is one calculation and cannot be improved. So your problem reduces to: you have blue points and white points (to stick to your color scheme), and you want to find the closest blue point to each white point.
This is a version of nearest-neighbor search. There is extensive literature on this problem, as well as variants such as approximate nearest-neighbor search. Here is one paper that could lead you to others:
Connor, Michael, and Piyush Kumar. "Practical Nearest Neighbor Search in the Plane." SEA. 2010. (Springer link.)
The bottom line is that with appropriate data structures, you can achieve O(log n) query time per white point, which is much faster than the naive linear search.
jgrapht supports the idea of putting a weight (a cost) on an edge/vertex between two nodes. This can be achieved using the class DefaultWeightedEdge.
In my graph I have the requirement to find not the shortest path but the cheapest one. The cheapest path might be longer/have more hops (nodes to travel) than the shortest path.
Therefore, one can use the DijkstraShortestPath algorithm to achieve this.
However, my use case is a bit more complex: It needs to also evaluate costs on actions that need to be executed when arriving at a node.
Let's say you have a graph like a chess board (8x8 fields, each field being a node). All the edges have a weight of 1. To move in a car from the bottom-left to the diagonally opposite corner (top-right), there are many paths with the cost of 16. You can take a diagonal path in a zigzag style, or you can first travel across all nodes to the right and then all nodes upwards.
The difference is: when taking the zigzag, you need to rotate yourself in the direction of movement. You rotate 16 times.
When moving first all the way to the right and then upwards, you need to rotate only once (maybe twice, depending on your start orientation).
So the zigzag path is, from a Dijkstra point of view, perfect. From a logical point of view, it's the worst.
Long story short: how can I put costs on a node or edge depending on the previous edge/node in the path? I did not find anything related in the source code of jgrapht.
Or is there a better algorithm to use?
This is not a JGraphT issue but a graph-algorithm issue. You need to think about how to encode this problem and formalize it in more detail.
Incorporating weights on vertices is in general easy. Say that every vertex represents visiting a customer, which takes a_i time. This can be encoded in the graph by adding a_i/2 to the cost of every incoming arc of node i, as well as a_i/2 to the cost of every outgoing arc.
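A minimal sketch of that encoding against the JGraphT Graph interface (assuming a recent 1.x API; foldVertexCost is an illustrative helper of mine, not a library method):

```java
import org.jgrapht.Graph;

class VertexCosts {
    // Fold a vertex cost a_i into the graph: half onto every incoming arc,
    // half onto every outgoing arc, so any path through v pays exactly a_i.
    static <V, E> void foldVertexCost(Graph<V, E> g, V v, double cost) {
        for (E e : g.incomingEdgesOf(v))
            g.setEdgeWeight(e, g.getEdgeWeight(e) + cost / 2); // half on entry
        for (E e : g.outgoingEdgesOf(v))
            g.setEdgeWeight(e, g.getEdgeWeight(e) + cost / 2); // half on exit
    }
}
```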
A cost function where the cost of traveling from j to k depends on the arc (i,j) you used to travel to j is more complicated.
Approach a.: Use a dynamic programming (labeling) algorithm. This is perhaps the easiest. You can define your cost function as a recursive function, where the cost of traversing an arc depends on the cost of the previous arc.
Approach b.: With some tricks you may be able to encode the costs in the graph by adding extra nodes to it. Here's an example:
Given a graph with vertices {a,b,c,d,e} and arcs (a,e), (e,b), (c,e), (e,d). This graph represents a crossroad with vertex e in the middle. Going from a->e->b (straight) is free; however, a turn from a->e->d takes additional time. Similarly, c->e->d (straight) is free and c->e->b (turning) should be penalized.
Decouple vertex e into 4 new vertices: e1, e2, e3, e4.
Add the following arcs:
(a,e1), (e3,b), (c,e2), (e4,d), (e2, e3), (e1, e3), (e1, e4), (e2, e4).
(e1,e4) and (e2,e3) can have a positive weight to penalize turning.
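Here is a hedged, self-contained JGraphT sketch of that construction (assuming a recent JGraphT version; the penalty weight of 5 and the arc weights of 1 are arbitrary demo values):

```java
import org.jgrapht.Graph;
import org.jgrapht.GraphPath;
import org.jgrapht.alg.shortestpath.DijkstraShortestPath;
import org.jgrapht.graph.DefaultWeightedEdge;
import org.jgrapht.graph.SimpleDirectedWeightedGraph;

public class TurnPenaltyDemo {
    public static void main(String[] args) {
        Graph<String, DefaultWeightedEdge> g =
                new SimpleDirectedWeightedGraph<>(DefaultWeightedEdge.class);
        for (String v : new String[]{"a", "b", "c", "d", "e1", "e2", "e3", "e4"})
            g.addVertex(v);

        addArc(g, "a", "e1", 1);  // into the crossroad
        addArc(g, "c", "e2", 1);
        addArc(g, "e3", "b", 1);  // out of the crossroad
        addArc(g, "e4", "d", 1);
        addArc(g, "e1", "e3", 0); // a -> b: straight, free
        addArc(g, "e2", "e4", 0); // c -> d: straight, free
        addArc(g, "e1", "e4", 5); // a -> d: turning, penalized
        addArc(g, "e2", "e3", 5); // c -> b: turning, penalized

        GraphPath<String, DefaultWeightedEdge> path =
                new DijkstraShortestPath<>(g).getPath("a", "d");
        System.out.println(path.getVertexList() + " cost=" + path.getWeight());
    }

    static void addArc(Graph<String, DefaultWeightedEdge> g, String u, String v, double w) {
        g.setEdgeWeight(g.addEdge(u, v), w);
    }
}
```

Dijkstra pays the turn penalty on a->e1->e4->d here only because no straight continuation toward d exists from e1; wherever a straight option exists, it is preferred automatically.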
I'm trying to use pathfinding on a series of convex polygons, rather than waypoints. To complicate this further, the polygons are made by users and may have inconsistent vertices. For example:
We know the object is X wide by Y deep, and that the polygons have vertices at certain locations. Is there a specific algorithm to find the fastest way to the goal while keeping the entire object inside the polygons? (If I understand correctly, A* only works on waypoints.) How do you handle vertices that are not the same object but are at the same location?
EDIT: The polygons are convex; it's 2 separate polygons with the edges on the line.
Also, how do you implement A* pathfinding, as a node-based system wouldn't work in an 'infinite'-resolution polygon?
In general, all shortest-path segments will have, as end-points, either polygon vertices or the start and goal points. If you build a graph that includes all those segments (from the start to each "visible" polygon vertex, from the goal to each "visible" polygon vertex, and from each polygon vertex to each other polygon vertex) and run A* on that, you have your optimal path. The cost of building the graph for A* is:
- For each vertex, a visibility test to find visible vertices: the simple algorithm (for each pair of vertices, see if the segment from one to the other lies inside the polygon) is O(n^3). Building convex polygons and processing them independently, or using a smarter "radial sweep" algorithm, can greatly lower this, but I suspect it is still around O(n^2). (A sketch of the underlying segment-crossing test follows this list.)
- For each query (from a start point to a goal point), O(n) for the visibility test to find all vertices that it can see.
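Those visibility tests come down to a segment-crossing primitive; a minimal sketch (proper crossings only; collinear and touching cases need extra handling):

```java
class Visibility {
    // Twice the signed area of triangle (a, b, c): positive for a left turn.
    static double cross(double[] a, double[] b, double[] c) {
        return (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0]);
    }

    // True iff segments p1-p2 and q1-q2 properly cross, i.e. each segment's
    // endpoints lie strictly on opposite sides of the other's supporting line.
    static boolean segmentsCross(double[] p1, double[] p2, double[] q1, double[] q2) {
        double d1 = cross(q1, q2, p1), d2 = cross(q1, q2, p2);
        double d3 = cross(p1, p2, q1), d4 = cross(p1, p2, q2);
        return d1 * d2 < 0 && d3 * d4 < 0;
    }
}
```

A candidate visibility edge between two vertices is rejected as soon as it crosses any polygon edge.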
If you are only going to apply A* once, then the price of building the fixed part of the A* graph for a single traversal may be somewhat steep. An alternative is to build the graph incrementally as you use it.
Java code implementing the above approach can be found here.
The polygons in your drawing are not convex. For convex polygons, you can place a waypoint in the middle of each edge and then apply A*. And, of course, you need to fix the inconsistent vertices.
I'm working on a school project that involves taking a lat/long point and finding the top five closest points in a known list of places. The list is to be stored in memory, with the caveat that we must choose an "appropriate data structure" -- that is, we cannot simply store all the places in an array and compare distances one-by-one in a linear fashion. The teacher suggested grouping the place data by US State to prevent calculating the distance for places that are obviously too far away. I think I can do better.
From my research online it seems like an R-Tree or one of its variants might be a neat solution. Unfortunately, that sentence is as far as I've gotten with understanding the actual technique, as the literature is simply too dense for my non-academic head.
Can somebody give me a really high-level overview of the process for populating an R-tree with lat/long data and then traversing the tree to find the 5 nearest neighbors of a given point?
Additionally, the project is in C, and I don't have to reinvent the wheel on this, so if you've used an existing open-source C implementation of an R-tree I'd be interested in your experiences.
UPDATE: This blog post describes a straightforward search algorithm for a regionally partitioned space (like a PR quadtree). Hope that helps a future reader.
Have you considered alternative data structures?
I believe that, instead of an R-tree, a Point Quadtree would be more effective for your needs. Spatial Index Demos provides demos for a list of possible data structures, including the R-tree and Point Quadtree. Hope it gives an insight.
Quad Trees
A quad tree takes a square of space and divides it into four children with half the dimensions along the X and Y axes.
+---+---+
|   |   |     Each square is a child
|   |   |     of the parent; when you
+---+---+     get to leaves a node has
|   |   |     a single point or a list
|   |   |     of points.
+---+---+
This data structure is recursive, and you search for points by checking which child holds the point until you get to a leaf. A leaf has either a single member (a point with X,Y coordinates) or a list of members, depending on the implementation. If you fill up a node, you split it into 4 and distribute the children. Essentially, the data structure is a generalisation of a binary tree, so it is not necessarily balanced.
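A minimal Java sketch of such a node (leaf capacity 1, names illustrative; duplicate points would need a capacity above 1 or a depth cap):

```java
import java.util.ArrayList;
import java.util.List;

class QuadNode {
    final double x, y, half;        // centre and half-width of this square
    QuadNode[] children;            // null while this node is a leaf
    List<double[]> points = new ArrayList<>();

    QuadNode(double x, double y, double half) { this.x = x; this.y = y; this.half = half; }

    void insert(double[] p) {
        if (children == null) {
            points.add(p);
            if (points.size() > 1) split(); // leaf full: divide into 4
            return;
        }
        child(p).insert(p);
    }

    // Pick the quadrant holding p: bit 0 for east/west, bit 1 for north/south.
    QuadNode child(double[] p) {
        return children[(p[0] > x ? 1 : 0) | (p[1] > y ? 2 : 0)];
    }

    void split() {
        double h = half / 2;
        children = new QuadNode[] {
            new QuadNode(x - h, y - h, h), new QuadNode(x + h, y - h, h),
            new QuadNode(x - h, y + h, h), new QuadNode(x + h, y + h, h) };
        for (double[] p : points) child(p).insert(p);
        points = null;
    }

    // Squared distance from (qx, qy) to the nearest point of this square;
    // zero if the query lies inside it. Used for pruning during search.
    double minDist2(double qx, double qy) {
        double dx = Math.max(Math.abs(qx - x) - half, 0);
        double dy = Math.max(Math.abs(qy - y) - half, 0);
        return dx * dx + dy * dy;
    }
}
```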
Balancing a quad tree may not be necessary for your purposes and is left as an exercise for the reader - try searching on the web for 'balanced quad tree'
Note that this data structure cannot index items that can overlap, but if you're only storing points this won't be a problem.
Finding nearest neighbours in a quad tree
Off the top of my head, here's a quick and dirty algorithm for finding the 'n' nearest neighbours to your point. It's not necessarily optimally efficient, but it will be fairly simple to implement. If someone has a link to a better one, feel free to post it in a comment or answer.
- Locate the quad tree node containing your point, keeping a list of its parents.
- Push all of the points in the node into a priority queue based on their distance from your base point (i.e. by the length of the hypotenuse per Pythagoras' theorem). Depending on the implementation there may be one or more points per node. For a simple implementation of a priority queue data structure, look up 'binary heap'.
- If any of the 'n' points are further away than the edges of the bounding box, add the contents of its neighbours. That is, if your base point is close to the edge of the bounding box, it is possible that neighbouring tree nodes contain points that are closer than the points found within your bounding box. You will need to back up the tree to do this, which is why you need to keep track of your parent nodes.
- When all of the 'n' closest points are closer than the edges of your bounding box, you know that there could not possibly be neighbours that you have missed. Therefore, the 'n' closest points within this box must be your 'n' closest neighbours.
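A hedged best-first variant of the steps above, reusing the QuadNode sketch from earlier: rather than walking back up through parents, keep a priority queue of boxes keyed by the nearest they could possibly be to the query. The same "closer than the box edge" termination argument applies.

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.Comparator;
import java.util.List;
import java.util.PriorityQueue;

class NearestNeighbors {
    static double dist2(double[] p, double qx, double qy) {
        double dx = p[0] - qx, dy = p[1] - qy;
        return dx * dx + dy * dy;
    }

    // Collect the n stored points nearest to (qx, qy).
    static List<double[]> nearest(QuadNode root, double qx, double qy, int n) {
        // Boxes ordered by the closest they could possibly be to the query.
        PriorityQueue<QuadNode> boxes = new PriorityQueue<>(
                Comparator.comparingDouble((QuadNode b) -> b.minDist2(qx, qy)));
        // Max-heap of the n best candidate points found so far.
        PriorityQueue<double[]> best = new PriorityQueue<>(
                Comparator.comparingDouble((double[] p) -> dist2(p, qx, qy)).reversed());
        boxes.add(root);
        while (!boxes.isEmpty()) {
            QuadNode node = boxes.poll();
            // If even the nearest corner of this box is worse than the current
            // n-th best point, nothing closer remains anywhere in the queue.
            if (best.size() == n && node.minDist2(qx, qy) >= dist2(best.peek(), qx, qy))
                break;
            if (node.children == null) {            // leaf: consider its points
                for (double[] p : node.points) {
                    best.add(p);
                    if (best.size() > n) best.poll(); // evict the farthest
                }
            } else {
                Collections.addAll(boxes, node.children);
            }
        }
        return new ArrayList<>(best);
    }
}
```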