How to split a map into catchment areas (polygons grouping the points closest to each of a set of given points)

I want to create an algorithm to determine the catchment areas of recycling bins in a city.
The idea: I have several points on a map, and I want to trace the polygons of their catchment areas. I define the catchment area of a bin as the zone in which that bin is the closest one.
I found that the edges of these polygons are parts of the perpendicular bisectors of the segments joining pairs of recycling bins.
But I haven't yet found how to determine mathematically which intersections of these bisectors are the vertices of the catchment-area polygons
(not all intersections of the bisectors are relevant).
Here is a picture of what I want to do (crosses are recycling bins and lines are the edges that demarcate catchment areas).
Any ideas?

I found the answer: this is the Voronoi diagram (https://en.wikipedia.org/wiki/Voronoi_diagram).
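For anyone who wants to experiment before pulling in a full computational-geometry library, here is a minimal sketch in C (the bin coordinates and grid size are made up for illustration) that labels every cell of a raster with the index of its nearest bin. The boundaries between labels approximate the Voronoi edges, i.e. the perpendicular-bisector segments described above; exact polygons would come from a proper Voronoi implementation such as Fortune's algorithm or an existing library.

```c
/* Minimal sketch: discrete Voronoi labelling of a grid.
 * Every grid cell is labelled with the index of its nearest bin, so
 * cells sharing a label form that bin's catchment area.
 * WIDTH, HEIGHT, NBINS and the coordinates are illustrative. */
#include <stdio.h>
#include <float.h>

#define WIDTH  40
#define HEIGHT 20
#define NBINS  3

static const double bin_x[NBINS] = { 5.0, 30.0, 18.0 };
static const double bin_y[NBINS] = { 4.0, 10.0, 16.0 };

int main(void)
{
    int label[HEIGHT][WIDTH];

    for (int y = 0; y < HEIGHT; ++y) {
        for (int x = 0; x < WIDTH; ++x) {
            double best = DBL_MAX;
            int best_bin = -1;
            for (int b = 0; b < NBINS; ++b) {
                double dx = x - bin_x[b];
                double dy = y - bin_y[b];
                double d2 = dx * dx + dy * dy;   /* squared distance is enough for comparison */
                if (d2 < best) {
                    best = d2;
                    best_bin = b;
                }
            }
            label[y][x] = best_bin;
        }
    }

    /* Print the labels; borders between equal-label regions approximate
     * the perpendicular-bisector edges of the Voronoi diagram. */
    for (int y = 0; y < HEIGHT; ++y) {
        for (int x = 0; x < WIDTH; ++x)
            printf("%d", label[y][x]);
        printf("\n");
    }
    return 0;
}
```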

Related

What could be a good state space for the dog-locating problem?

Suppose that we have an M*N maze and there are K dogs in different cells of this maze looking for their houses (each dog's unique house is also located in some cell of the maze). In each step, every dog can stay at its location or move to an adjacent cell in the maze (the eligible moves are: up, down, right, left, if possible). What could be a good state space for this problem?
Unique houses means that each dog has its own specific house located somewhere in the maze.
Two dogs can also stand in the same cell.
I personally think that the sum of the Manhattan distances of each dog from its house could be a good heuristic, but I could not define a good state space myself.
Here is a link to a picture of a sample for k=2 and a 5*5 maze:
Example
Because all of the animals are independent (they don't block each other and they have unique individual goals), you shouldn't model the joint actions of all agents. You are really solving K independent pathfinding problems, where each one can use the Manhattan distance heuristic individually, given 4-connected movement. If you solve them jointly, you make the problem exponentially larger when it doesn't have to be.
There are lots of ways of building better heuristics or re-using search information, but that's a different question.
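To make this concrete, here is a minimal sketch (the coordinates and array layout are made up for illustration): each dog's state is just its own cell, the Manhattan distance to its house is an admissible per-dog heuristic under 4-connected movement, and summing it over the K dogs gives a heuristic for the joint problem if you ever do need one.

```c
/* Minimal sketch, assuming positions are stored as (row, col) pairs.
 * Each dog is treated as an independent search problem whose state is
 * just the dog's own cell; the Manhattan distance to its house is an
 * admissible heuristic for 4-connected movement. Names are illustrative. */
#include <stdio.h>
#include <stdlib.h>

typedef struct { int row, col; } Cell;

/* Admissible heuristic for one dog. */
static int manhattan(Cell dog, Cell house)
{
    return abs(dog.row - house.row) + abs(dog.col - house.col);
}

/* Heuristic for the joint problem: sum over the K independent dogs. */
static int joint_heuristic(const Cell *dogs, const Cell *houses, int k)
{
    int h = 0;
    for (int i = 0; i < k; ++i)
        h += manhattan(dogs[i], houses[i]);
    return h;
}

int main(void)
{
    Cell dogs[2]   = { {0, 0}, {4, 4} };
    Cell houses[2] = { {4, 0}, {0, 2} };
    printf("h = %d\n", joint_heuristic(dogs, houses, 2)); /* 4 + 6 = 10 */
    return 0;
}
```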

Efficient Boundary Approximation

Imagine I have the following structure represented in an array:
The blue cells represent "boundaries" and the red cell represents the structure's origin. I have a function that calculates the distance of each interior cell (a cell which isn't a boundary) to its closest boundary and to the origin.
Currently I do this with a nested for loop which essentially tests every cell position against my current position and selects the cell with the smallest distance that is also marked as a boundary cell.
For small data sets this is okay, but when you have a large array of possible points to iterate through this becomes painfully slow.
I am looking for a solution which would be faster but trades some accuracy. Currently I am able to return the exact closest boundary cell to any given interior cell, but I only really need a close approximation of which cell is closest.
Each cell in the array already has the following information:
Arbitrary position (used for distance calculation)
Is a Boundary Cell
A list of neighbours (any cells which share an edge)
Things to note:
The structure does not necessarily conform to any type of specific polygon shape
The array isn't necessarily ordered in any logical way
The array is flat (i.e. 1D)
Possible solutions I have thought of (but have not yet tested):
An A* approach (as each cell knows its neighbours I could do something like this, but I think it would perform worse than my current brute-force method).
A priority queue which sorts from smallest to largest distance from the origin (but I am unsure how to get an approximate closest border from this).
I am assuming that the cells themselves are irrelevant: everything just depends on the distinguished points in the cells. Finding the distance to the origin is one calculation per cell and cannot be improved. So your problem reduces to: you have blue points and white points (to stick to your color scheme), and you want to find the closest blue point to each white point.
This is a version of nearest-neighbor search. There is extensive literature on this problem, as well as on variants such as approximate nearest-neighbor search. Here is one paper that could lead you to others:
Connor, Michael, and Piyush Kumar. "Practical Nearest Neighbor Search in the Plane." SEA 2010. (Springer link.)
The bottom line is that with appropriate data structures you can achieve O(log n) query time per white point, which is much faster than the naive linear search.
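If exactness isn't required, you don't even need a tree: a uniform grid of buckets is often enough. Below is a minimal sketch in C (not the structure from the paper; the names, capacities and grid resolution are made up for illustration) that bins the boundary points into a coarse grid and, for a query, only scans the query's own bucket and its eight neighbours, falling back to the exact linear scan when those buckets are empty.

```c
/* Sketch of an approximate nearest-boundary lookup using a uniform
 * grid of buckets. Boundary points are hashed into CELLS x CELLS
 * buckets; a query scans its own bucket plus the 8 neighbouring ones,
 * falling back to a full scan if they are all empty. Illustrative only. */
#include <float.h>

#define CELLS          16     /* resolution of the bucket index      */
#define MAX_PER_BUCKET 64     /* crude fixed-capacity buckets        */

typedef struct { double x, y; } Point;

typedef struct {
    Point pts[MAX_PER_BUCKET];
    int   count;
} Bucket;

static Bucket grid[CELLS][CELLS];
static double min_x, min_y, cell_w, cell_h;

static void build_index(const Point *boundary, int n)
{
    double max_x = -DBL_MAX, max_y = -DBL_MAX;
    min_x = DBL_MAX; min_y = DBL_MAX;
    for (int i = 0; i < n; ++i) {
        if (boundary[i].x < min_x) min_x = boundary[i].x;
        if (boundary[i].y < min_y) min_y = boundary[i].y;
        if (boundary[i].x > max_x) max_x = boundary[i].x;
        if (boundary[i].y > max_y) max_y = boundary[i].y;
    }
    cell_w = (max_x - min_x) / CELLS + 1e-9;   /* small pad keeps indices in range */
    cell_h = (max_y - min_y) / CELLS + 1e-9;
    for (int i = 0; i < n; ++i) {
        int cx = (int)((boundary[i].x - min_x) / cell_w);
        int cy = (int)((boundary[i].y - min_y) / cell_h);
        Bucket *b = &grid[cy][cx];
        if (b->count < MAX_PER_BUCKET)
            b->pts[b->count++] = boundary[i];
    }
}

/* Returns an approximately nearest boundary point to q. */
static Point approx_nearest(Point q, const Point *boundary, int n)
{
    int cx = (int)((q.x - min_x) / cell_w);
    int cy = (int)((q.y - min_y) / cell_h);
    double best = DBL_MAX;
    Point best_pt = boundary[0];
    int found = 0;

    for (int dy = -1; dy <= 1; ++dy) {
        for (int dx = -1; dx <= 1; ++dx) {
            int gx = cx + dx, gy = cy + dy;
            if (gx < 0 || gy < 0 || gx >= CELLS || gy >= CELLS) continue;
            const Bucket *b = &grid[gy][gx];
            for (int i = 0; i < b->count; ++i) {
                double ddx = q.x - b->pts[i].x, ddy = q.y - b->pts[i].y;
                double d2 = ddx * ddx + ddy * ddy;
                if (d2 < best) { best = d2; best_pt = b->pts[i]; found = 1; }
            }
        }
    }
    if (!found) {                 /* fall back to the exact linear scan */
        for (int i = 0; i < n; ++i) {
            double ddx = q.x - boundary[i].x, ddy = q.y - boundary[i].y;
            double d2 = ddx * ddx + ddy * ddy;
            if (d2 < best) { best = d2; best_pt = boundary[i]; }
        }
    }
    return best_pt;
}
```

If the approximation ever becomes too coarse, a kd-tree (as in the paper above) gives the exact nearest neighbour in roughly O(log n) per query.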

Connect points to plane/Draw Polygon

I'm currently working on a project where I want to draw different mathematical objects inside a 3D cube. It works as it should for points and lines given as vector equations. Now I have a plane given as a parametric equation. This plane can be anywhere in 3D space and may be visible on the screen inside this 3D cube. The cube acts as an AABB.
The first thing I needed to know was whether the plane intersects the cube. To do this I built lines identical to the edges of the cube and performed 12 line/plane intersections, checking whether the hit lies inside the line segment (edge) that is part of the AABB. Doing this I get a set of points defining the visible part of the plane inside the cube, which I have to draw.
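For reference, one such edge/plane test might look like the following minimal sketch (the plane is assumed to be given by a point and a normal; the Vec3 type, the epsilon and the function names are made up for illustration).

```c
/* Sketch of one edge/plane test: the cube edge is the segment A->B,
 * the plane is given by a point P0 and a normal N. Returns 1 and
 * writes the hit point if the intersection parameter t lies inside
 * [0, 1], i.e. inside the edge. Names are illustrative. */
#include <math.h>

typedef struct { double x, y, z; } Vec3;

static Vec3 sub(Vec3 a, Vec3 b) { Vec3 r = { a.x - b.x, a.y - b.y, a.z - b.z }; return r; }
static double dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

static int edge_plane_hit(Vec3 a, Vec3 b, Vec3 p0, Vec3 n, Vec3 *hit)
{
    Vec3 ab = sub(b, a);
    double denom = dot(n, ab);
    if (fabs(denom) < 1e-12)              /* edge parallel to the plane */
        return 0;
    double t = dot(n, sub(p0, a)) / denom;
    if (t < 0.0 || t > 1.0)               /* hit lies outside the edge  */
        return 0;
    hit->x = a.x + t * ab.x;
    hit->y = a.y + t * ab.y;
    hit->z = a.z + t * ab.z;
    return 1;
}
```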
I now have up to 6 points A, B, C, D, E and F defining the polygon ABCDEF I would like to draw. To do this I want to split the polygon into triangles, for example ABC, ACD, ADE, AEF. I would draw these triangles as described here. The problem I am currently facing is that I (believe I) need to order the points to get correct triangles and then a correctly drawn polygon. I found out about convex hulls and about QuickHull, which works in three-dimensional space. There is just one problem with this algorithm: at the beginning I need to create a three-dimensional simplex as a starting point for the algorithm. But as all my points lie in the same plane, they only span a two-dimensional subspace, so I think this algorithm won't work.
My question is now: how do I order these 3D points so that they form a polygon, which should be the 2D convex hull of these points? And if this is a limitation: I need to do this in C.
Thanks for your help!
One approach is to express the coordinates of the intersection points in the space of the plane, which is 2D, instead of the global 3D space. Depending on how exactly you computed these points, you may already have these (say (U, V)) coordinates. If not, compute two orthonormal vectors that belong to the plane and take the dot products with the (X, Y, Z) intersections. Then you can find the convex hull in 2D.
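As a concrete sketch of that approach (the helper names are made up; it assumes at most 6 intersection points and two orthonormal in-plane vectors u and v): project each point onto u and v, then sort by angle around the 2D centroid. Because the section of a convex cube by a plane is convex, this angular order is already the convex-hull order.

```c
/* Sketch of ordering coplanar 3D points into a convex polygon:
 * project each point onto two orthonormal in-plane axes (u, v),
 * then sort by angle around the 2D centroid. Assumes n <= 6
 * (a plane cuts a cube in at most 6 points). Names are illustrative. */
#include <math.h>
#include <stdlib.h>

typedef struct { double x, y, z; } Vec3;
typedef struct { Vec3 p; double angle; } OrderedPt;

static double dot3(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

static int by_angle(const void *pa, const void *pb)
{
    double a = ((const OrderedPt *)pa)->angle;
    double b = ((const OrderedPt *)pb)->angle;
    return (a > b) - (a < b);
}

/* pts[0..n-1] are the intersection points; u and v are orthonormal
 * vectors lying in the plane. On return, pts is in polygon order. */
static void order_polygon(Vec3 *pts, int n, Vec3 u, Vec3 v)
{
    OrderedPt tmp[6];                 /* at most 6 cube/plane intersections */
    double cu = 0.0, cv = 0.0;

    for (int i = 0; i < n; ++i) {     /* centroid in plane coordinates */
        cu += dot3(pts[i], u);
        cv += dot3(pts[i], v);
    }
    cu /= n; cv /= n;

    for (int i = 0; i < n; ++i) {
        double pu = dot3(pts[i], u) - cu;
        double pv = dot3(pts[i], v) - cv;
        tmp[i].p = pts[i];
        tmp[i].angle = atan2(pv, pu);
    }
    qsort(tmp, (size_t)n, sizeof tmp[0], by_angle);
    for (int i = 0; i < n; ++i)
        pts[i] = tmp[i].p;
}
```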
The 8 corners of the cube can be on either side of the plane, and have a + or - sign when their coordinates are plugged into the implicit equation of the plane (actually the W coordinate of the vertices). This gives a maximum of 2^8 = 256 configurations (not all of which are possible).
For efficiency, you can solve all these configurations once and for all, and for every case list the intersections that form the polygon in the correct order. Then for a given plane, compute the 8 sign bits, pack them into a byte and look up the table of polygons.
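The classification step might look like the following minimal sketch (assuming the plane is stored as a normal n and offset d with implicit equation n·p + d = 0, and that the cube corners come in a fixed order; the 256-entry polygon table itself would be generated offline and is not shown here).

```c
/* Sketch of the sign-bit classification: evaluate the plane equation
 * at each of the 8 cube corners (in a fixed order) and pack the signs
 * into a byte. The byte indexes a precomputed table of 256 polygon
 * layouts, which is assumed to exist and is only hinted at here. */
typedef struct { double x, y, z; } Vec3;

static unsigned char classify_corners(const Vec3 corners[8], Vec3 n, double d)
{
    unsigned char bits = 0;
    for (int i = 0; i < 8; ++i) {
        double w = n.x * corners[i].x + n.y * corners[i].y + n.z * corners[i].z + d;
        if (w >= 0.0)
            bits |= (unsigned char)(1u << i);   /* corner on the + side of the plane */
    }
    return bits;   /* index into the precomputed table of polygon layouts */
}
```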
Update: direct face construction.
Alternatively, you can proceed by tracking the intersection points from edge to edge.
Start from an edge of the cube known to traverse the plane. This edge belongs to two faces. Choose one arbitrarily. Then the plane cuts this face into a triangle and a pentagon, or into two quadrilaterals. Go to the other intersection with an edge of the face. Take the other face bordered by this new edge. This face is again cut into a triangle and a pentagon...
Continuing this process, you will traverse a set of faces and corresponding segments that define the section polygon.
In the figure, you start from the intersection on edge HD, belonging to face DCGH. Then move to the edge GC, also in face CGFB. From there, move to edge FG, also in face EFGH. Move to edge EH, also in face ADHE. And you are back on edge HD.
A complete discussion must take into account the case of the plane passing through one or more vertices of the cube. (But you can cheat by slightly translating the plane, constructing the intersection polygon and removing the tiny edges that may have been artificially created this way.)

AI Pathfinding using 2D polygons instead of waypoints - Is there a recommended algorithm?

I'm trying to use path finding on a series of convex polygons, rather than waypoints. To even further complicate this, the polygons are made by the users, and may have inconsistent vertices. For example:
We know the object is X wide by Y deep, and that the polygons have vertices at certain locations. Is there a specific algorithm to find the fastest way to the goal while keeping the entire object in the polygons (If I understand correctly, A* only works on waypoints)? How do you handle the vertices not being the same object but being at the same location?
EDIT: The polygons are convex; it's 2 separate polygons with their edges on the same line.
Also, how do you implement A* pathfinding here, since a node-based system wouldn't work in an 'infinite'-resolution polygon?
In general, all shortest-path segments will have, as end-points, either polygon vertices or the start and goal points. If you build a graph that includes all those segments (from the start to each "visible" polygon vertex, from the goal to each "visible" polygon vertex, and from each polygon vertex to each other polygon vertex) and run A* on that, you have your optimal path. The cost of building the graph for A* is:
For each vertex, a visibility test to find the vertices it can see: the simple algorithm (for each pair of vertices, check whether the segment from one to the other lies inside the polygons) is O(n^3). Building convex polygons and processing them independently, or using a smarter "radial sweep" algorithm, can greatly lower this, but I suspect it is still around O(n^2). (See the segment-intersection sketch after this list.)
For each query (from a start-point to a goal-point), O(n) for the visibility-test to find all vertices that it can see.
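The visibility tests above bottom out in a segment-intersection predicate; here is a minimal sketch of that predicate (type and function names are made up for illustration). A candidate visibility edge is rejected if it properly crosses any polygon edge; segments that only touch at shared vertices need extra, more careful handling.

```c
/* Two segments a-b and c-d "properly" intersect iff the endpoints of
 * each straddle the line through the other. */
typedef struct { double x, y; } Pt;

/* >0 if a->b->c turns left, <0 if it turns right, 0 if collinear. */
static double orient(Pt a, Pt b, Pt c)
{
    return (b.x - a.x) * (c.y - a.y) - (b.y - a.y) * (c.x - a.x);
}

static int segments_properly_intersect(Pt a, Pt b, Pt c, Pt d)
{
    double o1 = orient(a, b, c);
    double o2 = orient(a, b, d);
    double o3 = orient(c, d, a);
    double o4 = orient(c, d, b);
    return (o1 * o2 < 0.0) && (o3 * o4 < 0.0);
}
```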
If you are only going to apply A* once, then the price of building the fixed part of the A* graph for a single traversal may be somewhat steep. An alternative is to build the graph incrementally as you use it.
Java code implementing the above approach can be found here.
The polygons in your drawing are not convex. For convex polygons, you can place a waypoint in the middle of each edge and then apply A*. And, of course, you need to fix the inconsistent vertices.

Spatial Data Structures in C

I do work in theoretical chemistry on a high-performance cluster, often involving molecular dynamics simulations. One of the problems my work addresses involves a static field of N-dimensional (typically N = 2-5) hyperspheres that a test particle may collide with. I'm looking to optimize (read: overhaul) the data structure I use for representing the field of spheres so I can do rapid collision detection. Currently I use a dead simple array of pointers to an N-membered struct (doubles for each coordinate of the center) and a nearest-neighbor list. I've heard of octrees and quadtrees but haven't found a clear explanation of how they work, how to efficiently implement one, or how to then do fast collision detection with one. Given the size of my simulations, memory is (almost) no object, but cycles are.
How best to approach this for your problem depends on several factors that you have not described:
- Will the same hypersphere arrangement be used for many particle collision calculations?
- Are the hyperspheres uniform size?
- What is the movement of the particle (e.g. straight line/curve) and is that movement affected by the spheres?
- Do you consider the particle to have zero volume?
I assume that the particle does not have simple straight line movement as that would be the relatively fast calculation of finding the closest point between a line and a point, which is likely going to be about the same speed as finding which of the boxes the line intersects with (to determine where in the n-tree to examine).
If your hypersphere positions are fixed for a lot of particle collisions, then computing a Voronoi decomposition/Dirichlet tessellation would give you a fast way of later finding exactly which sphere is closest to your particle for any given point in the space.
However to answer your original question about octrees/quadtrees/2^n-trees, in n dimensions you start with a (hyper)-cube that contains the area of space that you are interested in. This will be subdivided into 2^n hypercubes if you deem the contents to be too complicated. This continues recursively until you have only simple elements (e.g. one hypersphere centroid) in the leaf nodes.
Now that the n-tree is built, you use it for collision detection by taking the path of your particle and intersecting it with the outer hypercube. The intersection position will tell you which hypercube in the next level down of the tree to visit next; you determine the position of intersection with all 2^n hypercubes at that level, following downwards until you reach a leaf node. Once you reach a leaf you can examine interactions between your particle path and the hypersphere stored at that leaf. If you have a collision you are finished, otherwise you have to find the exit point of the particle path from the current hypercube leaf and determine which hypercube it moves to next. Continue until you find a collision or entirely leave the overall bounding hypercube.
Efficiently finding the neighbouring hypercube when exiting a hypercube is one of the most challenging parts of this approach. For 2^n trees Samet's approaches {1, 2} can be adapted. For kd-trees (binary trees) an approach is suggested in {3} section 4.3.3.
Efficient implementation can be as simple as storing a list of 2^n pointers (8 for an octree) from each hypercube to its child hypercubes, and marking a hypercube in a special way if it is a leaf (e.g. make all child pointers NULL).
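A minimal sketch of that layout for the 3D (octree) case might look like this; the field names and the fixed leaf capacity are made up for illustration.

```c
/* Sketch of the "pointers per node, all-NULL children mean leaf"
 * layout described above, for the 3D (octree) case. */
#include <stddef.h>

typedef struct { double x, y, z; } Vec3;

typedef struct OctNode {
    Vec3   centre;            /* centre of this node's cube                     */
    double half_size;         /* half the cube's edge length                    */
    struct OctNode *child[8]; /* all NULL => this node is a leaf                */
    Vec3   sphere_centres[8]; /* payload stored at leaves (capacity is arbitrary) */
    int    sphere_count;
} OctNode;

static int is_leaf(const OctNode *node)
{
    for (int i = 0; i < 8; ++i)
        if (node->child[i] != NULL)
            return 0;
    return 1;
}

/* Which child cube does point p fall into? One bit per axis. */
static int child_index(const OctNode *node, Vec3 p)
{
    int idx = 0;
    if (p.x >= node->centre.x) idx |= 1;
    if (p.y >= node->centre.y) idx |= 2;
    if (p.z >= node->centre.z) idx |= 4;
    return idx;
}
```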
A description of dividing space to create a quadtree (which you can generalise to an n-tree) can be found in Klinger & Dyer {4}.
As others have mentioned, kd-trees may be more suited than 2^n-trees, as the extension to an arbitrary number of dimensions is more straightforward; however, they will result in a deeper tree. It is also easier to adapt the split positions to match the geometry of your hyperspheres with a kd-tree. The description above of collision detection in a 2^n-tree is equally applicable to a kd-tree.
{1} Hanan Samet, "Connected Component Labeling Using Quadtrees", Journal of the ACM, Volume 28, Issue 3 (July 1981).
{2} Hanan Samet, "Neighbor Finding in Images Represented by Octrees", Computer Vision, Graphics, and Image Processing, Volume 46, Issue 3 (June 1989).
{3} Dan Pidcock, "Convex Hull Generation, Connected Component Labelling, and Minimum Distance Calculation for Set-Theoretically Defined Models", 2000.
{4} A. Klinger and C. R. Dyer, "Experiments in Picture Representation Using Regular Decomposition", Computer Graphics and Image Processing, Volume 5 (1976), 68-105.
It sounds like you'd want to implement a kd-tree, which would allow you to more quickly search the N-dimensional space. There's some more information and links to implementations at the Stony Brook Algorithm Repository.
Since your field is static (by which I'm assuming you mean that the hyperspheres don't move), the fastest solution I know of is a kd-tree.
You can either make your own, or use someone else's, like this one:
http://libkdtree.alioth.debian.org/
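If you do roll your own, a minimal sketch might look like the following (DIM and all names are made up for illustration; insertion is unbalanced and there is no deallocation, whereas a production version would build the tree by median splits): point-by-point insertion plus a pruned nearest-neighbour query.

```c
/* Sketch of a home-grown kd-tree for N-dimensional sphere centres. */
#include <stdlib.h>
#include <float.h>

#define DIM 3

typedef struct KdNode {
    double pt[DIM];
    struct KdNode *left, *right;
} KdNode;

static KdNode *kd_insert(KdNode *node, const double pt[DIM], int depth)
{
    if (node == NULL) {
        KdNode *n = calloc(1, sizeof *n);
        for (int i = 0; i < DIM; ++i) n->pt[i] = pt[i];
        return n;
    }
    int axis = depth % DIM;                 /* cycle the split axis with depth */
    if (pt[axis] < node->pt[axis])
        node->left = kd_insert(node->left, pt, depth + 1);
    else
        node->right = kd_insert(node->right, pt, depth + 1);
    return node;
}

static double dist2(const double a[DIM], const double b[DIM])
{
    double d = 0.0;
    for (int i = 0; i < DIM; ++i) {
        double diff = a[i] - b[i];
        d += diff * diff;
    }
    return d;
}

/* Recursive nearest-neighbour search with hyperplane pruning.
 * Call with *best = NULL and *best_d2 = DBL_MAX. */
static void kd_nearest(const KdNode *node, const double q[DIM], int depth,
                       const KdNode **best, double *best_d2)
{
    if (node == NULL)
        return;
    double d2 = dist2(node->pt, q);
    if (d2 < *best_d2) { *best_d2 = d2; *best = node; }

    int axis = depth % DIM;
    double delta = q[axis] - node->pt[axis];
    const KdNode *near = (delta < 0.0) ? node->left : node->right;
    const KdNode *far  = (delta < 0.0) ? node->right : node->left;

    kd_nearest(near, q, depth + 1, best, best_d2);
    if (delta * delta < *best_d2)   /* other side could still hold a closer point */
        kd_nearest(far, q, depth + 1, best, best_d2);
}
```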
A quadtree is a 2-dimensional tree in which, at each level, a node has 4 children, each of which covers 1/4 of the area of the parent node.
An octree is a 3-dimensional tree in which, at each level, a node has 8 children, each of which contains 1/8th of the volume of the parent node. Here is a picture to help you visualize it: http://en.wikipedia.org/wiki/Octree
If you're doing N dimensional intersection tests, you could generalize this to an N tree.
Intersection algorithms work by starting at the top of the tree and recursively traversing into any child nodes that intersect the object being tested; at some point you get to leaf nodes, which contain the actual objects.
An octree will work as long as you can specify the spheres by their centres - it hierarchically bins points into cubic regions with eight children. Working out neighbours in an octree data structure will require you to do sphere-intersecting-cube calculations (to some extent easier than they look) to work out which cubic regions in an octree are within the sphere.
Finding the nearest neighbours means walking back up the tree until you get a node with more than one populated child and all surrounding nodes included (this ensures the query gets all sides).
From memory, this is the (somewhat naive) basic algorithm for sphere-cube intersection:
i. Is the centre of the sphere within the cube? (This catches the case where the centre lies inside the cube.)
ii. Are any of the corners of the cube within radius r of the centre? (Corners inside the sphere.)
iii. For each surface of the cube (you can eliminate some of the surfaces by working out which side of the surface the centre lies on), work out (this is all first-year vector arithmetic):
a. The normal of the surface that passes through the centre of the sphere.
b. The distance from the centre of the sphere to the intersection of that normal with the plane of the surface (the chord intersects the plane of the cube surface).
c. Whether the intersection with the plane lies within the side of the cube (one condition for the chord to intersect the cube).
iv. Calculate the half-length of the chord: r * sin(cos^-1(d/r)), which is sqrt(r^2 - d^2), where d is the distance from step (b) and r is the radius of the sphere.
v. If the distance to the nearest point on an edge line is less than the chord half-length and that point lies between the ends of the edge, the chord intersects one of the edges of the cube (the chord crosses the cube surface somewhere along one of the edges).
This is somewhat dimly remembered, but it is something I did for a situation involving spherical regions using an octree data structure (many years ago). You may also wish to check out kd-trees, as some of the other posters suggest, but your initial question sounds very similar to what I did.
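In practice the whole case analysis above can be collapsed into the standard clamp-based test: clamp the sphere centre to the cube's extents and compare the distance from the centre to the clamped point against the radius. A minimal sketch follows (it works in any number of dimensions; DIM and the names are illustrative).

```c
/* Compact sphere/box overlap test: clamp the sphere centre to the
 * box and compare the squared distance to the clamped point with r^2. */
#define DIM 3

static int sphere_intersects_box(const double centre[DIM], double r,
                                 const double box_min[DIM], const double box_max[DIM])
{
    double d2 = 0.0;
    for (int i = 0; i < DIM; ++i) {
        double c = centre[i];
        if (c < box_min[i]) c = box_min[i];        /* clamp to the box extents     */
        else if (c > box_max[i]) c = box_max[i];
        double diff = centre[i] - c;
        d2 += diff * diff;                         /* distance to nearest box point */
    }
    return d2 <= r * r;
}
```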
