Convex (or Curve) of N-sided shape in C [duplicate]

From the man page for XFillPolygon:
If shape is Complex, the path may self-intersect. Note that contiguous coincident points in the path are not treated as self-intersection.
If shape is Convex, for every pair of points inside the polygon, the line segment connecting them does not intersect the path. If known by the client, specifying Convex can improve performance. If you specify Convex for a path that is not convex, the graphics results are undefined.
If shape is Nonconvex, the path does not self-intersect, but the shape is not wholly convex. If known by the client, specifying Nonconvex instead of Complex may improve performance. If you specify Nonconvex for a self-intersecting path, the graphics results are undefined.
I am having performance problems with XFillPolygon and, as the man page suggests, the first step I want to take is to specify the correct shape of the polygon. I am currently using Complex to be on the safe side.
Is there an efficient algorithm to determine if a polygon (defined by a series of coordinates) is convex, non-convex or complex?

You can make things a lot easier than the Gift-Wrapping Algorithm... that's a good answer when you have a set of points w/o any particular boundary and need to find the convex hull.
In contrast, consider the case where the polygon is not self-intersecting, and it consists of a set of points in a list where the consecutive points form the boundary. In this case it is much easier to figure out whether a polygon is convex or not (and you don't have to calculate any angles, either):
For each consecutive pair of edges of the polygon (each triplet of points), compute the z-component of the cross product of the vectors defined by the edges, pointing towards the points in increasing order:
given p[k], p[k+1], p[k+2] each with coordinates x, y:
dx1 = x[k+1]-x[k]
dy1 = y[k+1]-y[k]
dx2 = x[k+2]-x[k+1]
dy2 = y[k+2]-y[k+1]
zcrossproduct = dx1*dy2 - dy1*dx2
The polygon is convex if the z-components of the cross products are either all positive or all negative. Otherwise the polygon is nonconvex.
If there are N points, make sure you calculate N cross products, e.g. be sure to use the triplets (p[N-2],p[N-1],p[0]) and (p[N-1],p[0],p[1]).
If the polygon is self-intersecting, then it fails the technical definition of convexity even if its directed angles are all in the same direction, in which case the above approach would not produce the correct result.
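Since the question is about C, here is a minimal sketch of this test in C (the Point type and function name are mine, not part of Xlib). It assumes an ordered, non-self-intersecting polygon, and treats zero cross products (collinear consecutive edges) as still convex:

#include <stdbool.h>

typedef struct { double x, y; } Point;

/* True if the simple polygon pts[0..n-1] (vertices in order, either
 * winding) is convex. Computes one z-cross-product per vertex, with
 * wrap-around, and checks that their signs never disagree. */
static bool is_convex(const Point *pts, int n)
{
    int pos = 0, neg = 0;

    if (n < 3)
        return false;

    for (int k = 0; k < n; k++) {
        const Point *a = &pts[k];
        const Point *b = &pts[(k + 1) % n];
        const Point *c = &pts[(k + 2) % n];
        double z = (b->x - a->x) * (c->y - b->y)
                 - (b->y - a->y) * (c->x - b->x);
        if (z > 0.0) pos++;
        if (z < 0.0) neg++;
        if (pos && neg)
            return false; /* mixed turn directions: not convex */
    }
    return true; /* all turns in one direction (zeros allowed) */
}

If it returns true (and you know the outline does not self-intersect), you can pass Convex to XFillPolygon; otherwise fall back to Nonconvex or Complex.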

This question is now the first item in either Bing or Google when you search for "determine convex polygon." However, none of the answers are good enough.
The (now deleted) answer by @EugeneYokota works by checking whether an unordered set of points can be made into a convex polygon, but that's not what the OP asked for. He asked for a method to check whether a given polygon is convex or not. (A "polygon" in computer science is usually defined [as in the XFillPolygon documentation] as an ordered array of 2D points, with consecutive points joined with a side as well as the last point to the first.) Also, the gift wrapping algorithm in this case would have the time-complexity of O(n^2) for n points -- which is much larger than actually needed to solve this problem, while the question asks for an efficient algorithm.
@JasonS's answer, along with the other answers that follow his idea, accepts star polygons such as a pentagram or the one in @zenna's comment, but star polygons are not considered to be convex. As @plasmacel notes in a comment, this is a good approach to use if you have prior knowledge that the polygon is not self-intersecting, but it can fail if you do not have that knowledge.
@Sekhat's answer is correct but it also has the time-complexity of O(n^2) and thus is inefficient.
@LorenPechtel's answer, as updated by her edit, is the best one here, but it is vague.
A correct algorithm with optimal complexity
The algorithm I present here has a time-complexity of O(n), correctly tests whether a polygon is convex or not, and passes all the tests I have thrown at it. The idea is to traverse the sides of the polygon, noting the direction of each side and the signed change of direction between consecutive sides. "Signed" here means left-ward is positive and right-ward is negative (or the reverse), and straight-ahead is zero. Those angles are normalized to be between minus-pi (exclusive) and pi (inclusive).
Summing all these direction-change angles (a.k.a. the deflection angles) together will result in plus-or-minus one turn (i.e. 360 degrees) for a convex polygon, while a star-like polygon (or a self-intersecting loop) will have a different sum (k * 360 degrees, for k turns overall, for polygons where all the deflection angles are of the same sign). So we must check that the sum of the direction-change angles is plus-or-minus one turn. We also check that the direction-change angles are all positive or all negative and not reverses (pi radians), that all points are actual 2D points, and that no consecutive vertices are identical. (That last point is debatable -- you may want to allow repeated vertices, but I prefer to prohibit them.) The combination of those checks catches all convex and non-convex polygons.
Here is code for Python 3 that implements the algorithm and includes some minor efficiencies. The code looks longer than it really is due to the comment lines and the bookkeeping involved in avoiding repeated point accesses.
from math import pi, atan2

TWO_PI = 2 * pi

def is_convex_polygon(polygon):
    """Return True if the polygon defined by the sequence of 2D
    points is 'strictly convex': points are valid, side lengths non-
    zero, interior angles are strictly between zero and a straight
    angle, and the polygon does not intersect itself.

    NOTES:  1.  Algorithm: the signed changes of the direction angles
                from one side to the next side must be all positive or
                all negative, and their sum must equal plus-or-minus
                one full turn (2 pi radians). Also check for too few,
                invalid, or repeated points.
            2.  No check is explicitly done for zero internal angles
                (180 degree direction-change angle) as this is covered
                in other ways, including the `n < 3` check.
    """
    try:  # needed for any bad points or direction changes
        # Check for too few points
        if len(polygon) < 3:
            return False
        # Get starting information
        old_x, old_y = polygon[-2]
        new_x, new_y = polygon[-1]
        new_direction = atan2(new_y - old_y, new_x - old_x)
        angle_sum = 0.0
        # Check each point (the side ending there, its angle) and accum. angles
        for ndx, newpoint in enumerate(polygon):
            # Update point coordinates and side directions, check side length
            old_x, old_y, old_direction = new_x, new_y, new_direction
            new_x, new_y = newpoint
            new_direction = atan2(new_y - old_y, new_x - old_x)
            if old_x == new_x and old_y == new_y:
                return False  # repeated consecutive points
            # Calculate & check the normalized direction-change angle
            angle = new_direction - old_direction
            if angle <= -pi:
                angle += TWO_PI  # make it in half-open interval (-Pi, Pi]
            elif angle > pi:
                angle -= TWO_PI
            if ndx == 0:  # if first time through loop, initialize orientation
                if angle == 0.0:
                    return False
                orientation = 1.0 if angle > 0.0 else -1.0
            else:  # if other time through loop, check orientation is stable
                if orientation * angle <= 0.0:  # not both pos. or both neg.
                    return False
            # Accumulate the direction-change angle
            angle_sum += angle
        # Check that the total number of full turns is plus-or-minus 1
        return abs(round(angle_sum / TWO_PI)) == 1
    except (ArithmeticError, TypeError, ValueError):
        return False  # any exception means not a proper convex polygon

The following Java function/method is an implementation of the algorithm described in this answer.
public boolean isConvex()
{
    if (_vertices.size() < 4)
        return true;

    boolean sign = false;
    int n = _vertices.size();

    for (int i = 0; i < n; i++)
    {
        double dx1 = _vertices.get((i + 2) % n).X - _vertices.get((i + 1) % n).X;
        double dy1 = _vertices.get((i + 2) % n).Y - _vertices.get((i + 1) % n).Y;
        double dx2 = _vertices.get(i).X - _vertices.get((i + 1) % n).X;
        double dy2 = _vertices.get(i).Y - _vertices.get((i + 1) % n).Y;
        double zcrossproduct = dx1 * dy2 - dy1 * dx2;

        if (i == 0)
            sign = zcrossproduct > 0;
        else if (sign != (zcrossproduct > 0))
            return false;
    }

    return true;
}
The algorithm is guaranteed to work as long as the vertices are ordered (either clockwise or counter-clockwise), and you don't have self-intersecting edges (i.e. it only works for simple polygons).

Here's a test to check if a polygon is convex.
Consider each set of three points along the polygon--a vertex, the vertex before, the vertex after. If every angle is 180 degrees or less you have a convex polygon. When you figure out each angle, also keep a running total of (180 - angle). For a convex polygon, this will total 360.
This test runs in O(n) time.
Note, also, that in most cases this calculation is something you can do once and save — most of the time you have a set of polygons to work with that don't go changing all the time.

To test if a polygon is convex, check that every vertex lies on or behind the line through each edge, i.e. on one side of each edge's supporting line.

The answer by @RoryDaulton seems the best to me, but what if one of the angles is exactly 0?
Some may want such an edge case to return True, in which case change "<=" to "<" in this line:
if orientation * angle < 0.0: # not both pos. or both neg.
Here are my test cases, which highlight the issue:
# A square
assert is_convex_polygon( ((0,0), (1,0), (1,1), (0,1)) )
# This LOOKS like a square, but it has an extra point on one of the edges.
assert is_convex_polygon( ((0,0), (0.5,0), (1,0), (1,1), (0,1)) )
The 2nd assert fails in the original answer. Should it?
For my use case, I would prefer it didn't.

This method would work on simple polygons (no self-intersecting edges), assuming that the vertices are ordered (either clockwise or counter-clockwise).
For an array of vertices:
vertices = [(0,0),(1,0),(1,1),(0,1)]
The following Python implementation checks whether the z-components of all the cross products have the same sign:
def zCrossProduct(a, b, c):
    return (a[0] - b[0]) * (b[1] - c[1]) - (a[1] - b[1]) * (b[0] - c[0])

def isConvex(vertices):
    if len(vertices) < 4:
        return True
    # Wrap the lists around so the two triplets spanning the
    # last-to-first seam are checked as well
    signs = [zCrossProduct(a, b, c) > 0
             for a, b, c in zip(vertices[2:] + vertices[:2],
                                vertices[1:] + vertices[:1],
                                vertices)]
    return all(signs) or not any(signs)

I implemented both algorithms in Java: the one posted by @UriGoren (with a small improvement - integer math only) and the one from @RoryDaulton. I had some problems because my polygon is closed, so both algorithms were considering the second one concave when it was convex. So I changed them to prevent such a situation. My methods also use a base index (which may or may not be 0).
These are my test vertices:
// concave
int []x = {0,100,200,200,100,0,0};
int []y = {50,0,50,200,50,200,50};
// convex
int []x = {0,100,200,100,0,0};
int []y = {50,0,50,200,200,50};
And now the algorithms:
private boolean isConvex1(int[] x, int[] y, int base, int n) // Rory Daulton
{
    final double TWO_PI = 2 * Math.PI;

    // The polygon is 'strictly convex' if: points are valid, side lengths are
    // non-zero, interior angles are strictly between zero and a straight angle,
    // and the polygon does not intersect itself.
    // NOTES: 1. Algorithm: the signed changes of the direction angles from one
    //           side to the next side must be all positive or all negative, and
    //           their sum must equal plus-or-minus one full turn (2 pi radians).
    //           Also check for too few, invalid, or repeated points.
    //        2. No check is explicitly done for zero internal angles (180 degree
    //           direction-change angle) as this is covered in other ways,
    //           including the `n < 3` check.

    // Check for too few points
    if (n <= 3) return true;
    if (x[base] == x[n-1] && y[base] == y[n-1]) // if it's a closed polygon, ignore the last vertex
        n--;

    // Get starting information
    int old_x = x[n-2], old_y = y[n-2];
    int new_x = x[n-1], new_y = y[n-1];
    double new_direction = Math.atan2(new_y - old_y, new_x - old_x), old_direction;
    double angle_sum = 0.0, orientation = 0;

    // Check each point (the side ending there, its angle) and accumulate angles
    for (int i = 0; i < n; i++)
    {
        // Update point coordinates and side directions, check side length
        old_x = new_x; old_y = new_y; old_direction = new_direction;
        int p = base++;
        new_x = x[p]; new_y = y[p];
        new_direction = Math.atan2(new_y - old_y, new_x - old_x);
        if (old_x == new_x && old_y == new_y)
            return false; // repeated consecutive points

        // Calculate & check the normalized direction-change angle
        double angle = new_direction - old_direction;
        if (angle <= -Math.PI)
            angle += TWO_PI; // make it in half-open interval (-Pi, Pi]
        else if (angle > Math.PI)
            angle -= TWO_PI;

        if (i == 0) // if first time through loop, initialize orientation
        {
            if (angle == 0.0) return false;
            orientation = angle > 0 ? 1 : -1;
        }
        else // on later iterations, check that the orientation is stable
            if (orientation * angle <= 0) // not both pos. or both neg.
                return false;

        // Accumulate the direction-change angle
        angle_sum += angle;
    }

    // Check that the total number of full turns is plus-or-minus 1
    return Math.abs(Math.round(angle_sum / TWO_PI)) == 1;
}
And now from Uri Goren
private boolean isConvex2(int[] x, int[] y, int base, int n)
{
    if (n < 4)
        return true;

    boolean sign = false;
    if (x[base] == x[n-1] && y[base] == y[n-1]) // if it's a closed polygon, ignore the last vertex
        n--;

    for (int p = 0; p < n; p++)
    {
        int i = base++;
        int i1 = i + 1; if (i1 >= n) i1 = base + i1 - n;
        int i2 = i + 2; if (i2 >= n) i2 = base + i2 - n;
        int dx1 = x[i1] - x[i];
        int dy1 = y[i1] - y[i];
        int dx2 = x[i2] - x[i1];
        int dy2 = y[i2] - y[i1];
        int crossproduct = dx1 * dy2 - dy1 * dx2;
        if (p == 0) // initialize the sign on the first iteration
            sign = crossproduct > 0;
        else if (sign != (crossproduct > 0))
            return false;
    }
    return true;
}

For a non-complex (non-self-intersecting) polygon to be convex, the vector frames obtained from any two connected, linearly independent lines a, b must be point-convex; otherwise the polygon is concave.
For example, the lines a, b are convex to a point p when p lies inside the frame they span, and concave to it when p lies outside. (The original answer illustrates the two cases with images, omitted here.)
Similarly, for each polygon, if every line pair making up a sharp edge is point-convex to the centroid c, then the polygon is convex; otherwise it is concave. Blunt edges are to be ignored.
N.B. This approach requires you to compute the centroid of your polygon beforehand, since it employs vector algebra/transformations rather than angles.

Adapted Uri's code into MATLAB. Hope this may help.
Be aware that Uri's algorithm only works for simple polygons! So be sure to test whether the polygon is simple first!
% M [ x1 x2 x3 ...
%     y1 y2 y3 ... ]
% test if a polygon is convex
function ret = isConvex(M)
    N = size(M, 2);
    if (N < 4)
        ret = 1;
        return;
    end

    x0 = M(1, 1:end);
    x1 = [x0(2:end), x0(1)];
    x2 = [x0(3:end), x0(1:2)];

    y0 = M(2, 1:end);
    y1 = [y0(2:end), y0(1)];
    y2 = [y0(3:end), y0(1:2)];

    dx1 = x2 - x1;
    dy1 = y2 - y1;
    dx2 = x0 - x1;
    dy2 = y0 - y1;

    zcrossproduct = dx1 .* dy2 - dy1 .* dx2;

    % equality allows two consecutive edges to be parallel
    t1 = sum(zcrossproduct >= 0);
    t2 = sum(zcrossproduct <= 0);
    ret = t1 == N || t2 == N;
end

Related

Improving the performance of nested loops in C

Given a list of spheres described by (x_i, y_i, r_i), meaning the center of sphere i is at the point (x_i, y_i, 0) in three-dimensional space and its radius is r_i, I want to compute all z_i where z_i = max { z | (x_i, y_i, z) is a point on any sphere }. In other words, z_i is the highest point over the center of sphere i that is in any of the spheres.
I have two arrays
int **vs = (int **)malloc(num * sizeof(int *));
double **vh = (double **)malloc(num * sizeof(double *));
for (int i = 0; i < num; i++) {
    vs[i] = (int *)malloc(2 * sizeof(int));       // x, y
    vh[i] = (double *)malloc(2 * sizeof(double)); // r, z
}
The objective is to calculate the maximum z for each point. Thus, we should check if there are larger spheres over each x,y point.
Initially we set vh[i][1] = vh[i][0] for all points, which means that each z starts out as the r of its own sphere. Then we check whether these z values are inside larger spheres, to maximize each z value.
for (int i = 0; i < v; i++) {
    double a = vh[i][0] * vh[i][0]; // square of the radius of sphere #1
    for (int j = 0; j < v; j++) {
        if (vh[i][0] > vh[j][1]) { // check only if r of sphere #1 is larger than the current z of #2
            double b = a - (vs[j][0] - vs[i][0]) * (vs[j][0] - vs[i][0])
                         - (vs[j][1] - vs[i][1]) * (vs[j][1] - vs[i][1]);
            // calculating the maximum z value of sphere #2 crossing sphere #1
            // (r of sphere #1)**2 = (z of x_j,y_j)**2 + (distance of two centers)**2
            if (b > vh[j][1] * vh[j][1]) {
                vh[j][1] = sqrt(b); // update the z value if it is larger than the current value
            }
        }
    }
}
It works perfectly, but the nested loop is very slow as the number of points increases. I am looking for a way to speed up the process.
An illustration for the clarification of the task
When you say
The objective is to calculate the maximum z for each point.
I take you to mean, for the center C of each sphere, the maximum z coordinate among all the points lying directly above C (along the z axis) on any of the spheres. This is fundamentally an O(n²) problem -- there is nothing you can do to prevent the computational expense scaling with the square of the number of spheres.
But there may be some things you can do to reduce the scaling coefficient. Here are some possibilities:
Use bona fide 2D arrays (== arrays of arrays) instead of arrays of pointers. It's easier to implement, more memory-efficient, and better for locality of reference:
int    (*vs)[2] = malloc(num * sizeof(*vs));
double (*vh)[2] = malloc(num * sizeof(*vh));
// no other allocations needed
Alternatively, it may help to use an array of structures, one per sphere, instead of two 2D arrays of numbers. It would certainly make your code clearer, but it might also help give a slight speed boost by improving locality of reference:
struct sphere {
    int x, y;
    double r, z;
};

struct sphere *spheres = malloc(num * sizeof(*spheres));
Store z² instead of z, at least for the duration of the computation. This will reduce the number of somewhat-expensive sqrt calls from O(v²) to O(v), supposing you make a single pass at the end to convert all the results to zs, and it will save you O(v²) multiplications, too. (More if you could get away without ever converting from z² to z.)
Pre-initialize each vh[i][1] value to the radius of sphere i (or the square of the radius if you are exercising the previous option, too), and add j != i to the condition around the inner-loop body.
Sorting the spheres in decreasing order by radius may help you find larger provisional z values earlier, and therefore make the radius test in the inner loop more effective at culling unnecessary computations.
You might get some improvement by checking each distinct pair only once. That is, for each unordered pair i, j, you can compute the inter-center distance once only, determine from the relative radii which height to check for a possible update, and go from there. The extra logic involved might or might not pay off through a reduction in other computations.
Additionally, if you are doing this for large enough inputs, then you might be able to reduce the wall time consumed by parallelizing the computation.
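To make those suggestions concrete, here is a hedged, illustrative sketch combining the structure-of-spheres layout with the z² trick and the j != i check; the names (the z2 field, max_heights) are mine, not from the question:

#include <math.h>

struct sphere { int x, y; double r, z2; }; /* z2 holds z squared during the pass */

static void max_heights(struct sphere *s, int num)
{
    for (int i = 0; i < num; i++)
        s[i].z2 = s[i].r * s[i].r;       /* start from each sphere's own top */

    for (int i = 0; i < num; i++) {
        double r2 = s[i].r * s[i].r;
        for (int j = 0; j < num; j++) {
            if (j == i || r2 <= s[j].z2) /* sphere i cannot improve s[j] */
                continue;
            double dx = s[j].x - s[i].x, dy = s[j].y - s[i].y;
            double b = r2 - dx * dx - dy * dy;
            if (b > s[j].z2)
                s[j].z2 = b;             /* sphere i covers center j higher up */
        }
    }

    for (int i = 0; i < num; i++)
        s[i].z2 = sqrt(s[i].z2);         /* one sqrt per sphere at the very end */
}

The early `r2 <= s[j].z2` test plays the same culling role as the original `vh[i][0] > vh[j][1]` check, but without any square roots inside the nested loop.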
Note, by the way, that this comment is incorrect:
// (r of sphere #1)**2 = (r of sphere #2)**2 + (distance of two centers)**2
However, it is also not what you are relying upon. What you are relying upon is that if sphere 1 covers the center of sphere 2 at all, then its height z above the center of sphere 2 satisfies the relationship

r1² = z² + d1,2²

where d1,2 is the distance between the two centers. That is, where you wrote r of sphere #2 in the comment, you appear to have meant z.

Determining if two line segments intersect with a tolerance

I'm using this code that was posted as an answer to this question: How do you detect where two line segments intersect?
It is my understanding that this function only returns an intersection point if the two line segments exactly intersect. I need to modify this function to include a tolerance so it returns an intersection point if the line segments nearly intersect (i.e. within a 0.01 range). I don't really understand the maths that underpins this function so I was hoping that someone could help.
Thanks
// Returns 1 if the lines intersect, otherwise 0. In addition, if the lines
// intersect, the intersection point may be stored in the floats i_x and i_y.
char get_line_intersection(float p0_x, float p0_y, float p1_x, float p1_y,
    float p2_x, float p2_y, float p3_x, float p3_y, float *i_x, float *i_y)
{
    float s1_x, s1_y, s2_x, s2_y;
    s1_x = p1_x - p0_x;     s1_y = p1_y - p0_y;
    s2_x = p3_x - p2_x;     s2_y = p3_y - p2_y;

    // Note: the common denominator is zero when the segments are parallel.
    float s, t;
    s = (-s1_y * (p0_x - p2_x) + s1_x * (p0_y - p2_y)) / (-s2_x * s1_y + s1_x * s2_y);
    t = ( s2_x * (p0_y - p2_y) - s2_y * (p0_x - p2_x)) / (-s2_x * s1_y + s1_x * s2_y);

    if (s >= 0 && s <= 1 && t >= 0 && t <= 1)
    {
        // Collision detected
        if (i_x != NULL)
            *i_x = p0_x + (t * s1_x);
        if (i_y != NULL)
            *i_y = p0_y + (t * s1_y);
        return 1;
    }

    return 0; // No collision
}
EDIT: for clarification, the image below depicts the sort of scenario whereby two line segments would nearly intersect.
Nearly intersecting lines - image
I am not a graphics expert, but this is what I would do:
For each endpoint, see if a circle centered on the endpoint, with a radius equal to the threshold, intersects the other line segment. If it does, the segments nearly intersect.
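One way to implement that test is with the squared distance from a point to a segment. This is a hedged sketch with illustrative names, not code from the linked answer:

#include <stdbool.h>

/* Squared distance from point (px,py) to segment (x0,y0)-(x1,y1). */
static float dist2_point_segment(float px, float py,
                                 float x0, float y0, float x1, float y1)
{
    float dx = x1 - x0, dy = y1 - y0;
    float len2 = dx * dx + dy * dy;
    float t = 0.0f;
    if (len2 > 0.0f) {
        t = ((px - x0) * dx + (py - y0) * dy) / len2; /* project onto the line */
        if (t < 0.0f) t = 0.0f;                       /* clamp to the segment */
        else if (t > 1.0f) t = 1.0f;
    }
    float cx = x0 + t * dx - px, cy = y0 + t * dy - py;
    return cx * cx + cy * cy;
}

/* True if endpoint (px,py) lies within tol of the segment. */
static bool endpoint_near_segment(float px, float py,
                                  float x0, float y0, float x1, float y1,
                                  float tol)
{
    return dist2_point_segment(px, py, x0, y0, x1, y1) <= tol * tol;
}

Run it four times, once per endpoint against the other segment, after the exact intersection test fails.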
It's still not clear precisely what you want to count as a "near intersection". In your example image you could find an intersection by extending each line (in both directions) by the chosen tolerance amount, and then checking for an (exact) intersection between the extended lines.
If you want to detect intersections of lines at more acute angles, you could expand each line into a rectangular area (with a width determined by the tolerance) and then check for overlap between the rectangular regions.
In both cases the tolerance amount would mean something slightly different. You didn't define it in your question. What precisely does the tolerance value mean? Is it the amount by which one of the lines needs to be extended to form an intersection with the other? Is it a maximum distance between any point on each line that can be considered an intersection? etc.

find iso-cost points on a 3d grid efficiently with minimum costing of points

I have a 3d grid where each point (x,y,z) on the grid is associated with a cost value. The cost of any point (x,y,z) is not known in advance. To know the cost, we need to make a complex query which is really expensive. One thing we know about the cost function is that it is monotonically non-decreasing in all 3 dimensions.
Now, given a cost C, I need to find the points (x,y,z) on the surface which have cost C, while costing only the bare minimum number of points. How do I solve this problem?
When I searched online, I found contour-identification techniques, but all of them assume every point's cost is known in advance, like the marching cubes method, etc. In my case the major metric is that the number of points costed should be minimal.
It would be helpful if someone could suggest a way to get at least approximate locations, if not exact ones.
Rewritten explanation:
(original text, in case it might clarify the idea to someone, is kept unchanged below the line)
We have some function f(x,y,z) in three dimensions, and we wish to find the surface f(x,y,z) = c. Since the function yields a single number, it defines a scalar field, and the surface we are looking for is the isosurface c.
In our case, evaluating the function f(x,y,z) is very costly, so we wish to minimize the number of times we use it. Unfortunately, most isosurface algorithms assume the opposite.
My suggestion is to use an isosurface walk similar to the one Fractint used for two-dimensional fractals. Code-wise it is complicated, but it should minimize the number of function evaluations needed -- that was exactly the purpose it was implemented for in Fractint.
Background / History:
In the late 1980s and early 1990s, I encountered the fractal-drawing suite Fractint. Computers were much slower then, and evaluating each point was painfully slow. A lot of effort was made in Fractint to make it display the fractals as fast as possible, but still accurately. (Some of you might remember the color-cycling it could do, by rotating the colors in the palette used. It was hypnotic; here is a YouTube clip from the 1995 documentary "Colors of Infinity", which both color-cycles and zooms in. Calculating a full-screen fractal could take hours at high zoom factors, close to the actual fractal set, but then you could save it as an image and use the color-cycling to "animate" it.)
Some of those fractals were, or had regions where, the number of iterations needed was monotonically non-decreasing toward the actual fractal set -- that is, no "islands" sticking out, just a steady, occasional increase in iteration steps. For those, one fast evaluation mode used edge tracing to locate the boundaries where the number of iterations changed: in other words, the edges of the regions filled with a single color. After closing a region, it then traced towards the center of that region to find the next iteration edge; after that was closed too, it could just fill the donut- or C-shaped region between those boundaries with the correct constant color, without evaluating the function for those pixels!
Here, we have a very similar situation, except in three dimensions instead of two. Each isosurface is also two-dimensional by definition, so really, all that changes, is how we walk the boundary.
The walk itself is similar to flood fill algorithms, except that we walk in three dimensions, and our boundary is the isosurface we're tracing.
We sample the original function in a regular grid, say an N×N×N grid. (This is not the only possibility, but it is the easiest and most useful case, and what the OP is doing.)
In general, the isosurfaces will not pass through the grid points exactly, but between the grid points. Therefore, our task is to find the grid cells the isosurface passes through.
In an N×N×N regular grid, there are (N-1)×(N-1)×(N-1) cubic cells:
Each cell has eight corners: (x,y,z), (x+1,y,z), (x,y+1,z), (x+1,y+1,z), (x,y,z+1), (x+1,y,z+1), (x,y+1,z+1), and (x+1,y+1,z+1), where x,y,z ∈ ℕ are the integer grid coordinates of the corner closest to origin, with 0 ≤ x,y,z ≤ N-2.
Carefully note the integer grid coordinate limits. If you think about it, you'll realize that an N×N×N grid has only (N-1)×(N-1)×(N-1) cells, and since we use the grid coordinates for the corner closest to origin, the valid coordinate range for that corner is 0 to N-2, inclusive.
If f(x,y,z) increases monotonically in each dimension, then isosurface c passes through cell (x,y,z) if
f(x,y,z) ≤ c
and at least one of
f(x+1, y, z) > c
f(x, y+1, z) > c
f(x+1, y+1, z) > c
f(x, y, z+1) > c
f(x+1, y, z+1) > c
f(x, y+1, z+1) > c
f(x+1, y+1, z+1) > c
If f(x,y,z) is monotonically non-decreasing -- that is, its partial derivatives are either zero or positive at all points --, then the above locates two-dimensional isosurfaces, and the outer surface for isovolumes (volumes where f(x,y,z) is constant). The inner surface for isovolumes c are then those cells (x,y,z) for which
f(x,y,z) < c
and at least one of
f(x+1, y, z) ≥ c
f(x, y+1, z) ≥ c
f(x+1, y+1, z) ≥ c
f(x, y, z+1) ≥ c
f(x+1, y, z+1) ≥ c
f(x, y+1, z+1) ≥ c
f(x+1, y+1, z+1) ≥ c
Extension to any scalar function:
The approach shown here actually works for any f(x,y,z) that has only one maximum within the sampled region, say at (xMAX,yMAX,zMAX); and only one minimum, say at (xMIN,yMIN,zMIN); with no local maxima or minima within the sampled region.
In that case, the rule is that at least one of the eight corner samples f(x,y,z), f(x+1,y,z), f(x,y+1,z), f(x+1,y+1,z), f(x,y,z+1), f(x+1,y,z+1), f(x,y+1,z+1), f(x+1,y+1,z+1) must be below or equal to c, at least one must be above or equal to c, and not all of them may be equal to c.
Also, an initial cell an isosurface c passes through can then always be found using a binary search between (xMAX,yMAX,zMAX) and (xMIN,yMIN,zMIN), limiting the coordinates to 0 ≤ xMAX,yMAX,zMAX,xMIN,yMIN,zMIN ≤ N-2 (to only consider valid cells, in other words).
If the function is not monotonic, locating an initial cell the isosurface c passes through is more complicated. In that case, you need a different approach. (If you can find the grid coordinates for all local maxima and minima, then you can do binary searches from global minimum to local maxima above c, and from local minima below c to global maximum.)
Because we sample the function f(x,y,z) at intervals, we implicitly assume it to be continuous. If that is not true -- and you need to show the discontinuities as well -- you can augment the grid with discontinuity information at each point (seven boolean flags or bits per grid point, one for "discontinuity from (x,y,z) to" each of the seven positive-direction neighbour points). The surface walking then must also respect (not cross) such discontinuities.
In practice, I would use two arrays to describe the grid: one for cached samples, and one for two flags per grid point. One flag would describe that the cached value exists, and another that the walking routine has already walked the grid cell at that point. The structure I'd use/need for walking and constructing isosurfaces (for a monotonically non-decreasing function sampled in a regular grid) would be
typedef struct {
    size_t  xsize;
    size_t  ysize;
    size_t  zsize;
    size_t  size;    /* xsize * ysize * zsize */

    size_t  xstride; /* [z][y][x] array = 1 */
    size_t  ystride; /* [z][y][x] array = xsize */
    size_t  zstride; /* [z][y][x] array = xsize * ysize */

    double  xorigin; /* Function x for grid coordinate x = 0 */
    double  yorigin; /* Function y for grid coordinate y = 0 */
    double  zorigin; /* Function z for grid coordinate z = 0 */
    double  xunit;   /* Function x for grid coordinate x = 1 */
    double  yunit;   /* Function y for grid coordinate y = 1 */
    double  zunit;   /* Function z for grid coordinate z = 1 */

    /* Function to obtain a new sample */
    void   *data;
    double (*sample)(void *data, double x, double y, double z);

    /* Walking stack */
    size_t  stack_size;
    size_t  stack_used;
    size_t *stack;

    unsigned char *cell;  /* CELL_ flags */
    double        *cache; /* Cached samples */
} grid;

#define CELL_UNKNOWN (0U)
#define CELL_SAMPLED (1U)
#define CELL_STACKED (2U)
#define CELL_WALKED  (4U)

double grid_sample(const grid *const g, const size_t gx, const size_t gy, const size_t gz)
{
    const size_t i = gx * g->xstride + gy * g->ystride + gz * g->zstride;
    if (!(g->cell[i] & CELL_SAMPLED)) {
        g->cell[i] |= CELL_SAMPLED;
        g->cache[i] = g->sample(g->data, g->xorigin + (double)gx * g->xunit,
                                         g->yorigin + (double)gy * g->yunit,
                                         g->zorigin + (double)gz * g->zunit);
    }
    return g->cache[i];
}
and the function to find the cell to start the walk on, using a binary search along the grid diagonal (assuming non-decreasing monotonic function, so all isosurfaces must cross the diagonal):
size_t grid_find(const grid *const g, const double c)
{
    const size_t none = g->size;
    size_t xmin = 0;
    size_t ymin = 0;
    size_t zmin = 0;
    size_t xmax = g->xsize - 2;
    size_t ymax = g->ysize - 2;
    size_t zmax = g->zsize - 2;
    double s;

    s = grid_sample(g, xmin, ymin, zmin);
    if (s > c)
        return none;
    if (s == c)
        return xmin*g->xstride + ymin*g->ystride + zmin*g->zstride;

    s = grid_sample(g, xmax, ymax, zmax);
    if (s < c)
        return none;
    if (s == c)
        return xmax*g->xstride + ymax*g->ystride + zmax*g->zstride;

    while (1) {
        const size_t x = xmin + (xmax - xmin) / 2;
        const size_t y = ymin + (ymax - ymin) / 2;
        const size_t z = zmin + (zmax - zmin) / 2;

        if (x == xmin && y == ymin && z == zmin)
            return x*g->xstride + y*g->ystride + z*g->zstride;

        s = grid_sample(g, x, y, z);
        if (s < c) {
            xmin = x;
            ymin = y;
            zmin = z;
        } else
        if (s > c) {
            xmax = x;
            ymax = y;
            zmax = z;
        } else
            return x*g->xstride + y*g->ystride + z*g->zstride;
    }
}

#define GRID_X(grid, index) (((index) / (grid)->xstride) % (grid)->xsize)
#define GRID_Y(grid, index) (((index) / (grid)->ystride) % (grid)->ysize)
#define GRID_Z(grid, index) (((index) / (grid)->zstride) % (grid)->zsize)
The three macros above show how to convert the grid index back to grid coordinates.
To walk the isosurface, we cannot rely on recursion; the call chains would be too long. Instead, we have a walk stack for cell indexes we should examine:
static void grid_push(grid *const g, const size_t cell_index)
{
    /* If the stack is full, remove cells already walked. */
    if (g->stack_used >= g->stack_size) {
        const size_t         n = g->stack_used;
        size_t  *const       s = g->stack;
        unsigned char *const c = g->cell;
        size_t               i = 0;
        size_t               o = 0;

        while (i < n)
            if (c[s[i]] & CELL_WALKED)
                i++;
            else
                s[o++] = s[i++];

        g->stack_used = o;
    }

    /* Grow stack if still necessary. */
    if (g->stack_used >= g->stack_size) {
        size_t *new_stack;
        size_t  new_size;

        if (g->stack_used < 1024)
            new_size = 1024;
        else
        if (g->stack_used < 1048576)
            new_size = g->stack_used * 2;
        else
            new_size = (g->stack_used | 1048575) + 1048448;

        new_stack = realloc(g->stack, new_size * sizeof g->stack[0]);
        if (new_stack == NULL) {
            /* FATAL ERROR, out of memory */
        }

        g->stack = new_stack;
        g->stack_size = new_size;
    }

    /* Push the cell, unless it is already stacked or walked. */
    if (!(g->cell[cell_index] & (CELL_STACKED | CELL_WALKED))) {
        g->cell[cell_index] |= CELL_STACKED;
        g->stack[g->stack_used++] = cell_index;
    }
}

static size_t grid_pop(grid *const g)
{
    while (g->stack_used > 0 &&
           g->cell[g->stack[g->stack_used - 1]] & CELL_WALKED)
        g->stack_used--;
    if (g->stack_used > 0)
        return g->stack[--g->stack_used];
    return g->size; /* "none" */
}
The function that verifies that the isosurface passes through the current cell, reports those to a callback function, and walks the isosurface, would be something like
int isosurface(grid *const g, const double c,
               int (*report)(grid *const g,
                             const size_t x, const size_t y, const size_t z,
                             const double c,
                             const double x0y0z0,
                             const double x1y0z0,
                             const double x0y1z0,
                             const double x1y1z0,
                             const double x0y0z1,
                             const double x1y0z1,
                             const double x0y1z1,
                             const double x1y1z1))
{
    const size_t xend = g->xsize - 2; /* Since we examine x+1, too */
    const size_t yend = g->ysize - 2; /* Since we examine y+1, too */
    const size_t zend = g->zsize - 2; /* Since we examine z+1, too */
    const size_t xstride = g->xstride;
    const size_t ystride = g->ystride;
    const size_t zstride = g->zstride;
    unsigned char *const cell = g->cell;
    double x0y0z0, x1y0z0, x0y1z0, x1y1z0,
           x0y0z1, x1y0z1, x0y1z1, x1y1z1; /* Cell corner samples */
    size_t x, y, z, i;
    int r;

    /* Clear walk stack. */
    g->stack_used = 0;

    /* Clear walked and stacked flags from the grid cell map. */
    i = g->size;
    while (i-- > 0)
        g->cell[i] &= ~(CELL_WALKED | CELL_STACKED);

    i = grid_find(g, c);
    if (i >= g->size)
        return errno = ENOENT; /* No isosurface c */

    x = (i / g->xstride) % g->xsize;
    y = (i / g->ystride) % g->ysize;
    z = (i / g->zstride) % g->zsize;

    /* We need to limit x,y,z to the valid *cell* coordinates. */
    if (x > xend) x = xend;
    if (y > yend) y = yend;
    if (z > zend) z = zend;

    i = x*g->xstride + y*g->ystride + z*g->zstride;

    grid_push(g, i);

    while ((i = grid_pop(g)) < g->size) {
        x = (i / g->xstride) % g->xsize;
        y = (i / g->ystride) % g->ysize;
        z = (i / g->zstride) % g->zsize;

        cell[i] |= CELL_WALKED;

        x0y0z0 = grid_sample(g, x, y, z);
        if (x0y0z0 > c)
            continue;

        x1y0z0 = grid_sample(g, 1+x, y, z);
        x0y1z0 = grid_sample(g, x, 1+y, z);
        x1y1z0 = grid_sample(g, 1+x, 1+y, z);
        x0y0z1 = grid_sample(g, x, y, 1+z);
        x1y0z1 = grid_sample(g, 1+x, y, 1+z);
        x0y1z1 = grid_sample(g, x, 1+y, 1+z);
        x1y1z1 = grid_sample(g, 1+x, 1+y, 1+z);

        /* Isosurface does not pass through this cell?!
         * (Note: I think this check is unnecessary.) */
        if (x1y0z0 < c && x0y1z0 < c && x1y1z0 < c &&
            x0y0z1 < c && x1y0z1 < c && x0y1z1 < c &&
            x1y1z1 < c)
            continue;

        /* Report the cell. */
        if (report) {
            r = report(g, x, y, z, c, x0y0z0, x1y0z0,
                       x0y1z0, x1y1z0, x0y0z1, x1y0z1,
                       x0y1z1, x1y1z1);
            if (r) {
                errno = 0;
                return r;
            }
        }

        /* Could the surface extend to -x? */
        if (x > 0 &&
            !(cell[i - xstride] & (CELL_WALKED | CELL_STACKED)) &&
            ( x0y1z0 >= c || x0y0z1 >= c ))
            grid_push(g, i - xstride);

        /* Could the surface extend to -y? */
        if (y > 0 &&
            !(cell[i - ystride] & (CELL_WALKED | CELL_STACKED)) &&
            ( x0y0z1 >= c || x1y0z0 >= c ))
            grid_push(g, i - ystride);

        /* Could the surface extend to -z? */
        if (z > 0 &&
            !(cell[i - zstride] & (CELL_WALKED | CELL_STACKED)) &&
            ( x1y0z0 >= c || x0y1z0 >= c ))
            grid_push(g, i - zstride);

        /* Could the surface extend to +x? */
        if (x < xend &&
            !(cell[i + xstride] & (CELL_WALKED | CELL_STACKED)) &&
            ( x0y1z0 >= c || x0y0z1 >= c ))
            grid_push(g, i + xstride);

        /* Could the surface extend to +y? */
        if (y < yend &&
            !(cell[i + ystride] & (CELL_WALKED | CELL_STACKED)) &&
            ( x1y0z0 >= c || x0y0z1 >= c ))
            grid_push(g, i + ystride);

        /* Could the surface extend to +z? */
        if (z < zend &&
            !(cell[i + zstride] & (CELL_WALKED | CELL_STACKED)) &&
            ( x1y0z0 >= c || x0y1z0 >= c ))
            grid_push(g, i + zstride);
    }

    /* All done. */
    errno = 0;
    return 0;
}
In this particular case, I do believe the isosurfaces are best visualized/described using a polygon mesh, with samples within a cell linearly interpolated. Then, each report() call produces one polygon (or one or more flat triangles).
Note that the cell has 12 edges, and the isosurface must cross at least three of them. Let's assume we have two samples c0 and c1 at corners spanned by an edge, with the two corners having coordinates p0=(x0,y0,z0) and p1=(x1,y1,z1), respectively:
if (c0 == c && c1 == c)
    /* Entire edge is on the isosurface */
else
if (c0 == c)
    /* Isosurface intersects edge at p0 */
else
if (c1 == c)
    /* Isosurface intersects edge at p1 */
else
if (c0 < c && c1 > c)
    /* Isosurface intersects edge at p0 + (p1-p0)*(c-c0)/(c1-c0) */
else
if (c0 > c && c1 < c)
    /* Isosurface intersects edge at p1 + (p0-p1)*(c-c1)/(c0-c1) */
else
    /* Isosurface does not intersect the edge */
The above check is valid for any kind of continuous function f(x,y,z); for non-monotonic functions the problem is just finding the relevant cells. The isosurface() function needs some changes (the checks wrt. x0y0z0..x1y1z1), according to the rules outlined earlier in this post, but it too can be made to work for any continuous function f(x,y,z) with little effort.
Constructing the polygon/triangle(s) when the samples at the cell corners are known, especially using linear interpolation, is very simple as you can see.
Note that there is usually no reason to worry about the order in which the edges of a cell are checked, as you will almost certainly use vector calculus and cross product in particular to orient the points and polygons. Or, if you like, you can do Delaunay triangulation on the points (3 to 12 for any function, although more than 6 points indicates there are two separate surfaces, I believe) to construct flat polygons.
Questions? Comments?
We have a scalar field f(x,y,z) in three dimensions. The field is costly to sample/evaluate, and we do so only at integer coordinates 0 ≤ x,y,z ∈ ℕ. To visualize the scalar field, we wish to locate one or more isosurfaces (surfaces with a specific f(x,y,z) value), using the minimum number of samples/evaluations.
The approach I'll try to describe here is a variant of the algorithm used in fractint, to minimize the number of iterations needed to draw certain fractals. Some fractals have large areas with the same "value", so instead of sampling every point within the area, certain drawing mode traced the edges of those areas.
In other words, instead of locating individual points of the isosurface c, f(x,y,z) = c, you can locate just one point, and then walk the isosurface. The walk part is a bit complicated to visualize, but it really is just a 3D variant of the flood fill algorithm used in simple computer graphics. (Actually, given the field is monotonically non-decreasing along each dimension, it'll actually be a mostly 2D walk, with typically just a few grid points other than those relevant to the isosurface c sampled. This should be really efficient.)
I'm pretty sure there are good peer-reviewed papers describing this very technique (probably in more than one problem domain), but since I'm too lazy to do a better search than a couple of minutes of Google searches, I leave it to others to find good references. Apologies.
For simplicity, for now, let's assume that the field is continuous and monotonically increasing along each dimension. Within an axis-oriented box of size N×N×N, the field will then have its minimum at the corner at origin (0,0,0), its maximum at the far corner from origin, at (N,N,N), and all possible values between the minimum and maximum will be found along the diagonal from (0,0,0) to (N,N,N). In other words, every possible isosurface exists and is a continuous 2D surface (excluding the points (0,0,0) and (N,N,N)), and every such surface intersects the diagonal.
If the field is actually non-continuous, we won't be able to tell, because of our sampling method. In practice, our sampling means we implicitly assume the scalar field is continuous; we will treat it as continuous, whether or not it really is!
If the function is actually monotonically increasing along each dimension, then it is possible to map f(x,y,z)=c to X(y,z)=x, Y(x,z)=y, Z(x,y)=z, although any one of the three is sufficient to define the isosurface c. This is because the isosurface can only cross any line spanning the box in at most one point.
If the function is monotonically non-decreasing instead, the isosurface can intersect any line spanning the box still only once, but the intersection can be wider (than a point) along the line. In practice, you can handle this by considering only the lower or upper surfaces of the isovolumes (volumes with a static field); i.e. only the transition from-lower-than-c-to-c-or-greater, or the transition from-c-or-lower-to-greater-than-c.
In all cases, you're not really looking for the isosurface value c, but trying to locate where a pair of the field samples crosses c.
Because we sample the field at regular grid points, and the isosurface rarely (if ever) intersects those grid points exactly, we divide the original box into N×N×N unit-sized cubes, and try to find the cubes the desired isosurface intersects.
Here is a simple illustration of one such cube, at (x,y,z) to (x+1,y+1,z+1) (the original answer shows an image; the three edges leaving the near corner are marked X, Y, and Z, and the main diagonal is marked D):
When the isosurface intersects a cube, it intersects at least one of the edges marked X, Y, or Z, and/or the diagonal marked D. In particular, we'll have f(x,y,z) ≤ c, and one or more of:
f(x+1,y,z) > c (isosurface c crosses the cube edge marked with X)
(Note: In this case, we wish to walk along the y and z dimensions)
f(x,y+1,z) > c (isosurface c crosses the cube edge marked with Y)
(Note: In this case, we wish to walk along the x and z dimensions)
f(x,y,z+1) > c (isosurface c crosses the cube edge marked with Z)
(Note: In this case, we wish to walk along the x and y dimensions)
f(x+1,y+1,z+1) > c (isosurface c crosses the cube diagonal, marked with D)
(Note: In this case, we may need to examine all directly connected grid points, to see which direction we need to walk to.)
Instead of doing a complete search of the original volume, we can just find one such cube, and walk along the cubes to discover the cubes the isosurface intersects.
Since all isosurfaces have to intersect the diagonal from (0,0,0) to (N,N,N), we can find such a cube using just 2+ceil(log2(N)) samples, using a binary search over the cubes on the diagonal. The target cube (i,i,i) is the one for which f(i,i,i) ≤ c and f(i+1,i+1,i+1) > c. (For monotonically non-decreasing fields with isovolumes, this shows the isovolume surface closer to origin as the isosurface.)
When we know that the isosurface c intersects a cube, we can use basically three approaches to convert that knowledge to a point (that we consider the isosurface to intersect):
The cube has eight corners, each at a grid point. We can pick the corner/grid point with the field value closest to c.
We can interpolate -- choose an approximate point -- where the isosurface c intersects the edge/diagonal. We can do linear interpolation without any extra samples, since we already know the samples at the ends of the crossed edge/diagonal.
If u = f(x,y,z) < c is the sample at one end, and v > c is the sample at the other end, the linearly interpolated intersection point along that line occurs at t = (c-u)/(v-u), with t = 0 at (x,y,z), and t = 1 at the other end of the edge/diagonal (at (x+1,y,z), (x,y+1,z), (x,y,z+1), or (x+1,y+1,z+1)); see the sketch after this list.
You can use a binary search along the edge/diagonal, to find the intersection point.
This needs n extra samples per edge/diagonal, to get the intersection point at n-bit accuracy along the edge/diagonal. (As the original grid cannot be too coarse compared to the details in the field, or the details will not be visible anyway, you normally use something like n=2, n=3, n=4, or n=5 at most.)
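For the linear-interpolation option, a minimal helper might look like this in C (the vec3 type and function name are illustrative, not part of the grid code above):

typedef struct { double x, y, z; } vec3;

/* Point where isosurface c crosses the edge (or diagonal) from p0,
 * with sample u < c, to p1, with sample v > c, linearly interpolated. */
static vec3 edge_intersection(vec3 p0, double u, vec3 p1, double v, double c)
{
    const double t = (c - u) / (v - u); /* v > c > u, so v - u > 0 */
    vec3 p;
    p.x = p0.x + t * (p1.x - p0.x);
    p.y = p0.y + t * (p1.y - p0.y);
    p.z = p0.z + t * (p1.z - p0.z);
    return p;
}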
The intersection points for the isosurface c thus obtained can be used for fitting some surface function, but I have not seen that in real life. Typically, Delaunay triangulation is used to convert the point set to a polygon mesh, which is then easy to visualize.
Another option is to remember which cube ((x,y,z)) and edge/diagonal (X, Y, or Z edge, or D for diagonal) each point is related to. Then, you can form a polygon mesh directly. Voxel techniques can also be used to quickly draw partially transparent isosurfaces; each view ray examines each cube once, and if the isosurface is present, the isosurface intersection points can be used to interpolate a surface normal vector, producing very smooth and accurate-looking isosurfaces with raycasting/raytracing methods (without creating any polygon mesh).
It seems to me this answer is in need of editing -- at minimum, some sleep and further thought, and clarifications. Questions, suggestions, and even edits are welcome!
If there is interest from more than just the OP, I could try and see if I can cobble together a simple example C program for this. I've toyed with visualizing simulated electronic structures, and those fields are not even monotonic (although sampling is cheap).
You should look into this article which talks about the 2-dimensional case and gives you a great insight into the different methodologies:
http://leetcode.com/2010/10/searching-2d-sorted-matrix.html
In my opinion, the step-wise linear search (in part II there) would be a great first step for you because it's very easy to apply to the 3-d case and it really doesn't require a lot of experience to understand.
Because this is so straightforward and still very efficient, I would go with this and see if it fits your needs for the kind of data you're working with in 3-d.
However, if your only goal is performance, then you should apply the binary partition to 3-d. This gets a little bit more complex because the 'binary partition' he talks about essentially becomes a 'binary plane partition'.
So you don't have a line partitioning your matrix into 2 possible smaller matrices.
Instead you have a plane partitioning your cube into 2 possible smaller cubes.
To make the search in that plane (or matrix) efficient, you would first have to implement one of his methods :). Then you repeat everything with the smaller cubes.
Keep in mind that implementing this in a very efficient way (i.e. keeping memory access in mind) is not trivial.
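For reference, the step-wise linear search from part II of that article is only a few lines. Here is a hedged C sketch of the 2D case (a matrix sorted ascending along both its rows and its columns); the function name is mine:

/* Step-wise linear search ("saddleback search") for value c in a matrix
 * whose rows and columns are both sorted ascending. Starts at the
 * top-right corner and discards one row or column per step: O(rows + cols). */
static int saddleback_find(const int *m, int rows, int cols, int c,
                           int *out_r, int *out_c)
{
    int r = 0, col = cols - 1;
    while (r < rows && col >= 0) {
        int v = m[r * cols + col];
        if (v == c) { *out_r = r; *out_c = col; return 1; }
        if (v > c)
            col--;   /* everything below in this column is even larger */
        else
            r++;     /* everything to the left in this row is even smaller */
    }
    return 0;        /* not found */
}

Each comparison discards a full row or column, and this per-slice search is what the binary plane partition would invoke recursively in the 3D generalization.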
I'll give this answer in an effort to minimize the number of costs calculated. Matt Ko links to a good solution, but it assumes a cheap cost function and matrix-based data, neither of which you seem to have. The approach I give requires much closer to O(log N + k) calls to the cost function, where k is the number of points with the desired cost. Note that this algorithm, with some performance optimizations, could be made O(N) on a 3D matrix with little increase in cost-function calls, though it's a fair bit more complicated.
The pseudocode, which is based on techniques used in quickselect, looks like this:
While there are still points under consideration:

    Find the ideal pivot point and calculate its cost
    Remove the pivot from the point set

    If the cost is the desired cost then:
        Add the pivot to the solution set
    Else:
        Separate the points into 3 groups:
            G1. Those that are in the pivot's octant `VII`
            G2. Those that have the same x, y, or z as the pivot
            G3. Those that are not in the pivot's octant `VII`
        # Note this can be done in O(N)

        If all points are in group 2:
            Use 1D binary searches in each dimension to find points with the desired cost
        Else:
            Keep all points in group 2
            If the pivot cost is greater than desired:
                Keep only the points in group 1
            Else:
                Keep only the points in group 3
The pivot is selected based on the points inside and outside of its octant VII. Points on any of the 3 planes that bound the octants are dealt with later if needed (G2).
The ideal pivot point is one such that the numbers of points in group 1 (G1) and group 3 (G3) are as close to equal as possible. Mathematically, that means minimizing the larger of the two over the smaller: minimize( max(|G1|,|G3|) / min(|G1|,|G3|) ). Even a fairly naive algorithm looking for the ideal pivot point can find it in O(N^2) (an O(N log N) algorithm likely exists), but it takes O(N^3) to compute the cost of the ideal pivot after it's found.
After the ideal pivot is found and its cost computed, each iteration should see on average roughly half the remaining points discarded, which again results in only O(log N + k) calls to the cost function.
Final Note:
In retrospect, I'm not sure special consideration for group 2 is actually required, as it's probably covered by group 3, but I'm not 100% sure. However, separating it out doesn't seem to change the big-O, so I didn't see a need to change it, though doing so would simplify the algorithm slightly.
This is not an answer per se, just slightly generalized example C code.
(The code was too long to include verbatim.)
The basic implementation is in grid.h (pastebin link).
In it, I've tried to make a distinction between grid coordinates (0 ≤ x, y, z ≤ size-1) and cell coordinates (0 ≤ x, y, z ≤ size-2). In particular, note the span type. Each cell spans a range of values: either interpolated, or the discrete set of the samples at the eight corners of the cell. Because this example uses linear interpolation to determine where within each cell the isosurface intersects the edges or a diagonal, I assume continuous spans.
I didn't realize how important the cells' value spans are for the edge cases before I implemented this example code. That is why the OP and I discussed the edge cases in the comments to my other answer, and why the logic outlined in my other answer alone does not handle the edge cases correctly.
Since the OP's particular case is not that common/interesting, this example is much more generic (and therefore quite unoptimized for the OP's case). In fact, this example only requires that the function has no local minima or maxima (saddle points and constant regions are allowed); just one minimum and one maximum within the gridded region. The minimum and maximum do not need to be point-like; they can be continuous regions.
As such, at grid generation time, we do not know which cells contain the minimum and maximum. (In OP's case, the scalar field is monotonically non-decreasing and limited to the positive octant, so the minimum is at 0,0,0 and maximum at size-1,size-1,size-1.)
To find the minimum and maximum, I implemented two functions that start from the best corner in the grid (the one having the smallest or greatest sample value). grid_maximum_cell() walks non-decreasing cells, and grid_minimum_cell() walks non-increasing cells. Since the scalar field is sampled, we implicitly assume it is continuous. As long as there are no local maxima or minima where the walk might stop, the walk will reach the correct cell in relatively few samples. (This search could be optimized much further, though. Consider these two functions just as starting points for your own implementation. The OP does not need them at all, of course.)
(Actually, the requirement for the sampled scalar field is that each isosurface is continous, and that all isosurfaces intersect the line drawn from the minimum and maximum cells found using the above two functions.)
The function grid_isosurface() can be used to locate the cells the desired isosurface (field value) passes through. The last parameter is a function pointer. That function is called once for each cell the isosurface passes through. (Note the indexing order for the corner samples, [x][y][z].)
grid_isosurface() locates an initial cell the desired isosurface passes through using a binary search (on the line from the cell containing the minimum sample to the cell containing the maximum sample). It then traces the surface, using the flood-fill-like algorithm outlined in my answer.
For an example, grid.c (pastebin link) uses the above include file, to evaluate the scalar field
f(x, y, z) = x³ + y³ + z³ + x + y - 0.125·(x·y + x·z + y·z + x·y·z).
On my Linux machine, I compiled and ran the example using
gcc -Wall -std=c99 -Wno-unused -O2 grid.c -o isosurface
./isosurface 50 -1.0 1.0 0.0 > out-0.0
./isosurface 50 -1.0 1.0 0.5 > out-0.5
./isosurface 50 -1.0 1.0 1.0 > out-1.0
and used Gnuplot to plot out the three isosurfaces:
splot "out-0.0" u 1:2:3 notitle w dots, "out-0.5" u 1:2:3 notitle w dots, "out-1.0" u notitle w dots
which leads to this pretty nice point cloud (rotatable in Gnuplot):
When the grid is initially generated, 14 samples are taken to locate the maximum and minimum cells. Tracing the isosurfaces required additional 18024, 18199, and 16953 samples, respectively; note that much fewer samples are needed for the second and further isosurfaces, if you do them consecutively on the same grid.
The total grid above contains 51×51×51 = 132651 samples, so tracing one isosurface required about 13% of the grid points to be sampled. For a 101×101×101 grid, the samples needed drops down to about 7%; for a 201×201×201 grid, down to 3.5%; for a 501x501x501 grid, to 1.4% (1.7M out of 125.75M samples).
None of this code is optimized for OP's case, nor optimized in general. A sample cache is used to minimize the number of samples needed in general, but the grid_isosurface() isosurface walking function, and the initial grid_minimum_cell() and grid_maximum_cell() functions can be modified to require slightly fewer samples. For larger grids, I don't expect the optimizations to make much of a difference, but for very small grids and very slow functions to evaluate, it might be worthwhile.
If the intent is to generate a polygon mesh for each isosurface, I recommend generating each polygon in the callback function, not from the overall generated point cloud. Using the edge/diagonal intersections like in the above example program, you get all the vertices for the polygon spanning that cell (no caches or such are needed). All you need is to order the edge intersection points correctly.
Questions? Comments? Bug fixes? Suggestions?

How to calculate where bullet hits

I have been trying to write an FPS in C/X11/OpenGL, but the issue that I have encountered is with calculating where the bullet hits. I have used a horrible technique, and it only sometimes works:
pos size, p;
size.x = 0.1;
size.z = 0.1; // Since the game is technically top-down (but in a 3D perspective)
              // Positions are in X/Z, no Y
float f; // Counter
float d = FIRE_MAX + 1 /* Shortest distance */, d1 /* Distance being calculated */;
x = 0; // Index of object to hit

for (f = 0.0; f < FIRE_MAX; f += .01) {
    // Go forwards
    p.x = player->pos.x + f * sin(toRadians(player->rot.x));
    p.z = player->pos.z - f * cos(toRadians(player->rot.x));

    // Get all objects that collide with the current position of the bullet
    short* objs = _colDetectGetObjects(p, size, objects);
    for (i = 0; i < MAX_OBJECTS; i++) {
        if (objs[i] == -1) {
            continue;
        }
        // Check the distance between the object and the player
        d1 = sqrt(
                pow((objects[i].pos.x - player->pos.x), 2)
              + pow((objects[i].pos.z - player->pos.z), 2));
        // If it's closer, set it as the object to hit
        if (d1 < d) {
            x = i;
            d = d1;
        }
    }
    // If there was an object, hit it
    if (x > 0) {
        hit(&objects[x], FIRE_DAMAGE, explosions, currtime);
        break;
    }
}
It just works by making a for-loop and calculating any objects that might collide with where the bullet currently is. This, of course, is very slow, and sometimes doesn't even work.
What would be the preferred way to calculate where the bullet hits? I have thought of making a line and seeing if any objects collide with that line, but I have no idea how to do that kind of collision detection.
EDIT: I guess my question is this: How do I calculate the nearest object colliding in a line (that might not be a straight 45/90 degree angle)? Or are there any simpler methods of calculating where the bullet hits? The bullet is sort of like a laser, in the sense that gravity does not affect it (writing an old-school game, so I don't want it to be too realistic)
For every object you want to be hit-able, define a bounding object.
Simple examples would be a sphere or a box.
Then you have to implement a ray-sphere or ray-box intersection.
For example, have a look at line-sphere intersection.
For boxes, you can either test against the four bounding lines, or use one of the algorithms optimised for axis-aligned boxes.
With this, proceed as you already do: for every object in the scene, check for intersection; if it intersects, compare the distance to previously intersected objects and take the one that is hit first.
The intersection algorithms give you the ray parameter as a result (the value t for which hit_position = ray_origin + t * ray_direction), which you can use to compare the distances.
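To make this concrete, here is a minimal ray-sphere test sketched in C++ (the Vec3 type and the raySphere name are my own, not from any particular library; it assumes the ray direction is normalized). It returns the ray parameter t described above, or a negative value when the sphere is missed or lies behind the origin:

#include <cmath>

struct Vec3 { float x, y, z; };

static Vec3 sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Solve |origin + t*dir - center|^2 = radius^2 for the smallest t >= 0.
float raySphere(Vec3 origin, Vec3 dir, Vec3 center, float radius) {
    Vec3 oc = sub(origin, center);
    float b = dot(oc, dir);                 // half the linear coefficient
    float c = dot(oc, oc) - radius * radius;
    float disc = b * b - c;                 // quarter of the discriminant
    if (disc < 0.0f) return -1.0f;          // ray misses the sphere
    float s = std::sqrt(disc);
    float t = -b - s;                       // near intersection
    if (t < 0.0f) t = -b + s;               // origin is inside the sphere
    return t;
}

You would call something like this once per hit-able object with the bullet's origin and direction, keep the smallest non-negative t, and apply the hit to that object.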
You can organise all scene objects in a BSP tree; hit/collision detection will then be pretty easy to implement. You can also use the BSP tree to detect invisible objects and discard them before rendering.

Finding the squares in a plane given n points

Given n points in a plane, how many squares can be formed?
I tried calculating the distances between each pair of points, sorting them, and looking for squares among the points with four or more equal distances, after verifying the points and slopes.
But this looks like an approach with very high complexity. Any other ideas?
I thought dynamic programming for checking line segments of equal length might work, but I could not get the idea quite right.
Any better ideas?
P.S.: The squares can be positioned in any manner: they can overlap, share a common side, or lie one inside another.
If possible, please give some sample code to perform the above.
Let d[i][j] = the distance between points i and j. We are interested in a function count(i, j) that returns, as fast as possible, the number of squares that we can draw by using points i and j.
Basically, count(i, j) will have to find two points x and y such that d[i][j] = d[x][y] and check whether these 4 points really define a square.
You can use a hash table to solve the problem in O(n^2) on average. Let H[x] = the list of all point pairs (p, q) that have d[p][q] = x.
Now, for each pair of points (i, j), count(i, j) will have to iterate over H[ d[i][j] ] and count the pairs in that list that form a square with points i and j.
This should run very fast in practice, and I don't think it can ever get worse than O(n^3) (I'm not even sure it can ever get that bad).
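As a rough illustration of that idea, here is a hedged C++ sketch (the names Pt, dist2, oppositeSides, and countSquares are mine; it assumes integer coordinates and no duplicate points). It buckets all point pairs by squared distance, then tests pairs from the same bucket for being opposite sides of a square:

#include <unordered_map>
#include <utility>
#include <vector>

struct Pt { long long x, y; };

static long long dist2(const Pt& a, const Pt& b) {
    long long dx = a.x - b.x, dy = a.y - b.y;
    return dx * dx + dy * dy;   // squared distance avoids floating point
}

// True if segments IJ and XY are opposite sides of a square.
static bool oppositeSides(Pt I, Pt J, Pt X, Pt Y) {
    long long dx = J.x - I.x, dy = J.y - I.y;
    const Pt offs[2] = { { -dy, dx }, { dy, -dx } };  // the two perpendiculars
    for (const Pt& o : offs) {
        Pt u = { I.x + o.x, I.y + o.y };  // where X would have to be
        Pt v = { J.x + o.x, J.y + o.y };  // where Y would have to be
        if ((u.x == X.x && u.y == X.y && v.x == Y.x && v.y == Y.y) ||
            (u.x == Y.x && u.y == Y.y && v.x == X.x && v.y == X.y))
            return true;
    }
    return false;
}

long long countSquares(const std::vector<Pt>& p) {
    // H[s]: all index pairs at squared distance s.
    std::unordered_map<long long, std::vector<std::pair<int, int>>> H;
    int n = (int)p.size();
    for (int i = 0; i < n; i++)
        for (int j = i + 1; j < n; j++)
            H[dist2(p[i], p[j])].push_back({ i, j });

    long long found = 0;
    for (const auto& bucket : H) {
        const std::vector<std::pair<int, int>>& v = bucket.second;
        for (size_t a = 0; a < v.size(); a++)
            for (size_t b = a + 1; b < v.size(); b++)
                if (oppositeSides(p[v[a].first], p[v[a].second],
                                  p[v[b].first], p[v[b].second]))
                    found++;
    }
    return found / 2;  // every square has two pairs of opposite sides
}

The division by 2 is because each square shows up once as the pair {AB, CD} and again as {BC, DA}. As noted above, a bucket with many equidistant pairs is what pushes the worst case toward O(n^3).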
This problem can be solved in O(n^1.5) time with O(n) space.
The basic idea is to group the points by X or Y coordinate, being careful to avoid making groups that are too large. The details are in the paper Finding squares and rectangles in sets of points. The paper also covers lots of other cases (allowing rotated squares, allowing rectangles, and working in higher dimensions).
I've paraphrased their 2D axis-aligned square-finding algorithm below. Note that I changed their tree set to a hash set, which is why the time bound I gave is not O(n^1.5 log(n)):
Make a hash set of all the points. Something you can use to quickly check if a point is present.
Group the points by their X coordinate. Break any groups with more than sqrt(n) points apart, and re-group those now-free points by their Y coordinate. This guarantees the groups have at most sqrt(n) points and guarantees that for each square there's a group that has two of the square's corner points.
For every group g, for every pair of points p,q in g, check whether the other two points of the two possible squares containing p and q are present. Keep track of how many you find. Watch out for duplicates (are the two opposite points also in a group?).
Why does it work? Well, the only tricky thing is the regrouping. If the left or right column of a square is in a group that is not too large, the square will be found when that column group is iterated. Otherwise both its top-left and top-right corners get regrouped and placed into the same row group, and the square will be found when that row group is iterated.
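Here is a simplified C++ sketch of the procedure above (names are mine; it keeps the hash-set lookup but skips the sqrt(n) group-splitting step, so its worst case is O(n^2) rather than O(n^1.5)). Because each axis-aligned square is only ever checked through its left edge, this reduced form needs no duplicate handling:

#include <cstdint>
#include <unordered_map>
#include <unordered_set>
#include <vector>

struct Pt { int x, y; };

// Pack a point into one 64-bit key for the hash set.
static uint64_t key(int x, int y) {
    return ((uint64_t)(uint32_t)x << 32) | (uint32_t)y;
}

long long countAxisAlignedSquares(const std::vector<Pt>& pts) {
    std::unordered_set<uint64_t> present;              // step 1: point lookup
    std::unordered_map<int, std::vector<int>> byX;     // step 2: column groups
    for (const Pt& p : pts) {
        present.insert(key(p.x, p.y));
        byX[p.x].push_back(p.y);
    }
    long long count = 0;
    for (const auto& g : byX) {                        // step 3: pairs per group
        int x = g.first;
        const std::vector<int>& ys = g.second;
        for (size_t i = 0; i < ys.size(); i++) {
            for (size_t j = i + 1; j < ys.size(); j++) {
                int d = ys[j] - ys[i];
                if (d < 0) d = -d;                     // side length
                if (d == 0) continue;                  // duplicate point
                // Only look to the right, so each square is found exactly
                // once, via its left edge.
                if (present.count(key(x + d, ys[i])) &&
                    present.count(key(x + d, ys[j])))
                    count++;
            }
        }
    }
    return count;
}

Restoring the splitting step caps every group at sqrt(n) points, which is where the paper's O(n^1.5) bound comes from.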
I have an O(N^2) time, O(N) space solution:
Assume the given points are an array of Point objects, each with x and y coordinates.
First, iterate through the array and add each item to a HashSet: this de-duplicates the points and gives us O(1) access time. The whole pass takes O(N) time.
Using some math: say vertices A, B, C, D can form a square and the diagonal AC is known; then the corresponding B and D are unique. We can write a function to calculate them in O(1) time.
Now let's get back to the problem: write a for-i loop with a for-j inner loop. Say input[i] and input[j] form a diagonal; compute the anti-diagonal points and check whether both exist in the set. If they do, increment the counter. This takes O(N^2) time. (Each square is found once per ordered diagonal, i.e. four times in total, so the final count must be divided by 4.)
My code in C#:
public int SquareCount(Point[] input)
{
    int count = 0;
    // Assumes Point implements value equality for hashing.
    HashSet<Point> set = new HashSet<Point>();
    foreach (var point in input)
        set.Add(point);
    for (int i = 0; i < input.Length; i++)
    {
        for (int j = 0; j < input.Length; j++)
        {
            if (i == j)
                continue;
            // Treat (input[i], input[j]) as a diagonal and check whether
            // the other two vertices b and d exist in the set.
            Point[] diagVertex = GetRestPoints(input[i], input[j]);
            if (set.Contains(diagVertex[0]) && set.Contains(diagVertex[1]))
            {
                count++;
            }
        }
    }
    // Each square is found once per ordered diagonal (AC, CA, BD and DB),
    // i.e. four times, so divide accordingly.
    return count / 4;
}
public Point[] GetRestPoints(Point a, Point c)
{
    Point[] res = new Point[2];
    // Midpoint of the diagonal. Integer division truncates, so strictly
    // this is only exact when a and c have even coordinate sums.
    int midX = (a.x + c.x) / 2;
    int midY = (a.y + c.y) / 2;
    // Rotate a by 90 degrees around the midpoint to get b.
    int ax = a.x - midX;
    int ay = a.y - midY;
    Point b = new Point(midX - ay, midY + ax);
    // Rotate c by 90 degrees around the midpoint to get d.
    int cx = c.x - midX;
    int cy = c.y - midY;
    Point d = new Point(midX - cy, midY + cx);
    res[0] = b;
    res[1] = d;
    return res;
}
It looks like O(n^3) to me. A simple algorithm might be something like:
for each pair of points
    for each of the 3 possible squares which might be formed from these two points
        test the remaining points to see if they coincide with the other two vertices
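A hedged C++ sketch of that idea (names are mine; it assumes integer coordinates, no duplicate points, and values small enough not to overflow). For each pair it checks the two squares that use the pair as a side and the one square that uses it as a diagonal, then corrects for over-counting:

#include <cstdint>
#include <unordered_set>
#include <vector>

struct Pt { int x, y; };

static uint64_t key(long long x, long long y) {
    return ((uint64_t)(uint32_t)x << 32) | (uint32_t)y;
}

long long countAllSquares(const std::vector<Pt>& pts) {
    std::unordered_set<uint64_t> present;
    for (const Pt& p : pts) present.insert(key(p.x, p.y));

    long long found = 0;
    int n = (int)pts.size();
    for (int i = 0; i < n; i++) {
        for (int j = i + 1; j < n; j++) {
            long long ax = pts[i].x, ay = pts[i].y;
            long long bx = pts[j].x, by = pts[j].y;
            long long dx = bx - ax, dy = by - ay;
            // Squares with (i,j) as one side: shift both endpoints by the
            // perpendicular (-dy, dx), on either side of the segment.
            if (present.count(key(ax - dy, ay + dx)) &&
                present.count(key(bx - dy, by + dx))) found++;
            if (present.count(key(ax + dy, ay - dx)) &&
                present.count(key(bx + dy, by - dx))) found++;
            // Square with (i,j) as a diagonal: compute the other two corners
            // in doubled coordinates so everything stays integral.
            long long px2 = ax + bx - (ay - by), py2 = ay + by + (ax - bx);
            long long qx2 = ax + bx + (ay - by), qy2 = ay + by - (ax - bx);
            if (px2 % 2 == 0 && py2 % 2 == 0 &&
                present.count(key(px2 / 2, py2 / 2)) &&
                present.count(key(qx2 / 2, qy2 / 2)))
                found++;
        }
    }
    // Each square is seen 4 times as a side pair and 2 times as a diagonal.
    return found / 6;
}

With the hash-set membership tests, the "test the remaining points" step becomes O(1), so the whole count runs in O(n^2) rather than O(n^3).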
Runtime: O(n·log(n)^2), Space: Θ(n), where n is the number of points.
For each point p:
    Add it to the existing arrays, sorted by x and y coordinate respectively.
    For every pair of points that collide with p on the x- and y-axes respectively:
        If there exists another point on the opposite side of p that completes the square, increment the square count by one.
The intuition is to count how many squares each new point creates. All squares are created upon the insertion of their fourth point. A new point creates a new square if it has colliding points on the concerned axes and the "fourth" point that completes the square exists on the opposite side. This exhausts all possible distinct squares.
The insertion into the arrays can be done with binary search, and checking for the opposite point can be done via a hashtable keyed on the points' coordinates.
This algorithm is optimal for sparse points, since there will be very few colliding points to check. It is worst for dense grids of points, for the opposite reason.
This algorithm can be further optimized by tracking whether points in one axis array also have a collision on the complementary axis.
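A rough C++ sketch of this incremental scheme (my own naming; it assumes axis-aligned squares and distinct integer points, and substitutes per-column lists plus a hash set for the sorted arrays to keep the sketch short):

#include <cstdint>
#include <unordered_map>
#include <unordered_set>
#include <vector>

struct Pt { int x, y; };

static uint64_t key(int x, int y) {
    return ((uint64_t)(uint32_t)x << 32) | (uint32_t)y;
}

// Feed points one at a time; each square is counted exactly once,
// at the moment its fourth corner arrives.
struct SquareCounter {
    std::unordered_set<uint64_t> present;
    std::unordered_map<int, std::vector<int>> column;  // x -> ys seen so far
    long long count = 0;

    void add(const Pt& p) {
        for (int yq : column[p.x]) {        // column-mates of p
            int d = p.y - yq;               // candidate side length (signed)
            int sides[2] = { d, -d };       // square to the right or the left
            for (int s : sides) {
                if (present.count(key(p.x + s, p.y)) &&
                    present.count(key(p.x + s, yq)))
                    count++;
            }
        }
        column[p.x].push_back(p.y);
        present.insert(key(p.x, p.y));
    }
};

Each square is counted exactly once because, when its last corner arrives, that corner's unique column-mate fixes both the side length and the side on which the other two corners must already exist.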
Just a thought: if vertex A is one corner of a square, then there must be vertices B, C, D at the other corners, with AB = AD, AC = sqrt(2)·AB, and AC bisecting BD. Assuming every vertex has unique coordinates, I think you can solve this in O(n^2) with a hash table keyed on (distance, angle).
This is just an example implementation in Java - any comments welcome.
import java.util.Arrays;
import java.util.NoSuchElementException;
import java.util.Map;
import java.util.HashMap;
import java.util.List;
import java.util.ArrayList;

public class SweepingLine {

    public static void main(String[] args) {
        Point[] points = {
            new Point(1,1),
            new Point(1,4),
            new Point(4,1),
            new Point(4,4),
            new Point(7,1),
            new Point(7,4)
        };
        // Grid bound over both coordinates (assumes non-negative points).
        int max = Arrays.stream(points)
                .mapToInt(p -> Math.max(p.x, p.y))
                .max().orElseThrow(NoSuchElementException::new);
        int count = countSquares(points, max);
        System.out.println(String.format("Found %d squares in %d x %d plane", count, max, max));
    }

    // Counts axis-aligned squares by sweeping the plane column by column.
    private static int countSquares(Point[] points, int max) {
        int count = 0;
        Map<Integer, List<Integer>> map = new HashMap<>();
        for (int x = 0; x <= max; x++) {
            for (int y = 0; y <= max; y++) {
                for (Point p : points) {
                    if (p.x == x && p.y == y) {
                        List<Integer> ys = map.computeIfAbsent(x, _u -> new ArrayList<Integer>());
                        // Pair the new point with every earlier point in the same
                        // column; each such pair is the left edge of at most one
                        // square to the right, so no square is counted twice.
                        for (Integer ley : ys) {
                            int d = y - ley; // candidate side length
                            boolean upperRight = false, lowerRight = false;
                            for (Point p2 : points) {
                                if (p2.x == x + d && p2.y == y) upperRight = true;
                                if (p2.x == x + d && p2.y == ley) lowerRight = true;
                            }
                            if (upperRight && lowerRight) {
                                count++;
                            }
                        }
                        ys.add(y);
                    }
                }
            }
        }
        return count;
    }

    private static class Point {
        public final int x;
        public final int y;

        public Point(int x, int y) {
            this.x = x;
            this.y = y;
        }
    }
}
Here is a complete implementation of finding the diagonal points in C++!
Given points a and c, it returns b and d, which lie on the opposite diagonal.
If b or d are not integer points, they are discarded (optional).
To find all squares generated by n points, you can check out this C++ implementation.
The idea is credited to Kevman. Hope it can help!
#include <cmath>
#include <vector>

using std::vector;

// Given diagonal corners a and c, return the other diagonal {b, d},
// or an empty result if b or d is not an integer point.
vector<vector<int>> createDiag(vector<int>& a, vector<int>& c){
    double midX = (a[0] + c[0]) / 2.0;
    double midY = (a[1] + c[1]) / 2.0;
    // Rotate a and c by 90 degrees around the midpoint of the diagonal.
    double bx = midX - (a[1] - midY);
    double by = midY + (a[0] - midX);
    double dx = midX - (c[1] - midY);
    double dy = midY + (c[0] - midX);
    // Discard the non-integer points.
    double intpart;
    if (std::modf(bx, &intpart) != 0 || std::modf(by, &intpart) != 0 ||
        std::modf(dx, &intpart) != 0 || std::modf(dy, &intpart) != 0) {
        return {{}};
    }
    return {{(int)bx, (int)by}, {(int)dx, (int)dy}};
}
