// Arg0 - Map, Arg1 - X, Arg2 - Y, Arg3 - Direction, Arg4 - MaxDistance
var xx, yy, dist, x1, y1, dir, maxdist, obj, res, map;
map = argument0
x1 = argument1
y1 = argument2
dir = argument3
maxdist = argument4
dist = 0
do {
    dist += 1
    xx = x1 + round(lengthdir_x(dist, dir))
    yy = y1 + round(lengthdir_y(dist, dir))
} until (block_isSolid(map_get_block(map, xx, yy)) or dist > maxdist)
if !block_isSolid(map_get_block(map, xx, yy)) {
    return false
} else {
    res = ds_list_create()
    ds_list_add(res, xx)
    ds_list_add(res, yy)
    return res
}
There's the function. lengthdir_x/y is essentially cos/sin(dir) * dist.
Don't yell at me for putting the C tag on there. The languages are very, very similar, to the point where I could almost copy this straight in.
Right, formalities done:
The current algorithm will sometimes step diagonally (both x and y change by one in the same iteration), but I wish it not to do this.
E.g.:
Current (where x is the cast ray):
xoooo
oxooo
ooxoo
oooxo
oooox
Wanted:
xxooo
oxxoo
ooxxo
oooxx
oooox
Make sense?
Please help.
delta is a float: the x-offset of a virtual second "ray" (it should be around 1.0f - 2.0f; just experiment).
Delta should not be less than the size of a single pixel in the map.
do {
    dist += 1
    xx = x1 + round(lengthdir_x(dist, dir))
    yy = y1 + round(lengthdir_y(dist, dir))
} until (block_isSolid(map_get_block(map, xx, yy)) || block_isSolid(map_get_block(map, xx + delta, yy)) or dist > maxdist)
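If you want to guarantee that the walk never takes a diagonal step at all, another option is a 4-connected grid walk (an Amanatides & Woo style DDA) that advances exactly one axis per iteration, whichever axis the ray crosses next. Here is only a rough C++ sketch of the idea rather than GML; blockIsSolid is a hypothetical stand-in for the question's block_isSolid(map_get_block(...)):

#include <cmath>
#include <limits>

// Hypothetical stand-in for block_isSolid(map_get_block(...)) --
// replace it with the real map lookup.
static bool blockIsSolid(int x, int y)
{
    return x == 6 && y == 3; // pretend a single solid block sits here
}

// Walk the grid one cell per step (never diagonally), roughly along dirDegrees,
// for at most maxSteps cells. Returns true and reports the hit cell if a solid
// block is found.
bool castRay4Connected(int x0, int y0, double dirDegrees, int maxSteps,
                       int& hitX, int& hitY)
{
    const double inf = std::numeric_limits<double>::infinity();
    const double rad = dirDegrees * 3.14159265358979323846 / 180.0;
    const double dx = std::cos(rad);
    const double dy = -std::sin(rad);             // screen y grows downward
    const int stepX = (dx > 0.0) - (dx < 0.0);
    const int stepY = (dy > 0.0) - (dy < 0.0);
    // Distance along the ray to the next vertical/horizontal cell boundary,
    // and how much that distance grows per cell crossed on that axis.
    double tMaxX = (dx != 0.0) ? 0.5 / std::fabs(dx) : inf;
    double tMaxY = (dy != 0.0) ? 0.5 / std::fabs(dy) : inf;
    const double tDeltaX = (dx != 0.0) ? 1.0 / std::fabs(dx) : inf;
    const double tDeltaY = (dy != 0.0) ? 1.0 / std::fabs(dy) : inf;

    int x = x0, y = y0;
    for (int i = 0; i < maxSteps; ++i) {
        // Advance whichever axis reaches its next boundary first.
        if (tMaxX < tMaxY) { x += stepX; tMaxX += tDeltaX; }
        else               { y += stepY; tMaxY += tDeltaY; }
        if (blockIsSolid(x, y)) { hitX = x; hitY = y; return true; }
    }
    return false; // nothing solid within maxSteps cells
}

This walks exactly the "Wanted" staircase pattern from the question, because only one of x and y ever changes per step.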
Related
I'm making a mode 7/perspective projection project in Godot. When I run it, it produces the expected effect, displaying a 2d image as if it were a 3d plane.
Code:
func _ready():
    map.load("res://map2.png")
    perspective.load("res://map2.png")
    for px in 1:
        self.texture = display
        for y in map.get_height():
            _y = (y + py - 1) / z
            for x in map.get_width():
                _x = (x + px) / z
                map.lock()
                pix = map.get_pixel(_x, _y)
                map.unlock()
                perspective.lock()
                perspective.set_pixel(x, y, pix)
                perspective.unlock()
        display.create_from_image(perspective)
        z += 1
Image:
However, I have a problem. I have the code in the ready function, inside a for loop. I want it to run every frame, but when I increase the number of repeats from one to two, the entire image turns red. I don't know what's causing this. One guess was that I wasn't locking and unlocking the images properly, but that is most likely not the case. Another guess was that the x and y variables were not resetting each time, but that was also working fine. I don't think the loop itself is the problem, but I have no idea what's wrong.
I struggled to make your code run. I halfway gave up and implemented the logic from my prior answer using lock bits instead. This is the code:
extends Sprite
export(Transform) var matrix:Transform
var sampler:Image
var buffer:Image
var size:Vector2
var center:Vector2
func _ready():
    sampler = texture.get_data()
    var err = sampler.decompress()
    if err != OK:
        push_error("Failed to decompress texture")
        return

    size = Vector2(texture.get_width(), texture.get_height())
    center = size * 0.5
    buffer = Image.new()
    buffer.create(int(size.x), int(size.y), false, Image.FORMAT_RGBA8)

func _process(_delta):
    #matrix = matrix.rotated(Vector3.RIGHT, 0.01)
    sampler.lock()
    buffer.lock()
    for y in size.y:
        for x in size.x:
            var uv:Vector3 = matrix * Vector3(x - center.x, y - center.y, 1.0)
            if uv.z <= 0.0:
                buffer.set_pixel(x, y, Color.transparent)
                continue
            var _x = (uv.x / uv.z) + center.x
            var _y = (uv.y / uv.z) + center.y
            if _x < 0.0 or _x >= size.x or _y < 0.0 or _y >= size.y:
                buffer.set_pixel(x, y, Color.transparent)
                continue
            #buffer.set_pixel(x, y, Color(_x / size.x, y / size.y, 0.0))
            buffer.set_pixel(x, y, sampler.get_pixel(_x, _y))
    buffer.unlock()
    sampler.unlock()
    var display = ImageTexture.new()
    display.create_from_image(buffer, 0)
    self.texture = display
As you can see, I'm exporting a Transform to be available in the editor. That is a proper 3D Transform. There is a commented-out line in _process that does a rotation; try it out.
The sampler Image is a copy of the Texture of the Sprite (the copy is made in _ready), and the buffer Image is where the frame to be displayed is constructed.
The code creates an ImageTexture from buffer and replaces the current texture with it each frame (in _process). I'm setting flags to 0, because FLAG_REPEAT plus FLAG_FILTER blurred the border into the opposite side of the Sprite.
The Vector2 size holds the size of the texture, and the Vector2 center holds the coordinates of its center.
As I said at the start, this is the logic from my prior answer. This line:
vec3 uv = matrix * vec3(UV - 0.5, 1.0);
Is equivalent to (except I'm not scaling the coordinates to the range from 0 to 1):
var uv:Vector3 = matrix * Vector3(x - center.x, y - center.y, 1.0)
Then I had this line:
if (uv.z < 0.0) discard;
Which turned out like this:
if uv.z <= 0.0:
    buffer.set_pixel(x, y, Color.transparent)
    continue
I'm setting transparent because I do not recreate the buffer, nor clear it before hand.
Finally this line:
COLOR = texture(TEXTURE, (uv.xy / uv.z) + 0.5);
Turned out like this:
var _x = (uv.x / uv.z) + center.x
var _y = (uv.y / uv.z) + center.y
if _x < 0.0 or _x >= size.x or _y < 0.0 or _y >= size.y:
    buffer.set_pixel(x, y, Color.transparent)
    continue
buffer.set_pixel(x, y, sampler.get_pixel(_x, _y))
As per the result, here is the Godot Icon "rotating in 3D" (not really, but that is the idea):
Please disregard visual artifact due to GIF encoding.
I'm not sure if you want to stay with the logic of my prior answer. However, I believe this one should not be too hard to modify for your needs.
Addendum
I used a Transform because there is no convenient Matrix type available. However, the Transform uses a Matrix internally. See also Transformation matrix.
The Mode 7 formula according to Wikipedia works with a 2 by 2 Matrix, which is simpler than what I have here. However, you are going to need the product of a Matrix and a Vector anyway. You cannot compute the components independently.
This is the formula according to Wikipedia:
r' = M*(r - r_0) + r_0
That is:
var rp = mult(M, r - r_0) + r_0
Where mult would look like this:
func mult(matrix, vector:Vector2) -> Vector2:
    var x = vector.x * matrix.a + vector.y * matrix.b
    var y = vector.x * matrix.c + vector.y * matrix.d
    return Vector2(x, y)
However, as I said, there is no convenient matrix type. If we export a, b, c, and d, we have:
var rp = mult(a, b, c, d, r - r_0) + r_0
And mult looks like this:
func mult(a:float, b:float, c:float, d:float, vector:Vector2) -> Vector2:
    var x = vector.x * a + vector.y * b
    var y = vector.x * c + vector.y * d
    return Vector2(x, y)
We can easily modify the code to do that. First, export a, b, c and d as I said:
export(float) var a:float
export(float) var b:float
export(float) var c:float
export(float) var d:float
And this is _process modified:
func _process(_delta):
    sampler.lock()
    buffer.lock()
    for y in size.y:
        for x in size.x:
            var rp = mult(a, b, c, d, Vector2(x, y) - center) + center
            if rp.x < 0.0 or rp.x >= size.x or rp.y < 0.0 or rp.y >= size.y:
                buffer.set_pixel(x, y, Color.transparent)
                continue
            buffer.set_pixel(x, y, sampler.get_pixel(rp.x, rp.y))
    buffer.unlock()
    sampler.unlock()
    var display = ImageTexture.new()
    display.create_from_image(buffer, 6)
    self.texture = display
Of course, mult is the one I showed above. I'm assuming here that r_0 is what I called center.
I'm not sure how to interpret a, b, c and d, so here is a = 1, b = 2, c = 3 and d = 4:
So I have a program, and I am trying to simulate tons of moving particles with intricate movement logic that I would not want running on the GPU for many reasons. Of course I am then going to draw all of this on the GPU.
Now originally I thought that when simulating TONS of particles, GPU latency would be the problem, not the CPU. Unfortunately I am running 500 particles at a whopping 6 fps :(.
I have tracked the latency down to how I send the vertices to the particle simulator - and not even the buffer creation, simply how I build the arrays. Basically I have arrays that I clear every frame, and then, for each particle in an array of particles, I build up arrays for each of them. This leads to around 17500 append calls (with 500 particles). So I need a different way to do this, because without building these arrays it runs at 60 fps with no CPU latency. Most of these append calls pass in a member of a struct.
Currently each particle is based on a class object, and it has things like position and color that are stored in structs. Would it be worth my while to switch the structs to arrays? Or perhaps I should switch everything to arrays? Obviously doing any of that would make things much harder to program, but would it be worth it?
A big problem is that I need each particle to be drawn as a capsule, which I would make out of two dots and a thick line. Unfortunately OpenGL ES 2.0 doesn't support thick lines, so I have to draw it with two dots and two triangles :(. As you can see, the function "calculateSquare" makes these two triangles based off of the two points. It is also very laggy; however, it isn't the only problem, and I will try to find a different way later.
What are your thoughts?
Note: According to Xcode, RAM usage is only at 10 MB. However, the CPU frame time is 141 ms.
Here is the code BTW:
func buildParticleArrays()
{
    lineStrip = []
    lineStripColors = []
    lineStripsize = []
    s_vertes = []
    s_color = []
    s_size = []
    for cparticle in particles
    {
        let pp = cparticle.lastPosition
        let np = cparticle.position
        if (cparticle.frozen == true)
        {
            addPoint(cparticle.position, color: cparticle.color, size: cparticle.size)
        }
        else
        {
            let s = cparticle.size / 2.0
            //Add point merely adds the data in array format
            addPoint(cparticle.position, color: cparticle.color, size: cparticle.size)
            addPoint(cparticle.lastPosition, color: cparticle.color, size: cparticle.size)
            lineStrip += calculateSquare(pp, pp2: np, size: s)
            for var i = 0; i < 6; i++
            {
                let rgb = hsvtorgb(cparticle.color)
                lineStripColors.append(GLfloat(rgb.r))
                lineStripColors.append(GLfloat(rgb.g))
                lineStripColors.append(GLfloat(rgb.b))
                lineStripColors.append(GLfloat(rgb.a))
                lineStripsize.append(GLfloat(cparticle.size))
            }
        }
    }
}

func addPoint(theObject: point, color: colorhsv, size: CGFloat)
{
    let rgb = hsvtorgb(color)
    s_vertes += [GLfloat(theObject.x), GLfloat(theObject.y), GLfloat(theObject.z)]
    s_color += [GLfloat(rgb.r), GLfloat(rgb.g), GLfloat(rgb.b), GLfloat(rgb.a)]
    s_size.append(GLfloat(size))
}
func calculateSquare(pp1: point, pp2: point, size: CGFloat) -> [GLfloat]
{
    let p1 = pp1
    var p2 = pp2
    var s1 = point()
    var s2 = point()
    let center = CGPointMake((p1.x + p2.x) / 2.0, (p1.y + p2.y) / 2.0)
    var angle:CGFloat = 0
    if ((p1.x == p2.x) && (p1.y == p2.y))
    {
        //They are on top of each other
        angle = CGFloat(M_PI) / 2.0
        p2.x += 0.0001
        p2.y += 0.0001
    }
    else
    {
        if (p1.x == p2.x)
        {
            //UH OH x axis is equal
            if (p1.y < p2.y)
            {
                //RESULT: p1 is lower so should be first
                s1 = p1
                s2 = p2
            }
            else
            {
                //RESULT: p2 is lower and should be first
                s1 = p2
                s2 = p1
            }
        }
        else
        {
            //We could be all good
            if (p1.y == p2.y)
            {
                //Uh oh y axis is equal
                if (p1.x < p2.x)
                {
                    //RESULT: p1 is left so should be first
                    s1 = p1
                    s2 = p2
                }
                else
                {
                    //RESULT: p2 is to the right so should be first
                    s1 = p2
                    s2 = p1
                }
            }
            else
            {
                //Phew, everything is ok
                if ((p1.x < p2.x) && (p1.y < p2.y)) //First point is left and below
                {
                    //P1 should be first
                    s1 = p1
                    s2 = p2
                }
                else //First point is right and top
                {
                    //P2 should be first
                    s1 = p2
                    s2 = p1
                }
            }
        }
        angle = angle2p(s1, p2: s2)
    }
    if (angle < 0)
    {
        angle += CGFloat(M_PI) * 2.0
    }
    let yh = size / 2.0
    let distance = dist(p1, p2: p2)
    let xh = distance / 2.0
    let tl = rotateVector(CGPointMake(-xh, yh), angle: angle) + center
    let tr = rotateVector(CGPointMake(xh, yh), angle: angle) + center
    let bl = rotateVector(CGPointMake(-xh, -yh), angle: angle) + center
    let br = rotateVector(CGPointMake(xh, -yh), angle: angle) + center
    let c1:[GLfloat] = [GLfloat(bl.x), GLfloat(bl.y), 0]
    let c2:[GLfloat] = [GLfloat(tl.x), GLfloat(tl.y), 0]
    let c3:[GLfloat] = [GLfloat(br.x), GLfloat(br.y), 0]
    let c4:[GLfloat] = [GLfloat(tr.x), GLfloat(tr.y), 0]
    let part1 = c1 + c2 + c3
    let part2 = c2 + c3 + c4
    return part1 + part2
}
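A side note on calculateSquare: all of the angle branching and trigonometry can be avoided by offsetting both endpoints along the segment's unit normal by half the thickness. This is only a rough C++ sketch of that idea (the struct and names are made up, not the original Swift):

#include <array>
#include <cmath>

struct P2 { float x, y; };

// Build the thick-line quad as two triangles (a,b,c) and (b,c,d), interleaved
// as x,y,z floats like calculateSquare's output.
std::array<float, 18> quadBetween(P2 p1, P2 p2, float thickness)
{
    float dx = p2.x - p1.x, dy = p2.y - p1.y;
    float len = std::sqrt(dx * dx + dy * dy);
    if (len < 1e-6f) { dx = 1.0f; dy = 0.0f; len = 1.0f; } // degenerate segment: pick any direction
    const float h  = thickness * 0.5f;
    const float nx = -dy / len * h;                        // unit normal scaled by half thickness
    const float ny =  dx / len * h;
    const P2 a{p1.x + nx, p1.y + ny}, b{p1.x - nx, p1.y - ny};
    const P2 c{p2.x + nx, p2.y + ny}, d{p2.x - nx, p2.y - ny};
    return { a.x, a.y, 0,  b.x, b.y, 0,  c.x, c.y, 0,
             b.x, b.y, 0,  c.x, c.y, 0,  d.x, d.y, 0 };
}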
Do you really need all particles in system RAM? e.g. for some physics collision calculation in relation to other objects in the scene? Otherwise you could just create one particle, send it to the GPU and do the calculations in a GPU shader.
OK, so after hours of tweaking the code for small bits of efficiency, I have it running 500 particles at 28 fps, which looks pretty smooth! I still have some ways to go. The best piece of advice had to do with allocating memory up front instead of appending. That saved me tons of problems.
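For reference, the general pattern behind the "allocate instead of append" advice looks roughly like this (a C++ sketch rather than the original Swift; the struct and function names are made up): reserve the final capacity once per frame, then fill the array.

#include <vector>

struct Vertex { float x, y, z; };

// Build the per-frame position array with a single allocation.
std::vector<float> buildPositions(const std::vector<Vertex>& particles)
{
    std::vector<float> out;
    out.reserve(particles.size() * 3); // one allocation instead of thousands of grows
    for (const Vertex& p : particles) {
        out.push_back(p.x);
        out.push_back(p.y);
        out.push_back(p.z);
    }
    return out;
}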
Special thanks to #Darko, #Marcelo_Cantos for coming up with the ideas that would ultimately optimize my code!
I'm trying to generate Craig interpolants using the C API but I get incorrect results.
However, when I dump the same problem to a file via Z3_write_interpolation_problem and call iZ3 I get the expected interpolant.
I attach the code to be able to reproduce the same results.
I'm using z3 4.1
#include<stdio.h>
#include<stdlib.h>
#include<assert.h>
#include<stdarg.h>
#include<memory.h>
#include<setjmp.h>
#include<iz3.h>
Z3_ast mk_var(Z3_context ctx, const char * name, Z3_sort ty)
{
    Z3_symbol s = Z3_mk_string_symbol(ctx, name);
    return Z3_mk_const(ctx, s, ty);
}

Z3_ast mk_int_var(Z3_context ctx, const char * name)
{
    Z3_sort ty = Z3_mk_int_sort(ctx);
    return mk_var(ctx, name, ty);
}

void interpolation_1(){
    // Create context
    Z3_config cfg = Z3_mk_config();
    Z3_context ctx = Z3_mk_interpolation_context(cfg);
    // Build formulae
    Z3_ast x0, x1, x2;
    x0 = mk_int_var(ctx, "x0");
    x1 = mk_int_var(ctx, "x1");
    x2 = mk_int_var(ctx, "x2");
    Z3_ast zero = Z3_mk_numeral(ctx, "0", Z3_mk_int_sort(ctx));
    Z3_ast two  = Z3_mk_numeral(ctx, "2", Z3_mk_int_sort(ctx));
    Z3_ast ten  = Z3_mk_numeral(ctx, "10", Z3_mk_int_sort(ctx));
    Z3_ast c2_operands[2] = { x0, two };
    Z3_ast c1 = Z3_mk_eq(ctx, x0, zero);
    Z3_ast c2 = Z3_mk_eq(ctx, x1, Z3_mk_add(ctx, 2, c2_operands));
    Z3_ast c3_operands[2] = { x1, two };
    Z3_ast c3 = Z3_mk_eq(ctx, x2, Z3_mk_add(ctx, 2, c3_operands));
    Z3_ast c4 = Z3_mk_gt(ctx, x2, ten);
    Z3_ast A_operands[3] = { c1, c2, c3 };
    Z3_ast AB[2] = { Z3_mk_and(ctx, 3, A_operands), c4 };
    // Generate interpolant
    Z3_push(ctx);
    Z3_ast interps[1];
    Z3_lbool status = Z3_interpolate(ctx, 2, AB, NULL, NULL, interps);
    assert(status == Z3_L_FALSE && "A and B should be unsat");
    printf("Interpolant: %s\n", Z3_ast_to_string(ctx, interps[0]));
    // To dump the interpolation problem into an SMT file;
    // execute "iz3 tmp.smt" to compare
    Z3_write_interpolation_problem(ctx, 2, AB, NULL, "tmp.smt");
    Z3_pop(ctx, 1);
}

int main() {
    interpolation_1();
}
I generate an executable using the command:
g++ -fopenmp -o interpolation interpolation.c
-I/home/jorge/Systems/z3/include -I/home/jorge/Systems/z3/iz3/include -L/home/jorge/Systems/z3/lib -L/home/jorge/Systems/z3/iz3/lib -L/home/jorge/Systems/libfoci-1.1 -lz3 -liz3 -lfoci
Note that the constraints are basically:
A = (x0 = 0 and x1 = x0 + 2 and x2 = x1 + 2),
and B = (x2 > 10),
which are clearly unsat. Moreover, it's also easy to see that the only common variable is x2, so any valid interpolant can only mention x2 (A forces x2 = 4, so something like (<= x2 10) would do).
If I run the executable ./interpolation I obtain the nonsense interpolant:
(and (>= (+ x0 (* -1 x1)) -2) (>= (+ x1 (* -1 x3)) -2) (<= x0 0))
However, if I run "iz3 tmp.smt" (where tmp.smt is the file generated using Z3_write_interpolation_problem) I obtain a valid interpolant:
unsat interpolant: (<= x2 10)
Is this a bug? or am I missing some important precondition when I call Z3_interpolate?
P.S. I could not find any example using iZ3 with the C API.
Cheers,
Jorge
iZ3 was not built against version 4+, and the enumeration types and other features in the headers have changed between versions. You cannot yet use iZ3 against the latest versions of Z3. We hope to address this soon, most likely by placing the iZ3 stack alongside the rest of the Z3 sources, but in the meanwhile use the previous release that iZ3 was built for.
I saw that the algorithm below, from this link, works to check if a point is in a given polygon:
int pnpoly(int nvert, float *vertx, float *verty, float testx, float testy)
{
    int i, j, c = 0;
    for (i = 0, j = nvert-1; i < nvert; j = i++) {
        if ( ((verty[i]>testy) != (verty[j]>testy)) &&
             (testx < (vertx[j]-vertx[i]) * (testy-verty[i]) / (verty[j]-verty[i]) + vertx[i]) )
            c = !c;
    }
    return c;
}
I tried this algorithm and it actually works perfectly. But sadly I cannot understand it well, even after spending some time trying to get the idea of it.
So if someone is able to understand this algorithm, please explain it to me a little.
Thank you.
The algorithm is ray-casting to the right. On each iteration of the loop, the test point is checked against one of the polygon's edges. The first line of the if-test succeeds if the point's y-coordinate is within the edge's vertical span. The second line checks whether the test point is to the left of the edge (I think - I haven't got any scrap paper to hand to check). If that is true, the line drawn rightwards from the test point crosses that edge.
By repeatedly inverting the value of c, the algorithm counts how many times the rightward line crosses the polygon. If it crosses an odd number of times, then the point is inside; if an even number, the point is outside.
I would have concerns about a) the accuracy of floating-point arithmetic, and b) the effects of having a horizontal edge, or a test point with the same y-coordinate as a vertex, though.
Edit 1/30/2022: I wrote this answer 9 years ago when I was in college. People in the chat conversation are indicating it's not accurate. You should probably look elsewhere. 🤷♂️
Chowlett is correct in every way, shape, and form.
The algorithm assumes that if your point is on the line of the polygon, then that is outside - for some cases, this is false. Changing the two '>' operators to '>=' and changing '<' to '<=' will fix that.
bool PointInPolygon(Point point, Polygon polygon) {
    vector<Point> points = polygon.getPoints();
    int i, j, nvert = points.size();
    bool c = false;
    for (i = 0, j = nvert - 1; i < nvert; j = i++) {
        if (((points[i].y >= point.y) != (points[j].y >= point.y)) &&
            (point.x <= (points[j].x - points[i].x) * (point.y - points[i].y) / (points[j].y - points[i].y) + points[i].x)
           )
            c = !c;
    }
    return c;
}
I changed the original code to make it a little more readable (also this uses Eigen). The algorithm is identical.
// This uses the ray-casting algorithm to decide whether the point is inside
// the given polygon. See https://en.wikipedia.org/wiki/Point_in_polygon#Ray_casting_algorithm
bool pnpoly(const Eigen::MatrixX2d &poly, float x, float y)
{
    // If we never cross any lines we're inside.
    bool inside = false;

    // Loop through all the edges.
    for (int i = 0; i < poly.rows(); ++i)
    {
        // i is the index of the first vertex, j is the next one.
        // The original code uses a too-clever trick for this.
        int j = (i + 1) % poly.rows();

        // The vertices of the edge we are checking.
        double xp0 = poly(i, 0);
        double yp0 = poly(i, 1);
        double xp1 = poly(j, 0);
        double yp1 = poly(j, 1);

        // Check whether the edge intersects a line from (-inf,y) to (x,y).

        // First check if the line crosses the horizontal line at y in either direction.
        if ((yp0 <= y) && (yp1 > y) || (yp1 <= y) && (yp0 > y))
        {
            // If so, get the point where it crosses that line. This is a simple solution
            // to a linear equation. Note that we can't get a division by zero here -
            // if yp1 == yp0 then the above if will be false.
            double cross = (xp1 - xp0) * (y - yp0) / (yp1 - yp0) + xp0;

            // Finally check if it crosses to the left of our test point. You could equally
            // do right and it should give the same result.
            if (cross < x)
                inside = !inside;
        }
    }
    return inside;
}
To expand on the "too-clever trick": we want to iterate over all pairs of adjacent vertices, like this (imagine there are 4 vertices):
i j
0 1
1 2
2 3
3 0
My code above does it the simple, obvious way: j = (i + 1) % num_vertices. However, this uses an integer modulo, which is much slower than the other operations involved. So if this is performance critical (e.g. in an AAA game) you want to avoid it.
The original code changes the order of iteration a bit:
i j
0 3
1 0
2 1
3 2
This is still totally valid since we're still iterating over every vertex pair and it doesn't really matter whether you go clockwise or anticlockwise, or where you start. However now it lets us avoid the integer division. In easy-to-understand form:
int i = 0;
int j = num_vertices - 1; // 3
while (i < num_vertices) { // 4
    {body}
    j = i;
    ++i;
}
Or in very terse C style:
for (int i = 0, j = num_vertices - 1; i < num_vertices; j = i++) {
    {body}
}
This might be as detailed as it gets for explaining the ray-casting algorithm in actual code. It might not be optimized, but that must always come after a complete grasp of the system.
//method to check if a Coordinate is located in a polygon
public boolean checkIsInPolygon(ArrayList<Coordinate> poly){
    //this method uses the ray casting algorithm to determine if the point is in the polygon
    int nPoints=poly.size();
    int j=-999;
    int i=-999;
    boolean locatedInPolygon=false;
    for(i=0;i<(nPoints);i++){
        //repeat loop for all sets of points
        if(i==(nPoints-1)){
            //if i is the last vertex, let j be the first vertex
            j= 0;
        }else{
            //for all-else, let j=(i+1)th vertex
            j=i+1;
        }

        float vertY_i= (float)poly.get(i).getY();
        float vertX_i= (float)poly.get(i).getX();
        float vertY_j= (float)poly.get(j).getY();
        float vertX_j= (float)poly.get(j).getX();
        float testX  = (float)this.getX();
        float testY  = (float)this.getY();

        // following statement checks if testPoint.Y is below Y-coord of i-th vertex
        boolean belowLowY=vertY_i>testY;

        // following statement checks if testPoint.Y is below Y-coord of i+1-th vertex
        boolean belowHighY=vertY_j>testY;

        /* following statement is true if testPoint.Y satisfies either (only one is possible)
        -->(i).Y < testPoint.Y < (i+1).Y   OR
        -->(i).Y > testPoint.Y > (i+1).Y

        (Note)
        Both of the conditions indicate that a point is located within the edges of the Y-th coordinate
        of the (i)-th and the (i+1)-th vertices of the polygon. If neither of the above
        conditions is satisfied, then it is assured that a semi-infinite horizontal line drawn
        to the right from the testpoint will NOT cross the line that connects vertices i and i+1
        of the polygon
        */
        boolean withinYsEdges= belowLowY != belowHighY;

        if( withinYsEdges){
            // this is the slope of the line that connects vertices i and i+1 of the polygon
            float slopeOfLine   = ( vertX_j-vertX_i )/ (vertY_j-vertY_i) ;

            // this looks up the x-coord of a point lying on the above line, given its y-coord
            float pointOnLine   = ( slopeOfLine* (testY - vertY_i) )+vertX_i;

            //checks to see if x-coord of testPoint is smaller than the point on the line with the same y-coord
            boolean isLeftToLine= testX < pointOnLine;

            if(isLeftToLine){
                //this statement changes true to false (and vice-versa)
                locatedInPolygon= !locatedInPolygon;
            }//end if (isLeftToLine)
        }//end if (withinYsEdges)
    }

    return locatedInPolygon;
}
Just one word about optimization: it isn't true that the shortest (and/or tersest) code is the fastest. It is much faster to read an element from an array once, store it, and use it (possibly) many times within a block of code than to access the array each time it is required. This is especially significant if the array is extremely large. In my opinion, storing each term of an array in a well-named variable also makes its purpose easier to assess, and thus yields much more readable code. Just my two cents...
The algorithm is stripped down to the most necessary elements. After it was developed and tested, all unnecessary stuff was removed. As a result you can't understand it easily, but it does the job, and with very good performance.
I took the liberty to translate it to ActionScript-3:
// not optimized yet (nvert could be left out)
public static function pnpoly(nvert: int, vertx: Array, verty: Array, x: Number, y: Number): Boolean
{
    var i: int, j: int;
    var c: Boolean = false;
    for (i = 0, j = nvert - 1; i < nvert; j = i++)
    {
        if (((verty[i] > y) != (verty[j] > y)) && (x < (vertx[j] - vertx[i]) * (y - verty[i]) / (verty[j] - verty[i]) + vertx[i]))
            c = !c;
    }
    return c;
}
This algorithm works in any closed polygon as long as the polygon's sides don't cross. Triangle, pentagon, square, even a very curvy piecewise-linear rubber band that doesn't cross itself.
1) Define your polygon as a directed group of vectors. By this it is meant that every side of the polygon is described by a vector that goes from vertex a(n) to vertex a(n+1). The vectors are directed so that the head of one touches the tail of the next, until the last vector touches the tail of the first.
2) Select the point to test inside or outside of the polygon.
3) For each vector Vn along the perimeter of the polygon, find the vector Dn that starts at the test point and ends at the tail of Vn. Calculate the vector Cn defined as (Dn × Vn)/(Dn · Vn) (× indicates cross product; · indicates dot product). Call the magnitude of Cn by the name Mn.
4) Add all Mn and call this quantity K.
5) If K is zero, the point is outside the polygon.
6) If K is not zero, the point is inside the polygon.
Theoretically, a point lying ON the edge of the polygon will produce an undefined result.
The geometrical meaning of K is the total angle that a flea sitting on our test point "saw" an ant walking along the edge of the polygon travel to the left, minus the angle it walked to the right. In a closed circuit, the ant ends where it started.
Outside of the polygon, regardless of location, the answer is zero.
Inside of the polygon, regardless of location, the answer is "one time around the point".
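For what it's worth, here is a minimal C++ sketch of the angle-summation idea above, implemented with atan2 of the cross and dot products; the names are mine, not the answer's:

#include <cmath>
#include <vector>

struct Pt { double x, y; };

// Sum the signed angles each side subtends at the test point: roughly 0
// outside, +/- 2*pi inside. Points on an edge are ill-defined, as noted above.
bool insideByAngleSum(const std::vector<Pt>& poly, Pt p)
{
    const double pi = 3.14159265358979323846;
    double total = 0.0;
    const std::size_t n = poly.size();
    for (std::size_t i = 0; i < n; ++i) {
        const Pt& a = poly[i];
        const Pt& b = poly[(i + 1) % n];
        // Vectors from the test point to the two endpoints of this side.
        const double ax = a.x - p.x, ay = a.y - p.y;
        const double bx = b.x - p.x, by = b.y - p.y;
        const double cross = ax * by - ay * bx;
        const double dot   = ax * bx + ay * by;
        total += std::atan2(cross, dot);   // signed angle of this side, seen from p
    }
    return std::fabs(total) > pi;
}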
This method checks whether a ray cast from the point (testx, testy) crosses the sides of the polygon.
There's a well-known result here: if a ray from a point crosses the sides of a polygon an odd number of times, that point is inside the polygon; otherwise it is outside.
To expand on @chowlett's answer, where the second line checks whether the point is to the left of the line:
No derivation is given there, but this is what I worked out.
First it helps to imagine 2 basic cases:
the point is left of the line . / or
the point is right of the line / .
If our point were to shoot a ray out horizontally, where would it strike the line segment? Is our point to the left or right of it? Inside or out? We know its y coordinate, because it's by definition the same as the point's. What would the x coordinate be?
Take your traditional line formula y = mx + b. m is the rise over the run. Here, instead, we are trying to find the x coordinate of the point on that line segment that has the same height (y) as our point.
So we solve for x: x = (y - b)/m. m is rise over run, so 1/m becomes run over rise: (yj - yi)/(xj - xi) becomes (xj - xi)/(yj - yi). b is the offset from the origin; if we take yi as the base of our coordinate system, b becomes yi. Our point's testy is the input, and subtracting yi turns the whole formula into an offset from yi.
We now have (xj - xi)/(yj - yi), i.e. 1/m, times (testy - yi): that is (xj - xi)(testy - yi)/(yj - yi). But testx isn't based at xi, so we add xi back in order to compare the two (or we could zero out testx as well).
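Putting the derivation together, here is a small sketch (the helper names are mine, the variable names are pnpoly's) of the crossing x coordinate and the left-of-edge test:

// x coordinate where the edge (xi,yi)-(xj,yj) meets the horizontal line y = testy.
// Only meaningful when testy lies between yi and yj, so yj != yi.
double crossingX(double xi, double yi, double xj, double yj, double testy)
{
    // x = xi + (1/m) * (testy - yi), where 1/m = (xj - xi) / (yj - yi)
    return xi + (xj - xi) * (testy - yi) / (yj - yi);
}

// The second line of pnpoly's if-test: is the test point to the left of that crossing?
bool leftOfEdge(double testx, double testy,
                double xi, double yi, double xj, double yj)
{
    return testx < crossingX(xi, yi, xj, yj, testy);
}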
I think the basic idea is to cast a ray from the point and check it against the edges of the polygon. If the ray crosses exactly one edge, the point is within the polygon. For concave polygons, if it crosses an odd number of edges it is inside as well (disclaimer: although I'm not sure if this works for all concave polygons).
This is the algorithm I use, but I added a bit of preprocessing trickery to speed it up. My polygons have ~1000 edges and they don't change, but I need to look up whether the cursor is inside one on every mouse move.
I basically split the height of the bounding rectangle into equal-length intervals, and for each of these intervals I compile the list of edges that lie within/intersect it.
When I need to look up a point, I can calculate - in O(1) time - which interval it is in, and then I only need to test the edges in that interval's list.
I used 256 intervals and this reduced the number of edges I need to test to 2-10 instead of ~1000.
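That preprocessing might look roughly like the following C++ sketch (my own structure and names; it assumes the polygon is non-empty and has nonzero height):

#include <algorithm>
#include <vector>

struct Pt { double x, y; };
struct Edge { Pt a, b; };

// Split the bounding box's height into n equal intervals and record, for each
// interval, the edges whose y-range overlaps it. A query then only tests those edges.
struct EdgeBuckets {
    double minY = 0.0;
    double bucketHeight = 1.0;
    std::vector<std::vector<Edge>> buckets;

    EdgeBuckets(const std::vector<Edge>& edges, int n) : buckets(n) {
        double maxY;
        minY = maxY = edges.front().a.y;              // assumes at least one edge
        for (const Edge& e : edges) {
            minY = std::min({minY, e.a.y, e.b.y});
            maxY = std::max({maxY, e.a.y, e.b.y});
        }
        bucketHeight = std::max((maxY - minY) / n, 1e-12);
        for (const Edge& e : edges) {
            // An edge goes into every interval its y-range overlaps.
            const int lo = index(std::min(e.a.y, e.b.y));
            const int hi = index(std::max(e.a.y, e.b.y));
            for (int i = lo; i <= hi; ++i)
                buckets[i].push_back(e);
        }
    }

    int index(double y) const {
        const int i = static_cast<int>((y - minY) / bucketHeight);
        return std::clamp(i, 0, static_cast<int>(buckets.size()) - 1);
    }

    // O(1) lookup: only these edges need the pnpoly-style crossing test.
    const std::vector<Edge>& candidates(double y) const { return buckets[index(y)]; }
};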
Here's a PHP implementation of the point-in-polygon check (it also returns true when the point lies on an edge):
<?php
class Point2D {

    public $x;
    public $y;

    function __construct($x, $y) {
        $this->x = $x;
        $this->y = $y;
    }

    function x() {
        return $this->x;
    }

    function y() {
        return $this->y;
    }
}

class Point {

    protected $vertices;

    function __construct($vertices) {
        $this->vertices = $vertices;
    }

    //Determines if the specified point is within the polygon.
    function pointInPolygon($point) {
        /* @var $point Point2D */
        $poly_vertices = $this->vertices;
        $num_of_vertices = count($poly_vertices);

        $edge_error = 1.192092896e-07;
        $r = false;

        for ($i = 0, $j = $num_of_vertices - 1; $i < $num_of_vertices; $j = $i++) {
            /* @var $current_vertex_i Point2D */
            /* @var $current_vertex_j Point2D */
            $current_vertex_i = $poly_vertices[$i];
            $current_vertex_j = $poly_vertices[$j];

            if (abs($current_vertex_i->y - $current_vertex_j->y) <= $edge_error && abs($current_vertex_j->y - $point->y) <= $edge_error && ($current_vertex_i->x >= $point->x) != ($current_vertex_j->x >= $point->x)) {
                return true;
            }

            if ($current_vertex_i->y > $point->y != $current_vertex_j->y > $point->y) {
                $c = ($current_vertex_j->x - $current_vertex_i->x) * ($point->y - $current_vertex_i->y) / ($current_vertex_j->y - $current_vertex_i->y) + $current_vertex_i->x;

                if (abs($point->x - $c) <= $edge_error) {
                    return true;
                }

                if ($point->x < $c) {
                    $r = !$r;
                }
            }
        }

        return $r;
    }
}
Test Run:
<?php
$vertices = array();
array_push($vertices, new Point2D(120, 40));
array_push($vertices, new Point2D(260, 40));
array_push($vertices, new Point2D(45, 170));
array_push($vertices, new Point2D(335, 170));
array_push($vertices, new Point2D(120, 300));
array_push($vertices, new Point2D(260, 300));
$Point = new Point($vertices);
$point_to_find = new Point2D(190, 170);
$isPointInPolygon = $Point->pointInPolygon($point_to_find);
echo $isPointInPolygon;
var_dump($isPointInPolygon);
I modified the code to check whether the point is in a polygon, including the case where the point lies on an edge.
bool point_in_polygon_check_edge(const vec<double, 2>& v, vec<double, 2> polygon[], int point_count, double edge_error = 1.192092896e-07f)
{
    const static int x = 0;
    const static int y = 1;
    int i, j;
    bool r = false;
    for (i = 0, j = point_count - 1; i < point_count; j = i++)
    {
        const vec<double, 2>& pi = polygon[i];
        const vec<double, 2>& pj = polygon[j];
        if (fabs(pi[y] - pj[y]) <= edge_error && fabs(pj[y] - v[y]) <= edge_error && (pi[x] >= v[x]) != (pj[x] >= v[x]))
        {
            return true;
        }
        if ((pi[y] > v[y]) != (pj[y] > v[y]))
        {
            double c = (pj[x] - pi[x]) * (v[y] - pi[y]) / (pj[y] - pi[y]) + pi[x];
            if (fabs(v[x] - c) <= edge_error)
            {
                return true;
            }
            if (v[x] < c)
            {
                r = !r;
            }
        }
    }
    return r;
}
I want to divide the Google map display into 200 parts. I have this code:
bounds = map.getBounds();
southWest = bounds.getSouthWest();
northEast = bounds.getNorthEast();
tileWidth = (northEast.lng() - southWest.lng()) / 10;
tileHeight = (northEast.lat() - southWest.lat()) / 20;

for (x = 0; x < 20; x++)
{
    for (y = 0; y < 10; y++)
    {
        var x1 = southWest.lat() + (tileHeight * x);
        var y1 = southWest.lng() + (tileWidth * y);
        var x2 = x1 + tileHeight;
        var y2 = y1 + tileWidth;
        var tempCell = new GLatLngBounds(new GLatLng(x1, y1), new GLatLng(x2, y2));
    }
}
I just can't figure out what is wrong with it...
Any idea?
I tried the code you posted - it seems to work just fine. The problem is probably elsewhere in your code. Can you post more details?
It is worthwhile to note, however, that this code will fail in spectacular fashion if the bounds include the international date line. Let us know if this is the problem.
I can't help but notice you use tempCell to hold the result, but what is done after that? Do you ever refer to those bounded regions again?