How to find the distance between a geofenced area and a point outside of it?

I want to find the distance from a point to the boundary of a geofenced area.
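One library-agnostic way to do this (a minimal sketch in planar coordinates; for real map data you would use geodesic distances or a library such as Shapely's `Polygon.distance`) is to treat the geofence as a polygon: the distance is 0 if the point is inside, otherwise the minimum distance to any polygon edge.

```python
import math

def point_segment_distance(px, py, ax, ay, bx, by):
    """Shortest distance from point (px, py) to segment (a, b)."""
    abx, aby = bx - ax, by - ay
    apx, apy = px - ax, py - ay
    denom = abx * abx + aby * aby
    t = 0.0 if denom == 0 else max(0.0, min(1.0, (apx * abx + apy * aby) / denom))
    cx, cy = ax + t * abx, ay + t * aby
    return math.hypot(px - cx, py - cy)

def point_in_polygon(px, py, poly):
    """Ray-casting test: True if (px, py) lies inside poly."""
    inside = False
    n = len(poly)
    for i in range(n):
        ax, ay = poly[i]
        bx, by = poly[(i + 1) % n]
        if (ay > py) != (by > py):
            x_cross = ax + (py - ay) * (bx - ax) / (by - ay)
            if px < x_cross:
                inside = not inside
    return inside

def distance_to_geofence(point, fence):
    """0 if the point is inside the fence, else distance to the nearest edge."""
    px, py = point
    if point_in_polygon(px, py, fence):
        return 0.0
    n = len(fence)
    return min(
        point_segment_distance(px, py, *fence[i], *fence[(i + 1) % n])
        for i in range(n)
    )

fence = [(0, 0), (4, 0), (4, 4), (0, 4)]  # a square geofence
print(distance_to_geofence((7, 4), fence))  # 3.0
print(distance_to_geofence((2, 2), fence))  # 0.0 (inside)
```

If your coordinates are latitude/longitude rather than a local planar projection, replace `math.hypot` with a haversine or geodesic distance.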

Related

react-tsparticles: moving circles without letting them overlap

I am using particles.js to animate moving circles, but the circles overlap each other. How can I keep the circles moving without letting them overlap?
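The general technique, independent of the particle library, is collision resolution: whenever two circles are closer than the sum of their radii, push each one half the overlap apart along the line between their centres. A minimal sketch (plain Python; in tsparticles the equivalent behaviour is usually enabled through its collision options rather than hand-rolled):

```python
import math

def separate_circles(circles, iterations=10):
    """Push overlapping circles apart along the line between their centres.
    circles: list of dicts with keys x, y, r (mutated in place)."""
    for _ in range(iterations):
        moved = False
        for i in range(len(circles)):
            for j in range(i + 1, len(circles)):
                a, b = circles[i], circles[j]
                dx, dy = b["x"] - a["x"], b["y"] - a["y"]
                dist = math.hypot(dx, dy) or 1e-9
                overlap = a["r"] + b["r"] - dist
                if overlap > 0:
                    # move each circle half the overlap away from the other
                    ux, uy = dx / dist, dy / dist
                    a["x"] -= ux * overlap / 2
                    a["y"] -= uy * overlap / 2
                    b["x"] += ux * overlap / 2
                    b["y"] += uy * overlap / 2
                    moved = True
        if not moved:
            break
    return circles

cs = [{"x": 0, "y": 0, "r": 2}, {"x": 1, "y": 0, "r": 2}]
separate_circles(cs)
# centres are now at least r1 + r2 = 4 units apart
```

Running this once per animation frame, after the circles have moved, keeps them touching at most tangentially.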

ARKit: project a feature point found in the ARPointCloud to image space and check to see if it's contained in a CGRect on screen?

So, I am using ARKit to display feature points in the session. I am able to get the current frame, then its rawFeaturePoints and place geometries in the world space so the user can see them on screen. That is working great.
In the app I then have a quadrant on screen. My objective is to show, in screen coordinates, the feature points whose projections fall inside that 2D quadrant. To do that, I tried this:
get the feature points as an array of vector_float3
for each of those points, build an SCNVector3, setting the Z component to 0 (near plane)
then call on the ARSCNView:
public func projectPoint(_ point: SCNVector3) -> SCNVector3
This approach does give me 2D points back, but, depending on where the camera is, they seem to be way off.
So, since the camera in ARKit keeps moving around, do I need to take that into account to achieve what I described?
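Yes: projection depends on the current camera pose, so it must be redone per frame. The underlying math, stripped of ARKit, is the pinhole model: transform the world point into camera space, divide by depth, scale by the focal length, and test the resulting pixel against the quadrant's rectangle. A minimal sketch (the focal lengths `fx`, `fy`, principal point `cx`, `cy`, and the identity camera orientation are all assumptions for illustration; in ARKit, `ARCamera.projectPoint(_:orientation:viewportSize:)` performs the full transform for you):

```python
def project_to_pixel(point_world, cam_pos, fx, fy, cx, cy):
    """Project a world point through a pinhole camera that looks down -Z
    from cam_pos with no rotation (an assumption for this sketch).
    Returns (u, v) pixel coordinates with the origin at the top-left."""
    # world -> camera coordinates (translation only here)
    x = point_world[0] - cam_pos[0]
    y = point_world[1] - cam_pos[1]
    z = point_world[2] - cam_pos[2]
    if z >= 0:
        return None  # behind the camera for a -Z viewing direction
    u = fx * (x / -z) + cx
    v = cy - fy * (y / -z)  # flip Y: image origin is top-left, world +Y is up
    return (u, v)

def in_quadrant(pixel, rect):
    """rect = (x, y, w, h) in screen points, origin at top-left."""
    if pixel is None:
        return False
    u, v = pixel
    x, y, w, h = rect
    return x <= u <= x + w and y <= v <= y + h

# camera at origin, 500 px focal length, principal point at (320, 240)
pix = project_to_pixel((0.1, 0.1, -1.0), (0, 0, 0), 500, 500, 320, 240)
print(pix)                                    # (370.0, 190.0)
print(in_quadrant(pix, (320, 0, 320, 240)))   # True: upper-right quadrant
```

The Y flip in `project_to_pixel` is the same flip discussed below: camera space has +Y up, while screen space has +Y down.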
EDIT:
About flipping the Y of the CGPoint retrieved from the projectPoint call on the camera:
/**
 Project a 3D point in world coordinate system into 2D viewport space.
 @param point 3D point in world coordinate system.
 @param orientation Viewport orientation.
 @param viewportSize Viewport (or image) size.
 @return 2D point in viewport coordinate system with origin at top-left.
 */
open func projectPoint(_ point: vector_float3, orientation: UIInterfaceOrientation, viewportSize: CGSize) -> CGPoint
Remy San mentioned flipping the Y. I tried that, and it does seem to work. One difference between his setup and mine is that he is using an SKScene while I am using an SCNScene. Looking at the docs, it says:
...The projection of the specified point into a 2D pixel coordinate space
whose origin is in the upper left corner...
So, what throws me off is that if I don't flip the Y, it doesn't seem to work properly. (I'll try to post images to show what I mean.) But if flipping the Y makes things look better, doesn't that go against the docs?
I gather you are using the intrinsics matrix for your projection. ARKit may also give you some extra information: the cameraPoseARFrame, the projectionMatrix and the transformToWorldMap matrices. Are you taking them into consideration when transforming from world coordinates to pixel coordinates?
If anyone has a methodology for applying these matrices to the point-cloud coordinates to convert them into screen coordinates, could you please contribute it to my answer? I think they may add precision and accuracy to the final result.
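The standard pipeline those matrices plug into is: world point → view matrix (inverse camera pose) → projection matrix → perspective divide to normalized device coordinates (NDC) → viewport mapping. The Y flip discussed above lives in the last step, because NDC has +Y up while pixel coordinates have +Y down. A toy sketch with hand-rolled 4x4 math (the identity view and the simple projection matrix are illustrative assumptions, not ARKit's actual matrices):

```python
def mat_vec(m, v):
    """Multiply a 4x4 row-major matrix by a 4-vector."""
    return [sum(m[r][c] * v[c] for c in range(4)) for r in range(4)]

def world_to_viewport(view, proj, world_xyz, viewport_w, viewport_h):
    """Apply view then projection, perspective-divide to NDC, then map
    NDC to viewport pixels with origin at the TOP-left (hence the Y flip)."""
    clip = mat_vec(proj, mat_vec(view, list(world_xyz) + [1.0]))
    if clip[3] == 0:
        return None
    ndc = [clip[0] / clip[3], clip[1] / clip[3]]  # each in [-1, 1]
    u = (ndc[0] + 1) / 2 * viewport_w
    v = (1 - ndc[1]) / 2 * viewport_h  # NDC +Y is up; pixel rows grow downward
    return (u, v)

identity_view = [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]
# minimal perspective matrix for a camera looking down -Z: w' = -z
toy_proj = [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, -1, 0]]
print(world_to_viewport(identity_view, toy_proj, (0.5, 0.5, -1.0), 400, 400))
# (300.0, 100.0): right of centre and ABOVE centre, as expected for +Y up
```

With ARKit's real matrices, `view` would be the inverse of the camera pose and `proj` the frame's projectionMatrix; the NDC-to-viewport step is where the top-left-origin convention from the docs is enforced.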
Thank you!

Dividing up a map into polygon regions based on a list of single point coordinates

I'm trying to divide up a city map into polygon regions based on a list of single point coordinates.
The idea is that a polygon region would extend outwards from a single point in all directions until it bordered the polygon regions extending out from nearby/adjacent points. I don't want to use a fixed radius, because I want the end result to be complete coverage of the map. So the regions will be irregular in shape and size, extending their "territory" as far as possible before bumping up against other territories or the map boundary.
Does anyone know of an algorithm, library or program that can generate such a list of polygons given a list of single point coordinates and a map boundary?
Perhaps you want a Delaunay triangulation or a Voronoi diagram; the Voronoi diagram is exactly the partition you describe.
Example page from JSTS
delaunay triangulation
voronoi diagram
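To make the Voronoi idea concrete: each location on the map belongs to the region of its nearest seed point. Here is a brute-force discrete sketch of that assignment over a grid (libraries such as `scipy.spatial.Voronoi` or the JSTS/d3-voronoi examples above return the exact region polygons instead):

```python
def voronoi_regions(seeds, width, height):
    """Assign every integer grid cell to its nearest seed point
    (squared Euclidean distance) -- a discrete approximation of the
    Voronoi partition of the [0, width) x [0, height) map boundary."""
    regions = {i: [] for i in range(len(seeds))}
    for y in range(height):
        for x in range(width):
            nearest = min(
                range(len(seeds)),
                key=lambda i: (x - seeds[i][0]) ** 2 + (y - seeds[i][1]) ** 2,
            )
            regions[nearest].append((x, y))
    return regions

seeds = [(2, 2), (7, 7)]
regions = voronoi_regions(seeds, 10, 10)
# cell (0, 0) belongs to seed 0, cell (9, 9) to seed 1,
# and every cell of the map is covered by exactly one region
```

This captures the "extend territory until you bump into a neighbour" behaviour; for actual polygon boundaries, the exact Voronoi edges are the perpendicular bisectors between adjacent seeds, which the libraries compute for you.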

Counting/Finding the boundary points in a 2-D point cloud

There are a finite number of points on a 2-D plane, each on integer coordinates (x, y) such that 0 <= x, y < 100.
What algorithm could be used to find all the boundary points of this set?
Visualisation of boundary points:
Imagine a rubber band in your hand.
Imagine pins stuck in the plane at every point of the set.
Now stretch the rubber band around all the points/pins and release it; it will contract.
After contraction, the points where the rubber band bends (the corners) are the boundary points.
Image visualisation: (link omitted; I don't have enough reputation points to post it inline)
Consider the yellow line. The image is not a perfect visualisation: there is a point at ~(0.8, 1) that should lie inside the boundary.
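The rubber-band picture describes the convex hull, and Andrew's monotone chain algorithm computes it in O(n log n). A self-contained sketch:

```python
def convex_hull(points):
    """Andrew's monotone chain: return hull vertices in counter-clockwise
    order, starting from the lowest-leftmost point. Collinear points on
    hull edges are dropped (only the 'perfect corners' are kept)."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts

    def cross(o, a, b):
        # z-component of (a - o) x (b - o); > 0 means a left turn
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    lower = []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    upper = []
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    # each list's last point is the other list's first point; drop duplicates
    return lower[:-1] + upper[:-1]

pts = [(0, 0), (4, 0), (4, 4), (0, 4), (2, 2), (1, 3)]
print(convex_hull(pts))  # [(0, 0), (4, 0), (4, 4), (0, 4)]
```

With at most 100x100 integer coordinates as stated, even a naive O(n^2) gift-wrapping approach would be fast enough, but monotone chain is the usual choice.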

Calculating distance using a single camera

I would like to calculate the distance to certain objects in the scene. I know that with a single camera I can only calculate relative distance, but I know the coordinates of some objects in the scene, so in theory it should be possible to calculate actual distance. According to the OpenCV mailing list archives,
http://tech.groups.yahoo.com/group/OpenCV/message/73541
cvFindExtrinsicCameraParams2 is the function to use, but I can't find information on how to use it.
PS: assume the camera is properly calibrated.
My guess would be: you know the size of an object (say, a ball that is 6 inches across and 6 inches tall), and you can see that it is 20 pixels tall and 25 pixels wide. You also know the ball is 10 feet away. That would be your starting point.
I don't think extrinsic parameters would help you, because they describe the camera's location and rotation in space relative to another camera or an origin. For a one-camera system, the camera is the origin.
Intrinsic parameters might help; I'm not sure, as I've only done this with two cameras.
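For the known-size case the answer sketches, the pinhole model reduces to similar triangles: distance Z = f * H / h, where f is the focal length in pixels (an intrinsic parameter, obtained from calibration), H the object's real size, and h its size in the image. A minimal sketch with illustrative numbers (note that cvFindExtrinsicCameraParams2 is the legacy C-API name for what newer OpenCV exposes as solvePnP, which solves the full pose when you know several world points rather than just one size):

```python
def distance_from_size(focal_px, real_size, pixel_size):
    """Pinhole similar triangles: Z = f * H / h.
    focal_px:   focal length in pixels (from camera calibration)
    real_size:  the object's true size, in metres (known a priori)
    pixel_size: the object's size in the image, in pixels"""
    return focal_px * real_size / pixel_size

# a 0.5 m tall object that appears 100 px tall, 800 px focal length
print(distance_from_size(800, 0.5, 100))  # 4.0 metres
```

This only gives distance along the optical axis for a fronto-parallel object; for oblique views or multiple known scene points, solvePnP recovers the full camera-to-object transform instead.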
