2D geometry: angle and power prediction for a shot - artificial-intelligence

Before anything else, please look at this:
http://i.stack.imgur.com/pQDAC.jpg
I want to know how to predict the right angle and power for the AI's shot to hit the player. Both shooter and target have static positions and don't move.
Thanks for any help...

You would need to take a look at projectile motion for starters. The problem is that different powers require different launch angles, so you would need to search over a range of powers or a range of angles.
EDIT: The image you have attached describes the path of a projectile (bullet, bomb, any thrown item) over a horizontal plane (parallel to the ground). This type of problem is usually solved with a variation of the equations of linear motion, which is what you have there on the website.
Besides the equations of motion, the website I linked should give you some simple problems and their solutions so you can make sure that you are following.
As per your question, the targets are static, so the distance component of the equation is known and will not change. The other components you will need to find are the angle of launch and the initial velocity of the round (determined by the power you use).
One approach is to sweep a range of angles, say [1, 89] degrees inclusive, and for each angle compute the initial velocity needed to make the projectile travel the known distance.
If you will be dealing with situations identical to the image, that is, with no obstacles in the middle, you can also fix the launch angle at 45 degrees, since that always gives the maximum range for a given initial velocity. With this approach you only need to find the initial velocity required to make the projectile travel the distance at an angle of 45 degrees (a sketch follows).
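As a minimal sketch of that last step, assuming level ground and no drag: the range formula d = v^2 * sin(2*theta) / g can be inverted for the launch speed. The gravity constant and the units are assumptions to adapt to your game.

    import math

    G = 9.81  # gravitational acceleration; an assumption - use your game's units

    def launch_speed(distance, angle_deg):
        # Level-ground range formula d = v^2 * sin(2*theta) / g, solved for v.
        theta = math.radians(angle_deg)
        return math.sqrt(distance * G / math.sin(2.0 * theta))

    print(launch_speed(100.0, 45.0))  # ~31.3 - the minimum speed over all angles
    print(launch_speed(100.0, 30.0))  # ~33.7 - a flatter angle needs more speed

Sweeping angle_deg over [1, 89] and picking the (angle, speed) pair you like gives the search described above.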

Related

How to order 3d points in clockwise order?

I have a bunch of 3D points (an array), not in any particular order and not restricted to some axis/plane. Based on the coordinates of these points I want to order the array clockwise, like in the image. At the moment I am clueless where to start. One idea is to find the closest point to each point and somehow figure out the direction.
3Dave has already said this, but it completely depends on where the camera is.
There is no answer unless you specify the frustum.
Note that circles are 2D, not 3D, objects, and "clockwise" relates to circles.
Assuming that you mean on a plane:
This is a problem with two parts.
The first part is incredibly difficult.
The second part is relatively easy.
First part: indeed, you are doing object recognition: you have to find a circle.
For this, investigate the existing technology for shape recognition, or read up on approaches like https://link.springer.com/article/10.1007/s11042-018-6167-2
For the second part (which is almost trivial once the first is done): get the coordinates of each point relative to the center of the circle you found, calculate the angle of each from the top, and sort them by that angle.
Cheap game-type solution
If you want the cheap solution, which you can use if the points are "reasonable":
find the centroid of all the points (it's just the average of all)
write each point as a vector from the centroid to the point
pick any one point as being the "top"
use something like this https://docs.unity3d.com/ScriptReference/Vector3.Angle.html to get the angle of each from the "top" one
voila! just put them in order
In practice you'll likely need these things also:
find the "plane" the points are on (find the "average plane" they are on, it's relatively easy to do this, look it up!)
make an axis through the centroid which is perpendicular to the plane
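Here is a minimal numpy sketch of this recipe (centroid, best-fit plane via SVD, angles measured from an arbitrary "top" direction); which side of the plane counts as "above", and hence which direction is clockwise, is an assumption you would fix for your camera.

    import numpy as np

    def sort_clockwise(points):
        # points: (N, 3) array of 3D positions, roughly on a circle.
        pts = np.asarray(points, dtype=float)
        centroid = pts.mean(axis=0)
        # Best-fit ("average") plane: its normal is the right singular
        # vector belonging to the smallest singular value.
        _, _, vt = np.linalg.svd(pts - centroid)
        normal, u = vt[2], vt[0]        # u serves as the arbitrary "top"
        v = np.cross(normal, u)
        rel = pts - centroid
        angles = np.arctan2(rel @ v, rel @ u)
        # Descending angle = clockwise when viewed from the +normal side.
        return pts[np.argsort(-angles)]

    ring = sort_clockwise(np.random.rand(12, 3))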

shadow and shading

I have read about lots of ray tracer algorithms on the web, but I have no clear understanding of shading and shadows. Is the pseudocode below, written according to my understanding, correct?
for each primitive
    check for intersection
    if there is one
        do color be half of the background color
        Ishadow = true
        break
for each ambient light in environment
    calculate light contribution to the color
if ( Ishadow == false )
    for each point light
        calculate diffuse shading
        calculate reflection direction
        calculate specular light
        trace for reflection ray // (i)
        add color returned from (i) after multiplied by some coefficient
        trace for refraction ray // (ii)
        add color returned from (ii) after multiplied by some coefficient
return color value calculated until this point
You should integrate your shadows with the normal ray-tracing path:
For every screen pixel you send a ray through the scene and eventually determine the closest object intersection. At that point you would first read out the pixel color (the object's texture at that point) and calculate the reflection vector etc. using the normal vector. Now, additionally, cast a ray from that intersection point to each of the light sources in your scene: if such a ray intersects another object before hitting the light source, the point is in shadow with respect to that light, and you adapt its final color accordingly.
The trouble with pseudocode is that it is easy to get "pseudo" enough that it becomes the same well of ambiguity we are trying to avoid by getting away from natural languages. "Color be half of the background color"? And the fact that this line appears before you iterate through your light sources is confusing: how can you set Ishadow before you have iterated over the light sources?
Maybe a better description would be:
given a ray in space
    find the nearest object with which the ray intersects
    for each point light
        if the normal at the surface of the intersected object points toward the light (use a dot product for this)
            cast a ray into space from the surface toward the light
            if the ray hits something closer than the light*, the light is shadowed at this point
*If you're seeing strange artifacts in your shadows, you have hit a mistake that is made by every single programmer when writing their first ray tracer. Floating-point (or double-precision) math is imprecise, and you will frequently (about half the time) re-intersect yourself when doing a shadow trace. The explanation is a bit hard to give without diagrams, but let me see what I can do.
If you have an intersection point on the surface of a sphere, under most circumstances that point's representation in a floating-point register is not mathematically exact: it is either slightly inside or slightly outside the sphere. If it is inside the sphere and you run an intersection test toward a light source, the nearest intersection will be the sphere itself. That false intersection distance will be very small, so you can simply reject any shadow-ray intersection closer than, say, 0.000001 units. Alternatively, if your geometry is all convex and incapable of legitimately shadowing itself, you can simply skip testing the sphere itself when doing shadow tests.
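A minimal sketch of the shadow ray with that epsilon rejection, assuming a hypothetical scene interface where each object has an intersect(origin, direction) method returning a hit distance or None:

    import math

    EPSILON = 1e-6  # reject self-intersections closer than this

    def in_shadow(hit_point, light_pos, scene):
        # Direction and distance from the surface point to the light.
        to_light = [l - p for l, p in zip(light_pos, hit_point)]
        dist = math.sqrt(sum(c * c for c in to_light))
        direction = [c / dist for c in to_light]
        for obj in scene:
            t = obj.intersect(hit_point, direction)
            # Ignore hits beyond the light and hits within EPSILON of the
            # surface (the floating-point self-intersection described above).
            if t is not None and EPSILON < t < dist:
                return True
        return False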

Blob detection in C (not with OPENCV)

I am trying to do my own blob detection, which will receive real-time video and try to detect a white paper sheet, even if something is written on the paper. I need to detect the paper and its corners, because what I really want is to draw an OpenGL polygon over the paper, with each corner of the polygon at a corner of the paper. Then I need the coordinates of the paper to do other stuff.
So I need to:
- detect a square white blob
- get the coordinates of the corners
- draw a polygon over the white sheet.
Any ideas how I can do that?
Much depends on context. For example, suppose that you:
know that the paper is always roughly centered (i.e. W/2, H/2 is always inside the blob) and rotated no more than 45 degrees (30 would be better)
have a suitable border around the sheet, so that the corners never touch the edges of the FOV
are able (through analysis of local variance, or, if you're lucky, a check of background color or luminance) to say whether a point is inside or outside the blob
can rely on the inside/outside function never failing (except possibly in the close vicinity of an edge)
then you could walk the line between a point on the border (surely outside) and the center (surely inside), e.g. by bisection, and find a point - an areal - on the edge (see the sketch below).
Two edge points give a straight line (two areals give a beam), and two lines give an intersection (two beams give a larger areal) - and there's your corner. You should carry the detection uncertainty (the areal radius) along in order to validate corners (another, less elegant approach is to roughly calculate where the corner is and pinpoint it with a spiral search or drunkard's walk).
This algorithm is amenable to parallelization and, as long as the hypotheses hold, should be really fast.
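A minimal sketch of the bisection step, assuming the inside/outside predicate from the hypotheses above; `tol` plays the role of the areal radius (the detection uncertainty):

    def edge_point(outside, inside, is_inside, tol=0.5):
        # Shrink the segment between a surely-outside point and a
        # surely-inside point until it is shorter than `tol` pixels.
        (ax, ay), (bx, by) = outside, inside
        while (ax - bx) ** 2 + (ay - by) ** 2 > tol * tol:
            mx, my = (ax + bx) / 2.0, (ay + by) / 2.0
            if is_inside(mx, my):
                bx, by = mx, my   # midpoint inside: move the inside end
            else:
                ax, ay = mx, my   # midpoint outside: move the outside end
        return ((ax + bx) / 2.0, (ay + by) / 2.0)  # uncertainty ~ tol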
All that said, it remains a hack - I agree with unwind: why reinvent the wheel? If you have memory or CPU constraints (embedded systems, etc.), I believe there ought to be OpenCV and e-Vision "lite" ports for ARM and embedded platforms too.
(Sorry for my terminology - I'm monkey-translating from Italian. An "areal" likely corresponds to your "blob"; a beam is the family of lines joining all pairs of points in two different blobs, a line's intensity depending on the distance of each endpoint from its areal's center.)
I am trying to do my own blob detection, which will receive real-time video and try to detect a white paper sheet.
Your first shot could be a simple flood fill. That is, select a good threshold to binarize the image and apply the algorithm. The threshold can be fixed if you know the paper is always brighter than some value X and the background always darker than it, or it can be adaptive, for example Otsu's method. OpenCV offers this for free.
If you need to speed it up, you could use a union-find data structure.
Finally, you'd need to come up with some heuristic for identifying the corners (e.g. the four extreme values in the x/y directions), as in the sketch below.
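A minimal sketch of the binarize-then-extremes idea, in numpy rather than OpenCV; the fixed threshold is a placeholder, and the diagonal-extremes corner heuristic assumes the sheet is rotated well under 45 degrees:

    import numpy as np

    def find_sheet_corners(gray, threshold=200):
        # gray: one 2D uint8 video frame; pick `threshold` (or use Otsu's
        # method) to suit your lighting.
        ys, xs = np.nonzero(gray >= threshold)      # the white-ish pixels
        if xs.size == 0:
            return None
        s, d = xs + ys, xs - ys                     # diagonal coordinates
        return [
            (xs[np.argmin(s)], ys[np.argmin(s)]),   # top-left     (min x+y)
            (xs[np.argmax(d)], ys[np.argmax(d)]),   # top-right    (max x-y)
            (xs[np.argmax(s)], ys[np.argmax(s)]),   # bottom-right (max x+y)
            (xs[np.argmin(d)], ys[np.argmin(d)]),   # bottom-left  (min x-y)
        ]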
Then I need [...] the coordinates of the corners [...]
Then you don't need blob detection, but corner detection or contour detection in the first place. OpenCV has some nice functionality for exactly this.
If you can't use it, I would suggest binarizing the image as above and using a Harris detector to find the corners of the object.
OpenCV's TBB support could also come in quite handy if you use it and have trouble meeting your real-time requirements.

Calculating distance using a single camera

I would like to calculate the distance to certain objects in the scene. I know that I can only calculate relative distance when using a single camera, but I know the coordinates of some objects in the scene, so in theory it should be possible to calculate actual distance. According to the OpenCV mailing list archives,
http://tech.groups.yahoo.com/group/OpenCV/message/73541
cvFindExtrinsicCameraParams2 is the function to use, but I can't find information on how to use it.
PS. Assume the camera is properly calibrated.
My guess would be: you know the width of an object, say a ball that is 6 inches across and 6 inches tall, and you can see that it is 20 pixels tall and 25 pixels wide. You also know the ball is 10 feet away. This would be your start.
Extrinsic parameters wouldn't help you, I don't think, because they describe the camera's location and rotation in space relative to another camera or an origin; for a one-camera system, the camera is the origin.
Intrinsic parameters might help. I'm not sure; I've only done this with two cameras.
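For what it's worth, a minimal sketch of the similar-triangles relation behind the first paragraph; the 500-pixel focal length is an assumption standing in for the calibrated intrinsic value:

    def distance_from_size(focal_px, real_width, pixel_width):
        # Pinhole relation distance = f * W / w, with f the focal length in
        # pixels (from the intrinsic matrix), W the physical width of the
        # object and w its width in the image in pixels.
        return focal_px * real_width / pixel_width

    # The ball above: 6 inches wide, 25 px in the image; an assumed focal
    # length of 500 px gives 120 inches, i.e. the known 10 feet.
    print(distance_from_size(500.0, 6.0, 25.0))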

How can I test if a point lies within a 3d shape with its surface defined by a point cloud?

I have a collection of points which describe the surface of a shape that should be roughly spherical, and I need a method to determine whether another given point lies within this shape. I've previously been approximating the shape as an exact sphere, but this has proven too inaccurate and I need a more accurate method. Simplicity and speed are preferable to complete accuracy; a good approximation will suffice.
I've come across techniques for converting a point cloud to a 3D mesh, but most things I have found are very complicated, and I am looking for something as simple as possible.
Any ideas?
What if you computed the centroid of the cloud and converted the points' coordinates to a spherical system whose origin is that centroid?
Then convert the point you want to examine to the same coordinate system.
Assuming the surface can be represented by a Delaunay triangulation, determine the three surface points with the smallest angular difference from the point you're examining.
Project the point you're examining onto the triangle determined by those three points, and see whether the projected point's distance from the centroid is larger than the actual point's distance.
Essentially, you're constructing a triangular mesh of the convex hull, but as needed, one triangle at a time. If execution speed really matters, you might cache the resulting triangles as you go.
Steven Sudit has also suggested a useful optimization that I'd recommend if you go down this path.
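A deliberately simplified sketch of this idea: instead of the full triangle projection, it compares the query point's distance from the centroid with the mean radius of the three surface samples closest in direction. That substitution is mine, not the answer's; it trades accuracy for brevity.

    import numpy as np

    def is_inside_radial(cloud, p):
        # cloud: (N, 3) surface samples; p: the (3,) query point.
        centroid = cloud.mean(axis=0)
        rel = cloud - centroid
        radii = np.linalg.norm(rel, axis=1)
        dirs = rel / radii[:, None]              # unit directions of samples
        q = np.asarray(p, dtype=float) - centroid
        qr = np.linalg.norm(q)
        if qr == 0.0:
            return True                          # the centroid is inside
        nearest3 = np.argsort(dirs @ (q / qr))[-3:]  # smallest angular gap
        return qr <= radii[nearest3].mean()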
I think Bill Carey's method is on the right track, but I do want to suggest a possible optimization.
Since the shape is roughly spherical, you can pre-calculate two radii: that of the largest sphere bound by the shape and that of the smallest sphere that bounds it. If the point's distance from the centroid is within the inner sphere, it's a definite hit; if it's outside the outer sphere, it's a definite miss.
This ought to let you resolve the easy cases very quickly. For the harder ones, Carey's method takes over.
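A minimal sketch of this accept/reject test, approximating the two radii by the nearest and farthest surface samples from the centroid:

    import numpy as np

    def classify(cloud, p):
        # cloud: (N, 3) surface samples; p: the (3,) query point.
        centroid = cloud.mean(axis=0)
        radii = np.linalg.norm(cloud - centroid, axis=1)
        r_inner, r_outer = radii.min(), radii.max()
        d = np.linalg.norm(np.asarray(p, dtype=float) - centroid)
        if d <= r_inner:
            return "inside"      # definite hit: within the inner sphere
        if d >= r_outer:
            return "outside"     # definite miss: beyond the bounding sphere
        return "uncertain"       # fall back to the triangle test above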
Use a kd-tree.
http://en.wikipedia.org/wiki/Kd-tree
The article provides a good explanation.
I can clear up any further misunderstandings.
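The answer doesn't spell out how the kd-tree would be used here; one possibility, assuming scipy is available and the shape is star-shaped around its centroid, is a nearest-surface-sample test:

    import numpy as np
    from scipy.spatial import cKDTree

    def is_inside_kd(cloud, p):
        # Look up the nearest surface sample, then test whether p lies on
        # the centroid side of it.
        centroid = cloud.mean(axis=0)
        tree = cKDTree(cloud)
        _, idx = tree.query(p)                 # nearest surface sample
        outward = cloud[idx] - centroid        # approximate outward direction
        return np.dot(np.asarray(p) - cloud[idx], outward) <= 0.0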
