I'm doing pattern matching with OpenCV. I have a model and I compare targets against it with the function cvMatchShapes.
It works, but I want to know the orientation of the target. How can I do that?
Is a rotated bounding rectangle suited for the case where the contour orientation differs by 180 degrees, for example?
Another way to solve your problem is to calculate the contour moments (I assume you are using contours in cvMatchShapes; you can compute image moments in a similar way too; see OpenCV Contours Moments). Then calculate the principal-axis angle from the formula:
atan2((float)(-2)*Ixy, Ix - Iy) / 2
This angle describes the rotation. More theory on this topic: http://farside.ph.utexas.edu/teaching/336k/newton/node67.html
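For reference, a minimal sketch of that calculation, with NumPy standing in for the OpenCV moment calls (so the function name is mine; Ix, Iy and Ixy correspond to OpenCV's mu20, mu02 and mu11 central moments):

```python
import math
import numpy as np

# Estimate a contour's orientation from its central second-order moments,
# using the principal-axis formula quoted above. With image coordinates
# (y pointing down), the -2*Ixy sign yields the conventional y-up angle.
def orientation(points):
    p = np.asarray(points, dtype=float)
    cx, cy = p.mean(axis=0)
    dx, dy = p[:, 0] - cx, p[:, 1] - cy
    Ix = np.sum(dx * dx)    # mu20
    Iy = np.sum(dy * dy)    # mu02
    Ixy = np.sum(dx * dy)   # mu11
    return math.atan2(-2.0 * Ixy, Ix - Iy) / 2.0

# Usage: points along a line tilted 30 degrees up (y down, image coords)
t = np.linspace(-1.0, 1.0, 101)
pts = np.stack([t * math.cos(math.radians(30)),
                -t * math.sin(math.radians(30))], axis=1)
print(round(math.degrees(orientation(pts)), 1))  # → 30.0
```

With a real contour you would feed the contour's point array in instead of the synthetic line; cv2.moments gives you mu20, mu02 and mu11 directly if you prefer not to compute them yourself.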
I'm trying to get an array of weights that represents the influence a polygon's vertices have on an arbitrary position inside it, so that I can interpolate the vertices of a deformed version of the polygon and get the corresponding deformed position.
Mean Value and Harmonic warping:
It seems that Harmonic coordinates would do this? My mesh goal:
I don't have an easy time reading math papers. I found this Meshlab article, but I'm still not grasping how to process each sampled position relative to the polygon's vertices.
Thanks!
You could try to create a Delaunay triangulation of the polygon and then use barycentric coordinates within each triangle. This mapping is well defined and continuous, but in most cases it is probably not smooth (i.e. the derivative is not continuous).
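A sketch of the barycentric step (the Delaunay triangulation itself is assumed to come from elsewhere, e.g. scipy.spatial.Delaunay; the function name here is made up):

```python
import numpy as np

# Given one triangle (a, b, c) of the triangulation and a point p inside
# it, compute the three vertex weights. The weights sum to 1, and applying
# them to the deformed triangle's vertices gives the deformed position.
def barycentric_weights(p, a, b, c):
    a, b, c, p = (np.asarray(v, dtype=float) for v in (a, b, c, p))
    v0, v1, v2 = b - a, c - a, p - a
    d00, d01, d11 = v0 @ v0, v0 @ v1, v1 @ v1
    d20, d21 = v2 @ v0, v2 @ v1
    denom = d00 * d11 - d01 * d01
    w1 = (d11 * d20 - d01 * d21) / denom
    w2 = (d00 * d21 - d01 * d20) / denom
    w0 = 1.0 - w1 - w2
    return w0, w1, w2  # weights for a, b, c

# Usage: weights for a point in the unit triangle, then the corresponding
# position under a deformed (here: uniformly scaled) copy of the triangle
a, b, c = (0, 0), (1, 0), (0, 1)
w = barycentric_weights((0.25, 0.25), a, b, c)
deformed = [(0, 0), (2, 0), (0, 2)]
p_def = sum(wi * np.asarray(vi, float) for wi, vi in zip(w, deformed))
print(p_def)  # → [0.5 0.5]
```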
In Cartesian coordinates I have a rectangle with a known height h, width w and 4 corners (x, y). If I have some value r that is the fixed radius of circles, how do I calculate the center points of the smallest number of circles that will totally cover the rectangle?
I think you should refer to existing approaches and choose the one you think is most suitable for you.
I recommend starting from this list of solutions for a similar task: Circles Covering Squares
And, as you understand, because this optimization problem is more mathematical than programming, my second recommendation is to read the related posts on the mathematics forum.
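For comparison, here is the naive baseline rather than one of the optimal coverings from that list: put the centers on a square grid with spacing r*sqrt(2), so each circle exactly covers one grid square (the square's half-diagonal equals r). The function name is mine.

```python
import math

# Simple (non-optimal) covering: a square grid of circle centers with
# spacing s = r*sqrt(2). Any point in a grid square is within r of the
# square's center, so the whole rectangle is covered.
def grid_cover(w, h, r):
    s = r * math.sqrt(2.0)          # side of the square each circle covers
    nx = math.ceil(w / s)
    ny = math.ceil(h / s)
    return [((i + 0.5) * s, (j + 0.5) * s)
            for i in range(nx) for j in range(ny)]

print(len(grid_cover(10, 10, 1)))  # → 64
```

The optimal coverings on the linked page use hexagonal and other irregular layouts and need noticeably fewer circles; this grid is only a sanity-check upper bound.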
Before anything else, please look at this:
http://i.stack.imgur.com/pQDAC.jpg
I want to know how the AI can predict the right angle and power to make its shot hit the player. Both of them have static positions and do not move.
Thanks for any help...
You would need to take a look at horizontal projectiles for starters. The problem is that different power will require different angles of launch, so you would need to try out various power ranges or angle ranges.
EDIT: The image you have attached describes the path of a given projectile (bullet, bomb, any item you throw) across a horizontal plane (parallel to the ground). This particular type of problem usually requires a variation of the equations of linear motion, which is what you have there on the website.
Besides the equations of motion, the website I linked should give you some simple problems and how you can solve them to make sure that you are following.
As per your question, the targets are static, thus the distance component of the equation is known and will not change. The other components you will need to find are the angle of launch and the initial velocity of the round (denoted by the power you use).
One approach would be to take a range of angles, [1, 89] degrees inclusive, and see what initial velocity you would need to make the projectile travel the required distance.
If you will be dealing with situations identical to the image, that is, there will be no obstacles in the middle, you can also assume that the angle of launch will always be 45 degrees, since that will always give you the maximum range for a given initial velocity. If you take this approach, you will simply need to find the initial velocity required to make a projectile travel the distance at an angle of 45 degrees.
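A sketch of that last shortcut, assuming launcher and target are at the same height and there is no air resistance (the function name and the gravity constant are my choices): the range equation R = v² · sin(2θ) / g inverts to v = sqrt(g · d / sin(2θ)), which at θ = 45° is just v = sqrt(g · d).

```python
import math

G = 9.81  # m/s^2, standard gravity

# Initial speed needed to travel horizontal distance d when launched at
# angle_deg, from inverting the range equation R = v^2 * sin(2*theta) / g.
def speed_for_distance(d, angle_deg=45.0):
    theta = math.radians(angle_deg)
    return math.sqrt(d * G / math.sin(2.0 * theta))

# Usage: speed needed to hit a target 50 m away at 45 degrees
print(round(speed_for_distance(50.0), 2))  # → 22.15
```

For the angle-scan approach, you would call this for each angle in [1, 89] and keep whichever (angle, speed) pair fits your power limits; any angle other than 45° needs more speed for the same distance.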
Suppose we have an image (pixel buffer) that is black and white, so each pixel is either black or white (not grayscale).
Now somewhere in the middle of that image, place a green dot. It may have a radius of n for rendering purposes, but it is really just a point. Give the dot a randomly selected direction and speed, and start it moving. If the image is all white pixels, the dot will bounce off the edges of the image, infinitely wandering around the picture. This is quite easy... just reverse either the rise or the run of the dot's vector.
Next, suppose the image has some globs of black pixels. As the dot encounters these globs of black pixels, the angle of reflection needs to be calculated. This is also quite easy if the black pixels have a fixed slope, as in my sketch (the blue Xs represent black pixels). You can find the slope of the blue Xs and easily calculate the new vector.
But how about the case where the black pixels form really unfriendly surfaces? What are some approaches to figuring out this angle?
This is the subject that I am interested in.
There must be some algorithms that exist for this kind of purpose, but I never ran across any in school. I am not asking how to code this, rather approaches to writing the algorithm to do this. I have a few ideas that I'll try, but if there are some standard ways to do this that exist, I'd like to learn about them.
Obviously I'd like to start with Black and White then move into RGBA.
I am looking for any reference material on just this sort of subject. Websites, books, or other references are very very welcome.
Also, if there are different StackOverflow tags that could be good, let me know.
Thanks much!
Edit: More pics and information
Maybe I wasn't clear about what I meant by "unfriendly surfaces". In the previous picture, our blue Xs happened to just be a line. Picture a case where it is not a line, but rather a weird shape.
We start with our green pixel traveling at a slope of 2. Suppose its vector is 12 pixels per frame. It would have a projected path like this:
But suppose instead of a nice friendly line, we have this:
In my mind I can kind of see what is likely to happen if this were a ball and some walls.
Look for edge detection algorithms used in image processing. Some edge detectors also approximate the direction of edges.
You can think of the pixel neighborhood of the green dot, maybe somewhere between 3x3 and 7x7, as a small edge direction detection problem. One approach would take two passes at the pixels:
In the first pass, smooth the sharp black/white pixels using a Gaussian filter.
In the second pass, apply an edge detection operator, such as Sobel, Prewitt or Roberts, to produce the X and Y derivatives of the pixels' intensity. You can then approximate the edge direction as:
angle = atan2(dx, dy)
(the atan2 form of arctan(dx/dy), which also handles dy = 0).
The motivation for the smoothing pass is to give the edge detection operator information from farther-away pixels.
The Wikipedia page on the Canny edge detector has a good discussion on obtaining the direction (the "gradient") of an edge, including an example of a particular Gaussian filter you can use for smoothing.
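A pure-NumPy sketch of the two passes on a 5x5 neighborhood (in practice cv2.GaussianBlur and cv2.Sobel do the heavy lifting; the helper names here are made up):

```python
import math
import numpy as np

# 3x3 Gaussian smoothing kernel and the Sobel derivative operators
GAUSS = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]], dtype=float) / 16.0
SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
SOBEL_Y = SOBEL_X.T

def convolve_at(img, kernel, y, x):
    # Correlate the 3x3 kernel with the patch centered at (y, x)
    return float(np.sum(img[y - 1:y + 2, x - 1:x + 2] * kernel))

def edge_angle(neigh):
    """Edge direction (radians) at the center of a 5x5 neighborhood.

    Pass 1 smooths the inner 3x3 region; pass 2 takes Sobel derivatives
    at the center. atan2(dx, dy) is the edge's own direction; the
    gradient, perpendicular to it, would be atan2(dy, dx).
    """
    smoothed = np.array([[convolve_at(neigh, GAUSS, y, x)
                          for x in range(1, 4)] for y in range(1, 4)])
    dx = convolve_at(smoothed, SOBEL_X, 1, 1)
    dy = convolve_at(smoothed, SOBEL_Y, 1, 1)
    return math.atan2(dx, dy)

# Usage: a vertical black/white boundary through the neighborhood
neigh = np.zeros((5, 5))
neigh[:, 3:] = 1.0  # white on the right half
print(round(math.degrees(edge_angle(neigh)), 1))  # → 90.0 (vertical edge)
```

For the bouncing dot, the surface normal you reflect the velocity about is the gradient direction, i.e. the angle above plus 90 degrees.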
I am doing something similar with a ball and randomly generated backgrounds.
The filter and edge detection are highly technical, but all the other processes using a 5x5 or 3x3 grid seem similarly difficult.
However, I think I may have a cheap way around this. Assuming a ball travelling in any direction, scan all leading edges of the ball: a semicircle. The further toward the edge of the ball the collision occurs, the closer to vertical the collision is. Again, I think, this should allow you to easily infer the background normal, and from there the answer is fairly simple.
I would like to calculate the distance to certain objects in the scene. I know that I can only calculate relative distance when using a single camera, but I know the coordinates of some objects in the scene, so in theory it should be possible to calculate the actual distance. According to the OpenCV mailing list archives,
http://tech.groups.yahoo.com/group/OpenCV/message/73541
cvFindExtrinsicCameraParams2 is the function to use, but I can't find information on how to use it.
PS. Assuming camera is properly calibrated.
My guess would be: you know the size of an object, such as a ball that is 6 inches across and 6 inches tall; you can also see that it is 20 pixels tall and 25 pixels wide. You also know the ball is 10 feet away. This would be your starting point.
Extrinsic parameters wouldn't help you, I don't think, because those describe the camera's location and rotation in space relative to another camera or an origin. For a one-camera system, the camera is the origin.
Intrinsic parameters might help. I'm not sure, I've only done it using two cameras.
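A sketch of the similar-triangles (pinhole) relation behind that guess: real_size / distance = pixel_size / focal_length_in_pixels. The focal length normally comes from the intrinsic calibration; here it is back-solved from the answer's example ball, which is an assumption, and the function name is mine.

```python
# Pinhole camera relation: an object of real_size at a given distance
# projects to pixel_size = focal_px * real_size / distance, so
# distance = focal_px * real_size / pixel_size.
def distance_from_size(real_size, pixel_size, focal_px):
    return focal_px * real_size / pixel_size

# Back-solve the focal length from the known observation above:
# 6 inches tall, 20 pixels tall, 10 feet (120 inches) away.
focal_px = 120.0 * 20.0 / 6.0  # = 400 pixels

# Usage: the same ball later appears only 10 pixels tall
print(distance_from_size(6.0, 10.0, focal_px))  # → 240.0 (inches, 20 feet)
```

This only gives the distance along the optical axis for objects of known size; for known 3D coordinates of several scene points, the modern OpenCV route is the calib3d pose functions.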