I have a robot that uses an optical mouse as a position tracker. As the robot moves, it tracks the change in the X and Y directions using the mouse. The mouse also tracks which direction you are moving, i.e. negative X or positive X. These values are summed into separate X and Y registers.
Now, the robot rotates in place and only moves forward, so its movement is ideally in straight lines (although the mouse tracking can pick up deviations if it veers off) at particular angles. A particular set of movements of the robot would be something like:
A: Rotate 45 degrees, move 3 inches
B: Rotate 90 degrees, move 10 inches
C: Rotate -110 degrees, move 5 inches
D: Rotate 10 degrees, move 1 inch
But each time, the mouse X and mouse Y registers record the real distances you moved in each direction.
Now, if I want to repeat the movement set going from A to D only, how can I do this using the information I have already gathered? I know I could simply sum all the angles and distances I fed in, but that would be inaccurate if there were large errors in the individual movements. How can I use the raw information from my mouse? A friend suggested that I could continuously take the sine and cosine of the mouse values and calculate the final vector, but I'm not really sure how this would work.
The problem is that the mouse only gives relative readings, so by rotating or moving backwards you are potentially erasing information. So, what I am wondering is how to implement the algorithm so it continually tracks changes and gives you the shortest path back, even if you zigzagged to get there originally.
I think the basic algorithm you need is this:
double currentX = 0, currentY = 0;
double heading = 0;                    // radians
while (true)
{
    var deltas = SampleMouseDeltas();  // incremental readings since the last sample
    heading += deltas.Heading;         // accumulate the rotation first
    currentX += Math.Cos(heading) * deltas.Distance;
    currentY += Math.Sin(heading) * deltas.Distance;
}
You are right in your idea that this won't be precise. It is called "dead reckoning" for a reason.
You can get deltas.Heading from the "X" reading: the heading change in radians is (delta X in inches) / (distance in inches from the mouse sensor to the robot's center of rotation). deltas.Distance comes from the "Y" reading, after you convert it from mouse counts to inches.
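As a rough sketch of that conversion (the ToDeltas helper, countsPerInch, and sensorOffsetInches names and values below are placeholders you would measure for your own mouse and mounting, not part of any real driver API):

struct MouseDeltas { public double Heading; public double Distance; }

// Converts one raw mouse sample (in sensor counts) into heading/distance deltas.
static MouseDeltas ToDeltas(int rawX, int rawY,
                            double countsPerInch, double sensorOffsetInches)
{
    double dxInches = rawX / countsPerInch;   // sideways slip seen by the sensor
    double dyInches = rawY / countsPerInch;   // forward travel

    return new MouseDeltas
    {
        // Arc length divided by the sensor's distance from the center of
        // rotation approximates the heading change in radians.
        Heading  = dxInches / sensorOffsetInches,
        Distance = dyInches
    };
}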
Then to perform the steps, you could do something like:
robot.RotateLeft();
heading = 0;
while (heading < Math.PI / 4)          // 45 degrees
    heading += SampleMouseDeltas().Heading;
robot.StopRotateLeft();
... etc ...
Not an answer to your question, but perhaps a cautionary tale...
I did exactly this kind of robot as a school project a year back. It was an utter failure, though I learnt quite a bit while doing it.
As for using the mouse for tracking how far you have driven: it did not work well for us at all, or for any of the other groups, probably because the camera in the mouse was out of focus, since we needed to hold the mouse a few mm above the floor. The following year, no group doing the same project used this method. They instead put markings on the wheels and used a simple IR sensor to count how many revolutions the wheels made.
I know I'm somewhat necroing this thread, but if you wanted more accurate angle tracking, two optical mice would be ideal. Basically, if you cancel out the motion that is common to both, you are left with the motion the mice made relative to each other. From there, it is just some simple math to accurately determine how far the 'bot has turned.
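A minimal sketch of that math, assuming the two mice are mounted facing the same way a known distance apart; the function and parameter names are made up for illustration:

// Estimate rotation from two mice mounted a measured distance apart.
static double TurnAngleRadians(double mouse1DxInches, double mouse2DxInches,
                               double mouseSeparationInches)
{
    // Motion common to both mice is translation; cancel it out.
    // What remains is the relative motion caused purely by rotation.
    double relativeSlip = mouse1DxInches - mouse2DxInches;

    // Arc length between the two sensors divided by their separation
    // gives the angle the robot has turned, in radians.
    return relativeSlip / mouseSeparationInches;
}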
I am developing some shaders for WPF. So far I have managed to get a fade-out and a swipe shader working, but for this one I have no idea where to start.
Could someone please give me a few tips on how to approach this problem?
What I am trying to achieve is the following:
Thank you
In my opinion the easiest way to build any complicated effect is to decompose the original effect into small parts and add them together. In your case the effect consists of 3 parts:
5 rings filled after each other
each ring is filled counterclockwise from the left
the filled part of a ring ends in a circle (a rounded cap) at each end
With this in mind, you can build a solution for each part separately and add the results together.
I assume that there will be a variable float progress; running from 0 to 1 which determines the progress of the transition.
In the following, some starting points for each part:
For 1. you check the distance from the texture coordinate of the fragment to the center of the screen and divide the maximum distance into 5 parts. While 0 <= progress < 0.2 the first ring is visible, while 0.2 <= progress < 0.4 the second, and so on.
For 2. you check the angle between the vector from the center to the fragment and the left vector, e.g. using atan2. Within each part (such as 0.0-0.2) you compare the local progress of that stage against the angle to determine visibility, making the fragments appear in an angle-dependent way.
Part 3 might be the trickiest, as you will have to construct the centers of the ends of the progress ring and compute the distance from them to the fragment. If it is within the current ring thickness, it is visible.
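A rough CPU reference combining the three parts, which you would port into the HLSL pixel shader; the ring count, center point, and maximum radius below are my own assumptions:

using System;

static class RingWipe
{
    const int    RingCount = 5;
    const double MaxRadius = 0.7071;                // center-to-corner distance, assuming square uv space
    const double RingWidth = MaxRadius / RingCount;

    // Per-pixel visibility test; the same math would go into the HLSL shader.
    // (u, v) is the texture coordinate in [0,1]x[0,1]; progress runs from 0 to 1.
    static bool IsVisible(double u, double v, double progress)
    {
        double dx = u - 0.5, dy = v - 0.5;
        double dist = Math.Sqrt(dx * dx + dy * dy);

        // Part 1: which of the 5 rings does this pixel fall into?
        int ring = Math.Min((int)(dist / RingWidth), RingCount - 1);

        // Local progress of that ring: ring k fills while progress is in [k/5, (k+1)/5].
        double local = progress * RingCount - ring;
        if (local < 0) local = 0;
        if (local > 1) local = 1;

        // Part 2: counterclockwise sweep angle measured from the "left" vector.
        double sweep = Math.Atan2(dy, dx) - Math.PI;
        if (sweep < 0) sweep += 2 * Math.PI;
        if (sweep <= local * 2 * Math.PI) return true;

        // Part 3: rounded cap at the leading end of the filled arc.
        double midRadius = (ring + 0.5) * RingWidth;
        double capAngle  = Math.PI + local * 2 * Math.PI;
        double capX = 0.5 + midRadius * Math.Cos(capAngle);
        double capY = 0.5 + midRadius * Math.Sin(capAngle);
        double cdx = u - capX, cdy = v - capY;
        return Math.Sqrt(cdx * cdx + cdy * cdy) <= RingWidth / 2;
    }
}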
Hopefully these quick thoughts give you a rough starting point for your effect!
Imagine a 3D rectangle at the origin. It is first rotated about the Y axis. So far so good. Now it is rotated around the X axis. However, OpenGL (API: glRotatef) interprets the X axis to be the global X axis. How can I ensure that the axes "move with the object"?
This is very much like an airplane. For example, if yaw (Y rotation) is applied first, and then pitch (X rotation), a correct pitch would be an X rotation about the plane's local axes.
EDIT: I have seen this called the gimbal lock problem, but I don't think that's what it is.
You cannot consistently describe an aeroplane's orientation as one x rotation and one y rotation, not even if you also store a z rotation. That's exactly the gimbal lock problem.
The crux of it is that you have to apply the rotations in some order. Say it's x then y then z for the sake of argument. Then what happens if the x rotation is by 90 degrees? That folds the y axis onto where the z axis was. Then say the y rotation is also by 90 degrees. That's now bent the z axis onto where the x axis was. So now what effect does any z rotation have?
That's just an easy to grasp example. It's not a special case. You can't wave your hands out of it by saying "oh, I'll detect when to do z rotations first" or "I'll do 90 degree rotations with a special pathway" or any other little hack. Trying to store and update orientations as three independent scalars doesn't work.
In classic OpenGL, a call to glRotatef means "... and then rotate the current matrix like this". It's not relative to world coordinates or to model coordinates or to any other space that you're thinking in.
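One common way to make the axes "move with the object" is to accumulate the orientation as a matrix (or quaternion) instead of storing three angles, applying each incremental rotation by multiplying it onto the current matrix. A minimal sketch with plain 3x3 matrices (the helper names are made up); you would expand the result to 4x4 and hand it to OpenGL with glMultMatrixf or glLoadMatrixf rather than re-issuing glRotatef calls from stored angles:

using System;

static class LocalRotation
{
    // Keep the object's orientation as an accumulated matrix instead of three angles.
    // Post-multiplying by each new rotation applies it about the object's local axes;
    // pre-multiplying would apply it about the world axes instead.
    static double[,] Multiply(double[,] a, double[,] b)
    {
        var r = new double[3, 3];
        for (int i = 0; i < 3; i++)
            for (int j = 0; j < 3; j++)
                for (int k = 0; k < 3; k++)
                    r[i, j] += a[i, k] * b[k, j];
        return r;
    }

    static double[,] RotX(double a) => new double[,]
    {
        { 1, 0, 0 },
        { 0, Math.Cos(a), -Math.Sin(a) },
        { 0, Math.Sin(a),  Math.Cos(a) }
    };

    static double[,] RotY(double a) => new double[,]
    {
        {  Math.Cos(a), 0, Math.Sin(a) },
        {  0, 1, 0 },
        { -Math.Sin(a), 0, Math.Cos(a) }
    };

    static void Main()
    {
        double[,] orientation = { { 1, 0, 0 }, { 0, 1, 0 }, { 0, 0, 1 } };

        // Yaw 45 degrees, then pitch 30 degrees about the already-yawed local X axis.
        orientation = Multiply(orientation, RotY(Math.PI / 4));
        orientation = Multiply(orientation, RotX(Math.PI / 6));

        // Expand this to 4x4 and upload it with glMultMatrixf / glLoadMatrixf instead
        // of re-issuing glRotatef calls from stored Euler angles each frame.
        Console.WriteLine(orientation[0, 0]);   // cos(45 deg), just to show it ran
    }
}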
Before anything else, please look at this:
http://i.stack.imgur.com/pQDAC.jpg
I want to know how I can predict the right angle and power for the AI so that its shot hits the player. Both of them have static positions and do not move.
Thanks for any help...
You would need to take a look at horizontal projectiles for starters. The problem is that different power will require different angles of launch, so you would need to try out various power ranges or angle ranges.
EDIT: The image you have attached describes the path of a projectile (a bullet, bomb, or any thrown item) launched over level ground. These particular types of problems usually require a variation of the equations of motion, which is what you have there on the website.
Besides the equations of motion, the website I linked should give you some simple problems and how you can solve them to make sure that you are following.
As per your question, the targets will be static, so the distance component of the equation is known and will not change. The other components you will need to find are the angle of launch and the initial velocity of the round (determined by the power you use).
One approach would be to take a range of angles, [1, 89] degrees inclusive, and see what initial velocity you would need to make the projectile travel that distance.
If you will be dealing with situations identical to the image, that is, with no obstacles in the middle, you can also assume the angle of launch is always 45 degrees, since that always gives the maximum range for a given initial velocity. If you take this approach, you simply need to find the initial velocity required to make the projectile travel the distance at an angle of 45 degrees.
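A small sketch of that approach, assuming level ground and no air resistance; it uses the standard range formula range = v^2 * sin(2 * angle) / g, solved for the launch speed v:

using System;

static class Ballistics
{
    // Initial speed needed to cover a horizontal distance over level ground,
    // ignoring air resistance: range = v^2 * sin(2 * angle) / g, solved for v.
    static double LaunchSpeed(double distanceMeters, double angleDegrees, double g = 9.81)
    {
        double angle = angleDegrees * Math.PI / 180.0;
        return Math.Sqrt(distanceMeters * g / Math.Sin(2.0 * angle));
    }

    static void Main()
    {
        // At 45 degrees the range is maximal for a given speed, so the required speed is minimal.
        Console.WriteLine(LaunchSpeed(50.0, 45.0));   // speed to hit a target 50 m away
        Console.WriteLine(LaunchSpeed(50.0, 30.0));   // same target with a flatter shot
    }
}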
Suppose we have an image (pixel buffer) that is in black and white, so each pixel is either black or white (not gray scale).
Now, somewhere in the middle of that image, place a green dot. It may have a radius of n for rendering purposes, but it is really just a point. Give the dot a randomly selected direction and speed, and start it moving. If the image is all white pixels, the dot will bounce off the edges of the image, wandering around the picture indefinitely. This is quite easy... just reverse either the rise or the run of the dot's vector.
Next, suppose the image has some globs of black pixels. As the dot encounters these globs, the angle of reflection needs to be calculated. This is also quite easy if the black pixels have a fixed slope, as in my sketch (the blue Xs represent black pixels). You can find the slope of the blue Xs and easily calculate the new vector.
But how about the case where the black pixels form really unfriendly surfaces? What are some approaches to figuring out this angle?
This is the subject that I am interested in.
There must be some algorithms that exist for this kind of purpose, but I never ran across any in school. I am not asking how to code this, rather approaches to writing the algorithm to do this. I have a few ideas that I'll try, but if there are some standard ways to do this that exist, I'd like to learn about them.
Obviously I'd like to start with Black and White then move into RGBA.
I am looking for any reference material on just this sort of subject. Websites, books, or other references are very very welcome.
Also, if there are different StackOverflow tags that could be good, let me know.
Thanks much!
Edit: More pics and information
Maybe I wasn't clear about what I meant by "unfriendly surfaces". In the previous picture, our blue Xs happened to form a straight line. Picture a case where it is not a line, but rather some weird shape.
We start with our green pixel traveling at a slope of 2. Suppose its vector is 12 pixels per frame. It would have a projected path like this:
But suppose instead of a nice friendly line, we have this:
In my mind I can kind of see what is likely to happen if this were a ball and some walls.
Look for edge detection algorithms used in image processing. Some edge detectors also approximate the direction of edges.
You can think of the pixel neighborhood of the green dot, maybe somewhere between 3x3 and 7x7, as a small edge direction detection problem. One approach would take two passes at the pixels:
In the first pass, smooth the sharp black/white pixels using a Gaussian filter.
In the second pass, apply an edge detection operator such as Sobel, Prewitt, or Roberts to produce the X and Y derivatives (dx, dy) of the pixel intensities. You can then approximate the gradient direction as:
angle = atan2(dy, dx)
The motivation for the smoothing pass is to give the edge detection operator information from farther-away pixels.
The Wikipedia page on the Canny edge detector has a good discussion on obtaining the direction (the "gradient") of an edge, including an example of a particular Gaussian filter you can use for smoothing.
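A small CPU sketch of those two passes over a pixel neighborhood, using the standard 3x3 Gaussian and Sobel kernels; the 7x7 patch in Main is made-up data for illustration:

using System;

static class EdgeDirection
{
    // Standard 3x3 Gaussian (divided by 16 when applied) and Sobel kernels.
    static readonly double[,] Gauss  = { { 1, 2, 1 }, { 2, 4, 2 }, { 1, 2, 1 } };
    static readonly double[,] SobelX = { { -1, 0, 1 }, { -2, 0, 2 }, { -1, 0, 1 } };
    static readonly double[,] SobelY = { { -1, -2, -1 }, { 0, 0, 0 }, { 1, 2, 1 } };

    static double Convolve(double[,] img, int cx, int cy, double[,] k, double scale = 1.0)
    {
        double sum = 0;
        for (int dy = -1; dy <= 1; dy++)
            for (int dx = -1; dx <= 1; dx++)
                sum += img[cy + dy, cx + dx] * k[dy + 1, dx + 1];
        return sum * scale;
    }

    // Estimates the gradient direction (radians) at (cx, cy) in a small 0/1 pixel patch;
    // the edge itself runs perpendicular to this direction.
    static double EdgeAngle(double[,] img, int cx, int cy)
    {
        // Pass 1: smooth, so the derivatives see more than the immediate neighbors.
        var smooth = (double[,])img.Clone();
        for (int y = 1; y < img.GetLength(0) - 1; y++)
            for (int x = 1; x < img.GetLength(1) - 1; x++)
                smooth[y, x] = Convolve(img, x, y, Gauss, 1.0 / 16.0);

        // Pass 2: Sobel derivatives, then the direction from atan2.
        double gx = Convolve(smooth, cx, cy, SobelX);
        double gy = Convolve(smooth, cx, cy, SobelY);
        return Math.Atan2(gy, gx);
    }

    static void Main()
    {
        // 7x7 patch around the dot: a diagonal wall of black (1) pixels in a white (0) field.
        double[,] patch =
        {
            { 0, 0, 0, 0, 0, 0, 1 },
            { 0, 0, 0, 0, 0, 1, 1 },
            { 0, 0, 0, 0, 1, 1, 1 },
            { 0, 0, 0, 1, 1, 1, 1 },
            { 0, 0, 1, 1, 1, 1, 1 },
            { 0, 1, 1, 1, 1, 1, 1 },
            { 1, 1, 1, 1, 1, 1, 1 }
        };
        Console.WriteLine(EdgeAngle(patch, 3, 3) * 180.0 / Math.PI);   // roughly 45 degrees
    }
}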
I am doing something similar with a ball and randomly generated backgrounds.
The filtering and edge detection are highly technical, but all the other approaches using a 5x5 or 3x3 grid seem similarly difficult.
However, I think I may have a cheap way around this. Assuming a ball travelling in any direction, scan all the leading edges of the ball, i.e. a semicircle. The further toward the edge of the ball the collision occurs, the closer to vertical the collision is. Again, I think this should allow you to easily infer the background normal, and from there the answer is fairly simple.
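One hedged way to read that idea as code (the sample count and the averaging of contact directions are my own assumptions, not something from the post above): sample the semicircle of the ball's rim that faces its velocity, and average the directions from any black pixels hit back toward the ball center to estimate the surface normal.

using System;

static class LeadingEdge
{
    // Sample the semicircle of the ball's rim facing its velocity and average the
    // directions from any black pixels hit back toward the ball center as a
    // rough estimate of the surface normal at the point of contact.
    static (double nx, double ny)? EstimateNormal(
        bool[,] black, double ballX, double ballY, double radius,
        double velX, double velY, int samples = 32)
    {
        double heading = Math.Atan2(velY, velX);
        double sumX = 0, sumY = 0;
        int hits = 0;

        for (int i = 0; i < samples; i++)
        {
            // Angles spanning the semicircle that faces the direction of travel.
            double a = heading - Math.PI / 2 + Math.PI * i / (samples - 1);
            int px = (int)Math.Round(ballX + radius * Math.Cos(a));
            int py = (int)Math.Round(ballY + radius * Math.Sin(a));

            if (py < 0 || py >= black.GetLength(0) || px < 0 || px >= black.GetLength(1))
                continue;

            if (black[py, px])
            {
                sumX += ballX - px;   // vector from the contact point back to the center
                sumY += ballY - py;
                hits++;
            }
        }

        if (hits == 0) return null;                        // nothing hit this frame
        double len = Math.Sqrt(sumX * sumX + sumY * sumY);
        if (len == 0) return null;
        return (sumX / len, sumY / len);                   // unit normal estimate
    }
}

With the unit normal n in hand, the bounce itself is the usual reflection v' = v - 2(v·n)n.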
I would like to calculate the distance to certain objects in the scene. I know that I can only calculate relative distance when using a single camera, but I know the coordinates of some objects in the scene, so in theory it should be possible to calculate the actual distance. According to the OpenCV mailing list archives,
http://tech.groups.yahoo.com/group/OpenCV/message/73541
cvFindExtrinsicCameraParams2 is the function to use, but I can't find information on how to use it.
P.S. Assume the camera is properly calibrated.
My guess would be: you know the width of an object, for example a ball that is 6 inches across and 6 inches tall, and you can see that it is 20 pixels tall and 25 pixels wide. You also know the ball is 10 feet away. This would be your start.
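If all you need is the distance to an object of known size, the pinhole-camera similar-triangles relation is enough, which is essentially the reasoning above: distance = focal length (in pixels) * real width / width in pixels. A minimal sketch, where the focal length would come from your calibration (the 800 below is just an assumed value); for a full 3D pose you would use solvePnP, the modern replacement for cvFindExtrinsicCameraParams2:

using System;

static class DistanceFromSize
{
    // Pinhole-camera similar triangles: distance = focalLengthPixels * realWidth / pixelWidth.
    static double DistanceTo(double realWidth, double pixelWidth, double focalLengthPixels)
        => focalLengthPixels * realWidth / pixelWidth;

    static void Main()
    {
        double fx = 800.0;                            // assumed focal length in pixels
        double feet = DistanceTo(0.5, 25.0, fx);      // 6 in = 0.5 ft ball, seen 25 px wide
        Console.WriteLine(feet);                      // estimated distance to the ball, in feet
    }
}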
Extrinsic parameters wouldn't help you, I don't think, because that is the camera's location and rotation in space relative to another camera or an origin. For a one camera system, the camera is the origin.
Intrinsic parameters might help. I'm not sure, I've only done it using two cameras.