With reference to this programming game I am currently building.
(Image: http://img12.imageshack.us/img12/2089/shapetransformationf.jpg)
To translate a Canvas in WPF, I am using two transforms: a TranslateTransform (to move it) and a RotateTransform (to rotate it), both children of the same TransformGroup.
I can easily get the top-left x,y coordinates of a canvas when it's not rotated (or rotated at 90deg, since it will be the same), but the problem I am facing is getting the coordinates of the top-left corner (and the other 3 corners) once it has been rotated.
This is because when a RotateTransform is applied, the TranslateTransform's X and Y properties are not changed, and thus still indicate that the top-left of the square is where the dotted square is (in the image).
The Canvas is being rotated about its center, so that is its origin.
So how can I get the "new" x and y coordinates of the 4 points after a rotation?
[UPDATE]
(Image: http://img25.imageshack.us/img25/8676/shaperotationaltransfor.jpg)
I have found a way to find the top-left coordinates after a rotation (as you can see from the new image) by adding the OffsetX and OffsetY from the rotation to the starting X and Y coordinates.
But I'm now having trouble figuring out the rest of the coordinates (the other 3).
With this rotated shape, how can I figure out the x and y coordinates of the remaining 3 corners?
[EDIT]
The points in the 2nd image are NOT accurate, exact points; I made them up from estimates in my head.
[UPDATE] Solution:
First of all, I would like to thank Jason S for that lengthy and very informative post in which he describes the mathematics behind the whole process; I certainly learned a lot by reading it and trying out the values.
But I have now found a code snippet (thanks to EugeneZ's mention of TransformBounds) that does exactly what I want:
public Rect GetBounds(FrameworkElement of, FrameworkElement from)
{
// Might throw an exception if of and from are not in the same visual tree
GeneralTransform transform = of.TransformToVisual(from);
return transform.TransformBounds(new Rect(0, 0, of.ActualWidth, of.ActualHeight));
}
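For example, to get the rotated bot canvas's axis-aligned bounds relative to the arena (botCanvas and arenaCanvas are illustrative names):
Rect bounds = GetBounds(botCanvas, arenaCanvas);
Point topLeft = bounds.TopLeft;   // top-left of the rotated shape's bounding box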
Reference: http://social.msdn.microsoft.com/Forums/en-US/wpf/thread/86350f19-6457-470e-bde9-66e8970f7059/
If I understand your question right:
given:
shape has corner (x1,y1), center (xc,yc)
rotated shape has corner (x1',y1') after being rotated about center
desired:
how to map any point of the shape (x,y) -> (x',y') by that same rotation
Here are the relevant equations:
(x'-xc) = Kc*(x-xc) - Ks*(y-yc)
(y'-yc) = Ks*(x-xc) + Kc*(y-yc)
where Kc=cos(theta) and Ks=sin(theta) and theta is the angle of counterclockwise rotation. (to verify: if theta=0 this leaves the coordinates unchanged, otherwise if xc=yc=0, it maps (1,0) to (cos(theta),sin(theta)) and (0,1) to (-sin(theta), cos(theta)) . Caveat: this is for coordinate systems where (x,y)=(1,1) is in the upper right quadrant. For yours where it's in the lower right quadrant, theta would be the angle of clockwise rotation rather than counterclockwise rotation.)
If you know the coordinates of your rectangle aligned with the x-y axes, xc would just be the average of the two x-coordinates and yc would just be the average of the two y-coordinates. (in your situation, it's xc=75,yc=85.)
If you know theta, you now have enough information to calculate the new coordinates.
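As a minimal C# sketch of those two equations (theta is the rotation angle in radians, counterclockwise in a y-up system, clockwise in a y-down system such as WPF's; Point is System.Windows.Point):
// Rotate the point (x, y) about the centre (xc, yc) by theta radians.
static Point RotateAbout(double x, double y, double xc, double yc, double theta)
{
    double kc = Math.Cos(theta), ks = Math.Sin(theta);
    return new Point(xc + kc * (x - xc) - ks * (y - yc),
                     yc + ks * (x - xc) + kc * (y - yc));
}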
If you don't know theta, you can solve for Kc and Ks. Here are the relevant calculations for your example:
(62-75) = Kc*(50-75) - Ks*(50-85)
(40-85) = Ks*(50-75) + Kc*(50-85)
-13 = -25*Kc + 35*Ks
-45 = -35*Kc - 25*Ks
which is a system of linear equations that can be solved (exercise for the reader). In MATLAB it's:
[-25 35;-35 -25]\[-13;-45]
to yield, in this case, Kc=1.027, Ks=0.3622, which does NOT make sense (K^2 = Kc^2 + Ks^2 is supposed to equal 1 for a pure rotation; in this case it's K = 1.089), so it's not a pure rotation about the rectangle center, which is what your drawing indicates. Nor does it seem to be a pure rotation about the origin. To check, compare distances from the center of rotation before and after the rotation using the Pythagorean theorem, d^2 = deltax^2 + deltay^2. (For rotation about xc=75, yc=85, the distance before is 43.01 and after is 46.84, a ratio of K=1.089; for rotation about the origin, the distance before is 70.71 and after is 73.78, a ratio of 1.043. I could believe ratios of 1.01 or less would arise from coordinate rounding to integers, but this is clearly larger than a roundoff error.)
So there's some missing information here. How did you get the numbers (62,40)?
That's the basic gist of the math behind rotations, however.
edit: aha, I didn't realize they were estimates. (pretty close to being realistic, though!)
I use this method:
Point newPoint = rotateTransform.Transform(new Point(oldX, oldY));
where rotateTransform is the instance on which I work and set the Angle, etc.
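For example, to get all four rotated corners this way (canvas is the element being rotated; this assumes the RotateTransform's centre is set to the middle of the canvas, as in the question, and the results are still in the canvas's local space, so the TranslateTransform offsets must be added afterwards):
Point[] corners =
{
    rotateTransform.Transform(new Point(0, 0)),
    rotateTransform.Transform(new Point(canvas.ActualWidth, 0)),
    rotateTransform.Transform(new Point(canvas.ActualWidth, canvas.ActualHeight)),
    rotateTransform.Transform(new Point(0, canvas.ActualHeight))
};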
Look at GeneralTransform.TransformBounds() method.
I'm not sure, but is this what you're looking for - rotation of a point in a Cartesian coordinate system:
link
You can use Transform.Transform() method on your Point with the same transformations to get a new point to which these transformations were applied.
So the idea is quite simple: given the sun's position (azimuth and elevation) I want my app to be able to display a shape using augmented reality when the camera is pointing at the sun.
So there are a few steps:
Convert azimuth and elevation into radians, then into cartesian coordinates to get a simple vector {x, y, z}.
Get the phone's gyroscope data to get its orientation in space as a 3D vector {x, y, z}.
Calculate new coordinates for the sun regarding the phone orientation.
Display a random shape using Three.js at these coordinates.
1 and 2 are quite easy. There are a lot of APIs out there giving the sun's position for a given location. Then I used a formula to convert the sun's spherical coordinates into Cartesian ones:
x = R * cos(ϕ) * sin(θ)
y = R * cos(ϕ) * cos(θ)
z = R * sin(ϕ)
where R is the distance of the point from the origin, θ is the azimuth, and ϕ is the elevation.
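For example, with R = 1, an azimuth of θ = 90° and an elevation of ϕ = 0°, these formulas give x = cos(0)·sin(90°) = 1, y = cos(0)·cos(90°) = 0 and z = sin(0) = 0, i.e. a unit vector along +x; with θ = 0° you would instead get (0, 1, 0), so +y is the azimuth-zero direction and +z is straight up in this convention.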
I got the device's orientation in space with Expo.io, using their Device Motion API. Documentation here
I'm really struggling with the third step. I don't know how to combine sun and device coordinates in space, and project the whole thing through Three.js perspective camera.
I found this post the other day: Compare device 3D orientation with the sun position but I've found the explanations a bit confusing.
Let's say I want to display a cube with Three:
const geometry = new THREE.BoxGeometry(0.07, 0.07, 0.07);
const material = new THREE.MeshBasicMaterial({ color: 0x00ff00 });
const cube = new THREE.Mesh(geometry, material);
cube.position.x = 0;
cube.position.y = 0;
cube.position.z = -1;
The final goal here will be to find the correct {x, y, z} so the cube can be displayed at the sun's location. This vector will of course be updated every time the user moves his phone in space.
For the sake of illumination analysis, based on this document, I am trying to determine three things for an array of lights and a series of points on a solid surface:
(Image key: big blue points are lights with illumination direction shown, small points are the points on my surface)
1) The distances between each of the lights and each of the points,
2) the angles between the direction each light is facing and the normal vectors of all of the points:
Note in this image I have replicated the normal vector and moved it to more clearly show the angle.
3) the angles between the direction each light is facing, and the vector from that light to all of the points on the solid:
Originally I had nested for loops iterating through all of the lights and points on the solid, but am now doing my best to do it in true MATLAB style with matrices:
I have found the distances between all the points with the pdist2 function, but have not managed to find a similar method to find the angles between the lights and all the points, nor the lights and the normal vectors of the points. I would prefer to do this with matrix methods rather than with iteration as I have been using.
My data is set out so that each row of Lmat holds the x,y,z position of one of my lights and Dmat gives the x,y,z direction that light is facing; thus the combination of corresponding rows from both matrices fully defines a light and the direction it is facing. Similarly, Omega and nmat do the same for the points on the surface.
I am fairly sure that to get angles I want to do something along the lines of:
distMatrix = pdist2(Omega, Lmat);
LmatNew = zeros(numPoints, numLights, 3);
DmatNew = zeros(numPoints, numLights, 3);
OmegaNew = zeros(numPoints, numLights, 3);
nmatNew = zeros(numPoints, numLights, 3);
for i = 1:numLights
LmatNew(:,i,1) = Lmat(i,1);
LmatNew(:,i,2) = Lmat(i,2);
LmatNew(:,i,3) = Lmat(i,3);
DmatNew(:,i,1) = Dmat(i,1);
DmatNew(:,i,2) = Dmat(i,2);
DmatNew(:,i,3) = Dmat(i,3);
end
for j = 1:numPoints
OmegaNew(j,:,1) = Omega(j,1);
OmegaNew(j,:,2) = Omega(j,2);
OmegaNew(j,:,3) = Omega(j,3);
nmatNew(j,:,1) = nmat(j,1);
nmatNew(j,:,2) = nmat(j,2);
nmatNew(j,:,3) = nmat(j,3);
end
angleMatrix = -dot(LmatNew-OmegaNew, DmatNew, 3);
angleMatrix = atand(angleMatrix);
angleMatrix = angleMatrix.*(angleMatrix > 0);
But I am getting conceptually stuck trying to get my head around what to do after my dot product.
Am I on the right track? Is there an inbuilt angle equivalent of pdist2 that I am overlooking?
Thanks all for your help, and sorry for the paint images!
Context: This image shows my lights (big blue points), the directions the lights are facing (little black traces), and my model.
According to MathWorks, there is no built-in function to calculate the angle between vectors. However, you can use trigonometry to calculate the angles.
Inputs
Since you unfortunately didn't explain your input data in great detail, I'm going to assume that you have a matrix Lmat containing a location vector of a light source in each row and a matrix Dmat containing the directional vectors for the light sources, both of size n×3, where n is the number of light sources in your scene.
The matrices Omega and Nmat supposedly are of size m×3 and contain the location vectors and normal vectors of all m surface points. The desired result are the angles between all light direction vectors and surface normal vectors, of which there are n⋅m, and the angles between the light direction vectors and the vectors connecting the light to each point on the surface, of which there are n⋅m as well.
To get results for all combinations of light sources and surface points, the input matrices have to be repeated vertically:
n = size(Lmat,1);   % number of light sources
m = size(Omega,1);  % number of surface points
Lmat  = repmat(Lmat, m, 1);        % stack the n lights m times
Dmat  = repmat(Dmat, m, 1);
Omega = kron(Omega, ones(n,1));    % repeat each point n times so the rows pair up
Nmat  = kron(Nmat,  ones(n,1));
Using the inner product / dot product
The definition of the inner product of two vectors is
a · b = |a| |b| cos(θ)
where θ is the angle between the two vectors. Reordering the equation yields
θ = acos( (a · b) / (|a| |b|) )
You can therefore calculate the angles between your directional vectors Dmat and your normal vectors Nmat like this:
normProd = sqrt(sum(Dmat.^2,2)).*sqrt(sum(Nmat.^2,2));
anglesInDegrees = acos(dot(Dmat.',Nmat.')' ./ normProd) * 180 / pi;
To calculate the angles between the light-to-point vectors and the directional vectors, just replace Nmat with Omega - Lmat.
Using the vector product / cross product
It has been mentioned that the above method will have problems with accuracy for very small (θ ≈ 0°) or very large (θ ≈ 180°) angles. The suggested solution is calculating the angles using the cross product and the inner product.
The norm of the vector product of two vectors is
|a × b| = |a| |b| sin(θ)
You can combine this with the above definition of the inner product to get
|a × b| / (a · b) = tan(θ)
which can obviously be reordered to this:
θ = atan2( |a × b|, a · b )
The corresponding MATLAB code looks like this:
normCross = sqrt(sum(cross(Dmat,Nmat,2).^2,2));
anglesInDegrees = atan2(normCross,dot(Dmat.',Nmat.')') * 180/pi;
I posted this on twitter a while ago but seeing how none of my followers appears to be a math/programming genius, I'll try my luck here as well. I got here because I found this which might contain part of my solution.
I described my problem in the following pdf document, containing a picture of what I'm trying to achieve.
To give some more details, I divided the pentagons of a dodecahedron (12 pentagons) into triangles (5 per pentagon, 60 triangles in total), then collected a set of data points relative to each of these triangles.
The idea is to generate terrain meshes for each individual triangle.
To do so, the data must be represented flat, in a 32K x 32K square (idTech4 Megatexture).
I have vaguely heard of transformation matrices, which, when set up properly, could do the trick of passing all the data points through them to have them show up in the right place.
I looked at this source code here but I don't understand how I'm supposed to get the points in and/or out of there, not to mention how to do the setup so I can present each point in turn and get the result point back.
I got as far as identifying the point that belongs in the back right corner. All my 3D points are originally stored in latitude/longitude pairs. I retrieve the 3D vectors this way:
coord getcoord(point* p)
{
    // convert latitude/longitude (in degrees) to a unit vector on the sphere
    coord c;
    c.x = cos(p->lat*pi/180.l) * cos(p->lon*pi/180.l);
    c.y = cos(p->lat*pi/180.l) * sin(p->lon*pi/180.l);
    c.z = sin(p->lat*pi/180.l);
    return c;
}
My thought is that if I can find the center of my triangle, and discover how to offset my angles so that the vector from the center of my sphere to the middle of the triangle moves to 90N, then my points would already be in the right plane if I rotated them all by the same angles. If I then convert them all to 3D and subtract the radius from y, they'll be at the correct y position as well.
Then all I'd need to do is the rotation, the scaling, and the moving to the final position.
There are several kinds of 'centers' for a triangle; I think the one I need is the one that is equidistant from the corners of the triangle (the circumcenter?).
But then there might be an easier approach to the whole problem, so while I continue my own research, perhaps some of you can help point me in the right direction.
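For the equidistant center mentioned above (the circumcenter), there is a standard closed-form expression: with corners A, B, C and edge vectors u = B - A, v = C - A,
circumcenter = A + ((|u|^2 * v - |v|^2 * u) x (u x v)) / (2 * |u x v|^2)
where x denotes the cross product and |.|^2 the squared length.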
It appears as if some sample data is in order, here are a few of these triangles in obj file format:
v 0.000000 0.000000 3396.000000
v 2061.582356 0.000000 2698.646733
v 637.063983 1960.681333 2698.646733
f 1 2 3
And another:
v -938.631230 2888.810129 1518.737455
v 637.063983 1960.681333 2698.646733
v 1030.791271 3172.449325 637.064076
f 1 2 3
You will notice that each point is at a distance of 3396 from 0,0,0
I mentioned 'on the sphere' meaning that the face away from the center of the sphere is the face that needs to become the 'top' when translated into the square.
Theoretically all these triangles should in fact have identical sizes, but due to rounding errors in the math that generated them, this might not be entirely true.
If I'm not mistaken, I already took measures to ensure that the first point you see here is always the one opposite the longest border, so it's the one that should go in the far left corner (testing the above 2 samples confirms this, but I'm measuring anyway, just to be sure).
Both legs leading away from this point should theoretically have the same length as well, but again rounding errors might slightly offset that.
If I've done it correctly then the longer side is 1.113587 times longer than the 2 shorter sides. Assuming those are identical, and doing some goal seeking in Excel, I can deduce that the final points, assuming I was just translating this triangle, should look like:
v 16384.000000 0.000000 16384.000000
v -16384.000000 0.000000 9916.165306
v 9916.165306 0.000000 -16384.000000
f 1 2 3
So I need to setup the matrix to do this transformation, preferably using the 4x4 matrix as explained below.
I would recommend using transform matrices. The 3D transform matrix is a 4x4 data structure which describes a translation and rotation (and possibly a scale). Once you have a matrix you can transform a point like so:
result.x = (tmp->pt.x * m->element[0][0]) +
(tmp->pt.y * m->element[1][0]) +
(tmp->pt.z * m->element[2][0]) +
m->element[3][0];
result.y = (tmp->pt.x * m->element[0][1]) +
(tmp->pt.y * m->element[1][1]) +
(tmp->pt.z * m->element[2][1]) +
m->element[3][1];
result.z = (tmp->pt.x * m->element[0][2]) +
(tmp->pt.y * m->element[1][2]) +
(tmp->pt.z * m->element[2][2]) +
m->element[3][2];
double w = (tmp->pt.x * m->element[0][3]) + (tmp->pt.y * m->element[1][3])
         + (tmp->pt.z * m->element[2][3]) + m->element[3][3];
if (w != 0 && w != 1)
{
    result.x /= w; result.y /= w; result.z /= w;
}
This will transform the 3D point pt by the matrix m. If you know a little matrix math you'll see I'm just multiplying my original point as a vector against the matrix (and doing a little normalization if it is a skew matrix). Matrices can be multiplied together to form complicated transformations, so they are very useful.
For details on making matrices, I suggest reading this link:
http://en.wikipedia.org/wiki/Transformation_matrix
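As an aside, if you end up doing this from WPF/C#, the built-in Matrix3D type is exactly this kind of 4x4 structure; a minimal sketch of composing a rotation, a scale and a translation and pushing a point through it (the angle, scale and offset values are placeholders):
// using System.Windows.Media.Media3D;
Matrix3D m = Matrix3D.Identity;
m.Rotate(new Quaternion(new Vector3D(0, 1, 0), 45));   // rotate 45 degrees about Y
m.Scale(new Vector3D(2, 2, 2));                         // uniform scale
m.Translate(new Vector3D(10, 0, 0));                    // then move
Point3D result = m.Transform(new Point3D(1, 2, 3));     // apply to a point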
I have not marked this question Answered yet.
The currently accepted answer got accepted automatically because of the bounty time limit.
With reference to this programming game I am currently building.
As you can see from the above link, I am currently building a game in where user-programmable robots fight autonomously in an arena.
Now, I need a way to detect if a robot has detected another robot in a particular angle (depending on where the turret may be facing):
(Image: http://img21.imageshack.us/img21/7839/robotdetectionrg5.jpg)
As you can see from the above image, I have drawn a kind of point-of-view of a tank, which I now need to emulate in my game, so as to check each point in it to see if another robot is in view.
The bots are just canvases that are constantly translating on the Battle Arena (another canvas).
I know the heading of the turret (the way it is currently facing), and with that I need to find whether there are any bots in its path (the path being defined in a kind of 'viewpoint' manner, depicted in the image above as the red 'triangle'). I hope the image makes clearer what I am trying to convey.
I hope that someone can guide me to the math involved in solving this problem.
[UPDATE]
I have tried the calculations that you have told me, but they're not working properly, since as you can see from the image, Bot1 shouldn't be able to see Bot2. Here is an example:
(Image: http://img12.imageshack.us/img12/7416/examplebattle2.png)
In the above scenario, Bot 1 is checking if he can see Bot 2. Here are the details (according to Waylon Flinn's answer):
angleOfSight = 0.69813170079773179 //in radians (40 degrees)
orientation = 3.3 //Bot1's current heading (191 degrees)
x1 = 518 //Bot1's Center X
y1 = 277 //Bot1's Center Y
x2 = 276 //Bot2's Center X
y2 = 308 //Bot2's Center Y
cx = x2 - x1 = 276 - 518 = -242
cy = y2 - y1 = 308 - 277 = 31
azimuth = Math.Atan2(cy, cx) = 3.0141873380511295
canHit = (azimuth < orientation + angleOfSight/2) && (azimuth > orientation - angleOfSight/2)
= (3.0141873380511295 < 3.3 + 0.349065850398865895) && (3.0141873380511295 > 3.3 - 0.349065850398865895)
= true
According to the above calculations, Bot1 can see Bot2, but as you can see from the image, that is not possible, since they are facing different directions.
What am I doing wrong in the above calculations?
The angle between the robots is arctan(x-distance, y-distance) (most platforms provide this 2-argument arctan that does the angle adjustment for you). You then just have to check whether this angle is less than some number away from the current heading.
Edit 2020: Here's a much more complete analysis based on the updated example code in the question and a now-deleted imageshack image.
Atan2: The key function you need to find an angle between two points is atan2. This takes a Y-coordinate and X-coordinate of a vector and returns the angle between that vector and the positive X axis. The value will always be wrapped to lie between -Pi and Pi.
Heading vs Orientation: atan2, and in general all your math functions, work in the "mathematical standard coordinate system", which means an angle of "0" corresponds to directly east, and angles increase counterclockwise. Thus, a "mathematical angle" of Pi / 2, as given by atan2(1, 0), means an orientation of "90 degrees counterclockwise from due east", which matches the point (x=0, y=1). "Heading" is a navigational idea that expresses orientation as a clockwise angle from due north.
Analysis: In the now-deleted imageshack image, your "heading" of 191 degrees corresponded to a south-south-west direction. This is actually a trigonometric "orientation" of -101 degrees, or -1.76 radians. The first issue in the updated code is therefore conflating "heading" and "orientation". You can get the latter from the former by orientation_degrees = 90 - heading_degrees or orientation_radians = Math.PI / 2 - heading_radians, or alternatively you could specify input orientations in the mathematical coordinate system rather than the nautical heading coordinate system.
Checking that an angle lies between two others: Checking that a vector lies between two other vectors is not as simple as checking that the numeric angle value is in between, because of the way the angles wrap at Pi/-Pi.
Analysis: In your example, the orientation is 3.3, the right edge of view is at orientation 2.95, and the left edge is at 3.65. The calculated azimuth is 3.0141873380511295, which happens to be correct (it does lie between). However, this would fail for azimuth values like -3, which should be calculated as a "hit". See Calculating if an angle is between two angles for solutions.
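One robust way to do that check is to normalize the difference between the two angles back into (-Pi, Pi] and compare it against half the field of view; a minimal C# sketch:
// True if 'angle' lies within +/- halfFov of 'center'; all values in radians.
// Re-normalizing the difference handles the wrap-around at +/-Pi.
static bool WithinFov(double angle, double center, double halfFov)
{
    double delta = Math.Atan2(Math.Sin(angle - center), Math.Cos(angle - center));
    return Math.Abs(delta) <= halfFov;
}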
Calculate the relative angle and distance of each robot relative to the current one. If the angle is within some threshold of the current heading and within the max view range, then it can see it.
The only tricky thing will be handling the boundary case where the angle goes from 2pi radians to 0.
Something like this within your bot's class (C# code):
/// <summary>
/// Check to see if another bot is visible from this bot's point of view.
/// </summary>
/// <param name="other">The other bot to look for.</param>
/// <returns>True iff <paramref name="other"/> is visible for this bot with the current turret angle.</returns>
private bool Sees(Bot other)
{
// Get the angle of the line between the bots.
var actualAngle = Math.Atan2(this.X - other.X, this.Y - other.Y) * 180/Math.PI + 360;
// Compare that angle to the turret angle +/- the field of vision.
var minVisibleAngle = (actualAngle - (FOV_ANGLE / 2) + 360);
var maxVisibleAngle = (actualAngle + (FOV_ANGLE / 2) + 360);
if (this.TurretAngle >= minVisibleAngle && this.TurretAngle <= maxVisibleAngle)
{
return true;
}
return false;
}
Notes:
The +360's are there to force any negative angles to their corresponding positive values and to shift the boundary case of angle 0 to somewhere easier to range test.
This might be doable using only radian angles but I think they're dirty and hard to read :/
See the Math.Atan2 documentation for more details.
I highly recommend looking into the XNA Framework, as it's created with game design in mind. However, it doesn't use WPF.
This assumes that:
there are no obstacles to obstruct the view
Bot class has X and Y properties
The X and Y properties are at the center of the bot.
Bot class has a TurretAngle property which denotes the turret's positive angle relative to the x-axis, counterclockwise.
Bot class has a static const angle called FOV_ANGLE denoting the turret's field of vision.
Disclaimer: This is not tested or even checked to compile, adapt it as necessary.
A couple of suggestions after implementing something similar (a long time ago!):
The following assumes that you are looping through all bots on the battlefield (not a particularly nice practice, but quick and easy to get something working!)
1) It's a lot easier to check whether a bot is in range than whether it can currently be seen within the FOV, e.g.
double range = Math.Sqrt(Math.Pow(my.Location.X - bots.Location.X, 2) +
                         Math.Pow(my.Location.Y - bots.Location.Y, 2));
if (range < maxRange)
{
// check for FOV
}
This ensures that it can potentially short-circuit a lot of FOV checking and speed up the process of running the simulation. As a caveat, you could add some randomness here to make it more interesting, such that after a certain distance the chance to see is linearly proportional to the range of the bot.
2) This article seems to have the FOV calculation stuff on it.
3) As an AI graduate ... have you tried neural networks? You could train them to recognise whether or not a robot is in range and a valid target. This would negate any horribly complex and convoluted maths! You could have a multi-layer perceptron [1], [2], feed in the bot's co-ordinates and the target's co-ordinates, and receive a nice fire/no-fire decision at the end. WARNING: I feel obliged to tell you that this methodology is not the easiest to achieve and can be horribly frustrating when it goes wrong. Due to the (simple) non-deterministic nature of this form of algorithm, debugging can be a pain. Plus you will need some form of learning, either back-propagation (with training cases) or a genetic algorithm (another complex process to perfect)! Given the choice I would use number 3, but it's not for everyone!
It can be quite easily achieved with the use of a concept in vector math called dot product.
http://en.wikipedia.org/wiki/Dot_product
It may look intimidating, but it's not that bad. This is the most correct way to deal with your FOV issue, and the beauty is that the same math works whether you are dealing with 2D or 3D (that's when you know the solution is correct).
(NOTE: If anything is not clear, just ask in the comment section and I will fill in the missing links.)
Steps:
1) You need two vectors: one is the heading vector of the main tank; the other is derived from the positions of the tank in question and the main tank.
For our discussion, let's assume the heading vector for main tank is (ax, ay) and vector between main tank's position and target tank is (bx, by). For example, if main tank is at location (20, 30) and target tank is at (45, 62), then vector b = (45 - 20, 62 - 30) = (25, 32).
Again, for purpose of discussion, let's assume main tank's heading vector is (3,4).
The main goal here is to find the angle between these two vectors, and dot product can help you get that.
2) Dot product is defined as
a * b = |a||b| cos(angle)
read as a (dot product) b since a and b are not numbers, they are vectors.
3) or expressed another way (after some algebraic manipulation):
angle = acos((a * b) / |a||b|)
angle is the angle between the two vectors a and b, so this info alone can tell you whether one tank can see another or not.
|a| is the magnitude of the vector a, which according to the Pythagoras Theorem, is just sqrt(ax * ax + ay * ay), same goes for |b|.
Now the question is: how do you find a * b (a dot b) in order to find the angle?
4) Here comes the rescue. It turns out that the dot product can also be expressed as below:
a * b = ax * bx + ay * by
So angle = acos((ax * bx + ay * by) / |a||b|)
If the angle is less than half of your FOV, then the tank in question is in view. Otherwise it's not.
So, using the example numbers above:
a = (3, 4)
b = (25, 32)
|a| = sqrt(3 * 3 + 4 * 4)
|b| = sqrt(25 * 25 + 32 * 32)
angle = acos((3 * 25 + 4 * 32) / (|a| |b|))
(Be sure to convert the resulting angle to degree or radian as appropriate before comparing it to your FOV)
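Putting steps 1 to 4 together, a minimal C# sketch (here a is the main tank's heading vector, b the vector to the other tank, and fovRadians is assumed to be your full field-of-view angle):
// Returns true if vector b falls within the field of view centred on heading a.
static bool InFieldOfView(double ax, double ay, double bx, double by, double fovRadians)
{
    double dot = ax * bx + ay * by;
    double magA = Math.Sqrt(ax * ax + ay * ay);
    double magB = Math.Sqrt(bx * bx + by * by);
    double angle = Math.Acos(dot / (magA * magB));   // angle between a and b, in [0, Pi]
    return angle <= fovRadians / 2;
}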
This will tell you if the center of canvas2 can be hit by canvas1. If you want to account for the width of canvas2 it gets a little more complicated. In a nutshell, you would have to do two checks, one for each of the relevant corners of canvas2, instead of one check on the center.
/// assuming canvas1 is firing on canvas2
// positions of canvas1 and canvas2, respectively
// (you're probably tracking these in your Tank objects)
int x1, y1, x2, y2;
// orientation of canvas1 (angle)
// (you're probably tracking this in your Tank objects, too)
double orientation;
// angle available for firing
// (ditto, Tank object)
double angleOfSight;
// vector from canvas1 to canvas2
int cx, cy;
// angle of vector between canvas1 and canvas2
double azimuth;
// can canvas1 hit the center of canvas2?
bool canHit;
// find the vector from canvas1 to canvas2
cx = x2 - x1;
cy = y2 - y1;
// calculate the angle of the vector
azimuth = Math.Atan2(cy, cx);
// correct for Atan range (-pi, pi)
if(azimuth < 0) azimuth += 2*Math.PI;
// determine if canvas1 can hit canvas2
// can eliminate the and (&&) with Math.Abs but this seems more instructive
canHit = (azimuth < orientation + angleOfSight) &&
(azimuth > orientation - angleOfSight);
Looking at both of your questions, I'm thinking you can solve this problem using the math provided; you then have to solve many other issues around collision detection, firing bullets, etc. These are non-trivial to solve, especially if your bots aren't square. I'd recommend looking at physics engines - Farseer on CodePlex is a good WPF example - but this makes it into a project way bigger than a high school dev task.
Best advice I got for high marks: do something simple really well, and don't part-deliver something brilliant.
Does your turret really have that wide of a firing pattern? The path a bullet takes would be a straight line, and it would not get bigger as it travels. You should have a simple vector in the direction of the turret representing the turret's kill zone. Each tank would have a bounding circle representing its vulnerable area. Then you can proceed the way they do with ray tracing: a simple ray/circle intersection. Look at section 3 of the document Intersection of Linear and Circular Components in 2D.
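A sketch of that ray/circle test, assuming the ray starts at the firing tank's position and points along its turret heading, and the target tank is approximated by a circle of radius 'radius' around its centre (all names here are illustrative):
// True if the ray from (ox, oy) along direction (dx, dy) (need not be normalized)
// passes within 'radius' of the circle centred at (cx, cy).
static bool RayHitsCircle(double ox, double oy, double dx, double dy,
                          double cx, double cy, double radius)
{
    double vx = cx - ox, vy = cy - oy;                      // ray origin to circle centre
    double t = (vx * dx + vy * dy) / (dx * dx + dy * dy);   // projection onto the ray
    if (t < 0) return false;                                // circle is behind the turret
    double px = ox + t * dx - cx;                           // closest point on ray, minus centre
    double py = oy + t * dy - cy;
    return px * px + py * py <= radius * radius;            // within the bounding circle?
}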
Your updated problem seems to come from different "zero" directions of orientation and azimuth: an orientation of 0 seems to mean "straight up", but an azimuth of 0 "straight right".
I have a simple 3D cube that I can rotate using the following code:
void mui3D_MouseDown(object sender, System.Windows.Input.MouseButtonEventArgs e)
{
RotateTransform3D rotation = new RotateTransform3D(new AxisAngleRotation3D(new Vector3D(0, 1, 0), 0), mui.Model.Bounds.Location);
DoubleAnimation rotateAnim = new DoubleAnimation(0, 130d, TimeSpan.FromMilliseconds(3000));
rotateAnim.Completed += new EventHandler(rotateAnim_Completed);
mui.Transform = rotation;
rotation.Rotation.BeginAnimation(AxisAngleRotation3D.AngleProperty, rotateAnim);
}
Each time it executes, this code rotates the cube using an animation around the Y axis from an angle of 0 to 130 degrees.
However, I would like to apply the rotation "cumulatively" so that any previous rotation is taken into account and the cube commences each rotation from the angle at which it finished the previous rotation.
For example: the animation constructor, instead of requiring a "from" and "to" value for the angle, would simply rotate the cube an additional 130 degrees based on whatever the current rotation angle is.
I could easily use a member variable that contains the current angle, pass it to the animation and then update it when the animation has completed. But I'm wondering if there is a standard approach using WPF to achieve this.
I'm sure there's a method for retrieving the current Euler rotation angle in degrees from the object's transformation matrix. You could then use that as the "from" value and animate to the "to" value.
Failing that, simply create a variable somewhere in your application that remembers the number of degrees the cube has been rotated. Each time the function is run, just add the number of degrees you'd like it to rotate and then store the result back in your variable.
some pseudo-code:
angle = 0
function onClick:
    new_angle = angle + 30
    Animate(angle, new_angle)
    angle = new_angle
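In WPF terms that pseudo-code might look something like this; _rotation is assumed to be the AxisAngleRotation3D you attach to the cube, and _angle is the running total (both names are hypothetical):
private double _angle;                      // degrees rotated so far
private AxisAngleRotation3D _rotation;      // the cube's rotation

private void RotateBy(double degrees)
{
    double from = _angle;
    _angle += degrees;                      // remember where this spin will end
    var rotateAnim = new DoubleAnimation(from, _angle, TimeSpan.FromMilliseconds(3000));
    _rotation.BeginAnimation(AxisAngleRotation3D.AngleProperty, rotateAnim);
}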