How to efficiently evaluate or approximate a road Clothoid? - c

I'm facing the problem of computing values of a clothoid in C in real-time.
First I tried using the Matlab Coder to obtain auto-generated C code for the quadgk integrator applied to the Fresnel formulas. This essentially works great in my test scenarios. The only issue is that it runs incredibly slowly (in Matlab as well as in the auto-generated code).
Another option was interpolating a data table of the unit clothoid, connecting the sample points with straight lines (linear interpolation). I gave up after finding out that for only small changes in curvature (tiny steps along the clothoid) the results obviously degrade to straight lines. What a surprise...
I know that circles may be plotted using a different formula, but small changes in curvature are often encountered in real-world scenarios, and 30k sampling points between headings of 0° and 360° didn't provide enough angular resolution for my problems.
Then I tried a Taylor approximation around the R = inf point, hoping that there would be significant curvature everywhere I needed it. I soon realized I couldn't use more than 4 terms (up to the 15th power), as the polynomial otherwise quickly becomes unstable (probably due to numerical inaccuracies in double-precision floating-point computation). So accuracy quickly degrades for large t values, and by "large t values" I'm talking about every point on the clothoid that represents a curve of more than 90° w.r.t. the zero-curvature point.
For instance when evaluating a road that goes from R=150m to R=125m while making a 90° turn I'm way outside the region of valid approximation. Instead I'm in the range of 204.5° - 294.5° whereas my Taylor limit would be at around 90° of the unit clothoid.
I'm kind of done randomly trying things out now. I could spend time on the dozens of papers one finds on the topic, or try to improve or combine some of the methods described above. Maybe there even exists an integration function in Matlab that is compatible with the Coder and fast enough.
This problem is so fundamental that it feels like I shouldn't have this much trouble solving it. Any suggestions?

About the 4 terms in the Taylor series: you should be able to use many more. A total theta of 2 pi is certainly doable with doubles.
You're probably calculating each term in isolation, according to the full formula, computing full factorial and power values. That is the reason for losing precision so fast.
Instead, calculate the terms progressively, each one from the previous one: find the formula for the ratio of the next term over the previous one in the series, and use it.
For increased precision, do not calculate in theta but rather in the distance, s (so as not to lose precision in the scaling).
Your example is an extremely flat clothoid. If I made no mistake, it goes from (25/22) pi =~ 204.545° to (36/22) pi =~ 294.545° (why not include these details in your question?). Nevertheless it should be OK. Even 2 pi = 360°, the full circle (and twice that), should pose no problem.
Given: r = 150 -> 125 over a 90° turn:
r * s = A^2, so 150 s = 125 (s + x)
=> 1 + x/s = 150/125 = 1 + 25/125, i.e. x/s = 1/5
theta  = s^2/(2 A^2) = s^2/(300 s) = s/300          ( = (pi/2)*(25/11) = 204.545° )
theta2 = (s + x)^2/(2 A^2) = (6/5)^2 * s/300        ( = (pi/2)*(36/11) = 294.545° )
theta2 - theta = (36/25 - 1) * s/300 = pi/2
=> s = 300*(pi/2)*(25/11) = 1070.99749554,   x = s/5 = 214.1994991
A^2 = 150 s = 150 * 300 * (pi/2) * (25/11)
a = sqrt(2 A^2) = 300 * sqrt((pi/2)*(25/11)) = 566.83264608
The reference point is at r = Infinity, where theta = 0.
We have x = a * INT[u = 0 .. s/a] cos(u^2) du, where a = sqrt(2 r s) and theta = (s/a)^2. Write out the Taylor series for cos, and integrate it term by term, to get your Taylor approximation for x as a function of the distance s along the curve from the zero point. That's all.
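To make this concrete, here is a minimal C sketch of that approach (the function name clothoid_xy and the 30-term cap are my own choices): it sums the integrated Taylor series for cos(u^2) and sin(u^2), deriving each term from the previous one via the term ratio instead of forming powers and factorials explicitly.

#include <math.h>

/* Sketch: x(s), y(s) on a clothoid with parameter A (A^2 = r*s), measured from
 * the r = infinity point. x = a*C(s/a), y = a*S(s/a) with a = sqrt(2)*A, where
 * C and S are the term-by-term integrals of the Taylor series of cos(u^2) and
 * sin(u^2). Each term comes from the previous one via the term ratio. */
static void clothoid_xy(double A, double s, double *x, double *y)
{
    double a  = sqrt(2.0) * A;
    double t  = s / a;
    double t4 = (t * t) * (t * t);

    /* C(t) = sum_k (-1)^k t^(4k+1) / ((4k+1)(2k)!)   */
    /* S(t) = sum_k (-1)^k t^(4k+3) / ((4k+3)(2k+1)!) */
    double termC = t;                 /* k = 0 term of C */
    double termS = t * t * t / 3.0;   /* k = 0 term of S */
    double sumC = termC, sumS = termS;

    for (int k = 0; k < 30; ++k) {
        /* ratio of term k+1 to term k, worked out from the series above */
        termC *= -t4 * (4.0*k + 1.0) / ((2.0*k + 1.0) * (2.0*k + 2.0) * (4.0*k + 5.0));
        termS *= -t4 * (4.0*k + 3.0) / ((2.0*k + 2.0) * (2.0*k + 3.0) * (4.0*k + 7.0));
        sumC += termC;
        sumS += termS;
        if (fabs(termC) < 1e-16 && fabs(termS) < 1e-16)
            break;                    /* converged to double precision */
    }
    *x = a * sumC;
    *y = a * sumS;
}

For theta up to about 2 pi (theta = t^2, so t up to roughly 2.5) the intermediate terms stay modest, so double precision holds up; that is exactly what the progressive evaluation buys you.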
Next you have to decide with what density to calculate your points along the clothoid. You can find it from a desired tolerance value above the chord, for your minimal radius of 125. These points then define the approximation of the curve by line segments drawn between consecutive points.

I am doing my thesis in the same area right now.
My approach is the following.
At each point on your clothoid, calculate the change in heading divided by the distance traveled along the clothoid; this simple formula gives you the curvature at that point.
Then plot each curvature value: the x-axis is the distance along the clothoid, the y-axis is the curvature. By plotting this and applying a very simple piecewise-linear regression / line-simplification algorithm (search for a Peucker (Douglas-Peucker) algorithm implementation in your language of choice),
you can easily identify which sections have zero curvature (a line has no curvature), linearly increasing or decreasing curvature (an Euler spiral, CCW/CW), or a constant value != 0 (an arc has constant curvature at all of its points).
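If it helps, here is a rough C sketch of that curvature profile (the function name and the centred-difference details are my own choices); feeding kappa versus s into the line-simplification step then gives the segment classification described above.

#include <math.h>
#include <stddef.h>

#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

/* Sketch: for a sampled path (x[i], y[i]), i = 0..n-1, fill s[i] with the
 * cumulative arc length and kappa[i] with the discrete curvature
 * (change in heading) / (distance travelled) at each interior sample. */
static void curvature_profile(const double *x, const double *y, size_t n,
                              double *s, double *kappa)
{
    if (n < 3)
        return;

    s[0] = 0.0;
    for (size_t i = 1; i < n; ++i)
        s[i] = s[i - 1] + hypot(x[i] - x[i - 1], y[i] - y[i - 1]);

    kappa[0] = kappa[n - 1] = 0.0;               /* undefined at the end points */
    for (size_t i = 1; i + 1 < n; ++i) {
        double h0 = atan2(y[i] - y[i - 1], x[i] - x[i - 1]);
        double h1 = atan2(y[i + 1] - y[i], x[i + 1] - x[i]);
        double dtheta = h1 - h0;
        while (dtheta >   M_PI) dtheta -= 2.0 * M_PI;   /* wrap into (-pi, pi] */
        while (dtheta <= -M_PI) dtheta += 2.0 * M_PI;
        kappa[i] = dtheta / (0.5 * (s[i + 1] - s[i - 1]));
    }
}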
I hope this helps a little.
You can find my code on GitHub; I implemented some algorithms for such problems, like the Peucker algorithm.

Related

Using each element of a vector in a series of calculations

I am trying to write MATLAB code that uses a set of variables in a vector in a calculation. I am trying to run the same formula with each value in the vector and then store each result in a new vector.
The goal is to calculate and plot the cost of constructing a water tank based on various radius sizes. In the calculations I have a cylindrical tank and a hemispherical top. The exact value for the volume of the tank is 500m^3. The cost of the tank is $400/m^2 surface area for the hemispherical top and $300/m^2 surface area for the cylindrical body. I know I need to use element wise operators, however I am getting strange, unrealistic results which leads me to believe I am using these incorrectly.
rTank = 2:0.5:10;
h = ((250./(pi.*rTank(:)))-((rTank(:).^2)./3));
cost = ((2*pi*400.*(rTank(:).^2))+(2*pi*h(:).*300.*rTank(:)));
plot(rTank,cost)
I am expecting a curve of all positive values between radii 2m and 10m, with positive values for cost. For some reason I am getting negative values for results, and according to the resulting plot, the cost of the water tank is free when the radius is 8m, which makes no sense.
Filter out the entries with h < 0. For radii above roughly 6.2 m the hemispherical top alone already holds the full 500 m^3, so the computed cylinder height h becomes negative and drags the cost down, which is why the plotted cost drops to zero and then goes negative around r = 8 m.
rTank = 2:0.5:10;
h = 250./(pi*rTank) - 1/3*rTank.^2;       % cylinder height implied by the volume constraint
good_h = h(h > 0);                        % keep only physically meaningful heights
good_rTank = rTank(h > 0);
cost = 2*pi*400*good_rTank.^2 + 2*pi*good_h*300.*good_rTank;
plot(good_rTank, cost)

What is the advantage of linspace over the colon ":" operator?

Is there some advantage of writing
t = linspace(0,20,21)
over
t = 0:1:20
?
I understand that the former produces a vector, as the latter does.
Can anyone state me some situation where linspace is useful over t = 0:1:20?
It's not just about usability. Though the documentation says:
The linspace function generates linearly spaced vectors. It is
similar to the colon operator :, but gives direct control over the
number of points.
the two are not quite the same. The main difference and advantage of linspace is that it generates a vector of integers with the desired length (or 100 by default) and scales it afterwards to the desired range. The colon : creates the vector directly, by increments.
Imagine you need to define bin edges for a histogram, and in particular you need the bin edge 0.35 to be exactly in its right place:
edges = [0.05:0.10:.55];
X = edges == 0.35
edges = 0.0500 0.1500 0.2500 0.3500 0.4500 0.5500
X = 0 0 0 0 0 0
does not define the right bin edge, but:
edges = linspace(0.05,0.55,6); %// 6 = (0.55-0.05)/0.1+1
X = edges == 0.35
edges = 0.0500 0.1500 0.2500 0.3500 0.4500 0.5500
X = 0 0 0 1 0 0
does.
Well, it's basically a floating point issue, which can be avoided by linspace, since a single division of an integer is not as delicate as a cumulative sum of floating point numbers. But as Mark Dickinson pointed out in the comments: "You shouldn't rely on any of the computed values being exactly what you expect. That is not what linspace is for."
In my opinion it's a matter of how likely you are to run into floating point issues and how much you can reduce the probability of them, or how small you can set the tolerances. Using linspace can reduce the probability of such issues; it's not a guarantee.
That's the code of linspace:
n1 = n-1
c = (d2 - d1).*(n1-1) % opposite signs may cause overflow
if isinf(c)
y = d1 + (d2/n1).*(0:n1) - (d1/n1).*(0:n1)
else
y = d1 + (0:n1).*(d2 - d1)/n1
end
To sum up: linspace and colon are reliable at doing different tasks. linspace tries to ensure (as the name suggests) linear spacing, whereas colon tries to ensure symmetry
In your special case, as you create a vector of integers, there is no advantage of linspace (apart from usability), but when it comes to floating-point-delicate tasks, there may be.
The answer of Sam Roberts provides some additional information and clarifies further things, including some statements of MathWorks regarding the colon operator.
linspace and the colon operator do different things.
linspace creates a vector of integers of the specified length, and then scales it down to the specified interval with a division. In this way it ensures that the output vector is as linearly spaced as possible.
The colon operator adds increments to the starting point, and subtracts decrements from the end point to reach a middle point. In this way, it ensures that the output vector is as symmetric as possible.
The two methods thus have different aims, and will often give very slightly different answers, e.g.
>> a = 0:pi/1000:10*pi;
>> b = linspace(0,10*pi,10001);
>> all(a==b)
ans =
0
>> max(a-b)
ans =
3.5527e-15
In practice, however, the differences will often have little impact unless you are interested in tiny numerical details. I find linspace more convenient when the number of gaps is easy to express, whereas I find the colon operator more convenient when the increment is easy to express.
See this MathWorks technical note for more detail on the algorithm behind the colon operator. For more detail on linspace, you can just type edit linspace to see exactly what it does.
linspace is useful where you know the number of elements you want rather than the size of the "step" between them. So if I said, as a contrived example, "make a vector with 360 elements between 0 and 2*pi", it's either going to be
linspace(0, 2*pi, 360)
or if you just had the colon operator you would have to manually calculate the step size:
0:(2*pi - 0)/(360-1):2*pi
linspace is just more convenient.
For a simple real world application, see this answer where linspace is helpful in creating a custom colour map

Uniformly sampling on hyperplanes

Given the vector size N, I want to generate a vector <s1, s2, ..., sn> such that s1 + s2 + ... + sn = S.
It is known that 0 < S < 1 and si < S. The generated vectors should also be uniformly distributed.
Any code in C that helps explain would be great!
The code here seems to do the trick, though it's rather complex.
I would probably settle for a simpler rejection-based algorithm, namely: pick an orthonormal basis in n-dimensional space starting with the hyperplane's normal vector. Transform each of the points (S,0,0,0..0), (0,S,0,0..0) into that basis and store the minimum and maximum along each of the basis vectors. Sample uniformly each component in the new basis, except for the first one (the normal vector), which is always S, then transform back to the original space and check if the constraints are satisfied. If they are not, sample again.
P.S. I think this is more of a maths question, actually, could be a good idea to ask at http://maths.stackexchange.com or http://stats.stackexchange.com
[I'll skip the "hyper-" prefix for simplicity]
One possible idea: generate many uniformly distributed points in some enclosing volume and project them onto the target part of the plane.
To get a uniform distribution, the volume must be shaped like the part of the plane, but with added margins along the plane normal.
To uniformly generate points in such a volume we can enclose it in a cube and reject everything outside of the volume, as sketched in C below:
1. Select a margin M; let's take M = S for simplicity (as long as the margin is positive it only affects performance).
2. Generate a point in the cube [-M, S+M] x [-M, S+M] x [-M, S+M].
3. If the distance to the plane is more than M, reject the point and go to step 2.
4. Project the point onto the plane.
5. Check that the projection falls into [0,S] x [0,S] x [0,S]; if not, reject and go to step 2.
6. Add this point to the resulting set and go to step 2 if you need more points.
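Here is a rough C sketch of those steps for the 3-D case x1 + x2 + x3 = S with 0 <= xi <= S (the function names are mine, and rand() is only a placeholder; use a better uniform generator in real code):

#include <math.h>
#include <stdlib.h>

static double uniform01(void)
{
    return (double)rand() / ((double)RAND_MAX + 1.0);   /* placeholder RNG */
}

/* Draw one point uniformly from the region of the plane x1+x2+x3 = S that
 * lies inside [0,S]^3, using the cube/reject/project steps above (M = S). */
static void sample_on_plane(double S, double out[3])
{
    const double M = S;                           /* margin along the normal     */
    const double inv_sqrt3 = 1.0 / sqrt(3.0);     /* unit normal (1,1,1)/sqrt(3) */

    for (;;) {
        double p[3], d;
        for (int i = 0; i < 3; ++i)               /* step 2: point in [-M, S+M]^3 */
            p[i] = -M + (S + 2.0 * M) * uniform01();
        d = (p[0] + p[1] + p[2] - S) * inv_sqrt3; /* step 3: distance to plane   */
        if (fabs(d) > M)
            continue;
        for (int i = 0; i < 3; ++i)               /* step 4: project onto plane  */
            p[i] -= d * inv_sqrt3;
        if (p[0] >= 0.0 && p[0] <= S &&           /* step 5: inside [0,S]^3?     */
            p[1] >= 0.0 && p[1] <= S &&
            p[2] >= 0.0 && p[2] <= S) {
            out[0] = p[0]; out[1] = p[1]; out[2] = p[2];
            return;                               /* step 6: accept this point   */
        }
    }
}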
The problem can be mapped to that of sampling on linear polytopes, for which the common approaches are Monte Carlo methods, random walks, and hit-and-run methods (see https://www.jmlr.org/papers/volume19/18-158/18-158.pdf for examples and a short comparison). It is related to linear programming, and can be extended to manifolds.
There is also the analysis of polytopes in compositional data analysis, e.g. https://link.springer.com/content/pdf/10.1023/A:1023818214614.pdf, which provides an invertible transformation between the plane and the polytope that can be used for sampling.
If you are working in low dimensions, you can also use rejection sampling: first sample on the plane containing the polytope (defined by your inequalities), then discard what falls outside. This method is easy to implement (and wasteful, of course); the GNU Octave code below is an example (I'll let the author of the question re-implement it in C).
The first requirement is a vector orthogonal to the hyperplane. For a sum of N variables this is n = (1, ..., 1). The second requirement is a point on the plane; for your example that could be p = (S, ..., S)/N.
Now any point x on the plane satisfies n^T * (x - p) = 0,
and we also assume that x_i >= 0.
With these given, you compute an orthonormal basis on the plane (the null space of the vector n) and then create random combinations on that basis. Finally you map back to the original space and apply your constraints to the generated samples.
# Example in 3D
dim = 3;
S = 1;
n = ones(dim, 1); # perpendicular vector
p = S * ones(dim, 1) / dim;
# null-space of the perpendicular vector (transposed, i.e. row vector)
# this generates a basis in the plane
V = null (n.');
# These steps are just to reduce the amount of samples that are rejected
# we build a tight bounding box
bb = S * eye(dim); # each column is a corner of the constrained region
# project on the null-space
w_bb = V \ (bb - repmat(p, 1, dim));
wmin = min (w_bb(:));
wmax = max (w_bb(:));
# random combinations and map back
nsamples = 1e3;
w = wmin + (wmax - wmin) * rand(dim - 1, nsamples);
x = V * w + p;
# mask the points inside the polytope
msk = true(1, nsamples);
for i = 1:dim
msk &= (x(i,:) >= 0);
endfor
x_in = x(:, msk); # inside the polytope (your samples)
x_out = x(:, !msk); # outside the polytope
# plot the results
scatter3 (x(1,:), x(2,:), x(3,:), 8, double(msk), 'filled');
hold on
plot3(bb(1,:), bb(2,:), bb(3,:), 'xr')
axis image

Calculate initial velocity to move a set distance with inertia

I want to move something a set distance. However, in my system there is inertia/drag/negative acceleration. I'm using a simple calculation like this for it:
v = oldV + ((targetV - oldV) * inertia)
Applying that over a number of frames makes the movement 'ramp up' or decay, eg:
v = 10 + ((0 - 10) * 0.25) = 7.5 // velocity changes from 10 to 7.5 this frame
So I know the distance I want to travel and the acceleration, but not the initial velocity that will get me there. Maybe a better explanation is I want to know how hard to hit a billiard ball so that it stops on a certain point.
I've been looking at Equations of motion (http://en.wikipedia.org/wiki/Equations_of_motion) but can't work out what the correct one for my problem is...
Any ideas? Thanks - I am from a design not science background.
Update: Fiirhok has a solution with a fixed acceleration value; HTML+jQuery demo:
http://pastebin.com/ekDwCYvj
Is there any way to do this with a fractional value or an easing function? The benefit of that, in my experience, is that fixed acceleration and frame-based animation sometimes overshoot the final point and need to be forced there, creating a slight snapping glitch.
This is a simple kinematics problem.
At some time t, the velocity (v) of an object under constant acceleration is described by:
v = v0 + at
Where v0 is the initial velocity and a is the acceleration. In your case, the final velocity is zero (the object is stopped) so we can solve for t:
t = -v0/a
To find the total distance traveled, we take the integral of the velocity (the first equation) over time. I haven't done an integral in years, but I'm pretty sure this one works out to:
d = v0*t + 1/2 * a*t^2
We can substitute in the equation for t we developed earlier:
d = -v0^2/a + 1/2 * v0^2/a = -v0^2/(2a)
And then solve for v0:
v0 = sqrt(-2ad)
Or, in a more programming-language format:
initialVelocity = sqrt( -2 * acceleration * distance );
The acceleration in this case is negative (the object is slowing down), and I'm assuming that it's constant, otherwise this gets more complicated.
If you want to use this inside a loop with a finite number of steps, you'll need to be a little careful. Each iteration of the loop represents a period of time. The object will move an amount equal to the average velocity times the length of time. A sample loop with the length of time of an iteration equal to 1 would look something like this:
position = 0;
currentVelocity = initialVelocity;
while( currentVelocity > 0 )
{
    // average velocity over this one-time-unit step (acceleration is negative)
    averageVelocity = currentVelocity + (acceleration / 2);
    position = position + averageVelocity;
    currentVelocity += acceleration;
}
If you want to move a set distance, use the following:
Distance travelled is just the integral of velocity with respect to time. You need to integrate your expression with respect to time with limits [v, 0] and this will give you an expression for distance in terms of v (initial velocity).
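For the frame-based decay from the question (v = oldV + (0 - oldV) * inertia, i.e. v is multiplied by (1 - inertia) every frame), the per-frame displacements form a geometric series, so, assuming the object moves by v before the decay is applied each frame, the total distance is v0 / inertia and therefore v0 = distance * inertia. A small C sketch of that (my own, under those assumptions):

#include <stdio.h>

int main(void)
{
    const double inertia  = 0.25;
    const double distance = 100.0;            /* how far we want to travel   */
    double v = distance * inertia;            /* initial velocity, v0 = d*i  */
    double position = 0.0;

    /* simulate the per-frame update to verify the formula */
    while (v > 1e-9) {
        position += v;                         /* move this frame            */
        v += (0.0 - v) * inertia;              /* then apply the decay       */
    }
    printf("stopped at %.6f (target %.6f)\n", position, distance);
    return 0;
}

Because the decay is purely multiplicative, the position converges onto the target from below and never overshoots, which avoids the snapping glitch mentioned in the update.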

WPF: Finding an element along a path

I have not marked this question Answered yet.
The current accepted answer got accepted automatically because of the Bounty Time-Limit
With reference to this programming game I am currently building.
As you can see from the above link, I am currently building a game in where user-programmable robots fight autonomously in an arena.
Now, I need a way to detect if a robot has detected another robot in a particular angle (depending on where the turret may be facing):
(Image: point-of-view diagram of the tank's turret; dead link: http://img21.imageshack.us/img21/7839/robotdetectionrg5.jpg)
As you can see from the above image, I have drawn a kind of point-of-view of a tank in which I now need to emulate in my game, as to check each point in it to see if another robot is in view.
The bots are just canvases that are constantly translating on the Battle Arena (another canvas).
I know the heading of the turret (the way it is currently facing), and with that I need to find whether there are any bots in its path (where the path is defined in a 'viewpoint' manner, depicted in the image above as the red 'triangle'). I hope the image makes clearer what I am trying to convey.
I hope that someone can guide me to what math is involved in achieving this problem.
[UPDATE]
I have tried the calculations you suggested, but it's not working properly, since as you can see from the image, Bot 1 shouldn't be able to see Bot 2. Here is an example:
(Image: example battle scenario; dead link: http://img12.imageshack.us/img12/7416/examplebattle2.png)
In the above scenario, Bot 1 is checking if he can see Bot 2. Here are the details (according to Waylon Flinn's answer):
angleOfSight = 0.69813170079773179 //in radians (40 degrees)
orientation = 3.3 //Bot1's current heading (191 degrees)
x1 = 518 //Bot1's Center X
y1 = 277 //Bot1's Center Y
x2 = 276 //Bot2's Center X
y2 = 308 //Bot2's Center Y
cx = x2 - x1 = 276 - 518 = -242
cy = y2 - y1 = 308 - 277 = 31
azimuth = Math.Atan2(cy, cx) = 3.0141873380511295
canHit = (azimuth < orientation + angleOfSight/2) && (azimuth > orientation - angleOfSight/2)
= (3.0141873380511295 < 3.3 + 0.349065850398865895) && (3.0141873380511295 > 3.3 - 0.349065850398865895)
= true
According to the above calculations, Bot1 can see Bot2, but as you can see from the image, that is not possible, since they are facing different directions.
What am I doing wrong in the above calculations?
The angle between the robots is atan2(y-distance, x-distance) (most platforms provide this two-argument arctangent, which does the quadrant adjustment for you). You then just have to check whether this angle is within some threshold of the current heading.
Edit 2020: Here's a much more complete analysis based on the updated example code in the question and a now-deleted imageshack image.
Atan2: The key function you need to find an angle between two points is atan2. This takes a Y-coordinate and X-coordinate of a vector and returns the angle between that vector and the positive X axis. The value will always be wrapped to lie between -Pi and Pi.
Heading vs Orientation: atan2, and in general all your math functions, work in the "mathematical standard coordinate system", which means an angle of "0" corresponds to due east, and angles increase counterclockwise. Thus, a "mathematical angle" of Pi / 2, as given by atan2(1, 0), means an orientation of "90 degrees counterclockwise from due east", which matches the point (x=0, y=1). "Heading" is a navigational idea that expresses orientation as a clockwise angle from due north.
Analysis: In the now-deleted imageshack image, your "heading" of 191 degrees corresponded to a south-south-west direction. That is actually a trigonometric "orientation" of -101 degrees, or -1.76 radians. The first issue in the updated code is therefore conflating "heading" and "orientation". You can get the latter from the former by orientation_degrees = 90 - heading_degrees or orientation_radians = Math.PI / 2 - heading_radians, or alternatively you could specify input orientations in the mathematical coordinate system rather than the nautical heading coordinate system.
Checking that an angle lies between two others: Checking that a vector lies between two other vectors is not as simple as checking that the numeric angle value lies between them, because of the way the angles wrap at Pi/-Pi.
Analysis: In your example, the orientation is 3.3, the right edge of view is at orientation 2.95, the left edge at 3.65. The calculated azimuth is 3.0141873380511295, which happens to be correct (it does lie between the two). However, this test would fail for azimuth values like -3, which should also be counted as a hit. See "Calculating if an angle is between two angles" for solutions.
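One simple wrap-safe formulation (a sketch, assuming angles in radians in the mathematical convention) is to normalise the difference between azimuth and orientation into (-pi, pi] and compare its magnitude with half the field of view:

#include <math.h>

#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

/* Returns non-zero when the azimuth lies within +/- fov/2 of the orientation,
 * regardless of where either angle wraps. */
static int in_field_of_view(double azimuth, double orientation, double fov)
{
    double diff = azimuth - orientation;
    while (diff >   M_PI) diff -= 2.0 * M_PI;   /* normalise into (-pi, pi] */
    while (diff <= -M_PI) diff += 2.0 * M_PI;
    return fabs(diff) <= fov / 2.0;
}

With this test an azimuth of -3 and an orientation of 3.3 give a difference of about -0.017 after normalisation, so it is correctly reported as a hit.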
Calculate the relative angle and distance of each robot relative to the current one. If the angle is within some threshold of the current heading and within the max view range, then it can see it.
The only tricky thing will be handling the boundary case where the angle goes from 2pi radians to 0.
Something like this within your bot's class (C# code):
/// <summary>
/// Check to see if another bot is visible from this bot's point of view.
/// </summary>
/// <param name="other">The other bot to look for.</param>
/// <returns>True iff <paramref name="other"/> is visible for this bot with the current turret angle.</returns>
private bool Sees(Bot other)
{
// Get the actual angle of the tangent between the bots.
var actualAngle = Math.Atan2(this.X - other.X, this.Y - other.Y) * 180/Math.PI + 360;
// Compare that angle to a the turret angle +/- the field of vision.
var minVisibleAngle = (actualAngle - (FOV_ANGLE / 2) + 360);
var maxVisibleAngle = (actualAngle + (FOV_ANGLE / 2) + 360);
if (this.TurretAngle >= minVisibleAngle && this.TurretAngle <= maxVisibleAngle)
{
return true;
}
return false;
}
Notes:
The +360's are there to force any negative angles to their corresponding positive values and to shift the boundary case of angle 0 to somewhere easier to range test.
This might be doable using only radian angles but I think they're dirty and hard to read :/
See the Math.Atan2 documentation for more details.
I highly recommend looking into the XNA Framework, as it's created with game design in mind. However, it doesn't use WPF.
This assumes that:
there are no obstacles to obstruct the view
Bot class has X and Y properties
The X and Y properties are at the center of the bot.
Bot class has a TurretAngle property which denotes the turret's positive angle relative to the x-axis, counterclockwise.
Bot class has a static const angle called FOV_ANGLE denoting the turret's field of vision.
Disclaimer: This is not tested or even checked to compile, adapt it as necessary.
A couple of suggestions after implementing something similar (a long time ago!):
The following assumes that you are looping through all bots on the battlefield (not a particularly nice practice, but quick and easy to get something working!)
1) It's a lot easier to check whether a bot is in range than whether it can currently be seen within the FOV, e.g.
double range = Math.Sqrt( Math.Pow(my.Location.X - bots.Location.X, 2) +
                          Math.Pow(my.Location.Y - bots.Location.Y, 2) );
if (range < maxRange)
{
// check for FOV
}
This ensures that it can potentially short-circuit a lot of FOV checking and speed up the process of running the simulation. As a caveat, you could add some randomness here to make it more interesting, such that beyond a certain distance the chance to see is linearly proportional to the range of the bot.
2) This article seems to have the FOV calculation stuff on it.
3) As an AI graduate ... have you tried neural networks? You could train them to recognise whether or not a robot is in range and a valid target. This would negate any horribly complex and convoluted maths! You could have a multi-layer perceptron [1], [2], feed in the bot's coordinates and the target's coordinates, and receive a nice fire/no-fire decision at the end. WARNING: I feel obliged to tell you that this methodology is not the easiest to achieve and can be horribly frustrating when it goes wrong. Due to the (simple) non-deterministic nature of this form of algorithm, debugging can be a pain. Plus you will need some form of learning, either backpropagation (with training cases) or a genetic algorithm (another complex process to perfect)! Given the choice I would use number 3, but it's not for everyone!
It can be quite easily achieved with the use of a concept in vector math called dot product.
http://en.wikipedia.org/wiki/Dot_product
It may look intimidating, but it's not that bad. This is the most correct way to deal with your FOV issue, and the beauty is that the same math works whether you are dealing with 2D or 3D (that's when you know the solution is correct).
(NOTE: If anything is not clear, just ask in the comment section and I will fill in the missing links.)
Steps:
1) You need two vectors, one is the heading vector of the main tank. Another vector you need is derived from the position of the tank in question and the main tank.
For our discussion, let's assume the heading vector for main tank is (ax, ay) and vector between main tank's position and target tank is (bx, by). For example, if main tank is at location (20, 30) and target tank is at (45, 62), then vector b = (45 - 20, 62 - 30) = (25, 32).
Again, for purpose of discussion, let's assume main tank's heading vector is (3,4).
The main goal here is to find the angle between these two vectors, and dot product can help you get that.
2) Dot product is defined as
a * b = |a||b| cos(angle)
read as a (dot product) b since a and b are not numbers, they are vectors.
3) or expressed another way (after some algebraic manipulation):
angle = acos((a * b) / |a||b|)
angle is the angle between the two vectors a and b, so this info alone can tell you whether one tank can see another or not.
|a| is the magnitude of the vector a, which according to the Pythagorean theorem is just sqrt(ax * ax + ay * ay); the same goes for |b|.
Now the question comes, how do you find out a * b (a dot product b) in order to find the angle.
4) Here comes the rescue. Turns out that dot product can also be expressed as below:
a * b = ax * bx + ay * by
So angle = acos((ax * bx + ay * by) / |a||b|)
If the angle is less than half of your FOV, then the tank in question is in view. Otherwise it's not.
So, using the example numbers above:
a = (3, 4)
b = (25, 32)
|a| = sqrt(3 * 3 + 4 * 4)
|b| = sqrt(25 * 25 + 32 * 32)
angle = acos((3 * 25 + 4 * 32) / (|a||b|))
(Be sure to convert the resulting angle to degree or radian as appropriate before comparing it to your FOV)
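As a small illustration, here is the same test as a C sketch (the names are mine; the same few lines port directly to C#):

#include <math.h>

/* (ax, ay) is the main tank's heading vector, (bx, by) the vector from the
 * main tank to the target; the target is in view when the angle between the
 * two vectors is at most half the field of view (fov, in radians). */
static int target_in_view(double ax, double ay, double bx, double by, double fov)
{
    double dot   = ax * bx + ay * by;
    double mags  = sqrt(ax * ax + ay * ay) * sqrt(bx * bx + by * by);
    double angle = acos(dot / mags);     /* always in [0, pi], so no wrap issues */
    return angle <= fov / 2.0;
}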
This will tell you if the center of canvas2 can be hit by canvas1. If you want to account for the width of canvas2 it gets a little more complicated. In a nutshell, you would have to do two checks, one for each of the relevant corners of canvas2, instead of one check on the center.
/// assumming canvas1 is firing on canvas2
// positions of canvas1 and canvas2, respectively
// (you're probably tracking these in your Tank objects)
int x1, y1, x2, y2;
// orientation of canvas1 (angle)
// (you're probably tracking this in your Tank objects, too)
double orientation;
// angle available for firing
// (ditto, Tank object)
double angleOfSight;
// vector from canvas1 to canvas2
int cx, cy;
// angle of vector between canvas1 and canvas2
double azimuth;
// can canvas1 hit the center of canvas2?
bool canHit;
// find the vector from canvas1 to canvas2
cx = x2 - x1;
cy = y2 - y1;
// calculate the angle of the vector
azimuth = Math.Atan2(cy, cx);
// correct for Atan range (-pi, pi)
if(azimuth < 0) azimuth += 2*Math.PI;
// determine if canvas1 can hit canvas2
// can eliminate the and (&&) with Math.Abs but this seems more instructive
canHit = (azimuth < orientation + angleOfSight) &&
(azimuth > orientation - angleOfSight);
Looking at both of your questions, I think you can solve this problem using the math provided; you then have to solve many other issues around collision detection, firing bullets, etc. These are non-trivial to solve, especially if your bots aren't square. I'd recommend looking at physics engines (Farseer on CodePlex is a good WPF example), but that turns this into a project way bigger than a high-school dev task.
Best advice I got for high marks: do something simple really well, don't part-deliver something brilliant.
Does your turret really have that wide a firing pattern? The path a bullet takes would be a straight line and it would not get bigger as it travels. You should have a simple vector in the direction of the turret representing the turret's kill zone. Each tank would have a bounding circle representing its vulnerable area. Then you can proceed the way they do with ray tracing: a simple ray/circle intersection. Look at section 3 of the document Intersection of Linear and Circular Components in 2D.
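For reference, a minimal C sketch of such a ray/circle test (my own names; it assumes the turret direction (dx, dy) is a unit vector, and treats a circle centred behind the muzzle as a miss):

#include <math.h>

/* Ray from (ox, oy) along the unit direction (dx, dy); target is a circle of
 * radius r centred at (cx, cy). Returns non-zero when the ray hits the circle. */
static int ray_hits_circle(double ox, double oy, double dx, double dy,
                           double cx, double cy, double r)
{
    double vx = cx - ox, vy = cy - oy;           /* vector to the circle centre */
    double t  = vx * dx + vy * dy;               /* projection onto the ray     */
    if (t < 0.0)
        return 0;                                /* circle centre is behind     */
    double px = ox + t * dx, py = oy + t * dy;   /* closest point on the ray    */
    double ex = cx - px, ey = cy - py;
    return ex * ex + ey * ey <= r * r;           /* within the bounding circle? */
}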
Your updated problem seems to come from different "zero" directions of orientation and azimuth: an orientation of 0 seems to mean "straight up", but an azimuth of 0 "straight right".
