Issue with zero and negative goal values

I'm following the example at http://wiki.ros.org/navigation/Tutorials/SendingSimpleGoals with the navigation stack. If I set the goal position to positive values the turtle moves as expected, but if it's set to zero or negative values the turtle rotates in place, and it returns the message:
"[ERROR] [1433622909.572352221]: Aborting because a valid plan could not be found.
Even after executing all recovery behaviors."
Does navigation only accept positive values for the position component of MoveBaseGoal?

A likely reason is that your map only has positive coordinates, so the robot obviously can't find a route because the goal is not on the map.
Maybe have a look at your map using rviz.
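For example, the origin field in the YAML file that map_server loads tells you where the map's lower-left corner sits in the world frame; with an origin of [0, 0, 0] the map spans no negative coordinates at all. A typical map.yaml (file and image names here are illustrative):

image: mymap.pgm          # occupancy image
resolution: 0.05          # meters per pixel
origin: [0.0, 0.0, 0.0]   # x, y, yaw of the lower-left corner in the world frame
occupied_thresh: 0.65
free_thresh: 0.196
negate: 0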

Turtlesim always reports positive position values because its origin (0,0) is at the bottom-left corner; moving up and to the right increases the coordinates.
To let the navigation stack handle the turtle at negative positions (and to make tests relatively easy), I suggest modifying the file turtleOdometry.cpp, which generates the tf for odometry among other things. In the callback function poseCallback(..), replace msg->x with msg->x - OFFSET_X and msg->y with msg->y - OFFSET_Y throughout the whole file, with positive OFFSET values. I used an offset of 5, and the turtle then appears in rviz close to the world frame origin.
With that you will be able to test the navigation stack.
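A minimal sketch of the change, assuming the file broadcasts a tf from the turtle's pose (frame names and the surrounding structure are illustrative; adapt to your actual turtleOdometry.cpp):

#include <ros/ros.h>
#include <tf/transform_broadcaster.h>
#include <turtlesim/Pose.h>

static const double OFFSET_X = 5.0;  // shift turtlesim's always-positive coords
static const double OFFSET_Y = 5.0;

void poseCallback(const turtlesim::PoseConstPtr& msg)
{
  static tf::TransformBroadcaster br;
  tf::Transform transform;
  // subtract the offsets everywhere msg->x / msg->y are used
  transform.setOrigin(tf::Vector3(msg->x - OFFSET_X, msg->y - OFFSET_Y, 0.0));
  tf::Quaternion q;
  q.setRPY(0, 0, msg->theta);
  transform.setRotation(q);
  br.sendTransform(tf::StampedTransform(transform, ros::Time::now(),
                                        "odom", "base_link"));
}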
Then it is important to set the size and origin of the global_costmap! You need to modify the file move_base.launch:
<rosparam file="$(find <blah>)/global_costmap_params.yaml" command="load" />
<param name="global_costmap/width" value="20.0" />
<param name="global_costmap/height" value="20.0" />
<param name="global_costmap/origin_x" value="-10.0" />
<param name="global_costmap/origin_y" value="-10.0" />
The default values are 10, 10, 0, 0, in that order.
For more info, read http://wiki.ros.org/costmap_2d
Regards,

Related

FiPy: Can I directly change faceVariables depending on neighboring cells?

I am working with a biological model of the distribution of microbial biomass (b1) on a 2D grid. From the biomass a protein (p1) is produced. The biomass diffuses over the grid, while the protein does not. Only once a certain amount of protein has been produced (p > p_lim) is the biomass supposed to diffuse.
I try to implement this by using a dummy cell variable z, multiplied with the diffusion coefficient, and setting it from 0 to 1 only in cells where p > p_lim.
The condition works fine, and when the critical amount of p is reached in a cell, z is set to 1 and diffusion happens. However, the diffusion still does not work at the rate I would like, because the diffusion calculation uses the face variable, not the value of the cell itself. The faces of z are always a mean of the cell with z=1 and its neighboring cells with z=0. I, however, would like the diffusion to work at its original rate even if the neighbouring cell is still at p < p_lim.
So, my question is: can I somehow access a faceVariable and change it? For example, set a face to 1 if any neighboring cell has reached p1 > p_lim? I guess this is not a proper mathematical thing to do, but I couldn't think of another way to simulate this problem.
I will show a very reduced form of my model below. In any case, I thank you very much for your time!
##### produce mesh
from fipy import CellVariable, DiffusionTerm, Grid2D, ImplicitSourceTerm, TransientTerm, Viewer

nx = 5
ny = nx
dx = 1.
dy = dx
L = nx*dx
mesh = Grid2D(nx=nx, ny=ny, dx=dx, dy=dy)
x, y = mesh.cellCenters  # cell-center coordinates (x and y were not defined in the original snippet)
# parameters
h1 = 0.5   # production rate of p
Db = 10.   # diffusion coeff of b
p_lim = 0.1
# cell variables
z = CellVariable(name="z", mesh=mesh, value=0.)
b1 = CellVariable(name="b1", mesh=mesh, hasOld=True, value=0.)
p1 = CellVariable(name="p1", mesh=mesh, hasOld=True, value=0.)
# equations
eqb1 = (TransientTerm(var=b1) == DiffusionTerm(var=b1, coeff=Db*z.arithmeticFaceValue)
        - ImplicitSourceTerm(var=b1, coeff=h1))
eqp1 = (TransientTerm(var=p1) == ImplicitSourceTerm(var=b1, coeff=h1))
# set b1 to 10. in the center of the grid
b1.setValue(10., where=((x > 2.) & (x < 3.) & (y > 2.) & (y < 3.)))
vi = Viewer(vars=(b1, p1), FIPY_VIEWER="matplotlib")
eq = eqb1 & eqp1
from builtins import range
for t in range(10):
    b1.updateOld()
    p1.updateOld()
    z.setValue(z + 0.1, where=((p1 >= p_lim) & (z < 1.)))
    eq.solve(dt=0.1)
    vi.plot()
In addition to .arithmeticFaceValue, FiPy provides other interpolators between cell and face values, such as .harmonicFaceValue and .minmodFaceValue.
These properties are implemented using subclasses of _CellToFaceVariable, specifically _ArithmeticCellToFaceVariable, _HarmonicCellToFaceVariable, and _MinmodCellToFaceVariable.
You can also make a custom interpolator by subclassing _CellToFaceVariable. Two such examples are _LevelSetDiffusionVariable and ScharfetterGummelFaceVariable (neither is well documented, I'm afraid).
You need to override the _calc_() method to provide your custom calculation. This method takes three arguments:
alpha: an array of the ratio (0 to 1) of the distance from the face to the cell on one side, relative to the distance from the cell on one side to the cell on the other side
id1: an array of indices of the cells on one side of the face
id2: an array of indices of the cells on the other side of the face
Note: you can ignore any if inline.doInline: clause and look at the _calc_() method defined under the else: clause.
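A minimal, untested sketch of such a subclass that takes the maximum of the two neighboring cell values, so a face touching any cell with z = 1 gets the full diffusion coefficient (this private API is undocumented; the method name and import path follow the pattern described above but may differ between FiPy versions, so check _ArithmeticCellToFaceVariable in your installation):

from fipy.tools import numerix
from fipy.variables.cellToFaceVariable import _CellToFaceVariable

class _MaxCellToFaceVariable(_CellToFaceVariable):
    def _calc_(self, alpha, id1, id2):
        # ignore alpha and take the larger of the two neighboring cell values
        cell1 = numerix.take(self.var, id1, axis=-1)
        cell2 = numerix.take(self.var, id2, axis=-1)
        return numerix.maximum(cell1, cell2)

You would then use _MaxCellToFaceVariable(z) in place of z.arithmeticFaceValue in the diffusion coefficient.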

MEL: extract the exact value of a keyframe in Maya

My question is pretty simple: I use a MEL command to extract camera data values.
I use this code:
keyframe -q -vc Camera0Node.translateX
It results in: 2385.11
The problem is my keyframe value is 2385.11010742.
The difference is minimal, but I want to have the exact value.
Any idea how I can get the non-rounded value?
Thank you
I'm pretty sure the result is your full value, but Maya only displays it as a rounded number.
As an example, let's assign that value directly to a variable:
$val = 2385.11010742;
Then print it to console:
print($val);
The result displays 2385.110107.
Now let's do a comparison:
print($val == 2385.110107); // returns 0 for False
But when comparing to the original value:
print($val == 2385.11010742); // returns 1 for True

Get every Y value from every line based on an X value

Using the OxyPlot library, I have LineSeries with a max count of 8. Given an X value (obtained from a left mouse click), how can I get (and show in the legend) the corresponding Y value for each line?
You can get the point value using the MouseDown event, which you attach to your LineSeries, as shown in the MouseDownEventHitTestResult example found here
var s1 = new LineSeries();
s1.MouseDown += (s, e) =>
{
    model.Subtitle = "Y value of nearest point in LineSeries: " +
                     Math.Round(e.HitTestResult.NearestHitPoint.Y);
    model.InvalidatePlot(false);
};
There doesn't appear to be any way to change much of what's in the legend area, as that's just a reflection of the series titles. You could display the value in the subtitle as in the example, or draw an annotation on the screen.
They have a whole bunch of examples you can look through for ideas here
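Since the question asks for the Y value of every line, here is a hedged sketch that looks up the nearest data point in each LineSeries on a click (it assumes the same OxyPlot version as the snippet above, where PlotModel still exposes MouseDown, plus using System.Linq; names are illustrative):

model.MouseDown += (s, e) =>
{
    var text = new System.Text.StringBuilder();
    foreach (var series in model.Series.OfType<LineSeries>())
    {
        // convert the click position to this series' data coordinates
        var clicked = series.InverseTransform(e.Position);
        // nearest data point by X distance
        var nearest = series.Points.OrderBy(p => Math.Abs(p.X - clicked.X)).First();
        text.AppendLine(series.Title + ": Y = " + nearest.Y);
    }
    model.Subtitle = text.ToString();
    model.InvalidatePlot(false);
};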

Matlab - Cropping 2d image maps in a loop and storing in a single variable

I have code to crop connected components of an input image, input, by finding the bounding coordinates from a binary image's label map, labelledmap ([labelledmap, labelcount] = bwlabel(hvedged, 8);).
I'm new to matlab, so this might sound stupid.
The problem is, I am unable to store the different cropped images in the same variable, because matlab seems to merge the ends of the already existing image and the new cropped image, i.e., it is storing the complete map between the two cropped images, the way I see it :/
This is the output using different variables for storing the cropped images (the kind of output I want):
[image: output using different variables for storing the cropped image]
This is the output I'm getting by storing the cropped image in the same variable (not helpful):
[image: output when storing the cropped image in the same variable]
I tried using an array of size equal to the total number of labels produced, but it gives the same result. I also tried clearvars to clear the output token image, ltoken, after every iteration of the loop, but it's not helping.
So, is there any possible way to display individual cropped images? Also, the number of cropped images might be in the thousands, so I want to use a loop for the cropping.
Here is part of the code. Thanks in advance ;)
for h=1:labelcount
    for i=1:r
        for j=1:c
            if labelledmap(i,j)==h
                if i<ltop
                    ltop=i;
                end
                if i>lbottom
                    lbottom=i;
                end
                if j<lleft
                    lleft=j;
                end
                if j>lright
                    lright=j;
                end
            end
        end
    end
    if ltop>5
        ltop=ltop-5;
    end
    if lbottom<r-5
        lbottom=lbottom+5;
    end
    if lleft>5
        lleft=lleft-5;
    end
    if lright<c-5
        lright=lright+5;
    end
    lwidth=lright-lleft;
    lheight=lbottom-ltop;
    ltoken=imcrop(input,[lleft ltop lwidth lheight]);
    figure('Name', 'Cropped Token'), imshow(ltoken);
    clearvars ltoken;
end
You need to initialize ltop, lbottom, lleft and lright for each iteration of label h. I think this is the reason why you get the cropped images "glued" together.
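A minimal sketch of that fix, resetting the bounds at the start of each label's pass so one label's box cannot bleed into the next:

for h=1:labelcount
    ltop = r;      % start at the extremes so the first pixel
    lbottom = 1;   % belonging to label h overwrites them
    lleft = c;
    lright = 1;
    % ... the per-label scan over i and j as in the question ...
end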
It is EXTREMELY inefficient to go through all the pixels for each and every one of your labels. Especially when you are expected to have many labels.
Use regionprops to get the 'BoundingBox' property for each label.
Here's an example:
st = regionprops( labelledmap, 'BoundingBox' );
imlist = cell( 1, numel(st) ); % pre-allocate
for ii = 1:numel(st)
    r = st(ii).BoundingBox;
    % I understand you want to increase the BB by 5 pixels at each side:
    r(1:2) = r(1:2) - 5;  % start point moves -5
    r(3:4) = r(3:4) + 10; % width and height increase by 10
    imlist{ii} = imcrop( input, r );
end
I'm still a bit in shock at your code that explicitly loops through all pixels just to find the bounding box. This is NOT the matlab way of doing things.
If you insist on NOT using regionprops, here's a more Matlab-ish way of finding the ii-th bounding box:
imsk = (labelledmap == ii);  % binary map, true for the ii-th region
xFlat = any(imsk, 1);        % collapse the rows: one value per column (x extent)
lleft = find( xFlat, 1, 'first' );
lright = find( xFlat, 1, 'last' );
yFlat = any(imsk, 2);        % collapse the columns: one value per row (y extent)
ltop = find( yFlat, 1, 'first' );
lbottom = find( yFlat, 1, 'last' );
No loops at all over image coordinates.

WPF: Finding an element along a path

I have not marked this question as answered yet.
The current accepted answer was accepted automatically because of the bounty time limit.
With reference to this programming game I am currently building.
As you can see from the above link, I am currently building a game in which user-programmable robots fight autonomously in an arena.
Now, I need a way to detect if a robot has detected another robot within a particular angle (depending on where the turret may be facing):
alt text http://img21.imageshack.us/img21/7839/robotdetectionrg5.jpg
As you can see from the above image, I have drawn a kind of point-of-view of a tank, which I now need to emulate in my game, checking each point in it to see if another robot is in view.
The bots are just canvases that are constantly translating on the Battle Arena (another canvas).
I know the heading of the turret (the way it is currently facing), and with that I need to find whether there are any bots in its path (and the path should be defined in a kind of 'viewpoint' manner, depicted in the image above in the form of the red 'triangle'). I hope the image makes clearer what I am trying to convey.
I hope that someone can guide me to the math involved in solving this problem.
[UPDATE]
I have tried the calculations that you have suggested, but they're not working properly, since, as you can see from the image, Bot 1 shouldn't be able to see Bot 2. Here is an example:
alt text http://img12.imageshack.us/img12/7416/examplebattle2.png
In the above scenario, Bot 1 is checking if he can see Bot 2. Here are the details (according to Waylon Flinn's answer):
angleOfSight = 0.69813170079773179 //in radians (40 degrees)
orientation = 3.3 //Bot1's current heading (191 degrees)
x1 = 518 //Bot1's Center X
y1 = 277 //Bot1's Center Y
x2 = 276 //Bot2's Center X
y2 = 308 //Bot2's Center Y
cx = x2 - x1 = 276 - 518 = -242
cy = y2 - y1 = 308 - 277 = 31
azimuth = Math.Atan2(cy, cx) = 3.0141873380511295
canHit = (azimuth < orientation + angleOfSight/2) && (azimuth > orientation - angleOfSight/2)
= (3.0141873380511295 < 3.3 + 0.349065850398865895) && (3.0141873380511295 > 3.3 - 0.349065850398865895)
= true
According to the above calculations, Bot1 can see Bot2, but as you can see from the image, that is not possible, since they are facing different directions.
What am I doing wrong in the above calculations?
The angle between the robots is atan2(y-distance, x-distance) (most platforms provide this two-argument arctangent, which does the quadrant adjustment for you). You then just have to check whether this angle is less than some threshold away from the current heading.
Edit 2020: Here's a much more complete analysis based on the updated example code in the question and a now-deleted imageshack image.
Atan2: The key function you need to find an angle between two points is atan2. This takes a Y-coordinate and X-coordinate of a vector and returns the angle between that vector and the positive X axis. The value will always be wrapped to lie between -Pi and Pi.
Heading vs Orientation: atan2, and in general all your math functions, work in the "mathematical standard coordinate system", which means an angle of "0" corresponds to due east, and angles increase counterclockwise. Thus, a "mathematical angle" of Pi / 2, as given by atan2(1, 0), means an orientation of "90 degrees counterclockwise from due east", which matches the point (x=0, y=1). "Heading" is a navigational idea that expresses orientation as a clockwise angle from due north.
Analysis: In the now-deleted imageshack image, your "heading" of 191 degrees corresponded to a south-south-west direction. That is actually a trigonometric "orientation" of -101 degrees, or -1.76 radians. The first issue in the updated code is therefore conflating "heading" and "orientation". You can get the latter from the former by orientation_degrees = 90 - heading_degrees or orientation_radians = Math.PI / 2 - heading_radians, or alternatively you could specify input orientations in the mathematical coordinate system rather than the nautical heading coordinate system.
Checking that an angle lies between two others: Checking that a vector lies between two other vectors is not as simple as checking that the numeric angle value lies between the two, because of the way angles wrap at Pi/-Pi.
Analysis: in your example, the orientation is 3.3, the right edge of view is orientation 2.95, and the left edge is 3.65. The calculated azimuth is 3.0141873380511295, which happens to be correct (it does lie between the two). However, this check would fail for azimuth values like -3, which should also count as a "hit". See Calculating if an angle is between two angles for solutions.
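A wrap-safe version of the test, sketched with the variable names from the question's update (all angles in radians, mathematical convention): wrap the difference of the two angles into (-Pi, Pi] and compare it against half the field of view.

static double WrapToPi(double angle)
{
    // normalize any angle into the interval (-Pi, Pi]
    while (angle > Math.PI) angle -= 2 * Math.PI;
    while (angle <= -Math.PI) angle += 2 * Math.PI;
    return angle;
}

bool canHit = Math.Abs(WrapToPi(azimuth - orientation)) <= angleOfSight / 2;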
Calculate the relative angle and distance of each robot relative to the current one. If the angle is within some threshold of the current heading and within the max view range, then it can see it.
The only tricky thing will be handling the boundary case where the angle goes from 2pi radians to 0.
Something like this within your bot's class (C# code):
/// <summary>
/// Check to see if another bot is visible from this bot's point of view.
/// </summary>
/// <param name="other">The other bot to look for.</param>
/// <returns>True iff <paramref name="other"/> is visible for this bot with the current turret angle.</returns>
private bool Sees(Bot other)
{
    // Get the actual angle of the vector between the bots.
    var actualAngle = Math.Atan2(this.X - other.X, this.Y - other.Y) * 180/Math.PI + 360;
    // Compare that angle to the turret angle +/- the field of vision.
    var minVisibleAngle = actualAngle - (FOV_ANGLE / 2) + 360;
    var maxVisibleAngle = actualAngle + (FOV_ANGLE / 2) + 360;
    if (this.TurretAngle >= minVisibleAngle && this.TurretAngle <= maxVisibleAngle)
    {
        return true;
    }
    return false;
}
Notes:
The +360's are there to force any negative angles to their corresponding positive values and to shift the boundary case of angle 0 to somewhere easier to range test.
This might be doable using only radian angles but I think they're dirty and hard to read :/
See the Math.Atan2 documentation for more details.
I highly recommend looking into the XNA Framework, as it's created with game design in mind. However, it doesn't use WPF.
This assumes that:
there are no obstacles to obstruct the view
Bot class has X and Y properties
The X and Y properties are at the center of the bot.
Bot class has a TurretAngle property which denotes the turret's positive angle relative to the x-axis, counterclockwise.
Bot class has a static const angle called FOV_ANGLE denoting the turret's field of vision.
Disclaimer: This is not tested or even checked to compile, adapt it as necessary.
A couple of suggestions after implementing something similar (a long time ago!):
The following assumes that you are looping through all bots on the battlefield (not a particularly nice practice, but quick and easy to get something working!)
1) It's a lot easier to check whether a bot is in range than whether it can currently be seen within the FOV, e.g.:
double range = Math.Sqrt(Math.Pow(my.Location.X - bots.Location.X, 2) +
                         Math.Pow(my.Location.Y - bots.Location.Y, 2));
if (range < maxRange)
{
    // check for FOV
}
This means a lot of FOV checking can potentially be short-circuited, speeding up the simulation. As a caveat, you could add some randomness here to make it more interesting, such that after a certain distance the chance to see is linearly proportional to the range of the bot.
2) This article seems to have the FOV calculation stuff on it.
3) As an AI graduate... have you tried neural networks? You could train one to recognise whether or not a robot is in range and a valid target. This would negate any horribly complex and convoluted maths! You could have a multi-layer perceptron [1], [2], feed in the bot's coordinates and the target's coordinates, and receive a nice fire/no-fire decision at the end. WARNING: I feel obliged to tell you that this methodology is not the easiest to achieve and can be horribly frustrating when it goes wrong. Due to the (simple) non-deterministic nature of this form of algorithm, debugging can be a pain. Plus you will need some form of learning, either back propagation (with training cases) or a genetic algorithm (another complex process to perfect)! Given the choice I would use number 3, but it's not for everyone!
It can be quite easily achieved with the use of a concept in vector math called dot product.
http://en.wikipedia.org/wiki/Dot_product
It may look intimidating, but it's not that bad. This is the most correct way to deal with your FOV issue, and the beauty is that the same math works whether you are dealing with 2D or 3D (that's when you know the solution is correct).
(NOTE: If anything is not clear, just ask in the comment section and I will fill in the missing links.)
Steps:
1) You need two vectors, one is the heading vector of the main tank. Another vector you need is derived from the position of the tank in question and the main tank.
For our discussion, let's assume the heading vector for main tank is (ax, ay) and vector between main tank's position and target tank is (bx, by). For example, if main tank is at location (20, 30) and target tank is at (45, 62), then vector b = (45 - 20, 62 - 30) = (25, 32).
Again, for purpose of discussion, let's assume main tank's heading vector is (3,4).
The main goal here is to find the angle between these two vectors, and dot product can help you get that.
2) Dot product is defined as
a * b = |a||b| cos(angle)
read as a (dot product) b since a and b are not numbers, they are vectors.
3) or expressed another way (after some algebraic manipulation):
angle = acos((a * b) / |a||b|)
angle is the angle between the two vectors a and b, so this info alone can tell you whether one tank can see another or not.
|a| is the magnitude of the vector a, which, according to the Pythagorean theorem, is just sqrt(ax * ax + ay * ay); the same goes for |b|.
Now the question is: how do you find a * b (a dot b) in order to get the angle?
4) Here comes the rescue. It turns out that the dot product can also be expressed as below:
a * b = ax * bx + ay * by
So angle = acos((ax * bx + ay * by) / |a||b|)
If the angle is less than half of your FOV, then the tank in question is in view. Otherwise it's not.
So, using the example numbers above:
a = (3, 4)
b = (25, 32)
|a| = sqrt(3 * 3 + 4 * 4)
|b| = sqrt(25 * 25 + 32 * 32)
angle = acos((3 * 25 + 4 * 32) / (|a| |b|))
(Be sure to convert the resulting angle to degree or radian as appropriate before comparing it to your FOV)
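Putting the example together in code (a short sketch; fovRadians is an assumed variable holding your full field-of-view angle):

double ax = 3, ay = 4;               // heading vector a of the main tank
double bx = 45 - 20, by = 62 - 30;   // vector b from main tank to target tank
double dot = ax * bx + ay * by;      // a (dot) b
double magA = Math.Sqrt(ax * ax + ay * ay);
double magB = Math.Sqrt(bx * bx + by * by);
double angle = Math.Acos(dot / (magA * magB)); // radians
bool inView = angle <= fovRadians / 2;         // within half the FOV?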
This will tell you if the center of canvas2 can be hit by canvas1. If you want to account for the width of canvas2 it gets a little more complicated. In a nutshell, you would have to do two checks, one for each of the relevant corners of canvas2, instead of one check on the center.
/// assuming canvas1 is firing on canvas2
// positions of canvas1 and canvas2, respectively
// (you're probably tracking these in your Tank objects)
int x1, y1, x2, y2;
// orientation of canvas1 (angle)
// (you're probably tracking this in your Tank objects, too)
double orientation;
// angle available for firing
// (ditto, Tank object)
double angleOfSight;
// vector from canvas1 to canvas2
int cx, cy;
// angle of vector between canvas1 and canvas2
double azimuth;
// can canvas1 hit the center of canvas2?
bool canHit;
// find the vector from canvas1 to canvas2
cx = x2 - x1;
cy = y2 - y1;
// calculate the angle of the vector
azimuth = Math.Atan2(cy, cx);
// correct for Atan range (-pi, pi)
if(azimuth < 0) azimuth += 2*Math.PI;
// determine if canvas1 can hit canvas2
// can eliminate the and (&&) with Math.Abs but this seems more instructive
canHit = (azimuth < orientation + angleOfSight) &&
(azimuth > orientation - angleOfSight);
Looking at both of your questions, I'm thinking you can solve this problem using the math provided; you then have to solve many other issues around collision detection, firing bullets, etc. These are non-trivial to solve, especially if your bots aren't square. I'd recommend looking at physics engines: Farseer on CodePlex is a good WPF example, but this makes it into a project way bigger than a high school dev task.
Best advice I got for high marks: do something simple really well, don't part-deliver something brilliant.
Does your turret really have that wide a firing pattern? The path a bullet takes would be a straight line, and it would not get bigger as it travels. You should have a simple vector in the direction of the turret representing the turret's kill zone. Each tank would have a bounding circle representing its vulnerable area. Then you can proceed the way they do in ray tracing, with a simple ray/circle intersection. Look at section 3 of the document Intersection of Linear and Circular Components in 2D.
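A short sketch of that ray/circle test (it assumes the firing direction (dx, dy) is normalized to unit length; all names are illustrative):

static bool RayHitsCircle(double ox, double oy,  // ray origin (turret)
                          double dx, double dy,  // unit direction of fire
                          double cx, double cy,  // circle center (tank)
                          double r)              // circle radius
{
    // vector from the ray origin to the circle center
    double fx = cx - ox, fy = cy - oy;
    // projection onto the ray direction; negative means the circle is behind
    double t = fx * dx + fy * dy;
    if (t < 0) return false;
    // squared distance from the circle center to the closest point on the ray
    double distSq = (fx * fx + fy * fy) - t * t;
    return distSq <= r * r;
}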
Your updated problem seems to come from different "zero" directions for orientation and azimuth: an orientation of 0 seems to mean "straight up", but an azimuth of 0 means "straight right".
