How to get Nao robot's joint coordinates? - nao-robot

For my project, a human follows a Nao robot's movements, and I have human skeleton joint data in world coordinates as (x, y, z) from a depth sensor. Now I need the robot's joint coordinates to evaluate the human's imitation. The depth sensor cannot track the Nao robot.
I have found the robot's sensor positions using the motionProxy.getPosition function. I have also tried getting the joint names with the deprecated getJointNames function, but there is no function for getting the "positions" of these joints.
Should I manually measure the distances between joints and sensors and calculate the joint positions? Also, has anyone tried using Vicon or another motion-tracking system on a Nao robot?

You can get the position of any joint in Cartesian coordinates using ALMotion.getPosition.
e.g.:
ALMotion.getPosition("RShoulderPitch", 0, 1)
[-0.627288878, -0.0759276226, 0.991154313, -0.000606168585, -0.315976441, 0.014949495]
ALMotion.getPosition("HeadPitch", FRAME_TORSO, 1)
[-0.0248374045, -0.000473844644, 0.991121054, -0.000527210592, -0.317508608, -0.0489440858]
...
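For example, here is a minimal Python sketch (the robot address below is a placeholder) that dumps the position returned for every joint as [x, y, z, wx, wy, wz], which you can then compare against the skeleton joints from the depth sensor:

from naoqi import ALProxy

FRAME_WORLD = 1            # NAOqi frames: 0 = FRAME_TORSO, 1 = FRAME_WORLD, 2 = FRAME_ROBOT
USE_SENSOR_VALUES = True   # use measured joint angles rather than commanded ones

motion = ALProxy("ALMotion", "nao.local", 9559)   # placeholder address/port

for name in motion.getBodyNames("Body"):
    # getPosition returns [x, y, z, wx, wy, wz]: metres and radians in the chosen frame.
    x, y, z, wx, wy, wz = motion.getPosition(name, FRAME_WORLD, USE_SENSOR_VALUES)
    print("%-16s x=%.3f  y=%.3f  z=%.3f" % (name, x, y, z))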

Related

NAO robot grasps a ball on the ground

I create a timeline in Choregraphe and switch to animation mode. I have trouble making NAO crouch and getting its hand to reach a ball on the ground. How should I record each gesture, e.g. the intervals between two gestures and how many gestures I should record? Meanwhile, the robot often falls down... How can I adjust the gestures?
Choregraphe's animation mode helps you roughly design the robot's movement, but it is up to you to refine the result by spacing the key frames properly and tuning the movement to maintain balance.
Robot animation requires time and skill, and comes with specific challenges:
You may have to move the robot into positions that are not balanced and require too much strength to maintain, resulting in hot joints*, loss of strength, or the robot falling. Hence you need to be ready to carry the robot regularly while recording or testing positions.
The positions are unavoidably mechanically constrained.
Real joints are imperfect: they have limited maximum strength and speed, and they have play.
The real world has real obstacles, and the robot tries to avoid them, altering the desired trajectories.
*When you get hot joints, just let the robot rest for two minutes; you do not need to reboot it.

Physics in 3D world in OPENGL with C language only

I've been trying to code a 3D game where the player shoots an arrow, and I want to work out the equations for 3D. I know the equations for the 2D case:
x = v0 * cosθ * t
y = v0 * sinθ * t - 0.5 * g * t^2
But how do I use these equations in my 3D world where I have the Z axis?
Instead of making the arrows follow an explicit curve, I suggest simulating the arrow step by step.
What you need to store is a position (with x,y,z coordinates, starting off at the archer's location) and a velocity (also with x,y,z coordinates, starting off as some constant times the direction the player is looking), and some scene gravity (also with x,y,z coordinates, but it'll probably point straight down).
When the simulation progresses by a timestep of t, add t times the velocity to the position, then add t times gravity to the velocity.
This way, you're free to do more interesting things to the arrow later, like having wind act on it (add t times the wind to the velocity), having air resistance act on it (scale the velocity down slightly each step, by an amount proportional to t), or redirecting it (change the velocity to something else entirely), without having to recalculate the path of the arrow.
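A rough sketch of that update loop (in Python for brevity; the question targets C, but the arithmetic carries over directly, and the values below are illustrative):

import math

def step(position, velocity, gravity, dt):
    # Move with the current velocity, then let gravity change the velocity.
    position = tuple(p + v * dt for p, v in zip(position, velocity))
    velocity = tuple(v + g * dt for v, g in zip(velocity, gravity))
    return position, velocity

# Example: arrow fired at 30 m/s, 45 degrees upward along +x (y is up).
speed, angle = 30.0, math.radians(45)
pos = (0.0, 1.5, 0.0)                                          # archer's location
vel = (speed * math.cos(angle), speed * math.sin(angle), 0.0)  # initial velocity
g   = (0.0, -9.81, 0.0)                                        # scene gravity

for _ in range(120):                                           # two seconds at 60 steps per second
    pos, vel = step(pos, vel, g, 1.0 / 60.0)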

Do depth values in AVDepthData (from TrueDepth camera) indicate distance from camera or camera plane?

Do depth values in AVDepthData (from TrueDepth camera) indicate distance in meters from the camera, or perpendicular distance from the plane of the camera (i.e. z-value in camera space)?
My goal is to get an accurate 3D point from the depth data, and this distinction is important for accuracy. I've found lots online regarding OpenGL or Kinect, but not for TrueDepth camera.
FWIW, this is the algorithm I use. I find the value of the depth buffer at a pixel located using some OpenCV feature detection. Below is the code I use to find the real-world 3D point at a given pixel, let cgPt: CGPoint. This algorithm seems to work quite well, but I'm not sure whether a small error is introduced by the assumption that depth is the distance to the camera plane.
let depth = 1/disparity    // disparity is the value sampled from the disparity map at cgPt
let vScreen = sceneView.projectPoint(SCNVector3Make(0, 0, -depth))
// cgPt is the 2D coordinates at which I sample the depth
let worldPoint = sceneView.unprojectPoint(SCNVector3Make(Float(cgPt.x), Float(cgPt.y), vScreen.z))
I'm not sure of authoritative info either way, but it's worth noticing that capture in a disparity (not depth) format uses distances based on a pinhole camera model, as explained in the WWDC17 session on depth photography. That session is primarily about disparity-based depth capture with back-facing dual cameras, but a lot of the lessons in it are also valid for the TrueDepth camera.
That is, disparity is 1/depth, where depth is distance from subject to imaging plane along the focal axis (perpendicular to imaging plane). Not, say, distance from subject to the focal point, or straight-line distance to the subject's image on the imaging plane.
IIRC the default formats for TrueDepth camera capture are depth, not disparity (that is, depth map "pixel" values are meters, not 1/meters), but lacking a statement from Apple it's probably safe to assume the model is otherwise the same.
It looks like it measures distance from the camera's plane rather than a straight line from the pinhole. You can test this out by downloading the Streaming Depth Data from the TrueDepth Camera sample code.
Place the phone vertically 10 feet away from the wall, and you should expect to see one of the following:
If it measures from the focal point to the wall as a straight line, you should expect to see a radial pattern (e.g. the point closest to the camera is straight in front of it; the points farthest from the camera are those closer to the floor and ceiling).
If it measures distance from the camera's plane, then you should expect the wall color to be nearly uniform (as long as you're holding the phone parallel to the wall).
After downloading the sample code and trying it out, you will notice that it behaves like #2, meaning it's distance from the camera's plane, not from the camera itself.
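If you work from the raw depth map and the camera intrinsics instead of SceneKit's project/unproject helpers, interpretation #2 (depth = perpendicular distance to the camera plane, i.e. the z value in camera space) lets you back-project a pixel with the plain pinhole model. A minimal sketch (in Python for brevity), assuming fx, fy, cx, cy come from the camera calibration data; the numbers below are placeholders:

def unproject(u, v, depth, fx, fy, cx, cy):
    # Back-project pixel (u, v) with depth in metres (distance to the camera
    # plane, i.e. the z value in camera space) using the pinhole model.
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return (x, y, depth)

# A pixel at the principal point always unprojects to a point straight ahead of the camera.
print(unproject(320.0, 240.0, 1.5, fx=500.0, fy=500.0, cx=320.0, cy=240.0))   # (0.0, 0.0, 1.5)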

S Curve motion profile (motor speed v/s time)

I am trying to control an industrial AC servo motor using my XE166 device.
The controller interfaces with the servo controller using the PULSE and DIRECTION control.
To achieve jerk-free motion, I have been trying to create an S-curve motion profile (motor speed vs. time).
Calculating the instantaneous speed is no problem, as I know the distance moved by the motor per pulse and the pulse duration.
I need to understand how to arrive at a mathematical equation that would tell me what the nth pulse's duration should be so that the speed profile follows an S-curve.
Since this must be a common requirement in any domain involving motion control (robotics, CNC, industrial), there must be some standard reference for it.
Thanks in advance.
I have just answered a similar question over on robotics.
The standard solution would be to use a low-level velocity PID controller to generate the PULSE and DIRECTION signals given a velocity demand, and then have an outer supervisory controller which ramps the velocity demand (mm/s) up or down in accordance with your required acceleration (mm/s/s) and jolt (mm/s/s/s) control parameters.
Initially, I would suggest that you try a trapezoidal velocity profile (instantaneous change in acceleration), as I suggested in Control both Velocity and Position (Linear actuator), and then extend it to add the jolt/jerk term later.
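A rough Python sketch of that supervisory ramp, extended with the jolt (jerk) limit so the velocity demand traces an S-curve; the nth pulse period then follows from the distance moved per pulse. All parameter values are illustrative placeholders:

DIST_PER_PULSE = 0.01    # mm moved per PULSE edge (machine-specific)
V_MAX = 100.0            # mm/s    target velocity
A_MAX = 200.0            # mm/s^2  acceleration limit
J_MAX = 400.0            # mm/s^3  jolt (jerk) limit
DT    = 0.001            # s       supervisory update period

def s_curve_pulse_periods():
    # Yield successive pulse periods (s) while ramping the velocity demand
    # from 0 up to V_MAX with bounded acceleration and bounded jolt.
    v, a = 0.0, 0.0
    while v < V_MAX:
        # Ramp acceleration up within the jolt limit, and start ramping it back
        # down once the remaining velocity gap can be closed while a decays to zero.
        if V_MAX - v > a * a / (2.0 * J_MAX):
            a = min(a + J_MAX * DT, A_MAX)
        else:
            a = max(a - J_MAX * DT, 0.0)
        v = min(v + a * DT, V_MAX)
        if v > 0.0:
            yield DIST_PER_PULSE / v    # nth pulse period = distance per pulse / current speed

pulse_periods = list(s_curve_pulse_periods())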

What if a particle hit the wall in a scenario of a particle filter?

I am trying to implement a particle filter. I am given a map of the walls, and I try to localize a robot in this map. Following the particle filter method, I initialize 1000 random particles, and at each step I move these 1000 particles according to a movement command, i.e. an angle-odometry pair. After a move, I calculate the likelihood of each particle by comparing its predicted measurement with the sensed distance to the wall, and then resample the particles based on their likelihoods. I think this is the basic particle filter process. What confuses me is: how should I deal with situations where some of the particles hit a wall while moving forward?
This is probably too late for you, but it may help other people. The particle filter is a probabilistic approach, where particles can be sampled anywhere based on the motion and prior distributions.
In your case, you can let particles land on (or inside) a wall without any worry. Afterwards, the likelihood step will return a very low probability for such a particle, and resampling will automatically replace it with another particle of higher probability.
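A minimal Python sketch of that weight-then-resample step, where a particle that ends up inside a wall simply receives a (near-)zero likelihood; the Gaussian sensor model and the precomputed in_wall flags are illustrative placeholders:

import math
import random

def likelihood(predicted_range, measured_range, sigma=0.2):
    # Gaussian sensor model: how well the particle explains the measurement.
    d = predicted_range - measured_range
    return math.exp(-0.5 * (d / sigma) ** 2)

def weight_and_resample(particles, predicted_ranges, measured_range, in_wall):
    # particles[i]: particle state; predicted_ranges[i]: distance to the wall that
    # particle i would measure; in_wall[i]: True if particle i ended up inside a wall.
    weights = [
        1e-12 if hit else likelihood(r, measured_range)
        for r, hit in zip(predicted_ranges, in_wall)
    ]
    # Particles inside walls carry negligible weight, so they are effectively never drawn again.
    return random.choices(particles, weights=weights, k=len(particles))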
