Project Tango - Start of Service Coordinate System, C API

I am working with the Motion Tracking app, trying to record pose data using the C API. The pose data is reported with respect to the START_OF_SERVICE coordinate frame, and I am having trouble understanding this coordinate system.
Is the Z+ always aligned with gravity?
Is the back of the device used as the Y axis?
The documentation says that the X-Y plane is perpendicular to Z and level with the ground. If Z+ is aligned with gravity and the Tango tablet is at an angle with gravity, how are the X and Y aligned?

Is the Z+ always aligned with gravity?
Per the docs:
Project Tango uses a right-handed, local-level frame for the START_OF_SERVICE and AREA_DESCRIPTION coordinate frames. This convention sets the Z-axis aligned with gravity, with Z+ pointed upwards, and the X-Y plane is perpendicular to gravity and locally level with the ground plane.
So yes. For START_OF_SERVICE and AREA_DESCRIPTION base frames.
Is the back of the device used as the Y axis?
Per the docs:
Project Tango uses the direction the back of the device is pointed when the service started as the Y axis
Perpendicular to the device, with Y+ pointing out of the back (in the direction the back of the device faced when the service started).
The documentation says that the X-Y plane is perpendicular to Z and level with the ground. If Z+ is aligned with gravity and the Tango tablet is at an angle with gravity, how are the X and Y aligned?
Imagine you are holding the device as shown in the START_OF_SERVICE frame illustration. Notice how the device is square with the room.
Now tilt the tablet forward or back about the x axis. The device moves, but all the axes stay the same.
Now rotate the device right or left about the y axis. All the axes stay the same.
So if your device is tilted, first rotate the device about the y axis, then about the x axis, until the tablet screen aligns with the z axis; at that point it is easier to visualize where your axes are located.
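For reference, here is a minimal sketch of querying such a pose with the Tango C API; the surrounding setup (connecting the service, callbacks, error handling) is omitted and assumed to be done elsewhere:

#include <tango_client_api.h>
#include <stdio.h>

// Ask for the device pose expressed in the START_OF_SERVICE frame.
// Assumes the Tango service has already been connected.
void print_current_pose() {
  TangoCoordinateFramePair frames;
  frames.base = TANGO_COORDINATE_FRAME_START_OF_SERVICE;  // Z+ up, locally level
  frames.target = TANGO_COORDINATE_FRAME_DEVICE;

  TangoPoseData pose;
  // A timestamp of 0.0 requests the most recent available pose.
  if (TangoService_getPoseAtTime(0.0, frames, &pose) == TANGO_SUCCESS &&
      pose.status_code == TANGO_POSE_VALID) {
    // translation is x, y, z in meters; orientation is a quaternion (x, y, z, w).
    printf("t = %.3f %.3f %.3f\n",
           pose.translation[0], pose.translation[1], pose.translation[2]);
  }
}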

Related

How to get heading direction from raw IMU data?

I have raw acceleration and rotation data for each axis (x, y, z), but I don't know which axis points along gravity. The IMU is mounted differently on each object, so I can't tell which way it is installed. Sometimes the x-axis is the gravity direction, sometimes the y-axis, sometimes the z-axis, and sometimes none of them.
I need to detect when the object (with the IMU mounted) is accelerating at 1 m/s^2 in the heading direction.
If the z-axis is the gravity direction and the x-axis is the direction of motion, I just need to check for an Ax value of 1 m/s^2 or more (assuming the IMU is mounted as in the image below).
[image 1]
But I don't know which direction is the direction of motion and which is the direction of gravity. Therefore, I want to determine the moving direction from the three acceleration signals and the three gyro signals.
Even if the sensor is installed at an angle, as in Figure 2, how can I determine that the sensor is accelerating at 1 m/s^2 in the moving direction? I need to code this in C. Since there is not much computing headroom in my embedded environment, the implementation should be as simple as possible. Is there a good solution?
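One common, cheap approach (a sketch of my own, not from the question; ALPHA and the function name are illustrative assumptions) is to estimate the gravity direction with a slow low-pass filter on the accelerometer and then look at the acceleration component perpendicular to it:

#include <math.h>

// Gravity estimate, updated by a simple exponential low-pass filter.
// ALPHA is an illustrative assumption; tune it to your sample rate.
#define ALPHA 0.02f

static float g[3] = {0.0f, 0.0f, 9.81f};  // initial guess for gravity

// Feed one accelerometer sample (m/s^2). Returns the magnitude of the
// acceleration component perpendicular to the current gravity estimate,
// i.e. the acceleration in the horizontal plane.
float horizontal_accel(const float a[3]) {
  int i;
  float dot = 0.0f, gg = 0.0f, h[3], mag = 0.0f;

  for (i = 0; i < 3; ++i)                    // slow low-pass -> gravity estimate
    g[i] = (1.0f - ALPHA) * g[i] + ALPHA * a[i];

  for (i = 0; i < 3; ++i) { dot += a[i] * g[i]; gg += g[i] * g[i]; }
  for (i = 0; i < 3; ++i) {                  // remove the gravity component
    h[i] = a[i] - (dot / gg) * g[i];
    mag += h[i] * h[i];
  }
  return sqrtf(mag);                         // compare against 1.0 m/s^2
}

Distinguishing the true heading direction from lateral motion within that horizontal component would need an extra step (for example, tracking the dominant horizontal acceleration direction over time); the sketch above only gives the total horizontal acceleration.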

How to transform a 3D bounding box from pointclouds to matched RGB frames in ROS?

I am doing SLAM with a RealSense D455 depth camera on ROS (https://github.com/IntelRealSense/realsense-ros/wiki/SLAM-with-D435i) and I am able to create 3D point clouds of the environment.
I am new to ROS. My goal is to place a 3D bounding box around a region of the global point cloud and then project that same box onto the RGB frames that correspond to those points (at the same coordinates) in the global point cloud.
I have RGB, depth, and point cloud topics and tf, but since I am new to this field I do not know how to find the RGB frames that correspond to each point in the global point cloud,
or how to do the same operation from the point cloud to the RGB frames.
I would be grateful if someone could help me with that.
Get 3D bounding box points.
Transform them into the camera's frame (z is depth).
Obtain your camera's calibration model (intrinsic and extrinsic parameters, focal length, etc.); check the OpenCV model, the Scaramuzza model, or another camera calibration model.
Project the 3D points to 2D pixels (OpenCV has a projectPoints() function); a sketch follows.
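As a hedged illustration of the calibration and projection steps (the corner coordinates and intrinsic values below are made up; use the real ones from your camera's calibration):

#include <opencv2/calib3d.hpp>
#include <opencv2/core.hpp>
#include <vector>

int main() {
  // 8 corners of a 3D bounding box, already expressed in the camera frame
  // (x right, y down, z forward/depth) -- placeholder values.
  std::vector<cv::Point3f> corners = {
      {0.5f, 0.2f, 2.0f}, {1.0f, 0.2f, 2.0f}, {1.0f, 0.7f, 2.0f}, {0.5f, 0.7f, 2.0f},
      {0.5f, 0.2f, 2.5f}, {1.0f, 0.2f, 2.5f}, {1.0f, 0.7f, 2.5f}, {0.5f, 0.7f, 2.5f}};

  // Intrinsics: fx, fy, cx, cy from your camera calibration (illustrative numbers).
  cv::Mat K = (cv::Mat_<double>(3, 3) << 615.0, 0, 320.0,
                                         0, 615.0, 240.0,
                                         0, 0, 1);
  cv::Mat dist = cv::Mat::zeros(5, 1, CV_64F);  // distortion coefficients

  // The points are already in the camera frame, so no extra rotation/translation.
  cv::Mat rvec = cv::Mat::zeros(3, 1, CV_64F);
  cv::Mat tvec = cv::Mat::zeros(3, 1, CV_64F);

  std::vector<cv::Point2f> pixels;
  cv::projectPoints(corners, rvec, tvec, K, dist, pixels);
  // 'pixels' now holds the 2D image coordinates of the box corners;
  // draw them on the matching RGB frame (e.g. with cv::line).
  return 0;
}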

OpenGL: Sphere texture appearing oddly

I'm currently trying to map this pool ball texture to a sphere I have created. My approach is as follows:
Generate the sphere vertices
For every sphere vertex, translate that vertex's coordinates from the OpenGL world to texture coordinates.
I want the white circle with the '1' in it to appear at the top of the sphere (at z=1), so I am using the x and z coordinates of the sphere vertices.
The texture file I am using has multiple textures. The texture below is the one I am concerned with. In the texture file, the top left of this particular texture is at (0.01, 0.01) and the bottom right is at (0.24, 0.24). If my math is right, this makes the dead center at about (0.125, 0.125). Since I want the white circle to be on top of the ball (z=1), I've come up with the following two lines of code to map the points:
// map x and z in [-1, 1] to texture coordinates in [0.01, 0.24]
tex_coords[i].x = 0.125 + (verticies[i].x)*0.115;
tex_coords[i].y = 0.125 + (verticies[i].z)*0.115;
My logic is that if X or Z is 0, the respective coordinate is 0.125, which is right in the middle. Otherwise, X and Z range from -1 to 1, so the maximum value we can reach is 0.24 and the minimum value is 0.01.
As you can see in the bottom screenshot, something has gone wrong. If you look very closely you can see that one tiny part of the sphere is colored white.
It turned out there was a discrepancy between one of my shaders and my init function: I had a variable called "vTexCoord" in my shaders but was using "vTexCoords" in my init function.
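As a small hedged addition (the helper function and the loader header are my own, not from the question), this kind of name mismatch can be caught early by checking the attribute lookup:

#include <GL/glew.h>   // or whatever loader declares glGetAttribLocation
#include <cstdio>

// Returns the attribute location, warning if the name does not exist in the
// linked program -- a typo like "vTexCoords" vs "vTexCoord" shows up as -1.
GLint getAttribChecked(GLuint program, const char* name) {
  GLint loc = glGetAttribLocation(program, name);
  if (loc == -1)
    std::fprintf(stderr, "attribute '%s' not found in program\n", name);
  return loc;
}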

OpenGL Rotate an Object around Its Local Axes

Imagine a 3D rectangle at the origin. It is first rotated about the Y-axis. So far so good. Now it is rotated about the X-axis. However, OpenGL (API: glRotatef) interprets the X-axis as the global X-axis. How can I ensure that the "axes move with the object"?
This is very much like an airplane. For example, if yaw (Y rotation) is applied first and then pitch (X rotation), a correct pitch would be an X rotation about the plane's local axis.
EDIT: I have seen this called the gimbal lock problem, but I don't think that's what it is.
You cannot consistently describe an aeroplane's orientation as one x rotation and one y rotation. Not even if you also store a z rotation. That's exactly the gimbal lock problem.
The crux of it is that you have to apply the rotations in some order. Say it's x then y then z for the sake of argument. Then what happens if the x rotation is by 90 degrees? That folds the y axis onto where the z axis was. Then say the y rotation is also by 90 degrees. That's now bent the z axis onto where the x axis was. So now what effect does any z rotation have?
That's just an easy to grasp example. It's not a special case. You can't wave your hands out of it by saying "oh, I'll detect when to do z rotations first" or "I'll do 90 degree rotations with a special pathway" or any other little hack. Trying to store and update orientations as three independent scalars doesn't work.
In classic OpenGL, a call to glRotatef means "... and then rotate the current matrix like this". It's not relative to world coordinates or to model coordinates or to any other space that you're thinking in.
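One common fix, sketched below, is to stop storing the orientation as separate yaw/pitch scalars and instead keep one accumulated orientation matrix, post-multiplying each incremental rotation so it acts about the object's current local axes. This is not spelled out in the answer above; the helper names are mine, and a quaternion would work just as well:

#include <GL/gl.h>
#include <cmath>

// 4x4 column-major matrices, as OpenGL expects.
struct Mat4 { float m[16]; };

static Mat4 identity() {
  Mat4 r = {};
  r.m[0] = r.m[5] = r.m[10] = r.m[15] = 1.0f;
  return r;
}

// a * b, column-major
static Mat4 mul(const Mat4& a, const Mat4& b) {
  Mat4 out = {};
  for (int c = 0; c < 4; ++c)
    for (int r = 0; r < 4; ++r)
      for (int k = 0; k < 4; ++k)
        out.m[c * 4 + r] += a.m[k * 4 + r] * b.m[c * 4 + k];
  return out;
}

// Rotation about an axis: 0 = x, 1 = y, 2 = z (angle in radians).
static Mat4 axisRotation(int axis, float angle) {
  Mat4 r = identity();
  float c = std::cos(angle), s = std::sin(angle);
  int i = (axis + 1) % 3, j = (axis + 2) % 3;
  r.m[i * 4 + i] = c;  r.m[j * 4 + j] = c;
  r.m[j * 4 + i] = -s; r.m[i * 4 + j] = s;
  return r;
}

static Mat4 orientation = identity();  // accumulated object orientation

// Call these from your input handling.
void pitchLocal(float a) { orientation = mul(orientation, axisRotation(0, a)); }  // local X
void yawLocal(float a)   { orientation = mul(orientation, axisRotation(1, a)); }  // local Y

void drawObject() {
  glPushMatrix();
  glMultMatrixf(orientation.m);  // apply the accumulated orientation
  // ... draw the rectangle here ...
  glPopMatrix();
}

Post-multiplying (orientation * R) applies each new rotation in the object's local frame; pre-multiplying (R * orientation) would rotate about the world axes instead.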

Rotate Camera in the Direction behind object in OpenGL

I'm making a game in OpenGL, using freeglut.
I have a car which I am able to move back and forward using keys, and the camera follows it. Now, when I turn the car (glRotate in the xz plane), I want to change the camera position (using gluLookAt) so that it always points at the back of the car.
Any suggestions how do I do that?
For camera follow I use the object's transform matrix:

1. Get the object transform matrix.
   camera = object
   Use glGetFloatv(GL_MODELVIEW_MATRIX, ...) or whatever you have for that.

2. Shift/rotate it so the Z axis points where you want to look.
   I use objects aligned with forward along the Z axis, but not all mesh models are like this, so rotate by +/-90 degrees around x, y or z to match this:
   Z axis is forward (or backward, depending on your projection matrix and depth function),
   X axis is right,
   Y axis is up,
   with respect to your camera/screen coordinate system (projection matrix). Then translate to the behind position.

3. Apply POV rotation (optional).
   If you want to rotate the camera view slightly away from the forward direction (mouse look), do it at this step:
   camera *= rotation_POV

4. Convert the matrix to a camera matrix.
   A camera matrix is usually the inverse of the coordinate-system matrix it represents, so:
   camera = Inverse(camera)

For more info look at "understanding transform matrices"; the C++ OpenGL inverse matrix computation is included there.
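Below is a hedged sketch of the same idea in a simpler form, using gluLookAt (which the question already uses) instead of an explicit matrix inverse. GLM, the function name, and the offset values are my assumptions, and the car model is assumed to face +Z with +Y up:

#include <GL/glu.h>
#include <glm/glm.hpp>

// Place the camera behind and slightly above the car and aim it at the car.
void followCamera(const glm::mat4& car) {
  glm::vec3 pos(car[3]);                              // column 3: car position
  glm::vec3 fwd = glm::normalize(glm::vec3(car[2]));  // column 2: car forward (+Z)

  // Offsets are illustrative: 6 units behind, 2 units above.
  glm::vec3 eye = pos - fwd * 6.0f + glm::vec3(0.0f, 2.0f, 0.0f);

  gluLookAt(eye.x, eye.y, eye.z,   // camera position (behind the car)
            pos.x, pos.y, pos.z,   // look at the car
            0.0, 1.0, 0.0);        // world up
}

The matrix route described above gives the same result and is more flexible once you add the optional POV rotation step.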
