How do I obtain the orientation of the Azure Kinect?

I believed the IMU would give me this information, but at first I was confused as to why it didn't. The IMU only reports live feedback once the device is in motion (like any phone would); it doesn't tell me anything about its current state.
What I would like is to get the camera's rotation (yaw, pitch, roll) at any given moment so that I can later calibrate my work against it. Is there a way to do this with the IMU that I am not seeing?

IMU data begins streaming after calling k4a_device_start_imu(). If you are not seeing a stream then there is likely a bug in your code.
There is some documentation for k4a_device_get_imu_sample() here: https://microsoft.github.io/Azure-Kinect-Sensor-SDK/master/group___functions_ga8e5913b3bb94a453c7143bbd6e399a0e.html#ga8e5913b3bb94a453c7143bbd6e399a0e
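Also note that when the device is sitting still, the accelerometer in each IMU sample still measures gravity, so you can recover roll and pitch (though not yaw, since the Azure Kinect has no magnetometer) from a single sample. Below is a minimal sketch of that computation in Python, assuming you have already pulled the acc_sample x/y/z values out of k4a_device_get_imu_sample(); the axis convention is illustrative and should be checked against the sensor coordinate system documentation:

    import math

    def roll_pitch_from_accel(ax, ay, az):
        # ax, ay, az: accelerometer reading (m/s^2) taken from the acc_sample
        # field of a k4a_imu_sample_t. At rest this is just gravity, so it
        # encodes the tilt of the device.
        # NOTE: the axis convention below is an assumption - check the Azure
        # Kinect sensor coordinate system docs and adjust signs/axes to match.
        roll = math.atan2(ay, az)
        pitch = math.atan2(-ax, math.sqrt(ay * ay + az * az))
        return math.degrees(roll), math.degrees(pitch)

    # Example: device lying flat with gravity along +z -> roll and pitch near 0.
    print(roll_pitch_from_accel(0.0, 0.0, 9.81))
    # Yaw (heading) cannot be recovered from gravity alone; with no magnetometer
    # it requires gyro integration or a visual reference.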

Related

How to get specific values (eg. battery2, servo outputs) available in Mission Planner through Dronekit?

I am currently using dronekit-python to implement somewhat of a Mission Planner clone, as an API. I've generally been able to replicate most of the important features from Mission Planner; however, some features don't seem to be present.
One such feature is reading live servo outputs, which can be done in Setup > Mandatory Hardware > Servo Output (image below). I have been able to emulate getting/setting the output's function, min, trim, max, and reversed values through parameters. However, I cannot seem to access the live position values through dronekit. How would you go about this?
A second feature is reading specific values from the plane, beyond the class attributes present. This is available in Mission Planner when double-clicking a value in the Quick pane in order to change what measurement is displayed (image below). For my use case, I'd like to specifically access battery_voltage2 and battery_remaining2, as these are vital measurements for our system. I tried using vehicle.battery in dronekit, but this seems to only display data from battery 1. Any ideas?
Thank you so much for the help!
It might be possible to get the battery information and other data from the drone by using MAVLink messages. For battery information, look at the BATTERY_STATUS (#147) MAVLink message. For servo information, look at the SERVO_OUTPUT_RAW (#36) message.
To receive these messages, look into using message listeners from dronekit-python. You should be able to receive and parse the MAVLink messages.
In general, you can use message listeners and the dronekit-python message factory to receive and send MAVLink messages, which gives you more control than some of the built-in dronekit functions. If you decide to control the drone this way, though, be careful, because it's easy to mess up your logic and have the drone behave unexpectedly.
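As a rough sketch of what the listener approach can look like (the connection string is a placeholder for your own setup, and whether a second battery shows up in BATTERY_STATUS depends on the autopilot's configuration):

    from dronekit import connect

    # Placeholder connection string - use whatever you already connect with.
    vehicle = connect('udp:127.0.0.1:14550', wait_ready=True)

    # SERVO_OUTPUT_RAW (#36): the live PWM values currently driven on each output.
    @vehicle.on_message('SERVO_OUTPUT_RAW')
    def servo_listener(self, name, message):
        print("servo1-4 (us):", message.servo1_raw, message.servo2_raw,
              message.servo3_raw, message.servo4_raw)

    # BATTERY_STATUS (#147): sent per battery instance; id 1 should be battery 2.
    @vehicle.on_message('BATTERY_STATUS')
    def battery_listener(self, name, message):
        if message.id == 1:
            # voltages[] is in millivolts (65535 means "not measured"),
            # battery_remaining is a percentage.
            print("battery2:", message.voltages[0], "mV,",
                  message.battery_remaining, "% remaining")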
Hope this helps!

GPS + IMU Fusion filter

I have a question. I have a twin-engine boat and I want to implement an autopilot in it. Using GPS alone, my boat approaches the destination in a sine-wave pattern, unfortunately for about 300 m, just like in the picture below.
I use a RadioLink SE100 M8N GPS (http://radiolink.com.cn/doce/UploadFile/ProductFile/SE100Manual.pdf) and an STM32. The GPS module has a built-in HMC5983 geomagnetic sensor. Is it possible to use this sensor together with the GPS to make my boat go straight?
I don't know much about Kalman filters, sensor fusion, etc.
My question is: apart from the GPS itself, what sensors and filters should I use to make my boat sail in a straight line?
Thanks in advance for the tips and hints.
Positioning the boat accurately is important, and you can't do that with a poor GPS receiver alone. To solve this, you can apply a UKF (unscented Kalman filter) that fuses GPS and INS data.
According to the u-blox documentation, it is possible to enable a fusion filter on the M8N and feed sensor data to it.
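Short of a full UKF, a simpler starting point is often enough to stop the weaving: use the GPS only for the bearing to the waypoint, use the magnetometer for the boat's current heading, and steer on the heading error with a proportional controller. A rough sketch in Python (the gain, thrust scale, and motor interface are all made up and would need tuning on the real boat):

    import math

    KP = 0.02            # made-up proportional gain: thrust difference per degree of error
    BASE_THRUST = 0.6    # made-up cruise throttle, 0..1

    def bearing_deg(lat1, lon1, lat2, lon2):
        # Initial great-circle bearing from the current GPS fix to the waypoint.
        phi1, phi2 = math.radians(lat1), math.radians(lat2)
        dlon = math.radians(lon2 - lon1)
        y = math.sin(dlon) * math.cos(phi2)
        x = math.cos(phi1) * math.sin(phi2) - math.sin(phi1) * math.cos(phi2) * math.cos(dlon)
        return math.degrees(math.atan2(y, x)) % 360.0

    def heading_error_deg(target, current):
        # Smallest signed difference, in the range -180..+180 degrees.
        return (target - current + 180.0) % 360.0 - 180.0

    def steer(lat, lon, wp_lat, wp_lon, compass_heading_deg):
        # Compare where we should point (GPS bearing to the waypoint) with where
        # the magnetometer says we point, and split the correction between the
        # two motors (differential thrust).
        err = heading_error_deg(bearing_deg(lat, lon, wp_lat, wp_lon), compass_heading_deg)
        correction = max(-0.3, min(0.3, KP * err))
        return BASE_THRUST + correction, BASE_THRUST - correction

    # Example: boat pointing east (90 deg), waypoint roughly to the north-east.
    print(steer(54.0000, 18.0000, 54.0010, 18.0010, 90.0))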

Any Kalman Filter implementation in C for GPS + Accelerometer?

I'm trying to rectify GPS readings using a Kalman filter. I already have an IMU, which has an accelerometer, gyro, and magnetometer.
I've tried reading up on Kalman filters, but it's all math and I can't understand any of it. Any example code would be great!
EDIT: In my project, I'm trying to move from one LAT,LONG GPS coordinate to another. I'd like to get smooth GPS readings instead of ones that show displacement even when there's no movement. I am thinking of using the accelerometer to check displacement and remove GPS outliers. However, from what I've read, a Kalman filter is the usual tool for such an application. But every example of it I've found is in some high-level language. It would be great if there's something in C I can build on. Thanks!
You're asking for a code example without specifying any details, so it's hard to help you further.
You could try browsing GitHub by searching for "kalman" and limiting your query to C code.
https://github.com/search?l=C&q=kalman&type=Repositories
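To give a feel for the structure, here is a minimal 1D constant-velocity Kalman filter written with NumPy for readability; it ports almost line-for-line to C with small fixed-size arrays. The noise values are made up and would have to be tuned for a real GPS:

    import numpy as np

    # Minimal 1D constant-velocity Kalman filter: state x = [position, velocity],
    # measurement z = GPS position. The noise values are made up - tune them.
    dt = 1.0                               # time between GPS fixes (s)
    F = np.array([[1.0, dt], [0.0, 1.0]])  # state transition (constant velocity)
    H = np.array([[1.0, 0.0]])             # we only measure position
    Q = np.diag([0.01, 0.1])               # process noise (trust in the model)
    R = np.array([[25.0]])                 # measurement noise (GPS variance, m^2)

    x = np.zeros((2, 1))                   # initial state estimate
    P = np.eye(2) * 100.0                  # initial uncertainty

    def kalman_step(x, P, z):
        # Predict where we should be, then correct with the GPS measurement z.
        x = F @ x
        P = F @ P @ F.T + Q
        y = z - H @ x                      # innovation (measurement residual)
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)     # Kalman gain
        x = x + K @ y
        P = (np.eye(2) - K @ H) @ P
        return x, P

    # Feed in a few noisy "GPS" positions around a stationary point.
    for z in [3.1, 2.7, 3.4, 2.9, 3.0]:
        x, P = kalman_step(x, P, np.array([[z]]))
        print("filtered position: %.2f" % x[0, 0])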

Fail to improve pose of point cloud with ADF origin

I save the point clouds of a scene and their pose quaternions in a PCD file.
At first I only used the pose w.r.t. the device start (see second image) to get the quaternion. I discovered a drifting problem, which I mentioned here.
Therefore, I learned the scene with area learning (see first image) by walking around the table.
After that, I'm loading the area description file (ADF) to overcome the drifting. I wait for the first loop closure/localization in the onPoseAvailable callback.
Then, in the onXyzIjAvailable callback, I use its timestamp to get a valid pose w.r.t. the ADF origin (baseFrame = COORDINATE_FRAME_AREA_DESCRIPTION and targetFrame = COORDINATE_FRAME_DEVICE).
I then save the poseAtTime(xyzIj.timestamp) and the xyzIj in a *.PCD file.
But the drifting seems to get even worse (see third image). The result is better oriented towards the origin, but the point clouds aren't aligned as closely as in the image without the ADF.
Am I doing something wrong?
Is there any way to improve this?
You should set up the pose callback so that only poses with respect to the ADF base frame are returned; poses with respect to start-of-service should not be returned. The drift will not go away entirely, but it will become minimal and will auto-correct. The ADF needs to be well trained to keep up the pose return rate.

Robotic Navigation using Kinect

So far I have been able to create an application where the Kinect sensor stays in one place. I have used speech recognition, EmguCV (OpenCV), and AForge.NET to help me process images and to learn and recognize objects. It all works fine, but there is always scope for improvement, and I am posing some problems: [Ignore the first three; I want an answer for the fourth.]
The frame rate is horrible. It's like 5 fps even though it should be around 30 fps. (This is WITHOUT any of the processing.) My application runs fine; it gets both color and depth frames from the camera and displays them, yet the frame rate is still bad. The samples run great, around 25 fps, and even though I ran the exact same code from the samples, it just won't budge. :-( [There is no need for code; please tell me the possible problems.]
I would like to build a little robot on which the Kinect and my laptop will be mounted. I tried using the Mindstorms kit, but the low-torque motors don't do the trick. Please tell me how I can achieve this.
How do I supply power on board? I know that the Kinect uses 12 volts for the motor, but it gets that from an AC adapter. [I would not like to cut my cable and replace it with a 12-volt battery.]
The biggest question: how in the world will it navigate? I have implemented A* and flood-fill algorithms. I have read this paper like a thousand times and got nothing from it. I have the navigation algorithm in my mind, but how on earth will the robot localize itself? [It should not use GPS or any other sensors, just its eyes, i.e. the Kinect.]
Any help would be awesome. I am a newbie, so please don't expect me to know everything. I have been searching the internet for two weeks with no luck.
Thanks a lot!
Localisation is a tricky task, as it depends on having prior knowledge of the environment in which your robot will be placed (i.e. a map of your house). While algorithms exist for simultaneous localisation and mapping, they tend to be domain-specific and as such not applicable to the general case of placing a robot in an arbitrary location and having it map its environment autonomously.
However, if your robot does have a rough (probabilistic) idea of what its environment looks like, Monte Carlo localisation is a good choice. On a high level, it goes something like:
Firstly, the robot should make a large number of random guesses (called particles) as to where it could possibly be within its known environment.
With each update from the sensor (i.e. after the robot has moved a short distance), it adjusts the probability that each of its random guesses is correct using a statistical model of its current sensor data. This can work especially well if the robot takes 360° sensor measurements, but this is not completely necessary.
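A toy sketch of that loop (in Python, with an invented 1D corridor world and a resampling step that the summary above glosses over) might look like this; a real robot would use the Kinect depth image as its sensor model:

    import math
    import random

    # Toy 1D Monte Carlo localisation: a robot drives down a 10 m corridor and
    # its only sensor reads the distance to the wall ahead (a stand-in for the
    # Kinect depth data). All numbers here are invented.
    CORRIDOR = 10.0
    SENSOR_NOISE = 0.2

    def sense(position):
        # Noisy distance to the far wall.
        return (CORRIDOR - position) + random.gauss(0, SENSOR_NOISE)

    def likelihood(particle, measurement):
        # How well does this particle explain the measurement?
        expected = CORRIDOR - particle
        return math.exp(-((measurement - expected) ** 2) / (2 * SENSOR_NOISE ** 2))

    # 1. Start with many random guesses (particles) about where the robot is.
    particles = [random.uniform(0.0, CORRIDOR) for _ in range(500)]

    true_position = 1.0
    for step in range(6):
        # 2. Motion update: the robot (and every particle) moves forward, with noise.
        true_position += 1.0
        particles = [p + 1.0 + random.gauss(0, 0.1) for p in particles]

        # 3. Measurement update: weight particles by how well they explain the reading.
        z = sense(true_position)
        weights = [likelihood(p, z) for p in particles]

        # 4. Resample: keep particles in proportion to their weights.
        particles = random.choices(particles, weights=weights, k=len(particles))

        estimate = sum(particles) / len(particles)
        print("true %.2f  estimated %.2f" % (true_position, estimate))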
This lecture by Andrew Davison at Imperial College London gives a good overview of the mathematics involved. (The rest of the course will most likely be very interesting to you as well, given what you are trying to create). Good luck!
