Exact Return Values of "Attitude" and "Velocity" In Dronekit API - dronekit-python

I have perused the Dronekit-Python API Reference thoroughly, and before I continue with my Master's engineering project I need some more information. I am using a Raspberry Pi 2B as a companion computer with the Pixhawk flight controller to obtain certain information about the copter at a specific time instance. I need more information on the return structure and returned values of certain calls in the Dronekit-Python API.
First and foremost, I can only work with Euler angles, and if "class dronekit.Attitude" does not return Euler angles, I would like to know the easiest way to get the current attitude of the copter in Euler angles (in the order Yaw-Pitch-Roll) from the Pixhawk flight controller.
Secondly, I would like to know in what axes/reference frame the velocity vector is returned. Is it relative to the fixed body axes of the copter, or to some starting position in the North-East-Down coordinate system? I would also like to know how the velocity vector is obtained: is it solely based on GPS and pressure sensor measurements, or is it a fusion of all the on-board sensors including the IMU? This will greatly influence the accuracy of the velocity vector, which in turn determines how much uncertainty enters my UKF.

Attitude is already returned as Euler angles (roll, pitch, yaw), expressed in radians. Here is code I used to convert those radians to degrees; if you want to keep radians, simply drop the conversion constant.
Let's assume you are inside a vehicle class.
from math import pi

# Attitude attribute listener.
# This will call attitude_callback every time the attitude changes.
self.add_attribute_listener('attitude', self.attitude_callback)

def attitude_callback(self, *args):
    # args is (vehicle, attribute_name, value); the new Attitude object is args[2]
    attitude = args[2]
    if attitude is not None:
        # degrees = radians * 180 / pi
        const = 180 / pi
        pitch = attitude.pitch * const
        yaw = attitude.yaw * const
        roll = attitude.roll * const
        self.horizon = (int(pitch), int(roll), int(yaw))
Velocity gives you [velocity_x, velocity_y, velocity_z].
velocity_x, # X velocity in NED frame in m/s
velocity_y, # Y velocity in NED frame in m/s
velocity_z, # Z velocity in NED frame in m/s
To understand what those values mean, you first need to understand how the NED (North-East-Down) frame works.
Take a look at send_ned_velocity() in:
http://python.dronekit.io/examples/guided-set-speed-yaw-demo.html#example-guided-mode-send-global-velocity
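For completeness, here is a minimal sketch of reading both attributes from the companion computer; the connection string and baud rate are assumptions for a Pi serial link, so replace them with your own:

from dronekit import connect

# Connection string and baud rate are assumptions; use whatever matches your Pi <-> Pixhawk link.
vehicle = connect('/dev/ttyAMA0', baud=57600, wait_ready=True)

# Euler angles in radians (roll, pitch, yaw)
att = vehicle.attitude
print("roll=%.3f pitch=%.3f yaw=%.3f (rad)" % (att.roll, att.pitch, att.yaw))

# Velocity as [vx, vy, vz] in m/s in the NED frame (z positive down)
vx, vy, vz = vehicle.velocity
print("vN=%.2f vE=%.2f vD=%.2f (m/s)" % (vx, vy, vz))

vehicle.close()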

Related

How to Use FFT to clean the noise signal of my accelerometer

I'm using a 3-axis accelerometer to get the acceleration and then calculate the roll and pitch angles using the formulas below in C.
roll = atan2(aux_y, aux_z) * 57.3;
pitch = atan2((-aux_x), sqrt(aux_y * aux_y + aux_z * aux_z)) * 57.3;
It's working fine (0º reads as about 0.3º), but the problem is when I get vibration noise, such as from a truck motor: the results go "crazy", 0 jumps to 22, 45, 16, etc. I need to remove the vibration from my signal. Some coworkers mentioned that I should use an FFT to clean it, but I'm not sure how I would do that.
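A minimal sketch of the same tilt computation with a simple moving-average smoother applied before the atan2 step (a basic low-pass filter rather than an FFT; the window length is an assumption to tune against your sample rate):

import math
from collections import deque

WINDOW = 32  # assumed smoothing window length; tune for your sample rate and vibration frequency

ax_buf = deque(maxlen=WINDOW)
ay_buf = deque(maxlen=WINDOW)
az_buf = deque(maxlen=WINDOW)

def tilt_from_sample(ax, ay, az):
    """Smooth the raw accelerometer samples, then compute roll/pitch in degrees."""
    ax_buf.append(ax)
    ay_buf.append(ay)
    az_buf.append(az)
    mx = sum(ax_buf) / len(ax_buf)
    my = sum(ay_buf) / len(ay_buf)
    mz = sum(az_buf) / len(az_buf)
    roll = math.atan2(my, mz) * 180.0 / math.pi
    pitch = math.atan2(-mx, math.hypot(my, mz)) * 180.0 / math.pi
    return roll, pitch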

AzureMap : Real Time Alert pop up

I am new to Azure Maps, have limited knowledge of JavaScript, and am looking for help in getting a real-time alert based on some flag during fleet movement, based on the coordinates.
I tried multiple sources to design it, for example by following sample code.
My requirement is:
A popup should appear only on arrival of the fleet (truck).
Thanks
To verify: when the truck gets to the end of the line, you want to open a popup. I'm assuming you have a constant flow of data updating the truck position and that you can easily grab the truck's coordinate. As such, you would only need a function to determine whether the truck coordinate is at the end of the route line.
You will likely need to account for a margin of error (i.e. within 15 meters of the end of the line), since a single coordinate can represent a single molecule with enough decimal places, and GPS devices typically have an accuracy of +/- 15 meters. With this in mind, all you need to do is calculate the distance from the truck coordinate to the last coordinate of the route line. For example:
var lastRouteCoord = [-110, 45];
var truckCoord = [-110.0001, 45.0001];
var minDistance = 15;
//Get the distance between the coordinates (by default this function returns a distance in meters).
var distance = atlas.math.getDistanceTo(lastRouteCoord, truckCoord);
if (distance <= minDistance) {
    //Open popup.
    //Examples: https://azuremapscodesamples.azurewebsites.net/index.html#Popups
}

How to update weights when using mini batches?

I am trying to implement mini-batch training in my neural network instead of the "online" stochastic method of updating the weights after every training sample.
I have developed a somewhat novice neural network in C whereby I can adjust the number of neurons in each layer, activation functions, etc. This is to help me understand neural networks. I have trained the network on the MNIST data set, but it takes around 200 epochs to get down to an error rate of 20% on the training set, which seems very poor to me. I am currently using online stochastic gradient descent to train the network. What I would like to try is mini-batches instead. I understand the concept that I must accumulate and average the error from each training sample before I propagate the error back. My problem comes in when I want to calculate the changes I must make to the weights. To explain this better, consider a very simple perceptron model: one input, one hidden layer, one output. To calculate the change I need to make to the weight between the input and the hidden unit, I will use this following equation:
∂C/∂w1 = ∂C/∂O * ∂O/∂h * ∂h/∂w1
If you do the partial derivatives you get:
∂C/∂w1 = (Output - Expected Answer)(w2)(input)
Now this formula says that you need to multiply the backpropagated error by the input. For online stochastic training that makes sense, because you use 1 input per weight update. For mini-batch training you use many inputs, so which input does the error get multiplied by?
I hope you can assist me with this.
void propogateBack(void){
    //calculate ∂C/∂G
    for (count = 0; count < network.outputs; count++){
        network.g_error[count] = derive_cost((training.answer[training_current]) - (network.g[count]));
    }
    //calculate ∂G/∂O
    for (count = 0; count < network.outputs; count++){
        network.o_error[count] = derive_activation(network.g[count]) * (network.g_error[count]);
    }
    //calculate ∂O/∂S3
    for (count = 0; count < network.h3_neurons; count++){
        network.s3_error[count] = 0;
        for (count2 = 0; count2 < network.outputs; count2++){
            network.s3_error[count] += (network.w4[count2][count]) * (network.o_error[count2]);
        }
    }
    //calculate ∂S3/∂H3
    for (count = 0; count < network.h3_neurons; count++){
        network.h3_error[count] = (derive_activation(network.s3[count])) * (network.s3_error[count]);
    }
    //calculate ∂H3/∂S2
    for (count = 0; count < network.h2_neurons; count++){
        network.s2_error[count] = 0;
        for (count2 = 0; count2 < network.h3_neurons; count2++){
            network.s2_error[count] += (network.w3[count2][count]) * (network.h3_error[count2]);
        }
    }
    //calculate ∂S2/∂H2
    for (count = 0; count < network.h2_neurons; count++){
        network.h2_error[count] = (derive_activation(network.s2[count])) * (network.s2_error[count]);
    }
    //calculate ∂H2/∂S1
    for (count = 0; count < network.h1_neurons; count++){
        network.s1_error[count] = 0;
        for (count2 = 0; count2 < network.h2_neurons; count2++){
            network.s1_error[count] += (network.w2[count2][count]) * network.h2_error[count2];
        }
    }
    //calculate ∂S1/∂H1
    for (count = 0; count < network.h1_neurons; count++){
        network.h1_error[count] = (derive_activation(network.s1[count])) * (network.s1_error[count]);
    }
}
void updateWeights(void){
    //////////////////w1
    for (count = 0; count < network.h1_neurons; count++){
        for (count2 = 0; count2 < network.inputs; count2++){
            network.w1[count][count2] -= learning_rate * (network.h1_error[count] * network.input[count2]);
        }
    }
    //////////////////w2
    for (count = 0; count < network.h2_neurons; count++){
        for (count2 = 0; count2 < network.h1_neurons; count2++){
            network.w2[count][count2] -= learning_rate * (network.h2_error[count] * network.s1[count2]);
        }
    }
    //////////////////w3
    for (count = 0; count < network.h3_neurons; count++){
        for (count2 = 0; count2 < network.h2_neurons; count2++){
            network.w3[count][count2] -= learning_rate * (network.h3_error[count] * network.s2[count2]);
        }
    }
    //////////////////w4
    for (count = 0; count < network.outputs; count++){
        for (count2 = 0; count2 < network.h3_neurons; count2++){
            network.w4[count][count2] -= learning_rate * (network.o_error[count] * network.s3[count2]);
        }
    }
}
The code I have attached is how I do the online stochastic updates. As you can see in the updateWeights() function, the weight updates are based on the input values (dependent on the sample fed in) and the hidden unit values (also dependent on the input sample fed in). So when I have the mini-batch average gradient that I am propagating back, how will I update the weights? Which input values do I use?
OK, so I figured it out. When using mini-batches you should not accumulate and average out the error at the output of the network. Each training example's error gets propagated back as you would normally, except instead of updating the weights you accumulate the changes you would have made to each weight. When you have looped through the mini-batch, you then average the accumulations and change the weights accordingly.
I was under the impression that when using mini-batches you do not have to propagate any error back until you have looped through the mini-batch. I was wrong; you still need to do that. The only difference is that you only update the weights once you have looped through your mini-batch.
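A minimal sketch of that accumulate-then-average idea for a single weight matrix, where grad_w() is a hypothetical helper standing in for one sample's forward and backward pass:

import numpy as np

def minibatch_update(W, batch, learning_rate, grad_w):
    """Accumulate per-sample weight gradients, then apply one averaged update."""
    acc = np.zeros_like(W)
    for x, y in batch:
        # Backpropagate this sample as usual, but only accumulate the weight deltas.
        acc += grad_w(W, x, y)
    # Apply a single weight update for the whole mini-batch, using the averaged gradient.
    W -= learning_rate * (acc / len(batch))
    return W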
For minibatch training you used many inputs so which input does the error get multiplied by?
"Many inputs" this is a proportion of the dataset size N, which typically segments your data into sizes which are not too large to fit into memory. DL needs Big Data and the full batch cannot fit into most computer systems to process in one go and therefore the mini-batch is necessary.
The error which gets backpropagated is the sum or average error calculated for the data samples in your current mini-batch $X^{{t}}$ which is of size M where $M<N$, $J^{{t}} = 1/m \sum_1^M ( f(x_m^{t})-y_m^{t} )^2$. This is the sum of the squared distances to the target across samples in the batch 't'. This is the forward step and then the backwards propagation of this error is made using the chain rule through the 'neurons' of the network; using this single value of the error for the whole batch. The update of the parameters is based upon this value for this mini-batch.
There are variations to how this scheme is implemented but if you consider your idea of using "many inputs" in the calculation of the parameter update using multiple input samples from the batch, we are averaging over multiple gradients to smooth over the gradient in comparison to stochastic gradient descent.
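In equation form, the resulting update for any weight $w$ with learning rate $\eta$ is roughly (a sketch, using the squared-error loss above):
$$w \leftarrow w - \eta \, \frac{1}{M} \sum_{m=1}^{M} \frac{\partial}{\partial w} \left( f(x_m^{(t)}) - y_m^{(t)} \right)^2$$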

Openlayers 3 Circle radius in meters

How to get Circle radius in meters
Maybe this is an existing question, but I am not getting the proper result. I am trying to create a Polygon in PostGIS with the same radius & center obtained from an OpenLayers circle.
To get the radius in meters I followed this:
Running example link.
var radiusInMeters = circleRadius * ol.proj.METERS_PER_UNIT['m'];
After getting the center and radius (in meters), I am trying to generate a Polygon (WKT) with PostGIS (server side) and draw that feature on the map like this:
select st_astext(st_buffer('POINT(79.25887485937808 17.036647682474722 0)'::geography, 365.70644956827164));
But both are not covering the same area. Can anybody please let me know what I am doing wrong?
Basically my input/output to/from Circle will be in meters only.
ol.geom.Circle might not represent a circle
OpenLayers Circle geometries are defined on the projected plane. This means that they are always circular on the map, but the area covered might not represent an actual circle on earth. The actual shape and size of the area covered by the circle will depend on the projection used.
This could be visualized by Tissot's indicatrix, which shows how circular areas on the globe are transformed when projected onto a plane. Using the projection EPSG:3857, this would look like:
The image is from OpenLayers 3's Tissot example and displays areas that all have a radius of 800 000 meters. If these circles were drawn as ol.geom.Circle with a radius of 800000 (using EPSG:3857), they would all be the same size on the map, but the ones closer to the poles would represent a much smaller area of the globe.
This is true for most things with OpenLayers geometries. The radius, length or area of a geometry are all reported in the projected plane.
So if you have an ol.geom.Circle, getting the actual surface radius would depend on the projection and features location. For some projections (such as EPSG:4326), there would not be an accurate answer since the geometry might not even represent a circular area.
However, assuming you are using EPSG:3857 and not drawing extremely big circles or very close to the poles, the Circle will be a good representation of a circular area.
ol.proj.METERS_PER_UNIT
ol.proj.METERS_PER_UNIT is just a conversion table between meters and some other units. ol.proj.METERS_PER_UNIT['m'] will always return 1, since the unit 'm' is meters. EPSG:3857 uses meters as units, but as noted they are distorted towards the poles.
Solution (use after reading and understanding the above)
To get the actual on-the-ground radius of an ol.geom.Circle, you must find the distance between the center of the circle and a point on its edge. This could be done using ol.Sphere:
var center = geometry.getCenter();
var radius = geometry.getRadius();
var edgeCoordinate = [center[0] + radius, center[1]];
var wgs84Sphere = new ol.Sphere(6378137);
var groundRadius = wgs84Sphere.haversineDistance(
    ol.proj.transform(center, 'EPSG:3857', 'EPSG:4326'),
    ol.proj.transform(edgeCoordinate, 'EPSG:3857', 'EPSG:4326')
);
More options
If you wish to add a geometry representing a circular area on the globe, you should consider using the method used in the Tissot example above. That is, defining a regular polygon with enough points to appear smooth. That would make it transferable between projections, and appears to be what you are doing server side. OpenLayers 3 enables this by ol.geom.Polygon.circular:
var circularPolygon = ol.geom.Polygon.circular(wgs84Sphere, center, radius, 64);
There is also ol.geom.Polygon.fromCircle, which takes an ol.geom.Circle and transforms it into a Polygon representing the same area.
My answer is a complement of the great answer by Alvin.
Imagine you want to draw a circle of a given radius (in meters) around a point feature. In my particular case, a 200m circle around a moving vehicle.
If this circle has a small diameter (less than a few kilometers), you can ignore earth roundness. Then, you can use the "Circle" marker in the style function of your point feature.
Here is my style function :
private pointStyle(feature: Feature, resolution: number): Array<Style> {
    const viewProjection = map.getView().getProjection();
    const coordsInViewProjection = (<Point>(feature.getGeometry())).getCoordinates();
    const longLat = toLonLat(coordsInViewProjection, viewProjection);
    const latitude_rad = longLat[1] * Math.PI / 180.;
    const circle = new Style({
        image: new CircleStyle({
            stroke: new Stroke({color: '#7c8692'}),
            radius: this._circleRadius_m / (resolution / viewProjection.getMetersPerUnit() * Math.cos(latitude_rad)),
        }),
    });
    return [circle];
}
The trick is to scale the radius by the latitude cosine. This "locally" cancels the distortion effect we can observe in the Tissot example.
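In formula form, assuming EPSG:3857 (where getMetersPerUnit() is 1), with $\varphi$ the feature's latitude and resolution in projected meters per pixel, the style radius becomes roughly:
$$r_{\text{pixels}} = \frac{r_{\text{meters}}}{\text{resolution} \cdot \cos(\varphi)}$$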

Programmatically detect and extract an audio envelope

All suggestions and links to relevant info welcome here. This is the scenario:
Let us say I have a .wav file of someone speaking (and therefore all the samples associated with it).
I would like to run an algorithm on the series of samples to detect when an event happens i.e. the beginning and the end of an envelope. I would then use this starting and end point to extract that data to be used elsewhere.
What would be the best way to tackle this? Any pseudocode? Example code? Source code?
I will eventually be writing this in C.
Thanks!
EDIT 1
Parsing the wav file is not a problem. But some pseudo-code for the envelope detection would be nice! :)
The usual method (a small sketch follows the list) is:
take absolute value of waveform, abs(x[t])
low pass filter (say 10 Hz cut-off)
apply threshold
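A minimal sketch of that pipeline in Python, where the moving-average window (here 10 ms at 44.1 kHz) and the threshold are assumptions you would tune for your sample rate and signal level:

import numpy as np

def detect_envelope_events(samples, window=441, threshold=0.05):
    """Return (start, end) sample indices where the envelope exceeds the threshold."""
    rectified = np.abs(samples)                              # 1. absolute value of the waveform
    kernel = np.ones(window) / window
    envelope = np.convolve(rectified, kernel, mode='same')   # 2. crude low-pass (moving average)
    active = envelope > threshold                            # 3. threshold
    edges = np.diff(active.astype(int))
    starts = np.where(edges == 1)[0]
    ends = np.where(edges == -1)[0]
    return list(zip(starts, ends))

The returned indices can then be used to slice the event out of the sample array.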
You could use the same method as an old-fashioned analog meter. Rectify the sample vector, pass the absolute-value result through a low-pass filter (FIR, IIR, moving average, etc.), then compare against some threshold. For a more accurate event time, you will have to subtract the group delay of the low-pass filter.
Added: You might also need to remove DC beforehand (say with a high-pass filter or other DC blocker equivalent to capacitive coupling).
Source code of simple envelope detectors can be found in the Music-DSP Source Code Archive.
I have written an activity detector class in Java. It's part of my open-source Java DSP collection.
First-order low-pass filter, C# code:
double old_y = 0;

// Simple first-order (RC-style) low-pass filter; rct controls the smoothing strength.
double R1Filter(double x, double rct)
{
    if (rct == 0.0)
        return 0;

    // Move the output a fraction (rct/256) of the way towards the new input.
    if (x > old_y)
        old_y = old_y - (old_y - x) * rct / 256;
    else
        old_y = old_y + (x - old_y) * rct / 256;

    return old_y;
}
When rct=2, it works like this:
The test signal = (ucm + ucm * ma * cos(big_omega * x)) * (cos(small_omega1 * x) + cos(small_omega2 * x)),
where ucm = 3, big_omega = 200, small_omega1 = 4, small_omega2 = 12 and ma = 0.8.
Note that the filter may change the phase of the baseband signal.
