I have wasted 2 days (and nights) on this specific issue and completely failed at tilt-compensating my magnetometer output.
I tried just about everything I could find on Google and in open-source examples; nothing led me to a proper tilt compensation for both roll and pitch.
The Setup:
I use calibrated values, but they are not scaled to unity.
I have a fused gravity vector which is working properly and is accurate.
The sensor is a 9dof BMX055 (http://www.mouser.com/ds/2/621/BST-BMX055-DS000-01-274824.pdf)
The magnetometer min/max are +-512 on each axis (with small differences, but all axes are zeroed out).
The hardware is Sam3X8e (Cortex M3), all written in C.
Floating point and trigonometry can be done quite fast, so that's no problem.
The BMX055 chip is aligned so that the pins 20,19,18 point to the front.
Datasheet pages 159-161 show the orientation.
Pitch increases when I raise the front.
Roll increases when I raise the left side.
Example values:
Pointing in a direction my algorithm calls 305 deg when leveled horizontally:
Pitch: 0 , Roll 0 : MAG cal: x 132 y 93 z -364
Pitch: +24, Roll 0 : MAG cal: x -109 y 93 z -397
Pitch: +46, Roll 0 : MAG cal: x -303 y 89 z -351
Pitch: 0 , Roll -44 : MAG cal: x 151 y 352 z -235
Pitch: 0 , Roll +36 : MAG cal: x 130 y -140 z -328
Pitch: 78 , Roll -2 : MAG cal: x -503 y 93 z -199
Pitch: 7 , Roll -53 : MAG cal: x 135 y 424 z -180
The heading should always have been around 305 degrees (as closely as I could hold it), maybe +-5 degrees.
The formula (the same one used just about everywhere):
uint16_t compass_tilt_compensation(float roll_radians, float pitch_radians, float mag_y, float mag_x, float mag_z)
{
    float tilt_compensated_heading;
    float MAG_X;
    float MAG_Y;
    float cos_roll;
    float sin_roll;
    float cos_pitch;
    float sin_pitch;
    int16_t heading;

    // leftovers from trial and error:
    //pitch_radians = roll_radians;
    //roll_radians *= -1;
    //mag_x *= -1;
    //roll_radians = 0.0f;
    //pitch_radians = 0;
    //pitch_radians *= -1;

    cos_roll  = cosf(roll_radians);
    sin_roll  = sinf(roll_radians);
    cos_pitch = cosf(pitch_radians);
    sin_pitch = sinf(pitch_radians);

#if 0
    MAG_X = mag_x*cos_pitch + mag_y*sin_roll*sin_pitch + mag_z*cos_roll*sin_pitch;
    MAG_Y = mag_y*cos_roll - mag_z*sin_roll;
    tilt_compensated_heading = atan2f(-MAG_Y, MAG_X);
#else
    MAG_X = mag_x*cos_pitch + mag_z*sin_pitch;
    MAG_Y = mag_x*sin_roll*sin_pitch + mag_y*cos_roll - mag_z*sin_roll*cos_pitch;
    tilt_compensated_heading = atan2f(-MAG_Y, MAG_X);
    //tilt_compensated_heading = atan2f(-mag_y, mag_x); // works fine when leveled
#endif

    // convert to tenths of a degree, range 0-3599
    heading = tilt_compensated_heading * RAD_TO_DEG * 10;
    if (heading < 0)
    {
        heading += 3600;
    }
    return heading;
}
I tried various combinations: fixing only one axis and leaving the other always at 0, exchanging X/Y and pitch/roll, multiplying any of the inputs by -1.
The results are always completely wrong.
Sometimes (depending on where I try to invert or not invert a value by trial and error) the values seem to be off almost linearly. Sometimes one axis is compensated in positive areas.
However, rolling and pitching always cause 'random' jumps and changes of the heading.
Math has never been my favourite, now I regret it.
My personal guess is that the formula is right in principle and the mag is working in principle (after all, I get proper degrees when leveled), but I am somehow feeding it something wrong.
(Like Y and X need to be exchanged, z needs to be multiplied by -1, pitch needs to be multiplied by -1.)
Could someone who is good in this subject maybe take a look and guide me how to get a proper heading ?
Would be great to have a few hours sleep this night without having to dream about a dysfunctional algorithm again :)
Update:
The tilt compensation here works for negative roll when pointing at the 305deg heading.
It is also used here: http://www.emcu.it/MEMS/Compass/MEMS_Compass_A_RTM1v3.pdf
After 3 days of work I finally found the issues I had, and tilt compensation is working!
I read that quite a few people had such issues in various forums, so here is what I did:
I'll explain what I did and why, step by step. Maybe someone with a similar issue will find a solution that way.
Playing around with the values can help (maybe just pitch or roll has to be inverted, X or Y has to be exchanged).
However, there are quite a lot of values, and the number of combinations is too high if your problem is more than a tiny one.
The formula posted works fine (the one in the active branch of the #if/#else).
If you have a magnetometer and the atan2(-y,x) gives you a proper heading when leveled flat, then this formula will work for you too.
What I did was to completely go through all my sensors and vectors beginning from the I2C binary logic (and the datasheet).
Important in this special case (BMX055): the datasheet orientation page is WRONG!
There are multiple bugs regarding the orientation of the axes (x, y) as well as regarding when a rotation is positive or negative. Sometimes the right-hand rule applies, sometimes not, and the drawings are misleading (wrong). Bosch did a bad job documenting this chip (and the previous one).
They do not seem to want people to understand it properly; they write several times about a fusion API with optimized fixed-point arithmetic and advanced calibration, but it is not available to the public.
What I needed to do was to make a proper body reference system.
Decide for yourself where X points, and then make sure all your sensors change the X axis in the same direction (positive/negative) when pitched/rolled.
Do this for pitch,roll and gravity/magnetic field.
Once all of them played together nicely, I started all over again.
The heading formula was still dysfunctional, but now, for the first time, I trusted the vectors.
I added a vector rotation matrix function and rotated the magnetic vector back using roll and pitch and yaw=0.
Surprise: the magnetic vector was tilt compensated.
Now I knew it CAN be done :)
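For reference, here is a minimal sketch of that back-rotation (the axis and sign conventions below are one common choice; as described above, they have to match your own body reference system or the result will be wrong):
#include <math.h>

void mag_derotate(float roll, float pitch,
                  float mx, float my, float mz,
                  float *hx, float *hy)
{
    float cr = cosf(roll),  sr = sinf(roll);
    float cp = cosf(pitch), sp = sinf(pitch);

    // rotation about the X axis (roll)
    float y1 = my * cr - mz * sr;
    float z1 = my * sr + mz * cr;

    // rotation about the Y axis (pitch)
    *hx = mx * cp + z1 * sp;
    *hy = y1;

    // heading is then atan2f(-*hy, *hx), as in the function above
}
Multiplying this out gives exactly the first (#if 0) variant of the formula in the function above, which is why the rotation-matrix approach and the 'standard' formula agree once the conventions match.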
I went back to the original formula and replaced X with Y (because I had exchanged them to match the body reference system, i.e. the X and Y of the gyro/mag).
Now tilt compensation worked for pitch but not for roll.
So I inverted roll_radians and suddenly it's perfectly tilt compensated.
I have two solutions now. One is a rotation matrix solution and the other one the standard solution everyone is using. I will see if one of them performs better and maybe give a last update here if the results are worth it.
First, verify that it is indeed a software problem. Your equations seem to be correct. Just in case, generate a table populated with test data to pass to your procedure and compare the output of your function with the values you would expect if the code were correct.
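Something along these lines would do (a sketch only; it assumes the compass_tilt_compensation() from the question and <stdint.h>, and the expected headings are placeholders to be filled in from known-good reference data):
struct tilt_test {
    float roll, pitch;             // radians
    float mag_x, mag_y, mag_z;     // calibrated readings
    uint16_t expected;             // expected heading in tenths of a degree
};

static const struct tilt_test tests[] = {
    // roll, pitch,   mx,     my,     mz,   expected (fill in from known-good data)
    {  0.0f, 0.0f,  132.0f,  93.0f, -364.0f, 0 },   // level case from the question
    // add pitched/rolled cases with known headings here
};

void run_tilt_tests(void)
{
    for (unsigned i = 0; i < sizeof tests / sizeof tests[0]; i++)
    {
        uint16_t h = compass_tilt_compensation(tests[i].roll, tests[i].pitch,
                                               tests[i].mag_y, tests[i].mag_x,
                                               tests[i].mag_z);
        // compare h against tests[i].expected, allowing a few tenths of a degree
    }
}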
You are using a magnetometer and these are very sensitive devices. The first thing I would check is whether there are any large metal structures near the sensor. If you have access to an oscilloscope, probe the chip's power rails and see if power going into it is stable. Adding a 1uF decoupling cap could solve power issues. I would also suggest getting a graph while having a sampling frequency larger than 100Hz and see if the jumps are periodic. If the signal is periodic with a 50Hz frequency, assuming the sensor is not being moved, that would indicate interference from your mains supply. Performing FFT analysis over the data can't hurt. We had similar problems caused by power cables running underneath our lab floor. If the sensor keeps jumping around randomly, the chip is probably dead. Did you use proper ESD protection while handling it?
Hope this helps.
Related
I have calculated the tilt angle from the accelerometer with the following formula:
Angle_Accel = atan(Ax/sqrt(Ay*Ay+Az*Az))*(180/PI)
Now I want to calculate the tilt angle from the gyroscope, and I am integrating the Gx coordinate as follows, but my result is not correct.
Pseudo code
Measure Gx every 0.1 seconds.
After the sensitivity factor and bias correction, I multiply by 180/PI to convert to degrees.
Then I divide by the frequency, i.e. 10, and add it to the final angle.
C Code
Gx = (((float)GYRO_PLAY.Gyroscope_X )* GYRO_PLAY.Gyro_Mult)-Gx_Correction;
Gy = (((float)GYRO_PLAY.Gyroscope_Y )* GYRO_PLAY.Gyro_Mult)-Gy_Correction;
Gz = (((float)GYRO_PLAY.Gyroscope_Z )* GYRO_PLAY.Gyro_Mult)-Gz_Correction;
Gx_temp = (Gx*degrees)/10.0; //degrees = 180/PI
Gx_Theta = Gx_Theta + Gx_temp;
My angle is not correct. How should I integrate?
Any help is much appreciated.
PS: I know that there is a question like that here but it does not answer my problem so kindly help me.
10 Hz sampling seems far too low, and you are in any case doing unnecessary work on each sample. Apply the raw bias offset and integrate the raw value - the conversion to degrees/sec, if needed, can be done at presentation, and the intermediate conversion to radians/sec serves no purpose.
The robot does not care about or even understand units; you don't need any conversion, just sign and magnitude - the sensitivity can be dealt with by your closed-loop controller coefficients.
How is Gx_Correction determined? It will vary over time with thermal drift; if it is incorrect or not tracked in some way, your integrator will magnify the error.
Note that higher sample rates over the SPI may not be possible - that is what the on-chip DPM is for.
Another possible source of error is the use of float. The STM32F4 has a single-precision FPU, so the operation will be done in hardware. However, if you are using floating point in interrupt or thread contexts, be aware that the floating-point registers are unlikely to be preserved between contexts unless you have explicitly implemented that, so for example a floating-point operation will be corrupted if it is interrupted by an interrupt that itself performs floating-point operations.
If the integrator only has to work the raw data values, the floating point is unnecessary:
Gx_Theta += GYRO_PLAY.Gyroscope_X - Gx_Zero_Bias;
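As a sketch of the "convert at presentation" idea (GYRO_SENS_DPS and SAMPLE_RATE_HZ are hypothetical names for the sensitivity in deg/s per LSB and the sample rate; they are not taken from the question's code):
// only when a human-readable angle is actually needed, e.g. for display:
float angle_degrees = (float)Gx_Theta * GYRO_SENS_DPS / SAMPLE_RATE_HZ;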
You have to integrate Gy instead of Gx.
Also, the rate measured by the gyroscope is already in deg/s, so there is no need to multiply by 180/PI.
The integration frequency also needs to be 50 Hz.
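As a sketch, assuming a 50 Hz loop and a reading already scaled to deg/s (Gy_Theta is an accumulator analogous to the question's Gx_Theta):
Gy_Theta += Gy / 50.0f;   // Gy in deg/s, integrated at 50 Hz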
In a trained neural net the weight distribution will fall close around zero, so it makes sense to me to initialize all weights to zero. However, there are methods such as random assignment from -1 to 1 and Nguyen-Widrow that outperform zero initialization. Why are these random methods better than just using zero?
Activation & learning:
In addition to the things cr0ss said, in a normal MLP (for example) the activation of layer n+1 is the dot product of the output of layer n and the weights between layer n and n+1. So basically you get this equation for the activation a of neuron i in layer n:
a_i = sum_j( w_ij * o_j ) + b_i
Where w_ij is the weight of the connection from neuron j (parent layer n-1) to the current neuron i (current layer n), o_j is the output of neuron j (parent layer) and b_i is the bias of the current neuron i in the current layer.
It is easy to see that initializing the weights with zero would practically "deactivate" them, because weights times the output of the parent layer would equal zero; therefore (in the first learning steps) your input data would not be recognized, it would be neglected totally.
So in the first epochs the learning would only have the data supplied by the bias to work with.
This would obviously make learning much more challenging for the network and greatly increase the number of epochs needed.
Initialization should be optimized for your problem:
Initializing your weights with a distribution of random floats with -1 <= w <= 1 is the most typical initialization, because overall (if you do not analyze the problem/domain you are working on) this guarantees that some weights are relatively good right from the start. Besides, with a fixed (identical) initialization the neurons tend to co-adapt to each other, whereas random initialization ensures better learning.
However, -1 <= w <= 1 for initialization is not optimal for every problem. For example: biological neural networks do not have negative outputs, so weights should be positive when you try to imitate biological networks. Furthermore, e.g. in image processing, most neurons have either a fairly high output or send nearly nothing. Considering this, it is often a good idea to initialize weights between something like 0.2 <= w <= 1; sometimes even 0.5 <= w <= 2 has shown good results (e.g. in dark images).
So the number of epochs needed to learn a problem properly depends not only on the layers, their connectivity, the transfer functions, the learning rules and so on, but also on the initialization of your weights.
You should try several configurations. In most situations you can figure out what solutions are adequate (like higher, positive weights for processing dark images).
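A minimal C sketch of such a uniform initialization (the helper name and the example ranges are just illustrative):
#include <stdlib.h>

// Fill a weight array with uniform random values in [lo, hi].
static void init_weights_uniform(float *w, size_t n, float lo, float hi)
{
    for (size_t i = 0; i < n; i++)
        w[i] = lo + (hi - lo) * ((float)rand() / (float)RAND_MAX);
}

// typical default:       init_weights_uniform(w, n, -1.0f, 1.0f);
// positive-only variant: init_weights_uniform(w, n,  0.2f, 1.0f);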
Reading the Nguyen article, I'd say it is because when you assign the weights from -1 to 1, you are already defining a "direction" for each weight, and during learning it only has to find out whether that direction is correct and how large its magnitude should be, or whether it has to go the other way.
If you assign all the weights to zero (in an MLP neural network), you don't know which direction they might go. Zero is a neutral number.
Therefore, if you assign a small value to the node's weight, the network will learn faster.
Read the "Picking initial weights to speed training" section of the article. It states:
First, the elements of Wi are assigned values from a uniform random distribution between -1 and 1 so that its direction is random. Next, we adjust the magnitude of the weight vectors Wi, so that each hidden node is linear over only a small interval.
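For concreteness, here is a sketch of that procedure as it is commonly described (random direction first, then each hidden node's weight vector rescaled to a magnitude of roughly beta = 0.7 * h^(1/n), where h is the number of hidden nodes and n the number of inputs); treat it as an illustration rather than a faithful reproduction of the paper:
#include <math.h>
#include <stdlib.h>

// w is a flat array of n_hidden * n_inputs weights, one row per hidden node.
static void init_nguyen_widrow(float *w, int n_hidden, int n_inputs)
{
    float beta = 0.7f * powf((float)n_hidden, 1.0f / (float)n_inputs);

    for (int i = 0; i < n_hidden; i++)
    {
        float *row = w + (size_t)i * n_inputs;
        float norm = 0.0f;

        // step 1: random direction, uniform in [-1, 1]
        for (int j = 0; j < n_inputs; j++)
        {
            row[j] = 2.0f * ((float)rand() / (float)RAND_MAX) - 1.0f;
            norm  += row[j] * row[j];
        }
        norm = sqrtf(norm);

        // step 2: rescale the magnitude so each hidden node is
        // linear over only a small input interval
        for (int j = 0; j < n_inputs; j++)
            row[j] = beta * row[j] / norm;
    }
}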
Hope it helps.
This is a hard one. Although I can think of a few kludge methods of doing it, I have a feeling there is a clean mathematical method, though I am having difficulty inventing it myself.
I have a number of parameters which control (software) biquad filters for audio. Essentially there are just 3 parameters: frequency, gain and Q (or bandwidth). In audio terms, the frequency represents the center frequency of the filter. The gain represents whether this frequency is boosted or cut (a gain of 0 results in no change to the audio passing through the filter). Q represents the width of the filter - i.e. a very wide filter might affect frequencies far away from the center frequency, whereas a narrow (low Q) filter will only affect frequencies close to the center frequency. The filters take the form of a bell curve, or at least that's an approximation; whether it's mathematically accurate I am not sure.
I want to display the characteristics of these filters graphically - display a graph of gain against frequency. There are several of these filters applied to the audio channel, and I want to be able to add the different result graphs, to produce an overall graph (IE a graph summing all the components of the combined filters). But I also want to be able to access the individual filters graphs.
I can handle adding the component graphs into a single 'total' graph, but how to produce the original x-y graph from the filter parameters escapes me. I will draw bitmaps, so all I need is to be able to create arrays of the form frequency[x]=y. I'm doing this in C, so I don't have the mathematical tools of MATLAB etc. I might have a filter with a center frequency of say 1000 (Hz), a gain of say 20 (dB or linear, I understand how to convert that), and a Q of say 3. The Q factor is relative and does not have to be exactly mathematically correct if that causes any complication.
It seems like a quite simple mathematical function, but maths is not my strong point and I don't know enough. I have been messing around with sine functions etc. but it's not working, and I suspect I am probably wasting processing power by over-complicating the maths (although I might be wrong there).
TIA, Pete
I have my doubts about the relationships between biquad filters, Q values, and bell curves. But I'll put those aside and just tell you how to draw a bell curve, since that's what you asked.
From this Wikipedia article, the equation for a bell curve is
f(x) = a * exp( -(x - b)^2 / (2*c^2) ) + d
where for your application
a corresponds to the gain
b determines the center frequency
2c^2 is related to Q (larger values will make the curve wider)
d is a constant offset that sets the baseline
The C code below computes a sample bell curve. For this example, the numbers were chosen based on drawing into a window that is 250 pixels wide by 200 pixels high, with a coordinate system where the origin {0,0} is at the bottom left corner.
#include <math.h> // for exp()

int width = 250;
int height = 200;
int bellCurve[width]; // the output array that holds the f(x) values
double gain = 180; // the 'a' value, determines how high the peak is from the baseline
double offset = 10; // the 'd' value, determines the y coordinate of the baseline
double qFactor = 1000; // the '2c^2' value, determines how fat the curve is
double center = 100; // the 'b' value, determines the x coordinate of the peak
double dx;
for ( int x = 0; x < width; x++ )
{
dx = x - center;
bellCurve[x] = gain * exp( -( dx * dx ) / qFactor ) + offset;
}
Plotting the curve results in an image like this where the peak is at x=100, y=10+180=190
You could input a unit impulse (an array of all zeros, except one element=1.0) into your digital filters, treating them as black boxes. Then FFT the impulse response output array to get the frequency response of the filter. Plotting the magnitude of the complex frequency samples will give you a pretty picture. Python+numpy+matplotlib would probably be an easier way to go about it. You will need to know the sampling period to get meaningful plots.
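A rough sketch of that idea in C (process is whatever per-sample filter routine you already have, passed in as a function pointer; a real FFT library would be much faster than the naive DFT used here):
#include <math.h>

#define N 1024   /* length of the impulse response to capture */

/* Feed a unit impulse through the filter and compute the magnitude of its
   DFT; bin k corresponds to frequency k * fs / N. */
void impulse_response_magnitude(double (*process)(double), double mag[N / 2])
{
    double h[N];

    for (int n = 0; n < N; n++)
        h[n] = process(n == 0 ? 1.0 : 0.0);   /* unit impulse in, response out */

    for (int k = 0; k < N / 2; k++)           /* naive O(N^2) DFT, fine for a plot */
    {
        double re = 0.0, im = 0.0;
        for (int n = 0; n < N; n++)
        {
            re += h[n] * cos(2.0 * M_PI * k * n / N);
            im -= h[n] * sin(2.0 * M_PI * k * n / N);
        }
        mag[k] = sqrt(re * re + im * im);
    }
}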
What you really want is the Bode plot of the filter. This is non-trivial to calculate yourself; a cursory search for a library to do it for you in C yielded nothing. If accuracy is not important, you can approximate the shape once and stretch it based on the parameters of the particular filter. For example, you might have a normalized array of relative values and construct a new curve (array) based on the parameters of the filter and the base curve you generated earlier.
The base curve could be generated from MATLAB if you can or Wolfram Alpha or something like that.
Here is one in JavaScript.
http://www.earlevel.com/main/2013/10/13/biquad-calculator-v2/
The filter you describe is the 'peak' filter. Use the log scale to display frequencies.
—Tom
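If you do want to compute the curve directly, here is a sketch using the widely circulated "Audio EQ Cookbook" peaking-EQ coefficient formulas; if your biquads are designed differently, substitute your own coefficients but keep the magnitude evaluation:
#include <math.h>

/* Gain in dB of a peaking-EQ biquad at frequency f, for center frequency f0,
   boost/cut gain_db, quality factor Q and sample rate fs. */
double peak_filter_gain_db(double f, double f0, double gain_db, double Q, double fs)
{
    double A     = pow(10.0, gain_db / 40.0);
    double w0    = 2.0 * M_PI * f0 / fs;
    double alpha = sin(w0) / (2.0 * Q);

    double b0 = 1.0 + alpha * A, b1 = -2.0 * cos(w0), b2 = 1.0 - alpha * A;
    double a0 = 1.0 + alpha / A, a1 = -2.0 * cos(w0), a2 = 1.0 - alpha / A;

    /* evaluate H(e^jw) at the requested frequency */
    double w  = 2.0 * M_PI * f / fs;
    double nr = b0 + b1 * cos(w) + b2 * cos(2.0 * w);
    double ni = -(b1 * sin(w) + b2 * sin(2.0 * w));
    double dr = a0 + a1 * cos(w) + a2 * cos(2.0 * w);
    double di = -(a1 * sin(w) + a2 * sin(2.0 * w));

    return 20.0 * log10(sqrt((nr * nr + ni * ni) / (dr * dr + di * di)));
}
You can then fill your frequency[x]=y array by sampling f on a log scale, as suggested above.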
EDIT:
I have stripped the program down to its basics: https://github.com/aidan-aidan/temp/blob/master/source/vpp.c
I have posted this question over at the gba-dev forums but they seem pretty dead and I have not gotten a response after many days.
Here is a video showing the problem: http://youtu.be/8gweFiSobwc (different from the code above, you can run the rom if you want to see it)
As you can see the BG rotation is unaffected, though they use the same LUT and they have the same data types in their structures as each other.
I have redone the LUT a few times to no avail, and this problem only surfaced after switching to 2048 circle divisions rather than 256.
Looking at the memory viewer in VBA, I can see that pb is not behaving the same as pa. pa ranges from 0x0100 to 0 as I would expect (same as 1 to 0), but pb ranges from 0 to 0x0096 when the gun is to the right of the center line, and jumps up to 0xFFFF as soon as it goes left of the center line. The only thing I can figure is that it is going negative (which makes sense, as the cosine of that angle should result in a negative number), but I don't fully understand two's complement so I can't be sure. 0 "degrees" is horizontal to the right for the gun; 512 would be ninety degrees.
I have included everything I can think of, any help is appreciated.
Thanks!
As near as I can tell, the program is technically working correctly. Instead, your visual artifacts stem from the limited 8-bit fixed-point precision in the affine transformation used by the hardware.
Essentially the difference between a 0 and 1 in the sine tables makes a rather large and visible jump in the rotation for the right-angles in your rectangular sprite, while the 0°/90°/180°/270° corners are not that special for the round earth background.
The slope of the (co)sine function at its zero crossings is at its maximum, so if you look at your tables there is only a single spot where they are precisely 0. If the rotation doesn't fall precisely on this index, the result looks visibly skewed.
What you may try, however, is to cheat by fiddling with the rounding to produce more consecutive zeroes. At the moment your table generator is scaling by 256 and rounding to the nearest integer:
sine[i] = floor(sin(i * M_PI / 1024.0) * 256.0 + 0.5);
As an alternative we may skew the table a little by rounding towards zero, as is the default in C, and increasing the scale to ensure that the table still reaches 256 at more than one entry.
int value = sin(i * M_PI / 1024.0) * 257.0;
if(value < -256) value = -256;
if(value > +256) value = +256;
sine[i] = value;
The table may then be flattened even further by increasing the scaling factor even further.
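Putting it together, a sketch of a complete generator along these lines (assuming a 2048-entry table and the same scaling as the snippets above):
#include <math.h>
#include <stdint.h>

#define LUT_SIZE 2048          /* one full turn in 2048 steps, as in the question */

int16_t sine[LUT_SIZE];

void build_sine_lut(void)
{
    for (int i = 0; i < LUT_SIZE; i++)
    {
        int value = (int)(sin(i * M_PI / 1024.0) * 257.0);  /* truncation skews toward zero */
        if (value < -256) value = -256;
        if (value > +256) value = +256;
        sine[i] = (int16_t)value;
    }
}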
The problem is also that for small sprites near right angles there is nowhere for the rotation to go.
Picture a 64x1 straight-line sprite pointing almost exactly vertically, off by only the minimum 1/256th fractional step from the sine table. Keep in mind that the rotation is always centered, so you get one jagged horizontal step precisely at the center.
For each of the 32 pixels from the center outwards you then add up another 1/256th fraction to the texture coordinate. However, all of these together only make for 1/8th of a pixel in total, so no more horizontal steps are taken. In fact, for this sprite shape you will see no visible change until the rotation angle reaches a full 8/256th fraction.
On the other hand, your large earth object is sufficiently big to always show multiple visible integer "steps." This means that an increased rotation angle will always at least shift the position of the non-centered step, even if it is insufficient to add up to another full pixel along the entire side. The result is a smoother overall effect due to the continual animation.
Okay, this is a bit of a maths and DSP question.
Let us say I have 20,000 samples which I want to resample at a different pitch - twice the normal rate, for example. Using a cubic interpolation method found here, I would set my new array index values by multiplying the i variable in an iteration by the new pitch (in this case 2.0). This would also make my new array of samples total 10,000. As the interpolation is going at double the speed, it only needs half the amount of time to finish.
But what if I want the pitch to vary throughout the recording? Basically I would like it to slowly increase from the normal rate to 8 times faster (at the 10,000-sample mark) and then back to 1.0 - it would be an arc. My questions are these:
How do I calculate how many samples the final audio track would be?
How do I create an array of pitch values that would represent this increase from 1.0 to 8.0 and back to 1.0?
Mind you this is not for live audio output, but for transforming recorded sound. I mainly work in C, but I don't know if that is relevant.
I know this probably is complicated, so please feel free to ask for clarifications.
To represent an increase from 1.0 to 8.0 and back, you could use a function of this form:
f(x) = 1 + 7/2*(1 - cos(2*pi*x/y))
Where y is the number of samples in the resulting track.
It will start at 1 for x=0, increase to 8 for x=y/2, then decrease back to 1 for x=y.
Here's what it looks like for y=10:
Now we need to find the value of y depending on z, the original number of samples (20,000 in this case, but let's be general). For this we solve the integral of 1 + 7/2*(1 - cos(2*pi*x/y)) dx from 0 to y, setting it equal to z. The solution is y = 2*z/9 = z/4.5, nice and simple :)
Therefore, for an input with 20,000 samples, you'll get 4,444 samples in the output.
Finally, instead of multiplying the output index by the pitch value, you can access the original samples like this: output[i] = input[g(i)], where g is the integral of the above function f:
g(x) = (9*x)/2-(7*y*sin((2*pi*x)/y))/(4*pi)
For y=4444, it looks like this:
In order not to end up with aliasing in the result, you will also need to low-pass filter before or during interpolation, using either a filter with a variable transition frequency lower than half the local sample rate, or one with a fixed cutoff frequency more than 16X lower than the current sample rate (for an 8X peak pitch increase). This will require a more sophisticated interpolator than a cubic spline. For best results, you might want to try a variable-width windowed-sinc kernel interpolator.
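Putting both answers together, here is a sketch of the index mapping in C (linear interpolation is used only as a stand-in; as noted above, you would want a proper low-pass/windowed-sinc interpolator to avoid aliasing):
#include <math.h>

/* Linear interpolation stand-in for whatever interpolator you actually use. */
static float interp(const float *in, int n, double pos)
{
    int i0 = (int)pos;
    int i1 = (i0 + 1 < n) ? i0 + 1 : n - 1;
    float frac = (float)(pos - i0);
    return in[i0] + (in[i1] - in[i0]) * frac;
}

/* g(x): integral of the pitch curve f(x) = 1 + 7/2*(1 - cos(2*pi*x/y)). */
static double g(double x, double y)
{
    return 4.5 * x - (7.0 * y * sin((2.0 * M_PI * x) / y)) / (4.0 * M_PI);
}

/* Map every output index through g() to a fractional input position.
   For z = 20,000 input samples this produces about 4,444 output samples. */
int resample_arc(const float *input, int z, float *output)
{
    int y = (int)(z / 4.5);
    for (int i = 0; i < y; i++)
        output[i] = interp(input, z, g((double)i, (double)y));
    return y;   /* number of output samples written */
}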