Weird angle results when drone is flying (C)

We use the L3GD20 gyro sensor and the LSM303DLHC accelerometer sensor, combined through a complementary filter, to measure the angles of the drone.
If we simulate the drone's angles by hand, for example by tilting the drone forward, our x angle is positive; if we tilt it backwards, our x angle is negative.
But as soon as we activate the motors, the drone always drifts to a negative x-angle. On top of that, the x-angle that used to read positive now reads negative. Because the drone tries to compensate for this angle, but the angles are inverted, it never returns to its original state.
#define RAD_TO_DEG 57.29578
#define G_GAIN 0.070      /* gyro sensitivity in dps per digit (70 mdps/digit) */
#define AA 0.98           /* complementary filter coefficient */
#define LOOP_TIME 0.02    /* control loop period in seconds */

void calculate_complementaryfilter(float* complementary_angles)
{
    /* Blend the integrated gyro rate with the accelerometer angle. */
    complementary_angles[X] = AA * (complementary_angles[X] + gyro_rates[X] * LOOP_TIME)
                            + (1 - AA) * acc_angles[X];
    complementary_angles[Y] = AA * (complementary_angles[Y] + gyro_rates[Y] * LOOP_TIME)
                            + (1 - AA) * acc_angles[Y];
}

void convert_accelerometer_data_to_deg()
{
    /* Tilt angles from the gravity vector, in degrees. */
    acc_angles[X] = (float) atan2(acc_raw[X], sqrt(acc_raw[Y] * acc_raw[Y] + acc_raw[Z] * acc_raw[Z])) * RAD_TO_DEG;
    acc_angles[Y] = (float) atan2(acc_raw[Y], sqrt(acc_raw[X] * acc_raw[X] + acc_raw[Z] * acc_raw[Z])) * RAD_TO_DEG;
}

void convert_gyro_data_to_dps()
{
    /* Raw gyro readings to degrees per second. */
    gyro_rates[X] = (float)gyr_raw[X] * G_GAIN;
    gyro_rates[Y] = (float)gyr_raw[Y] * G_GAIN;
    gyro_rates[Z] = (float)gyr_raw[Z] * G_GAIN;
}
The problem isn't the shaking of the drone: if we run the motors at max speed and simulate the angles by hand, we get the right angles, and therefore the right compensation from the motors.
If we need to add more code, just ask.
Thank you in advance.

Standard exhaustive methodology for this kind of problem
You can go top-down or bottom-up. In this case I'm more inclined to suspect a hardware-related problem, but it is up to you:
Power-related problem
When you take the drone in your hand and run the motors at full throttle, do they have propellers installed?
Motors at full speed without propellers draw only a fraction of their full load. When lifting the drone's weight, a voltage drop can cause the electronics to malfunction.
Alternative cause: short circuits/leakage paths?
Mechanical problem (a.k.a. vibrations spoil sensor readings)
In the past, I've seen MEMS sensors suffer a lot under heavy vibrations (amplitude ±4 g), and by "suffer a lot" I mean the accelerometer not even registering gravity and the gyros returning meaningless data.
If vibrations are significant, you need either a better frame for the drone or better vibration isolation for the sensors.
SW issues (data/algorithm/implementation)
If it is definitely unrelated to power and mechanical aspects, you can log the raw sensor data and process it offline to see if it makes sense.
For this, you need an offline implementation of the same algorithm that is embedded in the drone.
Here you will be able to discern between:
Buggy/wrong embedded implementation.
Algorithm fails under working conditions.
Other: the data itself looks wrong (a read problem), or the SW is not meeting its cycle-time constraints.
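As a minimal illustration of the logging step (a sketch only: uart_printf is a placeholder for whatever serial output routine the firmware actually provides, and the raw arrays are the ones from the question's code):

#include <stdint.h>

/* Dump one CSV line of raw sensor data per control-loop iteration, so the
 * complementary filter can be replayed offline on the exact same input.
 * uart_printf is a hypothetical serial-print routine. */
void log_raw_sensor_data(uint32_t timestamp_ms)
{
    uart_printf("%lu,%d,%d,%d,%d,%d,%d\r\n",
                (unsigned long)timestamp_ms,
                gyr_raw[X], gyr_raw[Y], gyr_raw[Z],
                acc_raw[X], acc_raw[Y], acc_raw[Z]);
}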

Related

How to use KissFFT with audio?

I have an array of 2048 samples of an audio file at 44.1 kHz and want to transform it into a spectrum for an LED effect. I don't know too much about the inner workings of FFTs, but I tried it using KissFFT:
kiss_fft_cpx *cpx_in  = malloc(FRAMES * sizeof(kiss_fft_cpx));
kiss_fft_cpx *cpx_out = malloc(FRAMES * sizeof(kiss_fft_cpx));
kiss_fft_cfg cfg = kiss_fft_alloc(FRAMES, 0, 0, 0);
for (int j = 0; j < FRAMES; j++) {
    float x = (alsa_buffer[(fft_last_index + j + BUFFER_OVERSIZE * FRAMES)
                           % (BUFFER_OVERSIZE * FRAMES)] - offset);
    cpx_in[j] = (kiss_fft_cpx){ .r = x, .i = x };
}
kiss_fft(cfg, cpx_in, cpx_out);
My output seems really off. When I play a pure sine, there are multiple outputs with values way above zero. It also generally seems like the first entries are way higher. Do I have to weight the outputs?
I also don't understand how to treat the complex numbers: I'm currently using my input values for both the real and imaginary parts, and for the output I take the absolute value. Is that right?
Also, spectrum analyzers for audio usually have logarithmic scaling, so I tried that. The problem is that the FFT output, as far as I know, isn't logarithmic: the first band is, say, 0-100 Hz, but ideally my first LED should only cover up to about 60 Hz (a fraction of the first output's band), while the last LED would cover, say, 8 kHz to 10 kHz, which would be 20 FFT outputs.
Is there any way to make the output logarithmic? How do I limit the spectrum to 20 kHz (or know what the bands of the output are in general), and is there anything else to look out for when working with audio signals?
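For reference, a common way to feed real audio into KissFFT looks like the sketch below (assuming plain float samples and the default float kiss_fft_scalar; spectrum_from_samples is a hypothetical name). The sample goes in the real part, the imaginary part stays zero, and for real input only bins 0..FRAMES/2 are distinct, each covering about 44100/2048 ≈ 21.5 Hz.

#include <math.h>
#include <stdlib.h>
#include "kiss_fft.h"

#define FRAMES 2048
#define SAMPLE_RATE 44100.0f

/* Sketch: compute the magnitude spectrum of one block of real audio. */
void spectrum_from_samples(const float *samples, float *magnitude /* FRAMES/2 + 1 entries */)
{
    kiss_fft_cpx *in  = malloc(FRAMES * sizeof(kiss_fft_cpx));
    kiss_fft_cpx *out = malloc(FRAMES * sizeof(kiss_fft_cpx));
    kiss_fft_cfg cfg  = kiss_fft_alloc(FRAMES, 0, NULL, NULL);
    for (int j = 0; j < FRAMES; j++) {
        in[j].r = samples[j]; /* real part: the audio sample */
        in[j].i = 0.0f;       /* imaginary part stays zero for real input */
    }
    kiss_fft(cfg, in, out);
    /* Bin k is centered at k * SAMPLE_RATE / FRAMES; magnitude is abs(). */
    for (int k = 0; k <= FRAMES / 2; k++)
        magnitude[k] = sqrtf(out[k].r * out[k].r + out[k].i * out[k].i);
    kiss_fft_free(cfg);
    free(in);
    free(out);
}

Logarithmically spaced LED bands can then be built on top by summing the magnitudes of the bins whose center frequencies fall inside each band.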

Phase angle from FFT using atan2 - weird behaviour. Phase shift offset? Unwrapping?

I'm testing and performing simple FFTs, and I'm interested in phase shift.
I generate a simple array of 256 samples containing a sinusoid with 10 cycles.
I perform an FFT of those samples and receive complex data (2x128).
Then I calculate the magnitude of that data, and the FFT looks as expected:
Then I want to calculate the phase shift from the FFT's complex output. I'm using atan2().
The combined output, fft_magnitude (blue) + fft_phase (red), looks like this:
This is pretty much what I expect, with one "small" problem. I know this is wrapping, but if I imagine unwrapping it, the phase at the magnitude peak reads 36 degrees, and I think it should be 0 because my input sinusoid was not shifted at all.
If I shift this by -36 deg (blue is in phase, red is shifted; blue is printed only for reference), the sinusoid looks like this:
And then if I perform an FFT of this red data, the magnitude + phase output looks like this:
So it is easy to imagine that the unwrapped phase will be close to 0 at the magnitude peak.
So there is a 36 deg offset. But what happens if I prepare data with 20 cycles per 256 samples and 0 phase shift?
If I then perform an FFT, this is the output (magnitude + phase):
And I can tell you it will cross the peak point at 72 degrees. So there is now a 72 degree offset.
Can anyone give me a hint as to why this is happening?
Is it right that the atan2() phase output is frequency dependent, with an offset of 2pi/cycles (360 deg/cycles)?
How do I unwrap it and get correct results? (I couldn't find a working C library for unwrapping.)
This is running on an ARM Cortex-M7 processor (embedded).
#define phaseShift 0
#define cycles 20

#include <arm_math.h>
#include <arm_const_structs.h>

float32_t phi = phaseShift * PI / 180; // phase shift in radians
float32_t data[256];          // input data for fft
float32_t output_buffer[256]; // output buffer from fft
float32_t phase_data[128];    // will contain atan2 values of the fft output (output values are complex)
float32_t magnitude[128];     // will contain absolute values of the fft output (output values are complex)
float32_t incrFactorRadians = cycles * 2 * PI / 255;
arm_rfft_fast_instance_f32 RealFFT_Instance;

void setup()
{
    Serial.begin(115200);
    delay(2000);
    arm_rfft_fast_init_f32(&RealFFT_Instance, 256); // initialize fft for 256 samples
    for (int i = 0; i < 256; i++) // print sinusoids
    {
        data[i] = arm_sin_f32(incrFactorRadians * i + phi);
        // print reference in-phase sinusoid and shifted sinusoid (data for fft)
        Serial.print(arm_sin_f32(incrFactorRadians * i), 8);
        Serial.print(",");
        Serial.print(data[i], 8);
        Serial.print("\n");
    }
    Serial.print("\n\n");
    delay(10000);
    arm_rfft_fast_f32(&RealFFT_Instance, data, output_buffer, 0); // perform fft
    for (int i = 0; i < 128; i++) // calculate absolute values and phase shift of the complex fft output
    {
        magnitude[i] = output_buffer[i * 2] * output_buffer[i * 2]
                     + output_buffer[(i * 2) + 1] * output_buffer[(i * 2) + 1];
        __ASM("VSQRT.F32 %0,%1" : "=t"(magnitude[i]) : "t"(magnitude[i])); // fast square root ARM DSP function
        phase_data[i] = atan2(output_buffer[i * 2], output_buffer[i * 2 + 1]) * 180 / PI;
    }
}

void loop() // print fft magnitude and phase output every 10 seconds
{
    for (int i = 0; i < 128; i++)
    {
        Serial.print(magnitude[i], 8);
        Serial.print(",");
        Serial.print(phase_data[i], 8);
        Serial.print("\n");
    }
    Serial.print("\n\n");
    delay(10000);
}
To break down the excellent answer by hotpaw2 (their answers are always so loaded with golden nuggets of information that I spend days learning enough to comprehend the brilliance):
When an engineer says "integer periodic", they mean that the samples you feed into the FFT (the aperture) need to be sampled in a way that ensures you capture full waves of the sine frequency.
Think of the sine wave starting at zero, cresting at one, then falling below zero into the trough at negative one, and then coming back up to zero.
This is one "full cycle". Now if your wave has a frequency of 10 cycles per second and you sample at 100 samples per second, you will have 10 samples per wave.
So now you put 13 samples into an FFT and your phase is off. Why?
Well, the phase calculation expects the wave to continue smoothly forever. You started at zero on the first sample and dropped off a quarter of the way through a cycle on the 13th sample. The phase calculation effectively connects the two ends, and this jump in the wave causes the phase to come out wrong.
What you need to do is select a number of samples to feed into your FFT that you know will contain only full waves.
(NOTE) You are only concerned with the phase of one frequency at a time.
AND your sample aperture must not start and end at the same point on the sine wave.
IF you start at zero and end at zero, the calculation pasting the two ends together into a forever-repeating circle will get two zeros at the transition. So you have to stop one sample short of the repeat point.
Code demonstrating this can be found at: Scipy FFT - how to get phase angle.
A bare FFT plus an atan2() only correctly measures the starting phase of an input sinusoid if that sinusoid is exactly integer periodic in the FFT's aperture width.
If the signal is not exactly integer periodic (some other frequency), then you have to recenter the data by doing an FFT shift (rotate the data by N/2) before the FFT. The FFT will then correctly measure the phase at the center of the original data, away from the circular discontinuity produced by the FFT's finite-length rectangular window on non-periodic-in-aperture signals.
If you want the phase at some point in the data other than the center, you can use the estimates of the frequency and phase at the center to recalculate the phase at other positions.
There are other window functions (Blackman-Nuttall, et al.) that might produce a better phase estimate than a rectangular window, but usually not as good an estimate as using an FFT shift.
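As a minimal illustration of that recentering (a sketch; fft_shift is a hypothetical helper, and n is assumed even):

/* Swap the two halves of a length-n buffer (an "FFT shift") so that a
 * subsequent FFT measures phase at the center of the original data. */
void fft_shift(float *x, int n)
{
    for (int i = 0; i < n / 2; i++) {
        float tmp    = x[i];
        x[i]         = x[i + n / 2];
        x[i + n / 2] = tmp;
    }
}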

RMS calculation DC offset

I need to implement an RMS calculation for a sine wave on an MCU (a resource-constrained microcontroller). The MCU lacks an FPU (floating-point unit), so I would prefer to stay in the integer realm. Captures are discrete, via a 10-bit ADC.
Looking for a solution, I found this great one by Edgar Bonet: https://stackoverflow.com/a/28812301/8264292
It seems to completely suit my needs, but I have some questions.
The input is 230 VAC, 50 Hz mains. It's transformed and offset by hardware to become a 0-1 V (peak-to-peak) sine wave, which I capture with the ADC as readings from 0 to 1023. The hardware is calibrated so that a 260 VRMS input (i.e. about -368 to +368 V peak) becomes a 0-1 V peak output. How can I "restore" the original wave's RMS value while staying in the integer realm too? Units can vary; mV will do fine as well.
My first guess was to subtract 512 from each input sample (the DC offset) and then do the "magic" shift as in Edgar Bonet's answer. But I've realized that's wrong because the DC offset isn't fixed; instead, the output is biased to start from 0 V. I.e. a 130 VAC input would produce a 0-500 mV peak-to-peak output (not 250-750 mV, which would have worked).
For true RMS with the DC offset removed, I need to subtract the square of the mean of the samples from the mean of their squares, as in this formula:
rms = sqrt( sum(x*x)/N - (sum(x)/N)^2 )
So I've ended up with the following function:
#define INITIAL 512
#define SAMPLES 1024
#define MAX_V 368UL // Maximum input peak in V ( 260*sqrt(2) )
/* K is defined based on the equation, where 64 = 2^6,
 * i.e. 6 bits to add to the 10-bit ADC to make it 16-bit,
 * then doubled for the whole -peak to +peak range.
 */
#define K (MAX_V*64*2)

uint16_t rms_filter(uint16_t sample)
{
    static int16_t rms = INITIAL;
    static uint32_t sum_squares = 1UL * SAMPLES * INITIAL * INITIAL;
    static uint32_t sum = 1UL * SAMPLES * INITIAL;

    sum_squares -= sum_squares / SAMPLES;
    sum_squares += (uint32_t) sample * sample;
    sum -= sum / SAMPLES;
    sum += sample;
    if (rms == 0) rms = 1; /* do not divide by zero */
    rms = (rms + (((sum_squares / SAMPLES) - (sum/SAMPLES)*(sum/SAMPLES)) / rms)) / 2;
    return rms;
}

...
// Somewhere in a loop
getSample(&sample);
rms = rms_filter(sample);
...
// After getting at least N samples (SAMPLES * X?)
uint16_t vrms = (uint32_t)(rms*K) >> 16;
printf("Converted Vrms = %d V\r\n", vrms);
Does it look fine, or am I doing something wrong?
How does the SAMPLES (window size?) number relate to F (50 Hz) and my ADC capture rate (samples per second)? I.e., how many real samples do I need to feed to rms_filter() before I can get a real RMS value, given my capture speed is X sps? At the very least, how do I evaluate the required minimum number N of samples?
I did not test your code, but it looks to me like it should work fine.
Personally, I would not have implemented the function this way. I would instead have removed the DC part of the signal before trying to compute the RMS value. The DC part can be estimated by sending the raw signal through a low-pass filter. In pseudo-code this would be
rms = sqrt(low_pass(square(x - low_pass(x))))
whereas what you wrote is basically
rms = sqrt(low_pass(square(x)) - square(low_pass(x)))
It shouldn't really make much of a difference though. The first formula, however, spares you a multiplication. Also, by removing the DC component before computing the square, you end up multiplying smaller numbers, which may help in allocating bits for the fixed-point implementation. In any case, I recommend you test the filter on your computer with synthetic data before committing it to the MCU.
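As a rough, untested sketch of how that first formula could look in the same integer style as the question's filter (rms_filter_dc_removed is a hypothetical name; the 512 seed and the Newton-style square-root step are carried over from the original):

#include <stdint.h>

#define SAMPLES 1024 /* window size, as in the question */

uint16_t rms_filter_dc_removed(uint16_t sample)
{
    static uint32_t dc_avg = 1UL * SAMPLES * 512; /* low_pass(x), scaled by SAMPLES */
    static uint32_t sum_squares = 0;
    static uint16_t rms = 1;

    dc_avg -= dc_avg / SAMPLES;                   /* single-pole low pass for the DC level */
    dc_avg += sample;
    int32_t ac = (int32_t)sample - (int32_t)(dc_avg / SAMPLES); /* x - low_pass(x) */

    sum_squares -= sum_squares / SAMPLES;         /* low_pass(square(...)) */
    sum_squares += (uint32_t)(ac * ac);

    if (rms == 0) rms = 1;                        /* do not divide by zero */
    uint32_t est = (sum_squares / SAMPLES) / rms; /* one Newton step toward sqrt */
    rms = (uint16_t)((rms + est) / 2);
    return rms;
}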
How does the SAMPLES (window size?) number relate to F (50 Hz) and my ADC capture rate (samples per second)?
The constant SAMPLES controls the cut-off frequency of the low-pass filters. This cut-off should be small enough to almost completely remove the 50 Hz part of the signal. On the other hand, if the mains supply is not completely stable, the quantity you are measuring will slowly vary with time, and you may want your cut-off to be high enough to capture those variations.
The transfer function of these single-pole low-pass filters is
H(z) = z / (SAMPLES * z + 1 − SAMPLES)
where
z = exp(i 2 π f / f₀),
i is the imaginary unit,
f is the signal frequency, and
f₀ is the sampling frequency.
If f₀ ≫ f (which is desirable for good sampling), you can approximate this by the analog filter
H(s) = 1 / (1 + SAMPLES * s / f₀)
where s = i 2 π f and the cut-off frequency is f₀ / (2 π SAMPLES). The gain at f = 50 Hz is then
1 / sqrt(1 + (2 π * SAMPLES * f/f₀)²)
The relevant parameter here is SAMPLES * f/f₀, which is the number of periods of the 50 Hz signal that fit inside your sampling window. If you fit one period, you are letting about 15% of the signal through the filter; half as much if you fit two periods, etc.
You could get perfect rejection of the 50 Hz signal if you design a filter with a notch at that particular frequency. If you don't want to dig into digital filter design theory, the simplest such filter may be a simple moving average over a period of exactly 20 ms. This has a non-trivial cost in RAM though, as you have to keep a full 20 ms worth of samples in a circular buffer.
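For illustration, a sketch of that moving average (F_SAMPLE is an example rate chosen so that one 20 ms period is a whole number of samples; adjust it to the real ADC rate):

#include <stdint.h>

#define F_SAMPLE 5000            /* example ADC rate in sps; assumed divisible by 50 */
#define N_PERIOD (F_SAMPLE / 50) /* samples in one 20 ms mains period */

/* Moving average over exactly one mains period: this nulls 50 Hz and its
 * harmonics, at the cost of N_PERIOD samples of RAM for the circular buffer. */
uint16_t moving_average_20ms(uint16_t sample)
{
    static uint16_t buf[N_PERIOD];
    static uint32_t sum;
    static uint16_t idx;

    sum -= buf[idx];             /* drop the sample leaving the window */
    sum += sample;               /* add the newest sample */
    buf[idx] = sample;
    idx = (idx + 1) % N_PERIOD;
    return (uint16_t)(sum / N_PERIOD);
}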

Moving Average for ADC

Hi all, I am working on a project where I have to calculate the moving average of ADC readings. The data coming out of the ADC represents a sinusoidal wave.
This is the code I am using to get moving average of a given signal.
longNew = (8 bit data from ADC);
longNew = longNew << 8;
// Division
longNew = longNew >> 8;   // 255 samples
longTemp = avgALong >> 8;
avgALong -= longTemp;     // old data
avgALong += longNew;      // new data
avgA = avgALong >> 8;     // 256-point average
Reference image
Please refer to this image for the upper limit and lower limit relative to the reference (or avgA).
Currently I am using a constant value to obtain the upper and lower voltage limits for my application,
which I calculate as follows:
upper_limit = avgA + Delta(x);
lower_limit = avgA - Delta(x);
In my case I am taking Delta(x) = 15.
I want to calculate this constant expression, or Delta(x), based on signal strength.
The maximum voltage level of the signal is 255, i.e. 5 volts.
The minimum voltage level of the signal varies frequently, which is why a constant value for determining the lower and upper limits is not useful for my application.
Please help.
Thank you.
Now with the description of what's going on, I think you want three running averages:
The input signal. Lightly average it to help tamp down noise.
upper_limit: when you determine a local maximum, push it into this average.
lower_limit: when you determine a local minimum, push it into this average.
Your delta would be (upper_limit-lower_limit)/8 (or 4, or whatever). Your hysteresis points would be upper_limit - delta and lower_limit + delta.
Every time you transition to '1', push the current local minimum into the lower_limit moving average and then begin searching for a new local maximum. When you transition to '0', push the local maximum into the upper_limit moving average and begin searching for a new local minimum.
There is a problem if your signal strength is wildly varying (you could get to a point where your signal suddenly drops into the hysteresis band and you never get any more transitions). You could solve this a few ways:
Count how much time you spend in the hysteresis band and reset everything if you spend too much time.
Or
For each sample in the hysteresis band, bring upper_limit and lower_limit slightly closer together. Eventually they'd collapse to the point where you start detecting transitions again.
Take this with a grain of salt. If you're doing this for a school project, it almost certainly won't match whatever scholarly method your professor is looking for.
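For concreteness, one rough way the pieces above could fit together (a sketch only: detect is a hypothetical name, the input-smoothing stage is omitted, and the /8 and /4 divisors are arbitrary tuning placeholders):

#include <stdint.h>

static int16_t upper_limit = 255, lower_limit = 0; /* running averages of the extremes */
static int16_t local_max = 0, local_min = 255;     /* extremes since the last transition */
static uint8_t state = 0;                          /* current output bit */

uint8_t detect(uint8_t sample)
{
    int16_t s = sample;
    int16_t delta = (upper_limit - lower_limit) / 8;

    if (s > local_max) local_max = s;              /* track running extremes */
    if (s < local_min) local_min = s;

    if (state == 0 && s > upper_limit - delta) {
        state = 1;                                    /* rising transition: */
        lower_limit += (local_min - lower_limit) / 4; /* push the local min into its average */
        local_max = s;                                /* start hunting for a new max */
    } else if (state == 1 && s < lower_limit + delta) {
        state = 0;                                    /* falling transition: */
        upper_limit += (local_max - upper_limit) / 4; /* push the local max into its average */
        local_min = s;                                /* start hunting for a new min */
    }
    return state;
}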

Suggestions on optimizing a Z-buffer implementation?

I'm writing a 3D graphics library as part of a project of mine, and I'm at the point where everything works, but not well enough.
In particular, my main headache is that my pixel fill rate is horribly slow: I can't even manage 30 FPS when drawing a triangle that spans half of an 800x600 window on my target machine (which is admittedly an older computer, but it should be able to manage this...).
I ran gprof on my executable, and I ended up with the following interesting lines:
  %   cumulative   self              self     total
 time   seconds   seconds    calls  ms/call  ms/call  name
43.51       9.50      9.50                            vSwap
34.86      17.11      7.61   179944     0.04     0.04 grInterpolateHLine
13.99      20.17      3.06                            grClearDepthBuffer
<snip>
 0.76      21.78      0.17      624     0.27    12.46 grScanlineFill
The function vSwap is my double-buffer swapping function, and it also performs vsyncing, so it makes sense to me that the test program spends much of its time waiting in there. grScanlineFill is my triangle-drawing function, which creates an edge list and then calls grInterpolateHLine to actually fill in the triangle.
My engine currently uses a Z-buffer to perform hidden surface removal. If we discount the (presumed) vsync overhead, it turns out that the test program spends something like 85% of its execution time either clearing the depth buffer or writing pixels according to the values in it. My depth-buffer clearing function is simplicity itself: copy the maximum value of a float into each element. The function grInterpolateHLine is:
void grInterpolateHLine(int x1, int x2, int y, float z, float zstep, int colour) {
    for (; x1 <= x2; x1++, z += zstep) {
        if (z < grDepthBuffer[x1 + y * VIDEO_WIDTH]) {
            vSetPixel(x1, y, colour);
            grDepthBuffer[x1 + y * VIDEO_WIDTH] = z;
        }
    }
}
I really don't see how I can improve that, especially considering that vSetPixel is a macro.
My entire stock of ideas for optimization has been whittled down to precisely one:
Use an integer/fixed-point depth buffer.
The problem that I have with integer/fixed-point depth buffers is that interpolation can be very annoying, and I don't actually have a fixed-point number library yet. Any further thoughts out there? Any advice would be most appreciated.
You should have a look at the source code of something like Quake, considering what it could achieve on a Pentium 15 years ago. Its z-buffer implementation used spans rather than per-pixel (or fragment) depth. Otherwise, you could look at the rasterization code in Mesa.
It's hard to tell what higher-order optimizations can be done without seeing the rest of the code. I have a couple of minor observations, though.
There's no need to calculate x1 + y * VIDEO_WIDTH more than once in grInterpolateHLine. i.e.:
void grInterpolateHLine(int x1, int x2, int y, float z, float zstep, int colour) {
    int offset = x1 + (y * VIDEO_WIDTH);
    for (; x1 <= x2; x1++, z += zstep, offset++) {
        if (z < grDepthBuffer[offset]) {
            vSetPixel(x1, y, colour);
            grDepthBuffer[offset] = z;
        }
    }
}
Likewise, I'm guessing that your vSetPixel does a similar calculation, so you should be able to use the same offset there as well, and then you only need to increment offset and not x1 in each loop iteration. Chances are this can be extended back to the function that calls grInterpolateHLine, and you would then only need to do the multiplication once per triangle.
There are some other things you could do with the depth buffer. Most of the time if the first pixel of the line either fails or passes the depth test, then the rest of the line will have the same result. So after the first test you can write a more efficient assembly block to test the entire line in one shot, then if it passes you can use a more efficient block memory setter to block-set the pixel and depth values instead of doing them one at a time. You would only need to test/set per pixel if the line is only partially occluded.
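As a rough illustration of that span idea in plain C (a sketch only: grInterpolateHLineSpans is a hypothetical variant of the function above, and the real win would come from vectorizing the test loop and block-setting the fills):

void grInterpolateHLineSpans(int x1, int x2, int y, float z, float zstep, int colour) {
    int offset = x1 + (y * VIDEO_WIDTH);
    int n = x2 - x1 + 1;

    /* First pass: does the whole span pass the depth test? Bail out early
     * on the first failing pixel. */
    int all_pass = 1;
    float zt = z;
    for (int i = 0; i < n; i++, zt += zstep) {
        if (zt >= grDepthBuffer[offset + i]) { all_pass = 0; break; }
    }

    if (all_pass) {
        /* Whole span visible: fill pixels and depths with no per-pixel branch. */
        for (int i = 0; i < n; i++, z += zstep) {
            vSetPixel(x1 + i, y, colour);
            grDepthBuffer[offset + i] = z;
        }
    } else {
        /* Partially occluded: fall back to the per-pixel test. */
        for (int i = 0; i < n; i++, z += zstep) {
            if (z < grDepthBuffer[offset + i]) {
                vSetPixel(x1 + i, y, colour);
                grDepthBuffer[offset + i] = z;
            }
        }
    }
}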
Also, I'm not sure what you mean by an older computer, but if your target machine is multi-core, you can break the work up among multiple cores. You can do this for the buffer-clearing function as well. It can help quite a bit.
I ended up solving this by replacing the Z-buffer with the Painter's Algorithm. I used SSE to write a Z-buffer implementation that created a bitmask with the pixels to paint (plus the range optimization suggested by Gerald), and it still ran far too slowly.
Thank you, everyone, for your input.
