Need an algorithm to detect large spikes in oscillating data - c
I am reading data from an SD card one large chunk at a time on a microcontroller. It is accelerometer data, so it is oscillating constantly. At certain points, huge oscillations occur (shown in the graph). I need an algorithm that can detect these large oscillations and, more specifically, determine the range of data that contains the spike.
I have some example data:
This is the overall graph, there is only one spike of interest, the first one.
Here it is zoomed in a bit
As you can see it is a large oscillation that produces the spike.
So any algorithm that can scan through a data set and determine the portion of data that contains a spike relative to some threshold would be great. This data set is about 50,000 samples; each sample is 32 bits long. I have ample RAM to hold this much data.
Thanks!
For the following signal:
If you take the absolute value of the differential between two consecutive samples, you get:
That is not quite good enough to unambiguously distinguish it from the minor "unsustained" disturbances. But if you then take a simple moving sum (a leaky integrator) of the abs-differentials, the spike stands out clearly. Here a window width of 4 diff-samples was used:
The moving average introduces a lag or phase shift, which in cases where the data is stored and processing is not real-time can easily be compensated for by subtracting half the window width from the timing:
For real-time processing, if the lag is critical, a more sophisticated IIR filter might be appropriate. In any case, a clear threshold can be selected from this data.
In code for the above data set:
#include <stdio.h>
#include <stdint.h>
#include <stdbool.h>
#include <stdlib.h>
static int32_t dataset[] = { 0,0,0,0,0,3,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,
0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,3,0,0,0,0,0,
0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,
0,-10,-15,-5,20,25,50,-10,-20,-30,0,30,5,-5,
0,0,5,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,
0,0,0,0,0,0,0,0,0,1,0,0,0,0,6,0,0,0,0,0,0,0} ;
#define DATA_LEN (sizeof(dataset)/sizeof(*dataset))
#define WINDOW_WIDTH 4
#define THRESHOLD 15
int main()
{
    uint32_t window[WINDOW_WIDTH] = {0} ;
    int window_index = 0 ;
    int window_sum = 0 ;
    bool spike = false ;

    for( int s = 1; s < DATA_LEN ; s++ )
    {
        uint32_t diff = abs( dataset[s] - dataset[s-1] ) ;

        window_sum -= window[window_index] ;
        window[window_index] = diff ;
        window_index++ ;
        window_index %= WINDOW_WIDTH ;
        window_sum += diff ;

        if( !spike && window_sum >= THRESHOLD )
        {
            spike = true ;
            printf( "Spike START # %d\n", s - WINDOW_WIDTH / 2 ) ;
        }
        else if( spike && window_sum < THRESHOLD )
        {
            spike = false ;
            printf( "Spike END # %d\n", s - WINDOW_WIDTH / 2 ) ;
        }
    }

    return 0;
}
The output is:
Spike START # 66
Spike END # 82
https://onlinegdb.com/ryEw69jJH
Comparing the original data with the detection threshold gives:
For your real data, you will need to select a suitable window width and threshold to get the desired result, both of which will depend on the bandwidth and amplitude of the disturbances you wish to detect.
Also you may need to guard against arithmetic overflow if your samples are of sufficient magnitude. They need to be less than 2^32 / window-width to guarantee no overflow in the integrator. Alternatively you could use floating-point or uint64_t for the window type, or add code to deal with saturation.
You could look at statistical analysis: calculate the standard deviation over the data set and then check when your data goes out of bounds.
You can choose to do this in two ways: either you use a running average over a fixed (relatively small) number of samples, or you take the average over the whole data set. As I see multiple spikes in your set, I would suggest the first. This way you can possibly stop processing (and later continue) every time you find a spike.
For your purpose you do not really need to calculate the standard deviation sigma. You could actually leave it at the square of sigma. This gives you a slight performance optimization by not having to calculate the square root.
Some pseudo code:
// The data set.
int x[N];
// The number of samples in your mean and std calculation.
int M <= N;
// Sigma at index i over the previous M samples.
int sigma_i = sqrt( sum( pow(x[i] - mean(x,M), 2) ) / M );
// Or the square of sigma
int sigma_squared_i = sum( pow(x[i] - mean(x,M), 2) ) / M;
The disadvantage of this method is that you need to set a threshold for the value of sigma at which you trigger. However, it is fairly safe to say that when setting the threshold at 4 or 5 times your average sigma, you will have a usable system.
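In C, the square-of-sigma approach above might be sketched as follows. The window size M, the trigger multiple K, and the function name are illustrative assumptions; 64-bit accumulators are used so the sums cannot overflow for 32-bit samples:

```c
#include <stdint.h>
#include <stdbool.h>

#define M 16   /* running window size (illustrative) */
#define K 4    /* trigger at K sigma, compared as K^2 * variance */

/* Returns true when the sample deviates more than K sigma from the
   running mean of the previous M samples. Integer-only: compares
   (x - mean)^2 against K^2 * variance, so no sqrt() is needed. */
static bool sigma_spike(int32_t sample)
{
    static int32_t buf[M];
    static int64_t sum = 0, sum_sq = 0;
    static int idx = 0, count = 0;

    bool spike = false;
    if (count == M) {
        int64_t mean = sum / M;
        int64_t var  = (sum_sq - (sum * sum) / M) / M;  /* sigma squared */
        if (var < 1) var = 1;          /* avoid a zero threshold */
        int64_t dev = sample - mean;
        spike = dev * dev > (int64_t)K * K * var;
    }

    /* update the running window */
    if (count == M) {
        sum    -= buf[idx];
        sum_sq -= (int64_t)buf[idx] * buf[idx];
    } else {
        count++;
    }
    buf[idx] = sample;
    sum    += sample;
    sum_sq += (int64_t)sample * sample;
    idx = (idx + 1) % M;
    return spike;
}
```

Note the variance is clamped to at least 1 so that a perfectly quiet window does not make every subsequent sample a "spike".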
Managed to get a working algorithm. Basically, I determine the average difference between consecutive data points. If my data starts to exceed some multiple of that value for several consecutive samples, then most likely a spike is occurring.
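A sketch of that accepted approach might look like this; the multiple, the run length, and the function name are illustrative guesses, not the asker's actual values:

```c
#include <stdint.h>
#include <stdlib.h>

#define MULTIPLE 2   /* threshold = MULTIPLE * average diff (illustrative) */
#define RUN_LEN  3   /* consecutive samples required to call it a spike */

/* First pass: mean absolute difference between consecutive samples.
   Second pass: report the index where the diff first exceeded
   MULTIPLE times that mean for RUN_LEN consecutive samples,
   or -1 if no such run exists. */
static int find_spike_start(const int32_t *data, int len)
{
    int64_t total = 0;
    for (int i = 1; i < len; i++)
        total += llabs((int64_t)data[i] - data[i - 1]);
    int64_t avg_diff = total / (len - 1);
    if (avg_diff == 0) avg_diff = 1;      /* avoid a zero threshold */

    int run = 0;
    for (int i = 1; i < len; i++) {
        int64_t diff = llabs((int64_t)data[i] - data[i - 1]);
        if (diff > MULTIPLE * avg_diff) {
            if (++run == RUN_LEN)
                return i - RUN_LEN + 1;   /* index where the run began */
        } else {
            run = 0;
        }
    }
    return -1;                            /* no spike found */
}
```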
Related
Can you explain this algorithm for calculating an average for noise?
I am working on embedded programming with code written by other people. This algorithm is used to calculate an average for a mic and an accelerometer:

sound_value_Avg = 0;
sound_value = 0;
memset((char *)soundRaw, 0x00, SOUND_COUNT*2);
for(int i2=0; i2 < SOUND_COUNT; i2++)
{
    soundRaw[i2] = analogRead(PIN_ANALOG_IN);
    if (i2 == 0)
    {
        sound_value_Avg = soundRaw[i2];
    }
    else
    {
        sound_value_Avg = (sound_value_Avg + soundRaw[i2]) / 2;
    }
}
sound_value = sound_value_Avg;

The accelerometer code is similar to this:

n1 = p1
(n2+p1)/2 = p2
(n3+p2)/2 = p3
(n4+p3)/2 = p4
...
avg(n1~nx) = px

It does not seem to be correct. Can someone explain why he used this algorithm? Is it a specific method for a sine graph, like noise or vibration?
It appears to be a flawed attempt at maintaining a cumulative mean. The error is in believing that:

An+1 = (An + sn) / 2

when in fact it should be:

An+1 = ((An * n) + s) / (n + 1)

However it is computationally simpler to maintain a running sum and generate an average in the usual manner:

S = S + s
An = S / n

It is possible that the intent was to avoid overflow when the sum grows large, but the attempt is mathematically flawed. To see how wrong this statement is, consider:

n    s    True Running Avg.    (An + sn) / 2
--------------------------------------------
1    20   20                   20
2    21   20.5                 20.25
3    22   21                   20.625

In this case however, nothing is done with the intermediate mean value, so you don't in fact need to maintain a running mean at all. You simply need to accumulate a running sum and calculate the average at the end. For example:

sum = 0 ;
sound_value = 0 ;
for( int i2 = 0; i2 < SOUND_COUNT; i2++ )
{
    soundRaw[i2] = analogRead( PIN_ANALOG_IN ) ;
    sum += soundRaw[i2] ;
}
sound_value = sum / SOUND_COUNT ;

In this you do need to make sure that the data type for sum can accommodate a value of the maximum analogRead() return multiplied by SOUND_COUNT.

However, you say that this is used for some sort of signal conditioning or processing of both a microphone and an accelerometer. These devices have rather dissimilar bandwidth and dynamics, and it seems rather unlikely that the same filter would suit both. Applying robust DSP techniques such as IIR or FIR filters with suitably calculated coefficients would make a great deal more sense. You'd also need a suitable fixed sample rate, which I am willing to bet is not achieved by simply reading the ADC in a loop with no specific timing.
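For reference, a single-pole IIR low-pass of the kind mentioned at the end can be sketched in fixed point like this; the shift amount and the names are illustrative, and the code assumes an arithmetic right shift for negative values (true on most MCU toolchains):

```c
#include <stdint.h>

#define IIR_SHIFT 3   /* alpha = 1/2^IIR_SHIFT = 1/8 */

static int32_t iir_state = 0;   /* filtered value scaled by 2^IIR_SHIFT */

/* y += alpha * (x - y): the state keeps IIR_SHIFT extra fraction
   bits so repeated integer rounding does not swallow the signal. */
int16_t iir_lowpass(int16_t sample)
{
    iir_state += (((int32_t)sample << IIR_SHIFT) - iir_state) >> IIR_SHIFT;
    return (int16_t)(iir_state >> IIR_SHIFT);
}
```

Feeding a constant input, the output converges to that constant (to within the rounding of the fraction bits) with a time constant of about 2^IIR_SHIFT samples.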
RMS calculation DC offset
I need to implement an RMS calculation of a sine wave in an MCU (microcontroller, resource constrained). The MCU lacks an FPU (floating point unit), so I would prefer to stay in the integer realm. Captures are discrete via a 10-bit ADC.

Looking for a solution, I found this great one here by Edgar Bonet: https://stackoverflow.com/a/28812301/8264292 . It seems to completely suit my needs, but I have some questions.

The input is 230 VAC mains, 50 Hz. It's transformed and offset by hardware means to become a 0-1 V (peak to peak) sine wave which I can capture with the ADC, getting 0-1023 readings. The hardware is calibrated so that a 260 VRMS (i.e. about -368:+368 peak to peak) input becomes 0-1 V peak output.

How can I "restore" the original wave's RMS value, provided I want to stay in the integer realm too? Units can vary; mV will do fine also.

My first guess was subtracting 512 from the input sample (DC offset) and later doing the "magic" shift as in Edgar Bonet's answer. But I've realized it's wrong, because the DC offset isn't fixed. Instead the signal is biased to start from 0 V, i.e. a 130 VAC input would produce a 0-500 mV peak-to-peak output (not 250-750 mV, which would have worked). For true RMS, to remove the DC offset I need to subtract the squared sum of samples from the sum of squares, as in the formula rms² = mean(x²) − mean(x)². So I've ended up with the following function:

#define INITIAL 512
#define SAMPLES 1024
#define MAX_V 368UL // Maximum input peak in V ( 260*sqrt(2) )
/* K is defined based on the equation, where 64 = 2^6,
 * i.e. 6 bits to add to the 10-bit ADC to make it 16-bit,
 * then doubled for the whole range from -peak to +peak */
#define K (MAX_V*64*2)

uint16_t rms_filter(uint16_t sample)
{
    static int16_t rms = INITIAL;
    static uint32_t sum_squares = 1UL * SAMPLES * INITIAL * INITIAL;
    static uint32_t sum = 1UL * SAMPLES * INITIAL;

    sum_squares -= sum_squares / SAMPLES;
    sum_squares += (uint32_t) sample * sample;
    sum -= sum / SAMPLES;
    sum += sample;
    if (rms == 0) rms = 1; /* do not divide by zero */
    rms = (rms + (((sum_squares / SAMPLES) - (sum/SAMPLES)*(sum/SAMPLES)) / rms)) / 2;
    return rms;
}

...
// Somewhere in a loop
getSample(&sample);
rms = rms_filter(sample);
...
// After getting at least N samples (SAMPLES * X?)
uint16_t vrms = (uint32_t)(rms*K) >> 16;
printf("Converted Vrms = %d V\r\n", vrms);

Does it look fine? Or am I doing something wrong?

How does the SAMPLES (window size?) number relate to F (50 Hz) and my ADC capture rate (samples per second)? I.e. how many real samples do I need to feed to rms_filter() before I can get a real RMS value, provided my capture speed is X sps? At the least, how do I evaluate the required minimum N of samples?
I did not test your code, but it looks to me like it should work fine.

Personally, I would not have implemented the function this way. I would instead have removed the DC part of the signal before trying to compute the RMS value. The DC part can be estimated by sending the raw signal through a low-pass filter. In pseudo-code this would be

rms = sqrt(low_pass(square(x - low_pass(x))))

whereas what you wrote is basically

rms = sqrt(low_pass(square(x)) - square(low_pass(x)))

It shouldn't really make much of a difference though. The first formula, however, spares you a multiplication. Also, by removing the DC component before computing the square, you end up multiplying smaller numbers, which may help in allocating bits for the fixed-point implementation. In any case, I recommend you test the filter on your computer with synthetic data before committing it to the MCU.

"How does the SAMPLES (window size?) number relate to F (50 Hz) and my ADC capture rate (samples per second)?"

The constant SAMPLES controls the cut-off frequency of the low-pass filters. This cut-off should be small enough to almost completely remove the 50 Hz part of the signal. On the other hand, if the mains supply is not completely stable, the quantity you are measuring will slowly vary with time, and you may want your cut-off to be high enough to capture those variations.

The transfer function of these single-pole low-pass filters is

H(z) = z / (SAMPLES * z + 1 − SAMPLES)

where z = exp(i 2π f / f₀), i is the imaginary unit, f is the signal frequency and f₀ is the sampling frequency. If f₀ ≫ f (which is desirable for good sampling), you can approximate this by the analog filter

H(s) = 1 / (1 + SAMPLES * s / f₀)

where s = i 2π f, and the cut-off frequency is f₀ / (2π * SAMPLES). The gain at f = 50 Hz is then

1 / sqrt(1 + (2π * SAMPLES * f/f₀)²)

The relevant parameter here is (SAMPLES * f/f₀), which is the number of periods of the 50 Hz signal that fit inside your sampling window. If you fit one period, you are letting about 15% of the signal through the filter; half as much if you fit two periods, etc.

You could get perfect rejection of the 50 Hz signal if you design a filter with a notch at that particular frequency. If you don't want to dig into digital filter design theory, the simplest such filter may be a simple moving average over a period of exactly 20 ms. This has a non-trivial cost in RAM though, as you have to keep a full 20 ms worth of samples in a circular buffer.
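The 20 ms moving-average "notch" suggested above might be sketched as follows; averaging over exactly one mains period nulls the 50 Hz component. N must equal sample_rate / 50, so the value 20 here is an illustrative assumption (a 1 kHz sample rate):

```c
#include <stdint.h>

#define N 20   /* one full 50 Hz period, assuming 1 kHz sampling */

/* Circular-buffer moving average over exactly one mains period:
   the 50 Hz component sums to zero across the window, leaving
   only the DC (slowly varying) part of the signal. */
uint16_t mains_notch(uint16_t sample)
{
    static uint16_t buf[N];
    static uint32_t sum = 0;
    static int idx = 0;

    sum -= buf[idx];       /* drop the sample from one period ago */
    buf[idx] = sample;
    sum += sample;
    idx = (idx + 1) % N;
    return (uint16_t)(sum / N);
}
```

Once the buffer has filled, a symmetric waveform around a DC level comes out as exactly that DC level.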
Configuring and limiting output of PI controller
I have implemented a simple PI controller; the code is as follows:

PI_controller()
{
    // handling input value and errors
    previous_error = current_error;
    current_error = 0 - input_value;

    // PI regulation
    P = current_error;     //P is proportional value
    I += previous_error;   //I is integral value

    output = Kp*P + Ki*I;  //Kp and Ki are coefficients
}

The input value is always between -π and +π. The output value must be between -4000 and +4000. My question is: how do I configure and (most importantly) limit the PI controller properly?
Too much for a comment, but not a definitive answer. What is "a simple PI controller"? And "how long is a piece of string"?

I don't see why you (effectively) code

P = (current_error = 0 - input_value);

which simply negates the error of -π to π. You then aggregate the error with I += previous_error; but haven't stated the cumulative error bounds, and then calculate output = Kp*P + Ki*I; which must satisfy -4000 <= output <= 4000.

So you are looking for values of Kp and Ki that keep you within bounds, or perhaps don't keep you within bounds except in average conditions. I suggest an empirical solution: try a series of runs, filing the results, stepping the values of Kp and Ki by 5 steps each, first from extreme negative to positive values. Limit the output as you stated, counting the number of results that break the limit. Next, halve the range of one of Kp and Ki and make a further informed choice as to which one to limit. And so on: "divide and conquer".

As to your requirement "how to limit the PI controller properly" (are you sure that 4000 is the limit and not 4096 or even 4095?):

if (output < -4000) output = -4000;
if (output > 4000) output = 4000;
To configure your Kp and Ki you really should analyze the frequency response of your system and design your PI controller to give the desired response.

To simply limit the output, decide if you need to freeze the integrator or just limit the immediate output. I'd recommend freezing the integrator:

I_tmp = previous_error + I;
output_tmp = Kp*P + Ki*I_tmp;
if( output_tmp < -4000 )
{
    output = -4000;
}
else if( output_tmp > 4000 )
{
    output = 4000;
}
else
{
    I = I_tmp;
    output = output_tmp;
}

That's not a super elegant, vetted algorithm, but it gives you an idea.
If I understand your question correctly, you are asking about anti-windup for your integrator. There are more clever ways to do it, but a simple

if ( abs(I) < x )
{
    I += previous_error;
}

will prevent windup of the integrator. Then you need to figure out x, Kp and Ki so that

abs(x*Ki) + abs(3.14*Kp) < 4000

[edit] Of course, as macduff states, you first need to analyse your system and choose the correct Ki and Kp; x is the only really free variable in the above equation.
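A self-contained sketch combining the ideas above (conditional integration, i.e. the integrator is frozen while the output is saturated, plus output clamping) might look like this. The Kp and Ki values here are illustrative placeholders, not tuned gains:

```c
#define OUT_MIN (-4000.0f)
#define OUT_MAX ( 4000.0f)

typedef struct {
    float kp, ki;
    float integral;
} pi_t;

/* One PI step with anti-windup: the integral state is only
   committed when the resulting output is within limits. */
float pi_update(pi_t *pi, float error)
{
    float i_tmp  = pi->integral + error;
    float output = pi->kp * error + pi->ki * i_tmp;

    if (output > OUT_MAX) {
        output = OUT_MAX;          /* clamp; integrator frozen */
    } else if (output < OUT_MIN) {
        output = OUT_MIN;          /* clamp; integrator frozen */
    } else {
        pi->integral = i_tmp;      /* only integrate when unsaturated */
    }
    return output;
}
```

With a persistent error of π and, say, Kp = 1000 and Ki = 10, the output ramps up, hits the +4000 clamp, and the integral stops growing instead of winding up.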
Division in C returns 0... sometimes
I'm trying to make a simple RPM meter using an ATmega328. I have an encoder on the motor which gives 306 interrupts per rotation (the motor encoder has 3 spokes which interrupt on rising and falling edges, and the motor is geared 51:1, so 6 transitions * 51 = 306 interrupts per wheel rotation). I am using a timer interrupting every 1 ms; however, in the interrupt it is set to recalculate every 1 second.

There seem to be 2 problems:
1) RPM never goes below 60; instead it's either 0 or RPM >= 60.
2) Reducing the time interval causes it to always be 0 (as far as I can tell).

Here is the code:

int main(void)
{
    while(1)
    {
        int temprpm = leftRPM;
        printf("Revs: %d \n", temprpm);
        _delay_ms(50);
    };
    return 0;
}

ISR (INT0_vect)
{
    ticksM1++;
}

ISR(TIMER0_COMPA_vect)
{
    counter++;
    if(counter == 1000)
    {
        int tempticks = ticksM1;
        leftRPM = ((tempticks - lastM1)/306)*1*60;
        lastM1 = tempticks;
        counter = 0;
    }
}

Anything not declared in that code is declared globally as an int; ticksM1 is also volatile. The macros are AVR macros for the interrupts. The multiply by 1 in leftRPM represents time; ideally I want to use 1 ms without the if statement, so the 1 would then be 1000.
For a speed between 60 and 120 RPM the result of ((tempticks - lastM1)/306) will be 1, and below 60 RPM it will be zero. Your output will always be a multiple of 60.

The first improvement I would suggest is not to perform expensive arithmetic in the ISR. It is unnecessary: store the speed in raw counts-per-second, and convert to RPM only for display. Second, perform the multiply before the divide to avoid unnecessarily discarding information. Then, for example, at 60 RPM (306 CPS) you have (306 * 60) / 306 == 60. Even as low as 1 RPM you get (6 * 60) / 306 == 1. In fact this gives you a potential resolution of approximately 0.2 RPM as opposed to 60 RPM!

To allow the parameters to be easily maintained, I recommend using symbolic constants rather than magic numbers:

#define ENCODER_COUNTS_PER_REV 306
#define MILLISEC_PER_SAMPLE 1000
#define SAMPLES_PER_MINUTE ((60 * 1000) / MILLISEC_PER_SAMPLE)

ISR(TIMER0_COMPA_vect)
{
    counter++;
    if(counter == MILLISEC_PER_SAMPLE)
    {
        int tempticks = ticksM1;
        leftCPS = tempticks - lastM1 ;
        lastM1 = tempticks;
        counter = 0;
    }
}

Then in main():

int temprpm = (leftCPS * SAMPLES_PER_MINUTE) / ENCODER_COUNTS_PER_REV ;

If you want better than 1 RPM resolution you might consider

int temprpm_x10 = (leftCPS * SAMPLES_PER_MINUTE) / (ENCODER_COUNTS_PER_REV / 10) ;

then display:

printf( "%d.%d", temprpm_x10 / 10, temprpm_x10 % 10 ) ;

Given the potential resolution of 0.2 RPM by this method, higher-resolution display is unnecessary, though you could use a moving average to improve resolution at the expense of some display lag.
Alternatively, now that the calculation of RPM is no longer in the ISR, you might afford a floating point operation:

float temprpm = ((float)leftCPS * (float)SAMPLES_PER_MINUTE ) / (float)ENCODER_COUNTS_PER_REV ;
printf( "%f", temprpm ) ;

Another potential issue is that ticksM1++, tempticks = ticksM1, and the reading of leftRPM (or leftCPS in my solution) are not atomic operations, and can result in an incorrect value being read if interrupt nesting is supported (and even if it is not, in the case of the access from outside the interrupt context). If the maximum rate will be less than 256 CPS (42 RPM) then you might get away with an atomic 8-bit counter; you can alternatively reduce your sampling period to ensure the count is always less than 256. Failing that, the simplest solution is to disable interrupts while reading or updating non-atomic variables shared between interrupt and thread contexts.
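The "disable interrupts around non-atomic reads" advice can be sketched as below. On AVR the enable/disable calls would be cli()/sei() (or ATOMIC_BLOCK from <util/atomic.h>); here they are stubbed out so the pattern compiles on a host, and all names are illustrative:

```c
#include <stdint.h>

static volatile uint16_t leftCPS;   /* written by the timer ISR */

static inline void interrupts_off(void) { /* cli() on AVR */ }
static inline void interrupts_on(void)  { /* sei() on AVR */ }

/* A 16-bit read is two bus cycles on an 8-bit AVR, so an ISR could
   fire between the two halves; bracketing the read with a critical
   section guarantees a consistent snapshot. */
uint16_t read_cps(void)
{
    interrupts_off();
    uint16_t copy = leftCPS;
    interrupts_on();
    return copy;
}
```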
It's integer division. You would probably get better results with something like this:

leftRPM = ((tempticks - lastM1)/6);
Identifying a trend in C - Micro controller sampling
I'm working on an MC68HC11 microcontroller and have an analogue voltage signal going in that I have sampled. The scenario is a weighing machine: the large peaks are when the object hits the sensor, then it stabilises (which are the samples I want), and then peaks again before the object rolls off.

The problem I'm having is figuring out a way for the program to detect this stable point and average it to produce an overall weight. One way I have thought of is comparing previous values to see if there is not a large difference between them, but I haven't had any success.

Below is the C code that I am using:

#include <stdio.h>
#include <stdarg.h>
#include <iof1.h>

void main(void)
{
    /* PORTA, DDRA, DDRG etc... are LEDs and switch ports */
    unsigned char *paddr, *adctl, *adr1;
    unsigned short i = 0;
    unsigned short k = 0;
    unsigned char switched = 1;  /* is char the smallest data type? */
    unsigned char data[2000];

    DDRA = 0x00;  /* All in */
    DDRG = 0xff;

    adctl = (unsigned char*) 0x30;
    adr1 = (unsigned char*) 0x31;
    *adctl = 0x20;  /* single continuous scan */

    while(1)
    {
        if(*adr1 > 40)
        {
            if(PORTA == 128)  /* Debugging switch */
            {
                PORTG = 1;
            }
            else
            {
                PORTG = 0;
            }

            if(i < 2000)
            {
                while(((*adctl) & 0x80) == 0x00);
                {
                    data[i] = *adr1;
                }
                /* if(i > 10 && (data[(i-10)] - data[i]) < 20) */
                i++;
            }

            if(PORTA == switched)
            {
                PORTG = 31;
                /* Print a delimiter so teemtalk can send to excel */
                for(k=0;k<2000;k++)
                {
                    printf("%d,",data[k]);
                }
                if(switched == 1)  /* bitwise manipulation more efficient? */
                {
                    switched = 0;
                }
                else
                {
                    switched = 1;
                }
                PORTG = 0;
            }

            if(i >= 2000)
            {
                i = 0;
            }
        }
    }
}

Look forward to hearing any suggestions :)

(The graph below shows how these values look; the red box is the area I would like to identify.)
As your sample sequence has glitches (short-lived transients), try to improve the hardware, i.e. change the layout, add decoupling, add filtering etc.

If that approach fails, then try a median filter [1] of, say, five places, which takes the last five samples, sorts them and outputs the middle one, so two samples of the transient have no effect on its output (seven places ... three transient samples).

Then apply a computationally efficient exponential-averaging low-pass filter [2]:

y(n) = y(n-1) + alpha * [x(n) - y(n-1)]

choosing alpha (1/2^n, so the division reduces to right shifts) to yield a time constant [3] of less than the underlying response (~50 samples), but still filter out the noise. Increasing the effective fractional bits will avoid the quantising issues.

With this improved sample sequence, thresholds and cycle counts can be applied to detect quiescent durations. Additionally, if the end of the quiescent period is always followed by a large, abrupt change, then using a sample-delay "array" enables detection of the abrupt change while still having available the last of the quiescent samples for logging.

[1] http://en.wikipedia.org/wiki/Median_filter
[2] http://www.dsprelated.com/showarticle/72.php
[3] http://en.wikipedia.org/wiki/Time_constant

Note: adding code for the above filtering operations will lower the maximum possible sample rate, but printf can be substituted for something faster.
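The five-place median filter described above might be sketched like this; the window length and names are illustrative:

```c
#include <stdint.h>
#include <string.h>

#define MED_N 5   /* odd window: up to two outliers are rejected */

/* Keeps the last five samples, sorts a copy, and returns the middle
   value, so up to two transient samples cannot affect the output. */
uint8_t median5(uint8_t sample)
{
    static uint8_t hist[MED_N];
    static int idx = 0;

    hist[idx] = sample;
    idx = (idx + 1) % MED_N;

    uint8_t tmp[MED_N];
    memcpy(tmp, hist, sizeof tmp);

    /* insertion sort - cheap for five elements */
    for (int i = 1; i < MED_N; i++) {
        uint8_t v = tmp[i];
        int j = i;
        while (j > 0 && tmp[j - 1] > v) {
            tmp[j] = tmp[j - 1];
            j--;
        }
        tmp[j] = v;
    }
    return tmp[MED_N / 2];
}
```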
Continuously store the current value and the delta from the previous value. Note when the delta is decreasing, as the start of weight application to the scale. Note when the delta is increasing, as the end of weight application to the scale. Take the X number of values with the small delta and average them.

BTW, I'm sure this has been done 1M times before; I'm thinking that a search for "scale PID" or "weight PID" would find a lot of information.
Don't forget to use the _delay_ms(XX) function somewhere between readings if you will compare with the previous value. The difference at each step will obviously be small if the code loops continuously.
Looking at your nice graphs, I would say you should look only for the falling edge; it is much more consistent than the leading edge. In other words: let the samples accumulate, calculate the running average all the time with a predefined window size, remember the deviation of the previous values just for reference, and check for a large negative bump in your values (like an absolute value ten times smaller than the current running average); your running average is your value. You could go back a little bit (disregarding the last few values in your average and recalculating) to compensate for the small positive bump visible in your picture before each negative bump.

No need for heavy maths here; you could not model the reality better than your picture has shown. Just make sure that your code detects the end of each and every sample. You have to sample fast enough to make sure no negative bump is missed (or you will have a big timing error in your data averaging). And you don't need such large arrays; a running average based on a smaller window size is better, with smaller residual error in your case when you detect the negative bump.
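The falling-edge idea above can be sketched as follows: maintain a running average over a fixed window and flag the moment a new sample drops far below it. The window size, the "ten times smaller" ratio, and the function name are illustrative assumptions:

```c
#include <stdint.h>
#include <stdbool.h>

#define WIN 8   /* running-average window size (illustrative) */

/* Returns true at the falling edge (sample drops below one tenth of
   the running average); *stable_value then holds the quiescent
   average, i.e. the weight reading. */
bool falling_edge(uint16_t sample, uint16_t *stable_value)
{
    static uint16_t buf[WIN];
    static uint32_t sum = 0;
    static int idx = 0, filled = 0;

    if (filled) {
        uint16_t avg = (uint16_t)(sum / WIN);
        if (sample < avg / 10) {
            *stable_value = avg;   /* report before the edge corrupts it */
            return true;
        }
    }

    /* update the running window */
    sum -= buf[idx];
    buf[idx] = sample;
    sum += sample;
    idx = (idx + 1) % WIN;
    if (idx == 0) filled = 1;
    return false;
}
```

Note the average is reported before the falling-edge sample is added to the window, so the edge itself does not pull the reading down.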