I need to detect an accelerometer event when the user hits the device on a table or floor.
The device has an STM32 low-power microcontroller running at 8 MHz and an LIS3DH accelerometer.
The accelerometer operates in the ±2 g range; samples are signed 16-bit integers.
I have collected accelerometer data for such an event by reading from the accelerometer at 50 Hz, and have attached graphs of the x, y and z samples. The "hit" event is clearly visible in the graphs; red dots on the time axis mark the point when the event occurred. But I have no idea how to detect such an event in code.
The DC offset of all three axes changes with the orientation of the device.
Again, this graph was sampled at 100 Hz, shows the X axis only, and contains two hit events. Such spikes happen simultaneously on all three axes, but their amplitude and direction may vary. The time scale is zoomed in compared to the other graphs. Sampling at 100 Hz is not possible in the actual application code.
Orientation changes and movements in the user's hand cause a lot of signal variation. Below is a graph of the Y axis with hand movement, an orientation change and a hit event. Such changes will happen across all axes.
As Martin James suggested, you should measure the difference in acceleration between the current and the last tick. You need to do this on each axis because, from your data, some of the hits don't affect every axis. One might suppose that you could use the total acceleration (the sum of squares), but I don't think this will work.
To measure the difference, you will need to keep the last reading in a variable. You might need the previous two readings, depending on how fast the sampling rate is; if the rate is too high, then the differences may always be small. You should also keep a count of ticks since the last hit.
Then, when taking the current reading, compare the current readings with the previous readings. If the difference is above a threshold on any axis, mark it as a hit and reset the time_since_hit_count -- unless a hit happened recently. You want to avoid counting the same hit many times as the acceleration changes during a single hit. Your data suggests a threshold of around 5000.
If the difference is not above the threshold on any axis, increment the time_since_hit_count and replace the stored readings with the current ones.
(If you are storing the previous two hits, compare against each, and move the stored values appropriately.)
From your data, some hits take 3 ticks to occur, so you could discount hits if the time_since_hit_count is less than 5, say. That's 100 ms per hit. Depending on the application, that might be okay. A drum stick could bounce faster than that, but a finger probably not.
You'll probably have to experiment with the acc threshold and the hit count threshold as you collect data.
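The scheme above can be sketched as follows (Python for readability, though the target would be C on the STM32). `make_detector` is an illustrative name; the 5000 threshold and the 5-tick debounce are the values suggested above, and both need tuning against real data. For brevity this sketch keeps only one previous reading and always replaces it:

```python
# Difference-threshold hit detector, per the algorithm described above.
# THRESHOLD and MIN_TICKS_BETWEEN_HITS are starting points to tune.

THRESHOLD = 5000            # per-axis jump that counts as a hit
MIN_TICKS_BETWEEN_HITS = 5  # ~100 ms at 50 Hz, debounces one physical hit

def make_detector():
    prev = None
    ticks_since_hit = MIN_TICKS_BETWEEN_HITS

    def feed(sample):
        """sample is an (x, y, z) tuple of signed 16-bit readings.
        Returns True when a new hit is detected on this tick."""
        nonlocal prev, ticks_since_hit
        hit = False
        if prev is not None:
            # largest per-axis jump since the last tick
            jump = max(abs(s - p) for s, p in zip(sample, prev))
            if jump > THRESHOLD and ticks_since_hit >= MIN_TICKS_BETWEEN_HITS:
                hit = True
                ticks_since_hit = 0
            else:
                ticks_since_hit += 1
        prev = sample
        return hit

    return feed
```

On the microcontroller this is a handful of integer subtractions and compares per tick, so it fits comfortably in an 8 MHz budget.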
How should one choose the sampled input for AWG so that it gives a waveform similar to the ideal waveform?
I have a data set of a voltage signal sampled every 1 ns, which I enter as input to the AWG function; it resamples the data at 0.05 ns and produces a waveform. There is a distortion in the AWG waveform with respect to the ideal one. For the AWG, Vout(t) = Σ_i V_i · h(t − i·Δt), where h is the impulse response function of the instrument.
It is given in the question that it is possible to correct for the distortion by transforming input Vi in a special way.
I have a setup with a BeagleBone Black which communicates with its I²C slaves every second and reads data from them. Sometimes the I²C readout fails, though, and I want to gather statistics about these failures.
I would like to implement an algorithm which displays the percentage of successful communications over the last 5 minutes (up to 24 hours) and updates that value constantly. If I implemented that 'normally', with an array storing success/failure for every second, it would waste a lot of RAM and CPU for a minor feature (especially for the 24-hour statistics).
Does someone know a good way to do that, or can anyone point me in the right direction?
Why don't you just implement a low-pass filter? For every successful transfer you push in a 1, for every failed one a 0; the result is a number between 0 and 1. Assuming that your transfers happen periodically, this works well -- you just have to adjust the cutoff frequency of that filter to your desired "averaging duration".
However, I can't follow your RAM argument: assuming you store one byte representing success or failure per transfer, which you say happens every second, you end up with 86400 bytes per day -- about 84 KB/day is really negligible.
EDIT Cutoff frequency is something from signal theory and describes the highest or lowest frequency that passes a low or high pass filter.
Implementing a low-pass filter is trivial; in Python it would look something like this:
new_val = 1.0  # init: assume no failed transfers yet
alpha = 0.001
while True:
    old_val = new_val
    success = do_transfer_and_return_1_on_success_or_0_on_failure()
    new_val = alpha * success + (1 - alpha) * old_val
That's a single-tap IIR (infinite impulse response) filter; single tap because there's only one alpha and thus, only one number that is stored as state.
EDIT2: the value of alpha defines the behaviour of this filter: the closer it is to 0, the longer the averaging window; the closer it is to 1, the faster the output reacts to the latest transfer.
EDIT3: you can use a filter design tool to give you the right alpha; just set your low pass filter's cutoff frequency to something like 0.5/integrationLengthInSamples, select an order of 0 for the IIR and use an elliptic design method (most tools default to butterworth, but 0 order butterworths don't do a thing).
I'd use scipy and convert the resulting (b,a) tuple (a will be 1, here) to the correct form for this feedback form.
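If running a filter-design tool feels like overkill, a single-pole low-pass also has a simple closed-form coefficient. This is a sketch based on the standard discretized-RC approximation, not scipy output; `alpha_for_cutoff` is a made-up helper name:

```python
import math

def alpha_for_cutoff(cutoff_hz, sample_rate_hz):
    """Single-pole IIR coefficient for a given -3 dB cutoff.
    A common discretization of an RC low-pass; an approximation,
    not a substitute for a proper filter-design tool."""
    return 1.0 - math.exp(-2.0 * math.pi * cutoff_hz / sample_rate_hz)

# One transfer per second (fs = 1 Hz); averaging over roughly
# 5 minutes suggests a cutoff near 0.5/300 Hz, as described above.
a = alpha_for_cutoff(0.5 / 300, 1.0)
```

The resulting alpha (about 0.01 here) then drops straight into the feedback form shown earlier.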
UPDATE In light of the comment by the OP 'determine a trend of which devices are failing' I would recommend the geometric average that Marcus Müller ꕺꕺ put forward.
ACCURATE METHOD
The method below is aimed at obtaining 'well defined' statistics for performance over time that are also useful for 'after the fact' analysis.
Notice that the geometric average 'looks back' over recent messages rather than over a fixed time period.
Maintain a rolling array of 24*60/5 = 288 'prior success rates' (SR[i] with i=-1, -2,...,-288) each representing a 5 minute interval in the preceding 24 hours.
That will consume about 2.3 KB if the elements are 64-bit doubles.
To 'effect' constant updating use an Estimated 'Current' Success Rate as follows:
ECSR = (t*S/M+(300-t)*SR[-1])/300
Where S and M are the counts of successes and messages in the current (partially complete) period, and SR[-1] is the previous (now complete) bucket.
t is the number of seconds expired of the current bucket.
NB: At start-up, before the first bucket completes, there is no SR[-1] to blend with; just use the rate so far, S/M.
In essence the approximation assumes the error rate was steady over the preceding 5 - 10 minutes.
To 'effect' a 24-hour look-back, you can either 'shuffle' the data down (by copy or memcpy()) at the end of each 5-minute interval, or implement a circular array by keeping track of the current bucket index.
NB: For many management/diagnostic purposes intervals of 15 minutes are often entirely adequate. You might want to make the 'grain' configurable.
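A sketch of the bucketed scheme above, using the circular-array variant; `SuccessStats` and its field names are illustrative, and one transfer per second is assumed:

```python
# Circular array of 288 five-minute buckets of prior success rates,
# plus the blended 'current' estimate ECSR described above.

BUCKET_SECONDS = 300
NUM_BUCKETS = 288  # 24 h / 5 min

class SuccessStats:
    def __init__(self):
        self.sr = [None] * NUM_BUCKETS  # prior success rates, circular
        self.idx = 0   # current bucket index
        self.s = 0     # successes in the current bucket
        self.m = 0     # messages in the current bucket
        self.t = 0     # seconds elapsed in the current bucket

    def record(self, success):
        """Call once per transfer (one per second assumed)."""
        self.m += 1
        self.s += 1 if success else 0
        self.t += 1
        if self.t >= BUCKET_SECONDS:
            # close the bucket and advance the circular index
            self.sr[self.idx] = self.s / self.m
            self.idx = (self.idx + 1) % NUM_BUCKETS
            self.s = self.m = self.t = 0

    def ecsr(self):
        """Estimated 'current' success rate: blend the partial bucket
        with the previous complete one, ECSR = (t*cur + (300-t)*prev)/300."""
        cur = self.s / self.m if self.m else 1.0
        prev = self.sr[(self.idx - 1) % NUM_BUCKETS]
        if prev is None:  # start-up: no complete bucket yet
            return cur
        return (self.t * cur + (BUCKET_SECONDS - self.t) * prev) / BUCKET_SECONDS
```

The full array is available afterwards for 'after the fact' analysis of the preceding 24 hours.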
I would like to know how noise can be removed from data -- say, radio data in an array of rows and columns, with each data point representing the intensity of radiation at a given frequency and time. The array can contain radio bursts, but a lot of fixed-frequency radio noise (RFI, radio frequency interference) is also present. How can such noise be removed so that only the burst remains?
I don't mean to be rude, but this question isn't clear at all. Please sharpen it up.
The normal way to remove noise is first to define it exactly and then filter it out. Usually this is done in the frequency domain. For example, if you know the normalized power spectrum P(f) of the noise, build a filter with response
e/(e + P(f))
where e<1 is an attenuation factor.
You can implement the filter digitally using FFT or a convolution kernel.
When you don't know the spectrum of the noise or when it's white, then just use the inverse of the signal band.
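As a sketch of the FFT route, the response e/(e + P(f)) can be applied directly in the frequency domain. `spectral_filter` is an illustrative name, and the normalized noise power spectrum is assumed to be known and sampled on the same frequency grid as the FFT of the signal:

```python
import numpy as np

def spectral_filter(x, noise_psd, e=0.1):
    """Attenuate frequencies where the normalized noise power
    spectrum P(f) is large, using the response e / (e + P(f)).
    noise_psd must match the length of the rfft of x.
    e < 1 sets how aggressively noise bands are suppressed."""
    X = np.fft.rfft(x)
    H = e / (e + noise_psd)
    return np.fft.irfft(H * X, n=len(x))
```

Where P(f) is zero the response is 1 (signal passes untouched); where P(f) dominates, the response falls toward e/P(f).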
I am making a finger plethysmograph(FP) using an LED and a receiver. The sensor produces an analog pulse waveform that is filtered, amplified and fed into a microcontroller input with a range of 3.3-0V. This signal is converted into its digital form.
Sampling rate is 8 MHz, processor frequency is 26 MHz, precision is 10 or 8 bits.
I am having problems coming up with a robust method for peak detection. I want to be able to detect heart pulses from the finger plethysmograph, and I have managed to produce an accurate heart-rate measurement using a threshold method. However, the FP is extremely sensitive to movement: the offset of the signal changes as the device moves, and while the peaks still show up, they ride on a varying voltage offset.
Therefore, I am proposing a peak detection method that uses the slope to detect peaks. For example, at a peak, the slope before and after the maximum point will be positive and negative respectively.
How feasible do you think this method is? Is there an easier way to perform peak detection using a microcontroller?
You can still get false peak detections when the device is moved. This will be present whether you are timing average peak duration or applying an FFT (fast Fourier transform).
With an FFT you should be able to ignore peaks outside the range of frequencies you are considering (ie those < 30 bpm and > 300 bpm, say).
As Kenny suggests, 8MHz might overwhelm a 26MHz chip. Any particular reason for such a high sampling rate?
Like some of the comments, I would also recommend lowering your sample rate since you only care about pulse (i.e. heart rate) for now. So, assuming you're going to be looking at resting heart rate, you'll be in the sub-1Hz to 2Hz range (60 BPM = 1Hz), depending on subject health, age, etc.
In order to isolate the frequency range of interest, I would also recommend a simple, low-order digital filter. If you have access to Matlab, you can play around with Digital Filter Design using its Filter Design and Analysis Tool (Introduction to the FDATool). As you'll find out, Digital Filtering (wiki) is not computationally expensive since it is a matter of multiplication and addition.
To answer the detection part of your question, YES, it is certainly feasible to implement peak detection on the plethysmograph waveform within a microcontroller. Taking your example, a slope-based peak detection algorithm would operate on your waveform data, searching for changes in slope, essentially where the slope waveform crosses zero.
Here are a few other things to consider about your application:
Calculating slope can have a "spread" (i.e. do you find the slope between adjacent samples, or samples which are a few samples apart?)
What if your peak detection algorithm locates peaks that are too close together, or too far apart, in a physiological sense?
A Pulse Oximeter (wiki) often utilizes LEDs which emit Red and Infrared light. How does the frequency of the LED affect the plethysmograph? (HINT: It may not be significant, but I believe you'll find one wavelength to yield greater amplitudes in your frequency range of interest.)
Of course you'll find a variety of potential algorithms if you do a literature search but I think slope-based detection is great for its simplicity. Hope it helps.
If you can detect the period using zero crossings, even at 10x oversampling of a 10 Hz signal, you can use a line fit of the quick-n-dirty edge to find the exact period, then subtract the new wave's samples in that period from the previous wave's to get a DC offset. The period measurement will have the precision of your sample rate. Doing operations on the time- and amplitude-normalized data will be much easier.
This idea is computationally light compared to FFT, which still needs additional data processing.
I'm doing a project for a course, and my goal is to implement Proportional-Integral (PI) control on a robot so that it tracks a line using 12 simple phototransistors. I've been reading many PID tutorials but I'm still confused. Can someone help me get started from what I have been thinking?
I thought I should assign each state of the sensors a binary value and then use that in the PI equation for the error... can someone throw some light on this?
Assuming the photo transistors are all in a line parallel to the front edge of your 'car', perpendicular to the edge of the track, and individually numbered from 0 - 11...
You want your car's center to follow the line. Sensors #5 and #6 should straddle the line, and can therefore be used as fine-tuning adjustment. The sensors at the extreme ends (#0 and #11) should have the highest impact on your steering.
With those two bits of info, you should be able to set appropriate weights (multiplication factors) for your PI control to instruct your car to turn left a little, when sensors #7, #8 see the line, or turn left a lot when sensors #9, #10, #11 see the line. The extreme sensors may also affect the speed of your car.
Some things to consider: When implementing a front-wheel steering vehicle, it is often better to mount your sensor strip behind the front wheels. Also, rear-wheel steering vehicles can adjust to sharp corners more quickly, but are less stable at high-speeds.
I'd convert the 12 sensors into a number from 1 to 12, then try to target a value of 6 in my PID, and use the output to drive the wheels. Maybe normalize it so that a positive number means steer more to the right and a negative number means steer more to the left.
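Putting the two answers together, here is a sketch of a PI loop driven by weighted sensor positions; the gains, the weight values and the `make_pi_controller` name are all placeholders to tune on the actual robot:

```python
# PI line follower: each sensor gets a position weight (centred so
# sensors #5 and #6 straddle zero), the weighted average of active
# sensors is the error, and a PI loop turns that into steering.

KP = 1.0  # proportional gain (assumed, tune on the robot)
KI = 0.1  # integral gain (assumed, tune on the robot)
WEIGHTS = [i - 5.5 for i in range(12)]  # -5.5 ... +5.5, centre = 0

def make_pi_controller():
    integral = 0.0

    def step(sensors):
        """sensors: list of 12 booleans, True where the line is seen.
        Returns a steering value: positive = turn right, negative = left."""
        nonlocal integral
        active = [w for w, s in zip(WEIGHTS, sensors) if s]
        if not active:
            return 0.0  # line lost; a real robot needs a recovery strategy
        error = sum(active) / len(active)
        integral += error
        return KP * error + KI * integral

    return step
```

With the outermost sensors carrying the largest weights, they automatically have the biggest impact on steering, matching the advice above; an integral wind-up limit is worth adding once the basic loop works.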