Identifying a trend in C - Microcontroller sampling

I'm working on an MC68HC11 microcontroller and have an analogue voltage signal coming in that I have sampled. The scenario is a weighing machine: the large peaks are when the object hits the sensor, then it stabilises (these are the samples I want), and then it peaks again before the object rolls off.
The problem I'm having is figuring out a way for the program to detect this stable point and average it to produce an overall weight, but I can't figure out how :/. One way I have thought of is comparing previous values to see if there is not a large difference between them, but I haven't had any success. Below is the C code that I am using:
#include <stdio.h>
#include <stdarg.h>
#include <iof1.h>

void main(void)
{
    /* PORTA, DDRA, DDRG etc. are LED and switch ports */
    unsigned char *paddr, *adctl, *adr1;
    unsigned short i = 0;
    unsigned short k = 0;
    unsigned char switched = 1; /* is char the smallest data type? */
    unsigned char data[2000];

    DDRA = 0x00; /* All in */
    DDRG = 0xff;

    adctl = (unsigned char*) 0x30;
    adr1 = (unsigned char*) 0x31;
    *adctl = 0x20; /* single continuous scan */

    while (1)
    {
        if (*adr1 > 40)
        {
            if (PORTA == 128) /* Debugging switch */
            {
                PORTG = 1;
            }
            else
            {
                PORTG = 0;
            }

            if (i < 2000)
            {
                while (((*adctl) & 0x80) == 0x00)
                    ; /* wait for the conversion-complete flag */
                data[i] = *adr1;
                /* if(i > 10 && (data[(i-10)] - data[i]) < 20) */
                i++;
            }

            if (PORTA == switched)
            {
                PORTG = 31;
                /* Print a delimiter so TeemTalk can send to Excel */
                for (k = 0; k < 2000; k++)
                {
                    printf("%d,", data[k]);
                }
                if (switched == 1) /* bitwise manipulation more efficient? */
                {
                    switched = 0;
                }
                else
                {
                    switched = 1;
                }
                PORTG = 0;
            }

            if (i >= 2000)
            {
                i = 0;
            }
        }
    }
}
Look forward to hearing any suggestions :)
(The graph below shows how these values look; the red box is the area I would like to identify.)

As your sample sequence has glitches (short-lived transients), try to improve the hardware first, i.e. change the layout, add decoupling, add filtering, etc.
If that approach fails, then try a median filter [1] of, say, five places: it takes the last five samples, sorts them, and outputs the middle one, so two transient samples have no effect on its output (seven places would tolerate three transient samples).
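For illustration, a 5-place median filter might look like this; a minimal sketch assuming 8-bit samples (the function name and ring-buffer layout are mine, not from the answer):

#include <string.h>

#define MEDIAN_LEN 5

/* Push a sample into a sliding window and return the median of the
   last MEDIAN_LEN samples. The window starts zero-filled, so the
   output is only meaningful after MEDIAN_LEN samples have been fed. */
unsigned char median5(unsigned char sample)
{
    static unsigned char window[MEDIAN_LEN]; /* raw samples, ring buffer */
    static unsigned char pos = 0;
    unsigned char sorted[MEDIAN_LEN];
    unsigned char i, j, t;

    window[pos] = sample;
    pos = (pos + 1) % MEDIAN_LEN;

    memcpy(sorted, window, sizeof sorted);
    for (i = 1; i < MEDIAN_LEN; i++) { /* insertion sort, cheap for 5 */
        t = sorted[i];
        for (j = i; j > 0 && sorted[j - 1] > t; j--)
            sorted[j] = sorted[j - 1];
        sorted[j] = t;
    }
    return sorted[MEDIAN_LEN / 2]; /* the middle element */
}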
Then apply a computationally efficient exponential-averaging low-pass filter [2]:
y(n) = y(n–1) + alpha[x(n) – y(n–1)]
choosing alpha (1/2^n, so the division can be done with right shifts) to yield a time constant [3] shorter than the underlying response (~50 samples) but long enough to still filter out the noise. Increasing the effective number of fractional bits will avoid quantising issues.
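For example, a fixed-point sketch of that filter, assuming alpha = 1/2^ALPHA_SHIFT and a few extra fractional bits kept in the state (all names here are placeholders):

#include <stdint.h>

#define ALPHA_SHIFT 4 /* alpha = 1/16; the division is a right shift */
#define FRAC_BITS   6 /* extra fractional bits against quantising error */

static int32_t y_state; /* y(n-1), scaled by 2^FRAC_BITS */

/* y(n) = y(n-1) + alpha * (x(n) - y(n-1)) */
uint16_t ema_update(uint16_t x)
{
    int32_t xs = (int32_t)x << FRAC_BITS;
    /* assumes arithmetic right shift of the signed difference */
    y_state += (xs - y_state) >> ALPHA_SHIFT;
    return (uint16_t)(y_state >> FRAC_BITS);
}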
With this improved sample sequence, thresholds and cycle counts can be applied to detect quiescent durations.
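For illustration, such a quiescence detector could be as small as this (both thresholds are made-up values that would need tuning against real data):

#define STABLE_DELTA 2  /* max step between "stable" samples (guess) */
#define STABLE_COUNT 50 /* samples needed to call it quiescent (guess) */

/* Feed filtered samples in one at a time; returns 1 once the signal
   has been stable for STABLE_COUNT consecutive samples. */
int is_quiescent(unsigned char filtered)
{
    static unsigned char prev;
    static unsigned short run = 0;
    unsigned char diff = (filtered > prev) ? filtered - prev : prev - filtered;

    prev = filtered;
    if (diff <= STABLE_DELTA) {
        if (run < STABLE_COUNT) run++;
    } else {
        run = 0; /* any large step restarts the stability count */
    }
    return run >= STABLE_COUNT;
}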
Additionally, if the end of the quiescent period is always followed by a large, abrupt change, then using a sample delay "array" enables detection of the abrupt change while still having the last of the quiescent samples available for logging.
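A sample delay "array" is just a small ring buffer; when the abrupt change is detected, the oldest entry still holds a quiescent sample. A sketch (length and names are assumptions):

#define DELAY_LEN 8 /* how many samples of history to keep (guess) */

static unsigned char delay_buf[DELAY_LEN];
static unsigned char delay_pos = 0;

/* Push a new sample and return the sample from DELAY_LEN samples ago. */
unsigned char delay_push(unsigned char sample)
{
    unsigned char oldest = delay_buf[delay_pos];
    delay_buf[delay_pos] = sample;
    delay_pos = (delay_pos + 1) % DELAY_LEN;
    return oldest;
}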
[1] http://en.wikipedia.org/wiki/Median_filter
[2] http://www.dsprelated.com/showarticle/72.php
[3] http://en.wikipedia.org/wiki/Time_constant
Note
Adding code for the above filtering operations will lower the maximum possible sample rate, but printf can be replaced with something faster.

Continuously store the current value and the delta from the previous value.
Note when the delta is decreasing as the start of weight application to the scale
Note when the delta is increasing as the end of weight application to the scale
Take the X number of values with the small delta and average them
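A sketch of those steps, with hypothetical thresholds (DELTA_SMALL and X stand in for values you would tune):

#define DELTA_SMALL 3 /* |delta| below this counts as stable (tune) */
#define X           32 /* number of stable samples to average (tune) */

/* Feed samples in order; returns 1 and writes the weight once X
   consecutive small-delta samples have been accumulated. */
int stable_average(unsigned char sample, unsigned char *weight)
{
    static unsigned char prev;
    static unsigned short count = 0;
    static unsigned long sum = 0;
    unsigned char delta = (sample > prev) ? sample - prev : prev - sample;

    prev = sample;
    if (delta < DELTA_SMALL) {
        sum += sample;
        if (++count == X) {
            *weight = (unsigned char)(sum / X);
            count = 0;
            sum = 0;
            return 1;
        }
    } else {
        count = 0; /* delta grew: weight application started or ended */
        sum = 0;
    }
    return 0;
}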
BTW, I'm sure this has been done a million times before; a search for "scale PID" or "weight PID" should find a lot of information.

Don't forget to use a ___delay_ms(XX) function somewhere between readings if you compare each value with the previous one. The difference at each step will obviously be small if the code loops continuously.

Looking at your nice graphs, I would say you should look only for the falling edge; it is much more consistent than the leading edge.
In other words: let the samples accumulate and calculate a running average the whole time, with a predefined window size, remembering the deviation of the previous values for reference. Check for a large negative bump in your values (for example, an absolute value ten times smaller than the current running average); your running average is then your value. You could go back a little (disregarding the last few values in your average and recalculating) to compensate for the small positive bump visible in your picture before each negative bump. No need for heavy math here; you cannot model reality better than your picture has shown. Just make sure your code detects the end of each and every weighing cycle, and that you sample fast enough that no negative bump is missed (or you will have a large timing error in your data averaging).
And you don't need such large arrays: a running average based on a smaller window size gives a smaller residual error in your case when you detect the negative bump.
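A compact sketch of that falling-edge detector (window size and variable names are mine; the "ten times smaller" test is taken from the description above):

#define WIN 16 /* running-average window size (small, per the answer) */

/* Returns 1 when the new sample drops far below the running average,
   i.e. the falling edge at the end of a weighing cycle. *avg_out is
   only valid once the window has filled. */
int falling_edge(unsigned int sample, unsigned int *avg_out)
{
    static unsigned long sum = 0;
    static unsigned int win[WIN];
    static unsigned char pos = 0, filled = 0;
    unsigned int avg;

    sum -= win[pos];
    sum += sample;
    win[pos] = sample;
    pos = (pos + 1) % WIN;
    if (filled < WIN) { filled++; return 0; }

    avg = (unsigned int)(sum / WIN);
    *avg_out = avg;
    return sample < avg / 10; /* "ten times smaller than the average" */
}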


Is it possible to filter out occasional spikes in sensor readings in C?

Pseudocode:
if (voltage < 2V)
{
    reverse();
}
if (voltage > 2V)
{
    forward();
}
The problem is my sensor tends to have some spikes in the voltage reading. Example: if it were going in reverse, the sensor reading might jump past 2V for a split second, executing the forward function.
Is there a way to ignore those split second spikes in C?
If you expect the voltage to increase and decrease steadily over time you could add logic to enforce gradual increase/decrease.
This could be done by saving an average of the last values and combining this with the new value. You should try a couple of different magic numbers to see which one gives the best result.
int lastvalue = 0;
double magicnumber = 0.6;

while (looping) {
    int thisvalue = (int)(voltage * magicnumber + lastvalue * (1 - magicnumber));
    if (thisvalue > 2) {
        forward();
    } else {
        reverse();
    }
    lastvalue = thisvalue; /* remember the smoothed value for the next pass */
}
This pattern prevents sudden changes in the measured value by effectively averaging the current value with all previous values, weighting recent measurements more heavily than older ones.

Find the two worst values and delete them from the sum

A microcontroller has the job of sampling ADC values (analog-to-digital conversion). Since these parts are affected by tolerance and noise, the accuracy can be significantly increased by deleting the 4 worst values. The find-and-delete takes time, which is not ideal, since it increases the cycle time.
Imagine a frequency of 100 MHz, so each software instruction takes 10 ns to process; the more instructions, the longer the controller is blocked from taking the next set of samples.
So my goal is to make the sorting process as fast as possible. For this I currently use the code below, but it only deletes the two worst values!
extern uint16_t adcval[8]; /* filled with the 8 raw ADC samples */

uint16_t getValue(void){
    uint16_t min = 16383; /* 14-bit full scale */
    uint16_t max = 1;     /* zero is physically almost impossible! */
    uint32_t sum = 0;     /* variable for the summing */
    for(uint8_t i = 0; i < 8; i++){
        if(adcval[i] > max) max = adcval[i];
        if(adcval[i] < min) min = adcval[i];
        sum = sum + adcval[i];
    }
    uint16_t result = (sum - max - min) / 6; /* remove the two worst and divide by 6 */
    return result;
}
Now I would like to extend this function to delete the 4 worst values out of the 8 samples to get more precision. Any advice on how to do this?
Additionally, it would be wonderful to build an efficient function that finds the most deviating values instead of the highest and lowest. For example, imagine these two arrays:
uint16_t adc1[8] = {5,6,10,11,11,12,20,22};
uint16_t adc2[8] = {5,6,7,7,10,11,15,16};
The first case would gain precision from the described mechanism (delete the 4 worst). But in the second case the values 5 and 6 as well as 15 and 16 would be deleted, which would theoretically make the calculation worse, since deleting 10, 11, 15 and 16 would be better. Is there any fast way of deleting the 4 most deviating values?
If your ADC returns values from 5 to 16 at 14 bits with a 3.3 V voltage reference, the voltage varies from about 1 mV to 3 mV. It is very likely a correct reading: it is very difficult to design a good input circuit for a 14-bit ADC.
It is better to use a running average. What is a running average? It is a software low-pass filter.
Blue: readings from the ADC; red: running average.
The second signal is a very low amplitude sine wave (9-27 mV, assuming 14 bits and a 3.3 V reference).
The algorithm:
static int average;

int running_average(int val, int level)
{
    average -= average / level;
    average += val; /* the state is scaled by level, so add the raw sample */
    return average / level;
}

void init_average(int val, int level)
{
    average = val * level;
}
If level is a power of two, the divisions can be replaced with shifts. This version needs only 6 instructions (no branches) to calculate the average:
static int average;

int running_average(int val, int level) /* here level = log2 of the window */
{
    average -= average >> level;
    average += val; /* the state is scaled by 2^level */
    return average >> level;
}

void init_average(int val, int level)
{
    average = val << level;
}
I assume that average will not overflow; if it can, you need to choose a larger type.
This answer is kind of off topic, as it recommends a hardware solution, but if performance is required and the MCU can't implement P__J__'s solution, then this is your next best thing.
It seems you want to remove noise from your input signal. This can be done in software using DSP (digital signal processing) but it can also be done by configuring your hardware differently.
By adding the proper filter at the proper place before your ADC, it is possible to remove much (external) noise from your ADC output. (You can't, of course, go below a certain amount that is innate to the ADC, but alas.)
There are several q&a on electronics.stackexchange.com.
One solution is adding a capacitor to filter some high frequency noise. As noted by DerStorm8
The Photon has another great solution here by suggesting RC, Sallen-Key and a cascade of Sallen-Key filters for a continuous signal filter.
Here (ADN007) is an Analog Design Note from Microchip on "Techniques that Reduce System Noise in ADC Circuits":
It may seem that designing a low noise, 12-bit Analog-to-Digital Converter (ADC) board or even a 10-bit board is easy. This is true, unless one ignores the basics of low noise design. For instance, one would think that most amplifiers and resistors work effectively in 12-bit or 10-bit environments. However, poor device selection becomes a major factor in the success or failure of the circuit. Another, often ignored, area that contributes a great deal of noise, is conducted noise. Conducted noise is already in the circuit board by the time the signal arrives at the input of the ADC. The most effective way to remove this noise is by using a low-pass (anti-aliasing) filter prior to the ADC. Including by-pass capacitors and using a ground plane will also eliminate this type of noise. A third source of noise is radiated noise. The major sources of this type of noise are Electromagnetic Interference (EMI) or capacitive coupling of signals from trace-to-trace.

If all three of these issues are addressed, then it is true that designing a low noise 12-bit ADC board is easy.
And their recommended solution path:
It is easy to design a true 12-bit ADC system by using a few key low noise guidelines. First, examine your devices (resistors and amplifiers) to make sure they are low noise. Second, use a ground plane whenever possible. Third, include a low-pass filter in the signal path if you are changing the signal from analog to digital. Finally, and always, include by-pass capacitors. These capacitors not only remove noise but also foster circuit stability.
Here is a good paper by Analog Devices on input noise. They note that "there are some instances where input noise can actually be helpful in achieving higher resolution."
All analog-to-digital converters (ADCs) have a certain amount of input-referred noise—modeled as a noise source connected in series with the input of a noise-free ADC. Input-referred noise is not to be confused with quantization noise, which is only of interest when an ADC is processing time-varying signals. In most cases, less input noise is better; however, there are some instances where input noise can actually be helpful in achieving higher resolution. If this doesn’t seem to make sense right now, read on to find out how some noise can be good noise.
Given that you have a fixed-size array, a hard-coded sorting network can sort the entire array with only 19 comparisons. Currently you have 8+2*8=24 comparisons, although it is possible that the compiler unrolls the loop, leaving you with 16 comparisons. It is conceivable that, depending on the microcontroller hardware, a sorting network can be implemented with some degree of parallelism; perhaps you also have to query the ADC values sequentially, which would give you the opportunity to pre-sort them while waiting for the next conversion.
An optimal sorting network should be searchable online. Wikipedia has some pointers.
So, you would end up with some code like this:
sort_data(adcval);
return (adcval[2]+adcval[3]+adcval[4]+adcval[5])/4;
Update:
As you can see from this picture (source) of optimal sorting networks, a complete sort takes 19 comparisons. However, 3 of those are not strictly needed if you only want to extract the middle 4 values, so you get down to 16 comparisons.
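As a side note on the question's other ask (dropping the 4 most deviating values rather than the 2 lowest and 2 highest): once the array is fully sorted, the 4 values worth keeping form a contiguous window, so a sketch like the following could pick the tightest window of 4. The function name and the max-min spread criterion are my own choices, not from the answer:

#include <stdint.h>

/* s[8] must already be sorted ascending (e.g. by the sorting network).
   Averages the contiguous window of 4 with the smallest max-min spread,
   which keeps the tightest cluster and drops the 4 most deviating. */
uint16_t avg_tightest4(const uint16_t s[8])
{
    uint8_t i, best = 0;
    uint16_t best_spread = s[3] - s[0];
    uint32_t sum = 0;

    for (i = 1; i <= 4; i++) { /* candidate windows [1..4] .. [4..7] */
        uint16_t spread = s[i + 3] - s[i];
        if (spread < best_spread) {
            best_spread = spread;
            best = i;
        }
    }
    for (i = 0; i < 4; i++)
        sum += s[best + i];
    return (uint16_t)(sum / 4);
}

On the question's second array {5,6,7,7,10,11,15,16} this keeps {5,6,7,7}, matching the intuition stated there.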
to delete the 4 worst values out of the 8 samples
The methods are described in the GeeksforGeeks article "k largest (or smallest) elements in an array"; you can implement the method that best suits you.
I decided to use this good site to generate an optimal sorting network with the SWAP() macros needed to sort an array of 8 elements. Then I created a small C program to test combinations of an 8-element array against my sorting function. Then, because we only care about groups of 4 elements, I did something brute-force: for each of the SWAP() macros I tried commenting the macro out and checking whether the program still succeeded. I could comment out 5 SWAP macros, leaving 14 comparisons needed to identify the smallest 4 elements in the array of 8 samples.
/**
 * Sorts the array, but only so that groups of 4 matter.
 * So the group of the 4 smallest elements and the group of
 * the 4 biggest elements will each be correct.
 * s[0]...s[3] will have the lowest 4 elements,
 * so they have to be "deleted";
 * s[4]...s[7] will have the highest 4 values.
 */
void sort_but_4_matter(int s[8]) {
#define SWAP(x, y) do { \
        if (s[x] > s[y]) { \
            const int t = s[x]; \
            s[x] = s[y]; \
            s[y] = t; \
        } \
    } while(0)
    SWAP(0, 1);
    //SWAP(2, 3);
    SWAP(0, 2);
    //SWAP(1, 3);
    //SWAP(1, 2);
    SWAP(4, 5);
    SWAP(6, 7);
    SWAP(4, 6);
    SWAP(5, 7);
    //SWAP(5, 6);
    SWAP(0, 4);
    SWAP(1, 5);
    SWAP(1, 4);
    SWAP(2, 6);
    SWAP(3, 7);
    //SWAP(3, 6);
    SWAP(2, 4);
    SWAP(3, 5);
    SWAP(3, 4);
#undef SWAP
}
/* -------- testing code */
#include <assert.h>
#include <stdlib.h>
#include <string.h>
#include <stdio.h>

int cmp_int(const void *a, const void *b) {
    return *(const int*)a - *(const int*)b;
}

void printit_arr(const int *arr, size_t n) {
    printf("{");
    for (size_t i = 0; i < n; ++i) {
        printf("%d", arr[i]);
        if (i != n - 1) {
            printf(" ");
        }
    }
    printf("}");
}

void printit(const char *pre, const int arr[8],
             const int in[8], const int res[4]) {
    printf("%s: ", pre);
    printit_arr(arr, 8);
    printf(" ");
    printit_arr(in, 8);
    printf(" ");
    printit_arr(res, 4);
    printf("\n");
}

int err = 0;

void test(const int arr[8], const int res[4]) {
    int in[8];
    memcpy(in, arr, sizeof(int) * 8);
    sort_but_4_matter(in);
    // sort for the memcmp below
    qsort(in, 4, sizeof(int), cmp_int);
    if (memcmp(in, res, sizeof(int) * 4) != 0) {
        printit("T", arr, in, res);
        err = 1;
    }
}

void test_all_combinations() {
    const int result[4] = { 0, 1, 2, 3 }; // sorted
    const size_t n = 8;
    int num[8] = { 0, 1, 2, 3, 4, 5, 6, 7 };
    for (size_t j = 0; j < n; j++) {
        for (size_t i = 0; i < n-1; i++) {
            int temp = num[i];
            num[i] = num[i+1];
            num[i+1] = temp;
            test(num, result);
        }
    }
}

int main() {
    test_all_combinations();
    return err;
}
Tested on godbolt. sort_but_4_matter with gcc -O2 on x86_64 compiles to fewer than 100 instructions.

Need an algorithm to detect large spikes in oscillating data

I am parsing through data on an SD card one large chunk at a time on a microcontroller. It is accelerometer data, so it oscillates constantly. At certain points huge oscillations occur (shown in the graph). I need an algorithm to detect these large oscillations and, more specifically, to determine the range of data that contains the spike.
I have some example data:
This is the overall graph, there is only one spike of interest, the first one.
Here it is zoomed in a bit
As you can see it is a large oscillation that produces the spike.
So any algorithm that can scan through a data set and determine the portion of data that contains a spike relative to some threshold would be great. This data set is about 50,000 samples, each sample is 32 bits long. I have ample RAM to be able to hold this much data.
Thanks!
For the following signal:
If you take the absolute value of the differential between two consecutive samples, you get:
That is not quite good enough to unambiguously distinguish the spike from the minor "unsustained" disturbances, but it becomes clear if you then take a simple moving sum (a leaky integrator) of the abs-differentials. Here a window width of 4 diff-samples was used:
The moving average introduces a lag or phase shift which, in cases where the data is stored and processing is not real-time, can easily be compensated for by subtracting half the window width from the timing:
For real-time processing, if the lag is critical, a more sophisticated IIR filter might be appropriate. In any case, a clear threshold can be selected from this data.
In code for the above data set:
#include <stdio.h>
#include <stdint.h>
#include <stdbool.h>
#include <stdlib.h>
static int32_t dataset[] = { 0,0,0,0,0,3,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,
0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,3,0,0,0,0,0,
0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,
0,-10,-15,-5,20,25,50,-10,-20,-30,0,30,5,-5,
0,0,5,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,
0,0,0,0,0,0,0,0,0,1,0,0,0,0,6,0,0,0,0,0,0,0} ;
#define DATA_LEN (sizeof(dataset)/sizeof(*dataset))
#define WINDOW_WIDTH 4
#define THRESHOLD 15
int main()
{
uint32_t window[WINDOW_WIDTH] = {0} ;
int window_index = 0 ;
int window_sum = 0 ;
bool spike = false ;
for( int s = 1; s < DATA_LEN ; s++ )
{
uint32_t diff = abs( dataset[s] - dataset[s-1] ) ;
window_sum -= window[window_index] ;
window[window_index] = diff ;
window_index++ ;
window_index %= WINDOW_WIDTH ;
window_sum += diff ;
if( !spike && window_sum >= THRESHOLD )
{
spike = true ;
printf( "Spike START # %d\n", s - WINDOW_WIDTH / 2 ) ;
}
else if( spike && window_sum < THRESHOLD )
{
spike = false ;
printf( "Spike END # %d\n", s - WINDOW_WIDTH / 2 ) ;
}
}
return 0;
}
The output is:
Spike START # 66
Spike END # 82
https://onlinegdb.com/ryEw69jJH
Comparing the original data with the detection threshold gives:
For your real data, you will need to select a suitable window width and threshold to get the desired result, both of which will depend on the bandwidth and amplitude of the disturbances you wish to detect.
Also, you may need to guard against arithmetic overflow if your samples are of sufficient magnitude: they need to be less than 2^32 / window-width to guarantee no overflow in the integrator. Alternatively, you could use floating-point or uint64_t for the window type, or add code to deal with saturation.
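For instance, a 64-bit variant of the integrator update removes the overflow concern for 32-bit samples (a sketch reusing WINDOW_WIDTH from above; the helper name is mine):

#include <stdint.h>

/* 64-bit window slots and sum: 32-bit samples cannot overflow the
   integrator for any reasonable WINDOW_WIDTH. */
static uint64_t window[WINDOW_WIDTH] = {0} ;
static uint64_t window_sum = 0 ;
static int window_index = 0 ;

static uint64_t integrate_diff( int32_t a, int32_t b )
{
    int64_t d = (int64_t)a - (int64_t)b ;
    uint64_t diff = (uint64_t)( d < 0 ? -d : d ) ;
    window_sum -= window[window_index] ;
    window[window_index] = diff ;
    window_index = (window_index + 1) % WINDOW_WIDTH ;
    window_sum += diff ;
    return window_sum ;
}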
You could look at statistical analysis: calculate the standard deviation over the data set and then check when your data goes out of bounds.
You can choose to do this in two ways: either use a running average over a fixed (relatively small) number of samples, or take the average over the whole data set. As I see multiple spikes in your set, I would suggest the first. This way you can stop processing (and later continue) every time you find a spike.
For your purpose you do not really need to calculate the standard deviation sigma itself; you could leave it as sigma squared. This gives a slight performance optimization by not having to calculate the square root.
Some pseudo code:
// The data set.
int x[N];
// The number of samples in your mean and std calculation.
int M <= N;
// Sigma at index i over the previous M samples.
int sigma_i = sqrt( sum( pow(x[i] - mean(x,M), 2) ) / M );
// Or sigma squared
int sigma_squared_i = sum( pow(x[i] - mean(x,M), 2) ) / M;
The disadvantage of this method is that you need to set a threshold value of sigma at which to trigger. However, it is fairly safe to say that setting the threshold at 4 or 5 times your average sigma will give a usable system.
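One way to maintain the mean and sigma squared incrementally, without a square root, is to keep a running sum and sum of squares over the window. A sketch (the struct and names are mine, and integer truncation makes the result approximate; zero-initialize the struct before use):

#include <stdint.h>

#define M 32 /* window length for mean/variance (example value) */

typedef struct {
    int32_t buf[M];      /* the last M samples, ring buffer */
    int64_t sum, sumsq;  /* running sum and sum of squares */
    int pos, filled;
} RunStats;

/* Push one sample; returns sigma^2 of the current window and
   writes the window mean to *mean_out. O(1) per sample. */
int64_t runstats_push(RunStats *s, int32_t x, int32_t *mean_out)
{
    int32_t old = s->buf[s->pos];
    if (s->filled < M) {
        s->filled++;
    } else { /* window full: retire the oldest sample */
        s->sum   -= old;
        s->sumsq -= (int64_t)old * old;
    }
    s->buf[s->pos] = x;
    s->pos = (s->pos + 1) % M;
    s->sum   += x;
    s->sumsq += (int64_t)x * x;

    {
        int n = s->filled;
        int32_t mean = (int32_t)(s->sum / n);
        *mean_out = mean;
        /* sigma^2 = E[x^2] - mean^2, with 64-bit intermediates */
        return s->sumsq / n - (int64_t)mean * mean;
    }
}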
Managed to get a working algorithm. Basically: determine the average difference between data points; if the data starts to exceed some multiple of that value consecutively, then most likely a spike is occurring.
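That could look something like this in code (K and N are the tunables implied above; names are mine):

#include <stdint.h>

#define K 8 /* spike when |diff| exceeds K * average diff (tune) */
#define N 3 /* require N consecutive exceedances (tune) */

/* avg_diff: precomputed average |difference| over the data set.
   Returns 1 while inside a detected spike region. */
int in_spike(int32_t prev, int32_t cur, int32_t avg_diff)
{
    static int run = 0;
    int32_t d = cur - prev;
    if (d < 0) d = -d;

    if (d > K * avg_diff) {
        if (run < N) run++;
    } else {
        run = 0;
    }
    return run >= N;
}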

Preventing torn reads with an HCS12 microcontroller

Summary
I'm trying to write an embedded application for an MC9S12VR microcontroller. This is a 16-bit microcontroller, but some of the values I deal with are 32 bits wide, and while debugging I've captured some anomalous values that seem to be due to torn reads.
I'm writing the firmware for this micro in C89 and running it through the Freescale HC12 compiler, and I'm wondering if anyone has suggestions on how to prevent torn reads on this particular microcontroller, assuming that is indeed the cause.
Details
Part of my application involves driving a motor and estimating its position and speed based on pulses generated by an encoder (a pulse is generated on every full rotation of the motor).
For this to work, I need to configure one of the MCU timers so that I can track the time elapsed between pulses. However, the timer has a clock rate of 3 MHz (after prescaling) and the timer counter register is only 16-bit, so the counter overflows every ~22ms. To compensate, I set up an interrupt handler that fires on a timer counter overflow, and this increments an "overflow" variable by 1:
// TEMP
static volatile unsigned long _timerOverflowsNoReset;
// ...

#ifndef __INTELLISENSE__
__interrupt VectorNumber_Vtimovf
#endif
void timovf_isr(void)
{
    // Clear the interrupt.
    TFLG2_TOF = 1;
    // TEMP
    _timerOverflowsNoReset++;
    // ...
}
I can then work out the current time from this:
// TEMP
unsigned long MOTOR_GetCurrentTime(void)
{
    const unsigned long ticksPerCycle = 0xFFFF;
    const unsigned long ticksPerMicrosecond = 3; // 24 MHz / 8 (prescaler)
    const unsigned long ticks = _timerOverflowsNoReset * ticksPerCycle + TCNT;
    const unsigned long microseconds = ticks / ticksPerMicrosecond;
    return microseconds;
}
In main.c, I've temporarily written some debugging code that drives the motor in one direction and then takes "snapshots" of various data at regular intervals:
// Test
for (iter = 0; iter < 10; iter++)
{
    nextWait += SECONDS(secondsPerIteration);
    while ((_test2Snapshots[iter].elapsed = MOTOR_GetCurrentTime() - startTime) < nextWait);
    _test2Snapshots[iter].position = MOTOR_GetCount();
    _test2Snapshots[iter].phase = MOTOR_GetPhase();
    _test2Snapshots[iter].time = MOTOR_GetCurrentTime() - startTime;
    // ...
In this test I'm reading MOTOR_GetCurrentTime() in two places very close together in code and assigning the results to properties of a globally available struct.
In almost every case, I find that the first value read is a few microseconds beyond the point where the while loop should terminate, and the second read is a few microseconds after that; this is expected. However, occasionally the first read is significantly higher than the point where the while loop should terminate, and the second read is then less than the first value (as well as less than the termination value).
The screenshot below gives an example of this. It took about 20 repeats of the test before I was able to reproduce it. In the code, <snapshot>.elapsed is written to before <snapshot>.time so I expect it to have a slightly smaller value:
For snapshot[8], my application first reads 20010014 (over 10 ms beyond where it should have terminated the busy-loop) and then reads 19988209. As I mentioned above, an overflow occurs every ~22 ms; specifically, a difference of one unit in _timerOverflowsNoReset produces a difference of 65535 / 3 ≈ 21845 in the calculated microsecond value. If we account for this: 20010014 − 21845 = 19988169, which is only 40 less than the second read of 19988209.
A difference of 40 isn't that far off the discrepancy I see between my other pairs of reads (~23/24), so my guess is that there's some kind of tear going on involving an off-by-one read of _timerOverflowsNoReset. That is, while busy-looping, the code performs one call to MOTOR_GetCurrentTime() that erroneously sees _timerOverflowsNoReset as one greater than it actually is, causing the loop to end early, and on the next read after that it sees the correct value again.
I have other problems with my application that I'm having trouble pinning down, and I'm hoping that if I resolve this, it might resolve these other problems as well if they share a similar cause.
Edit: Among other changes, I've changed _timerOverflowsNoReset and some other globals from 32-bit unsigned to 16-bit unsigned in the implementation I now have.
You can read this value TWICE:
unsigned long GetTmrOverflowNo()
{
    unsigned long ovfl1, ovfl2;
    do {
        ovfl1 = _timerOverflowsNoReset;
        ovfl2 = _timerOverflowsNoReset;
    } while (ovfl1 != ovfl2);
    return ovfl1;
}

unsigned long MOTOR_GetCurrentTime(void)
{
    const unsigned long ticksPerCycle = 0xFFFF;
    const unsigned long ticksPerMicrosecond = 3; // 24 MHz / 8 (prescaler)
    const unsigned long ticks = GetTmrOverflowNo() * ticksPerCycle + TCNT;
    const unsigned long microseconds = ticks / ticksPerMicrosecond;
    return microseconds;
}
If _timerOverflowsNoReset increments much more slowly than GetTmrOverflowNo() executes, the inner loop runs at most twice in the worst case. In most cases ovfl1 and ovfl2 will be equal after the first pass of the while() loop.
Calculate the tick count, then check whether the overflow count changed while doing so, and if so, repeat:
#define TCNT_BITS 16 // TCNT register width

uint32_t MOTOR_GetCurrentTicks(void)
{
    uint32_t ticks = 0 ;
    uint32_t overflow_count = 0;

    do
    {
        overflow_count = _timerOverflowsNoReset ;
        ticks = (overflow_count << TCNT_BITS) | TCNT;
    }
    while( overflow_count != _timerOverflowsNoReset ) ;

    return ticks ;
}
The while loop will iterate either once or twice, no more.
Based on the answers @AlexeyEsaulenko and @jeb provided, I gained insight into the cause of this problem and how to tackle it. As both their answers were helpful and my current solution is a mixture of the two, I can't decide which of the two answers to accept, so instead I'll upvote both answers and keep this question open.
This is how I now implement MOTOR_GetCurrentTime:
unsigned long MOTOR_GetCurrentTime(void)
{
    const unsigned long ticksPerMicrosecond = 3; // 24 MHz / 8 (prescaler)
    unsigned int countA;
    unsigned int countB;
    unsigned int timerOverflowsA;
    unsigned int timerOverflowsB;
    unsigned long ticks;
    unsigned long microseconds;

    // Loop until TCNT and the timer overflow count can be reliably determined.
    do
    {
        timerOverflowsA = _timerOverflowsNoReset;
        countA = TCNT;
        timerOverflowsB = _timerOverflowsNoReset;
        countB = TCNT;
    } while (timerOverflowsA != timerOverflowsB || countA >= countB);

    ticks = ((unsigned long)timerOverflowsA << 16) + countA;
    microseconds = ticks / ticksPerMicrosecond;
    return microseconds;
}
This function might not be as efficient as other proposed answers, but it gives me confidence that it will avoid some of the pitfalls that have been brought to light. It works by repeatedly reading both the timer overflow count and TCNT register twice, and only exiting the loop when the following two conditions are satisfied:
the timer overflow count hasn't changed while reading TCNT for the first time in the loop
the second count is greater than the first count
This basically means that if MOTOR_GetCurrentTime is called around the time that a timer overflow occurs, we wait until we've safely moved on to the next cycle, indicated by the second TCNT read being greater than the first (e.g. 0x0001 > 0x0000).
This does mean that the function blocks until TCNT increments at least once, but since that occurs every 333 nanoseconds I don't see it being problematic.
I've tried running my test 20 times in a row and haven't noticed any tearing, so I believe this works. I'll continue to test and update this answer if I'm wrong and the issue persists.
Edit: As Vroomfondel points out in the comments below, the check I do involving countA and countB only works incidentally and can potentially cause the loop to repeat indefinitely if _timerOverflowsNoReset is read fast enough. I'll update this answer when I've come up with something to address this.
The atomic reads are not the main problem here. The problem is that the overflow ISR and TCNT are highly related, and you get into trouble when you first read TCNT and then the overflow counter.
Three sample situations:
TCNT=0x0000, Overflow=0 --- okay
TCNT=0xFFFF, Overflow=1 --- fails
TCNT=0x0001, Overflow=1 --- okay again
You get the same problem when you change the order: first read the overflow counter, then TCNT.
You could solve it by reading the totalOverflows counter twice:
disable_ints();
uint16_t overflowsA = totalOverflows;
uint16_t cnt = TCNT;
uint16_t overflowsB = totalOverflows;
enable_ints();

uint32_t totalCnt = cnt;
if ( overflowsA != overflowsB )
{
    if (cnt < 0x4000)
        totalCnt += 0x10000;
}
totalCnt += (uint32_t)overflowsA << 16;
If the totalOverflows counter changed while reading TCNT, then it's necessary to check whether the value in cnt is already past the wrap (greater than 0 but below e.g. 0x4000) or whether cnt was read just before the overflow.
One technique that can be helpful is to maintain two or three values that, collectively, hold overlapping portions of a larger value.
If one knows that a value will be monotonically increasing, and that one will never go more than 65,280 counts between calls to the "update timer" function, one could use something like:
// Note: Assuming a platform where 16-bit loads and stores are atomic
uint16_t volatile timerHi, timerMed, timerLow;

void updateTimer(void) // Must be the only thing that writes the timers!
{
    timerLow = HARDWARE_TIMER;
    timerMed += (uint8_t)((timerLow >> 8) - timerMed);
    timerHi  += (uint8_t)((timerMed >> 8) - timerHi);
}

uint32_t readTimer(void)
{
    uint16_t tempTimerHi  = timerHi;
    uint16_t tempTimerMed = timerMed;
    uint16_t tempTimerLow = timerLow;
    tempTimerMed += (uint8_t)((tempTimerLow >> 8) - tempTimerMed);
    tempTimerHi  += (uint8_t)((tempTimerMed >> 8) - tempTimerHi);
    return (((uint32_t)tempTimerHi) << 16) | tempTimerLow;
}
Note that readTimer reads timerHi before it reads timerLow. It's possible that updateTimer might update timerLow or timerMed between the time readTimer reads timerHi and the time it reads those other values, but if that occurs, it will notice that the lower part of timerHi needs to be incremented to match the upper part of the value that got updated later.
This approach can be cascaded to arbitrary length, and need not use a full 8 bits of overlap. Using 8 bits of overlap, however, makes it possible to form a 32-bit value by using the upper and lower values while simply ignoring the middle one. If less overlap were used, all three values would need to take part in the final computation.
The problem is that the writes to _timerOverflowsNoReset aren't atomic and you don't protect them. This is a bug. Writing atomically from the ISR isn't very important, as the HCS12 blocks the background program during an interrupt, but reading atomically in the background program is absolutely necessary.
Also, keep in mind that Codewarrior/HCS12 generates somewhat inefficient code for 32-bit arithmetic.
Here is how you can fix it:
Drop unsigned long for the shared variable. In fact you don't need a counter at all, given that your background program can service the variable within 22 ms of real time, which should be a very easy requirement. Keep your 32-bit counter local and away from the ISR.
Ensure that reads of the shared variable are atomic. Disassemble! It must be a single MOV instruction or similar; otherwise you must implement semaphores.
Don't read any volatile variable inside complex expressions. This applies not only to the shared variable but also to TCNT. As it stands, your program tightly couples the speed of the slow 32-bit arithmetic to the timer, which is very bad. You won't be able to read TCNT reliably with any accuracy, and to make things worse you call this function from other complex code.
Your code should be changed to something like this:

static volatile bool overflow;

void timovf_isr(void)
{
    // Clear the interrupt.
    TFLG2_TOF = 1;
    // TEMP
    overflow = true;
    // ...
}

unsigned long MOTOR_GetCurrentTime(void)
{
    bool of = overflow;   // read this on a line of its own, ensure this is atomic!
    uint16_t tcnt = TCNT; // read this on a line of its own
    overflow = false;     // ensure this is atomic too

    if(of)
    {
        _timerOverflowsNoReset++;
    }

    /* calculations here */

    return microseconds;
}
If you don't end up with atomic reads, you will have to implement semaphores, block the timer interrupt, or write the reading code in inline assembler (my recommendation).
Overall I would say that your design relying on TOF is somewhat questionable. I think it would be better to set up a dedicated timer channel and let it count up a known time unit (10ms?). Any reason why you can't use one of the 8 timer channels for this?
It all boils down to how often you read the timer and how long the maximum interrupt sequence in your system will be (i.e. the maximum time the timer code can be stopped without making "substantial" progress).
If (and only if) you test for timestamps more often than the cycle time of your hardware timer, AND those tests guarantee that the end of one test is no further from the start of its predecessor than one interval (in your case 22 ms), all is well. If your code can be held up so long that these preconditions don't hold, the following solution will not work; the question then, however, is whether time information coming from such a system has any value at all.
The good thing is that you don't need an interrupt at all; any attempt to compensate for the inability of the system to satisfy two equally hard real-time requirements (updating your overflow counter and delivering the hardware time) is either futile or ugly, and fails to meet the basic system properties anyway.
unsigned long MOTOR_GetCurrentTime(void)
{
    static uint16_t last;
    static uint16_t hi;
    volatile uint16_t now = TCNT;

    if (now < last)
    {
        hi++;
    }
    last = now;
    return now + (hi * 65536UL);
}
BTW: I return ticks, not microseconds. Don't mix concerns.
PS: the caveat is that such a function is not reentrant and is, in a sense, a true singleton.

Average from error-prone measurement samples without buffering

I have a µC which measures temperature from a sensor via an ADC. Due to various circumstances it can happen that the reading is 0 (-30°C) or an impossibly large value (500-1500°C). I can't fix the causes of these bad readings (time-critical ISRs and sometimes bad wiring), so I have to fix it with a clever piece of code.
I've come up with this (the code gets called OVERSAMPLENR times in an ISR):
#define OVERSAMPLENR 16        //read the value 16 times
#define TEMP_VALID_CHANGE 0.15 //15% change in reading is possible

//float raw_temp_bed_value = <sum of all readings>;
//ADC = <AVR ADC reading macro>;

if(temp_count > 1) { //temp_count = number of samples read, increased elsewhere
    float avgRaw = raw_temp_bed_value / temp_count;
    float diff = (avgRaw > ADC ? avgRaw - ADC : ADC - avgRaw) / (avgRaw == 0 ? 1 : avgRaw); //pulled out to shorten the line for SO
    //cast to float: otherwise the integer division truncates the ratio to 0
    if (diff > TEMP_VALID_CHANGE * ((float)(OVERSAMPLENR - temp_count) / OVERSAMPLENR)) //subsequent readings have a smaller tolerance
        raw_temp_bed_value += avgRaw;
    else
        raw_temp_bed_value += ADC;
} else {
    raw_temp_bed_value = ADC;
}
Here raw_temp_bed_value is a static global that gets read and processed later, once the ISR has fired 16 times.
As you can see, I check whether the difference between the current average and the new reading is less than 15%. If so, I accept the reading; if not, I reject it and add the current average instead.
But this breaks horribly if the first reading is something impossible.
One solution I though of is:
In the last line, raw_temp_bed_value is reset to the first ADC reading. It would be better to reset it to raw_temp_bed_value/OVERSAMPLENR, so I don't run into a "first reading error".
Do you have any better solutions? I thought of some solutions featuring a moving average, and of using the average of the moving average, but this would require additional arrays/RAM/cycles, which we want to avoid.
I've often used something I call "rate of change" for sampling. Use a variable that represents how many samples it takes to reach a new value, say 20, then keep adding the sample difference divided by that rate of change to the filtered value. You can still use a threshold to filter out unlikely values.
float RateOfChange = 20;        /* samples needed to follow a change */
float PreviousAdcValue = 0;
float filtered = FILTER_PRESET; /* FILTER_PRESET: a sensible start value, defined elsewhere */

while(1)
{
    /* the ISR delivers AdcValue here */
    filtered = filtered + ((AdcValue - PreviousAdcValue) / RateOfChange);
    PreviousAdcValue = AdcValue;
    sleep();
}
Please note that this isn't exactly a low-pass filter: it responds quicker, and the last value added has the most significance. But it will not change much if a single value shoots out too far, depending on the rate of change.
You can also preset the filtered value to something sensible. This prevents wild startup behavior.
It takes up to RateOfChange samples to reach a stable value. You may want to make sure the filtered value isn't used before then, for example by counting the number of samples taken; if the counter is lower than RateOfChange, skip the temperature-control processing.
For a more advanced (temperature) control routine, I highly recommend looking into PID control loops. These add a plethora of functionality to get a fast, stable response, keep something at a given temperature efficiently, and keep oscillations to a minimum. I've used the one from the Marlin firmware in my own projects and it works quite well.
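For reference, the textbook form of such a PID step looks roughly like this (a generic sketch with clamping and crude anti-windup, not Marlin's actual implementation; all names are mine):

typedef struct {
    float kp, ki, kd;      /* tuning gains */
    float integral;        /* accumulated error */
    float prev_error;      /* for the derivative term */
    float out_min, out_max;
} Pid;

/* One PID step: setpoint and measurement in, actuator output
   (e.g. heater PWM duty) out. dt is the sample period, dt > 0. */
float pid_step(Pid *p, float setpoint, float measured, float dt)
{
    float error = setpoint - measured;
    float derivative = (error - p->prev_error) / dt;
    float out;

    p->integral += error * dt;
    out = p->kp * error + p->ki * p->integral + p->kd * derivative;
    p->prev_error = error;

    /* clamp, and undo the integration when saturated (anti-windup) */
    if (out > p->out_max) { p->integral -= error * dt; out = p->out_max; }
    if (out < p->out_min) { p->integral -= error * dt; out = p->out_min; }
    return out;
}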
