Average from error-prone measurement samples without buffering in C

I have a µC that measures temperature from a sensor with an ADC. Due to various circumstances the reading can be 0 (-30°C) or an impossibly large value (500-1500°C). I can't fix the causes of these bad readings (time-critical ISRs and sometimes bad wiring), so I have to work around them with a clever piece of code.
I've come up with this (the code gets called OVERSAMPLENR times in an ISR):
#define OVERSAMPLENR 16         //read value 16 times
#define TEMP_VALID_CHANGE 0.15  //15% change in reading is possible
//float raw_temp_bed_value = <sum of all readings>;
//ADC = <AVR ADC reading macro>;
if (temp_count > 1) { //temp_count = number of samples read so far, incremented elsewhere
    float avgRaw = raw_temp_bed_value / temp_count;
    //relative difference; guard against division by zero (pulled out to shorten the line for SO)
    float diff = (avgRaw > ADC ? avgRaw - ADC : ADC - avgRaw) / (avgRaw == 0 ? 1 : avgRaw);
    //subsequent readings get a smaller tolerance; the cast avoids integer division truncating to 0
    if (diff > TEMP_VALID_CHANGE * ((float)(OVERSAMPLENR - temp_count) / OVERSAMPLENR))
        raw_temp_bed_value += avgRaw; //reject the outlier, add the current average instead
    else
        raw_temp_bed_value += ADC;    //accept the reading
} else {
    raw_temp_bed_value = ADC;
}
Where raw_temp_bed_value is a static global that gets read and processed later, once the ISR has fired 16 times.
As you can see, I check whether the difference between the current average and the new reading is less than 15%. If so, I accept the reading; if not, I reject it and add the current average instead.
But this breaks horribly if the first reading is something impossible.
One solution I thought of: in the last line, raw_temp_bed_value is reset to the first ADC reading. It would be better to reset it to raw_temp_bed_value/OVERSAMPLENR, so I don't run into a "first reading error".
Do you have any better solutions? I thought of some featuring a moving average, using the average of the moving average, but that would require additional arrays/RAM/cycles, which we want to avoid.

I've often used something I call rate of change for this kind of sampling. Use a variable that represents how many samples it takes to reach a certain value, say 20. Then keep adding your sample difference, divided by that rate of change, to a filtered variable. You can still use a threshold to filter out unlikely values.
float RateOfChange = 20;
float PreviousAdcValue = 0;
float filtered = FILTER_PRESET;

while (1)
{
    //ISR provides AdcValue here
    filtered = filtered + ((AdcValue - PreviousAdcValue) / RateOfChange);
    PreviousAdcValue = AdcValue;
    sleep();
}
Please note that this isn't exactly a low-pass filter: it responds more quickly, and the last value added has the most significance. But it will not change much if a single value shoots out too far, depending on the rate of change.
You can also preset the filtered value to something sensible. This prevents wild startup behavior.
It takes up to RateOfChange samples to reach a stable value. You may want to make sure the filtered value isn't used before that, for example by counting the samples taken: if the counter is lower than RateOfChange, skip the temperature-control processing.
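That startup guard could be sketched like this in plain C (a minimal sketch: the mid-scale preset value and the function name are illustrative choices, not from the answer; the filter recurrence itself is the answer's):

```c
#include <stdint.h>

#define RATE_OF_CHANGE 20.0f
#define FILTER_PRESET  512.0f   /* assumed sensible mid-scale preset */

static float filtered = FILTER_PRESET;
static float previousAdcValue = FILTER_PRESET;
static uint16_t sampleCount = 0;

/* Feed one ADC sample into the filter; returns 1 once enough samples
 * have been taken for the filtered value to be trusted. */
int filter_sample(float adcValue)
{
    filtered += (adcValue - previousAdcValue) / RATE_OF_CHANGE;
    previousAdcValue = adcValue;
    if (sampleCount < (uint16_t)RATE_OF_CHANGE)
        sampleCount++;
    /* skip temperature control until RateOfChange samples were seen */
    return sampleCount >= (uint16_t)RATE_OF_CHANGE;
}
```

The control loop would simply ignore `filtered` until `filter_sample` starts returning 1.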
For a more advanced (temperature) control routine, I highly recommend looking into PID control loops. These add a plethora of functionality to get a fast, stable response, hold a set temperature efficiently, and keep oscillation to a minimum. I've used the one from the Marlin firmware in my own projects and it works quite well.

Related

Reducing Firebase latency by only sending changed values?

I am using a Wemos D1 Mini (Arduino) to send sensor data to Firebase; it is a single value I'm sending. I found that this slows the program down, so the sensor can't read data as fast as the data is being sent (which is kind of obvious).
Anyhow, I want to send the value to Firebase only when the value has changed. It is an int value, but I'm not sure how to go about this. Should I use a listener? This is a portion of my code:
int n = 0; // will be used to store the count
Firebase.setInt("Reps/Value", n); // sends value to fb
delay(100); // wait before scanning again (delay is in milliseconds)
I was hoping the sensor could scan every second, which it does. But at this rate (pun intended) the value is pushed to Firebase every second, which slows the scanning down to once every 3 seconds. How can I call the Firebase.setInt method only when n changes its value?
You can check after every reading whether the value has changed by adding a simple conditional statement:
int n = 0;      // will be used to store the count
int n_old = -1; // old value saved to fb; initialized to a value n never takes

if (n != n_old) { // checks whether the value has changed
    Firebase.setInt("Reps/Value", n); // sends value to fb
    n_old = n; // updates the old value to the last one sent
}
delay(100); // wait before scanning again
Or, if you want a tolerance-based approach, you can do something like this:
int n = 0;         // will be used to store the count
int n_old = 0;     // old value saved to fb
int tolerance = 3; // tolerance up to 3%

// change as a percentage of the mean; the 2.0 forces floating-point math
// so integer division doesn't truncate to 0 (guard against n + n_old == 0 in real code)
if (abs((n - n_old) / ((n + n_old) / 2.0)) * 100 > tolerance) {
    Firebase.setInt("Reps/Value", n); // sends value to fb
    n_old = n; // updates the old value to the last one sent
}
delay(100); // wait before scanning again
Coming from professional use of remote databases: you should go for a gliding-average approach. You do this by creating a circular buffer with, let's say, 30 sensor values and calculating an average. As long as a value is within +/-3% of the average recorded at time0, you do not update. If the value is above or below that band, you send it to Firebase and set a new time0 average. Depending on your precision needs, this eases the stress on the system. Imho only life-savers like circuit breakers or flow cutoffs (liquids) have to be real-time; hobby applications like measuring wind speed, heating etc. are well served with 20-60 second intervals.
The event listener, by the way, is this same approach: just do something when the value is out of the norm. If you have a fixed target value as a reference, it's much easier to check for the +/- difference. If the pricing of FB changes it will be an issue for devs, so plan ahead.
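A plain-C sketch of that circular-buffer average with a 3% band might look like this (the zero sentinel for the first time0 average and the function name are illustrative choices; the 30-value window and 3% threshold are the answer's):

```c
#include <math.h>

#define WINDOW 30

static int buf[WINDOW];
static int head = 0, count = 0;
static long sum = 0;
static double time0Avg = 0.0;  /* average recorded at "time0"; 0.0 = not set yet */

/* Push one reading into the circular buffer; returns 1 when the new average
 * has drifted more than 3% from the time0 average, i.e. an upload is due. */
int push_reading(int value)
{
    if (count == WINDOW) sum -= buf[head]; /* drop the oldest value */
    else count++;
    buf[head] = value;
    sum += value;
    head = (head + 1) % WINDOW;

    double avg = (double)sum / count;
    if (time0Avg == 0.0) { time0Avg = avg; return 0; } /* first window */
    if (fabs(avg - time0Avg) > 0.03 * fabs(time0Avg)) {
        time0Avg = avg; /* set a new time0 average after sending */
        return 1;
    }
    return 0;
}
```

The Arduino loop would then call `Firebase.setInt` only when `push_reading` returns 1.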

Getting more precise timing control in Linux

I am trying to create a low-jitter multicast source for digital TV. The program in question should buffer the input, calculate the intended send times from the PCR values in the stream, and then send the packets at relatively precise intervals. However, this is not running on an RTOS, so some timing variance is expected.
This is the basic code (the relevant variables are initialized, I just omitted the code here):
while (!sendstop) {
    //snip
    //put 7 MPEG packets into one UDP packet buffer "outpkt"
    //snip
    waittime = /* calculated from the PCR values, in microseconds (~2000 -> 2 ms) */;
    sleeptime = curtime;
    sleeptime.tv_nsec += waittime * 1000L;
    sleeptime.tv_sec += sleeptime.tv_nsec / 1000000000;
    sleeptime.tv_nsec %= 1000000000;
    //clock_nanosleep returns the error number directly; it does not set errno
    while (clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME, &sleeptime, NULL) == EINTR) {
        printf("I");
    }
    sendto(sck, outpkt, 1316, 0, res->ai_addr, res->ai_addrlen); //send the packet
    clock_gettime(CLOCK_MONOTONIC, &curtime);
}
However, this results in the sending being too slow (since the processing also takes time), so the buffer fills up. So I thought I should get the difference between "sleeptime" (the time it should have been) and "curtime" (the actual time) and subtract it from the future "waittime". This almost works, but now it is a bit too fast and I get an empty buffer.
My next idea was to multiply the difference by some value before subtracting it, like this (just above "while..."):
difn = curtime.tv_nsec - ostime.tv_nsec;
if (difn < 0) difn += 1000000000;
sleeptime.tv_nsec = sleeptime.tv_nsec - (difn * difnc) / 1000; //difnc - adjustment factor
if (sleeptime.tv_nsec < 0) {
    sleeptime.tv_nsec += 1000000000;
    sleeptime.tv_sec--;
}
However, different values of difnc work at different times of day, on different servers, and so on. There needs to be some kind of automatic adjustment based on the operation of the program. The best I could figure out was to increment/decrement it every time the buffer was full or empty, but this leads to slow cycles of "too fast" and "too slow". I also tried to adjust difnc based on how full or empty the buffer is, but that too just leads to "slow"/"fast" cycles.
How can I automatically derive the "difnc" value, or is there some other method of getting more precise timing than plain "clock_nanosleep", without busy-waiting (the server has other things to do)?
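For reference, the lateness being corrected above is just a timespec subtraction; factoring it into a helper keeps the sign and carry handling in one place (a sketch: the function name is illustrative, and the arithmetic is the same normalization already used in the question):

```c
#include <time.h>

/* Difference (a - b) in nanoseconds; assumes the result fits in a long long. */
long long timespec_diff_ns(struct timespec a, struct timespec b)
{
    return (long long)(a.tv_sec - b.tv_sec) * 1000000000LL
         + (a.tv_nsec - b.tv_nsec);
}
```

The measured lateness from this helper is what would feed the adjustment of the next deadline.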

Moving Average for ADC

Hi all, I am working on a project where I have to calculate a moving average of ADC readings. The data coming out of the ADC represents a sinusoidal wave.
This is the code I am using to get the moving average of a given signal:
longNew = (8 bit data from ADC);
longNew = longNew << 8;
//Division
longNew = longNew >> 8;   //255 samples
longTemp = avgALong >> 8; //old data
avgALong -= longTemp;
avgALong += longNew;      //new data
avgA = avgALong >> 8;     //256-point average
Reference image
Please refer to the reference image for the upper and lower limits relative to the reference (avgA).
Currently I am using a constant value to obtain the upper and lower voltage limits for my application, which I calculate as follows:
upper_limit = avgA + Delta(x);
lower_limit = avgA - Delta(x);
In my case I am taking Delta(x) = 15.
I want to calculate this constant expression, Delta(x), based on the signal strength.
The maximum voltage level of the signal is 255, i.e. 5 volts. The minimum level varies frequently, so a constant value for determining the lower and upper limits is not useful for my application.
Please help. Thank you.
Now, with the description of what's going on, I think you want three running averages:
The input signal. Lightly average it to help tamp down noise.
upper_limit: when you determine local maximums, push them into this average.
lower_limit: when you determine local minimums, push them into this average.
Your delta would be (upper_limit-lower_limit)/8 (or 4, or whatever). Your hysteresis points would be upper_limit - delta and lower_limit + delta.
Every time you transition to '1', push the current local minimum into the lower_limit moving average and then begin searching for a new local maximum. When you transition to '0', push the local maximum into the upper_limit moving average and begin searching for a new local minimum.
There is a problem if your signal strength is wildly varying (you could get to a point where your signal suddenly drops into the hysteresis band and you never get any more transitions). You could solve this a few ways:
Count how much time you spend in the hysteresis band and reset everything if you spend too much time.
Or, for each sample in the hysteresis band, bring upper_limit and lower_limit slightly closer together. Eventually they'd collapse to the point where you start detecting transitions again.
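A rough plain-C sketch of the scheme above (the delta = range/8 divisor follows the answer; the 1/8 averaging factor, the seed limits, and the function name are arbitrary illustrative choices):

```c
static int sig = 128;          /* lightly averaged input signal */
static int upper_limit = 200;  /* running average of local maxima (seed guess) */
static int lower_limit = 50;   /* running average of local minima (seed guess) */
static int local_min = 255, local_max = 0;
static int state = 0;          /* current digital output, 0 or 1 */

/* Exponential running average with factor 1/8. */
static void ema(int *avg, int sample) { *avg += (sample - *avg) / 8; }

/* Feed one raw ADC sample; returns the comparator state with hysteresis. */
int classify(int raw)
{
    ema(&sig, raw);
    if (sig < local_min) local_min = sig;
    if (sig > local_max) local_max = sig;

    int delta = (upper_limit - lower_limit) / 8;
    if (state == 0 && sig > upper_limit - delta) {
        state = 1;
        ema(&lower_limit, local_min); /* push local minimum on the rising edge */
        local_max = sig;              /* begin searching for a new maximum */
    } else if (state == 1 && sig < lower_limit + delta) {
        state = 0;
        ema(&upper_limit, local_max); /* push local maximum on the falling edge */
        local_min = sig;              /* begin searching for a new minimum */
    }
    return state;
}
```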
Take this with a grain of salt. If you're doing this for a school project, it almost certainly won't match whatever scholarly method your professor is looking for.

Identifying a trend in C - Micro controller sampling

I'm working on an MC68HC11 microcontroller and have an analogue voltage signal going in that I have sampled. The scenario is a weighing machine: the large peaks are when the object hits the sensor; it then stabilises (these are the samples I want) and peaks again before the object rolls off.
The problem I'm having is figuring out a way for the program to detect this stable point and average it to produce an overall weight, but I can't figure out how. One way I have thought about doing it is comparing previous values to see if there is not a large difference between them, but I haven't had any success. Below is the C code that I am using:
#include <stdio.h>
#include <stdarg.h>
#include <iof1.h>

void main(void)
{
    /* PORTA, DDRA, DDRG etc. are LED and switch ports */
    unsigned char *paddr, *adctl, *adr1;
    unsigned short i = 0;
    unsigned short k = 0;
    unsigned char switched = 1; /* is char the smallest data type? */
    unsigned char data[2000];

    DDRA = 0x00; /* all in */
    DDRG = 0xff;
    adctl = (unsigned char *) 0x30;
    adr1 = (unsigned char *) 0x31;
    *adctl = 0x20; /* single continuous scan */

    while (1)
    {
        if (*adr1 > 40)
        {
            if (PORTA == 128) /* debugging switch */
            {
                PORTG = 1;
            }
            else
            {
                PORTG = 0;
            }
            if (i < 2000)
            {
                while (((*adctl) & 0x80) == 0x00)
                    ; /* wait for the conversion-complete flag */
                data[i] = *adr1;
                /* if(i > 10 && (data[(i-10)] - data[i]) < 20) */
                i++;
            }
            if (PORTA == switched)
            {
                PORTG = 31;
                /* print a delimiter so teemtalk can send to excel */
                for (k = 0; k < 2000; k++)
                {
                    printf("%d,", data[k]);
                }
                if (switched == 1) /* bitwise manipulation more efficient? */
                {
                    switched = 0;
                }
                else
                {
                    switched = 1;
                }
                PORTG = 0;
            }
            if (i >= 2000)
            {
                i = 0;
            }
        }
    }
}
Looking forward to hearing any suggestions :)
(The graph below shows how these values look; the red box is the area I would like to identify.)
As your sample sequence has glitches (short-lived transients), first try to improve the hardware: change the layout, add decoupling, add filtering, etc.
If that approach fails, then try a median filter [1] of, say, five places, which takes the last five samples, sorts them, and outputs the middle one, so that two samples of a transient have no effect on its output (seven places tolerate three transient samples).
Then apply a computationally efficient exponential averaging low-pass filter [2]:
y(n) = y(n-1) + alpha*[x(n) - y(n-1)]
choosing alpha (1/2^n, so the division is done with right shifts) to yield a time constant [3] shorter than the underlying response (~50 samples) but long enough to still filter out the noise. Increasing the number of effective fractional bits avoids quantizing issues.
With this improved sample sequence, thresholds and a cycle count can be applied to detect quiescent durations.
Additionally, if the end of the quiescent period is always followed by a large, abrupt change, then a sample-delay "array" enables detecting the abrupt change while still having the last of the quiescent samples available for logging.
[1] http://en.wikipedia.org/wiki/Median_filter
[2] http://www.dsprelated.com/showarticle/72.php
[3] http://en.wikipedia.org/wiki/Time_constant
Note: adding code for the above filtering operations will lower the maximum possible sample rate, but the printf can be substituted with something faster.
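A sketch of the two suggested stages ([1] and [2]) in C, assuming 8-bit samples; the insertion sort, the 8.8 fixed-point scaling, and alpha = 1/8 are illustrative choices, not prescribed values:

```c
#include <stdint.h>
#include <string.h>

/* Median-of-5: sorts a copy of the last five samples, returns the middle one,
 * so up to two transient samples cannot affect the output. */
uint8_t median5(const uint8_t last5[5])
{
    uint8_t s[5];
    memcpy(s, last5, 5);
    for (int i = 1; i < 5; i++) {          /* insertion sort, 5 elements */
        uint8_t v = s[i];
        int j = i - 1;
        while (j >= 0 && s[j] > v) { s[j + 1] = s[j]; j--; }
        s[j + 1] = v;
    }
    return s[2];
}

/* Exponential averaging low-pass, y(n) = y(n-1) + alpha*(x(n) - y(n-1)),
 * in 8.8 fixed point with alpha = 1/2^3; extra fractional bits reduce
 * quantizing error. Samples are non-negative, so the right shift is safe. */
static int32_t y_fixed = 0;                /* y(n-1), scaled by 256 */

uint8_t ema_lowpass(uint8_t x)
{
    int32_t x_fixed = (int32_t)x << 8;
    y_fixed += (x_fixed - y_fixed) >> 3;   /* alpha = 1/8, division by shift */
    return (uint8_t)(y_fixed >> 8);
}
```

A raw sample would first pass through `median5` (fed from a 5-entry history buffer) and then through `ema_lowpass` before threshold detection.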
Continuously store the current value and the delta from the previous value.
Note when the delta is decreasing, as the start of weight application to the scale.
Note when the delta is increasing, as the end of weight application to the scale.
Take the X values with a small delta and average them.
BTW, I'm sure this has been done a million times before; a search for "scale PID" or "weight PID" should find a lot of information.
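Those steps could be sketched like this (the delta threshold of 2 counts and the 8-sample run length are made-up values to illustrate the idea; real ones depend on the sensor):

```c
#define STABLE_DELTA 2  /* |delta| at or below this counts as "stable" (assumed) */
#define STABLE_COUNT 8  /* consecutive stable samples needed before averaging (assumed) */

static int prev = -1;   /* previous sample; -1 = none seen yet */
static int run = 0;     /* length of the current stable run */
static long acc = 0;    /* sum of samples in the current stable run */

/* Feed one sample; returns the averaged stable weight once STABLE_COUNT
 * consecutive small-delta samples have been seen, or -1 while unstable. */
int stable_average(int sample)
{
    int delta = (prev < 0) ? 0 : sample - prev;
    prev = sample;
    if (delta <= STABLE_DELTA && delta >= -STABLE_DELTA) {
        acc += sample;
        if (++run >= STABLE_COUNT)
            return (int)(acc / run);
    } else {
        run = 0;   /* large delta: still rising or falling, restart the run */
        acc = 0;
    }
    return -1;
}
```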
Don't forget to use a __delay_ms(XX) call somewhere between readings if you compare with the previous one; if the code loops continuously, the difference at each step will obviously be small.
Looking at your nice graphs, I would say you should look only for the falling edge; it is much more consistent than the leading edge.
In other words: let the samples accumulate and calculate a running average the whole time with a predefined window size, remembering the deviation of the previous values just for reference. Check for a large negative bump in your values (say, an absolute value ten times smaller than the current running average); your running average is then your result. You could go back a little (disregarding the last few values in your average and recalculating) to compensate for the small positive bump visible in your picture before each negative bump. No need for heavy math here; you cannot model reality better than your picture has shown. Just make sure your code detects the end of each and every sample, and sample fast enough that no negative bump is missed (or you will have a big timing error in your data averaging).
And you don't need such large arrays: a running average works better with a smaller window size, giving a smaller residual error in your case when you detect the negative bump.

How to measure serial receive byte speed (e.g. bytes per second)

I'm receiving data byte by byte via serial at a baud rate of 115200. How do I calculate the bytes per second I'm receiving in a C program?
There are only 3 ways to measure bytes actually received per second.
The first way is to keep track of how many bytes you receive in a fixed length of time. For example, each time you receive bytes you might do counter += number_of_bytes, and then every 5 seconds you might do rate = counter/5; counter = 0;.
The second way is to keep track of how much time passed to receive a fixed number of bytes. For example, every time you receive one byte you might do temp = now(); rate = 1/(temp - previous); previous = temp;.
The third way is to combine both of the above. For example, each time you receive bytes you might do temp = now(); rate = number_of_bytes/(temp - previous); previous = temp;.
For all of the above, you end up with individual samples, not an average. To convert the samples into an average you'd need something like average = sum_of_samples / number_of_samples. The best way to do this (e.g. if you want nice, smooth-looking graphs) is to store a lot of samples, replacing the oldest sample with each new one and recalculating the average.
For example:
double sampleData[1024]; /* file scope, so it starts zero-initialized */
int nextSlot = 0;
double average;

void addSample(double value) {
    double sum = 0;
    sampleData[nextSlot] = value;
    nextSlot++;
    if (nextSlot >= 1024) nextSlot = 0;
    for (int i = 0; i < 1024; i++) sum += sampleData[i]; /* was sampleData[1024], an out-of-bounds read */
    average = sum / 1024;
}
Of course the final thing (collecting the samples using one of the 3 methods, then finding the average) would need some fiddling to get the resolution you want.
Assuming you have fairly continuous input, just count the number of bytes you receive, and after some number of characters print out the elapsed time and the character count. You'll need a fairly good timestamp; clock() may be one reasonable source, but the "best" option depends on what system you are on, as does portability (serial comms tend not to be very portable anyway). Otherwise your error will probably be large. Each time you print, reset the count.
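One testable shape for that counting approach is to pass the timestamp in rather than reading the clock inside the function (a sketch: the names are illustrative, and in real use `now` would come from clock() or clock_gettime()):

```c
/* Accumulate received bytes; once at least `window` seconds have elapsed
 * according to the caller-supplied timestamp `now`, return the bytes/sec
 * rate and reset the window; otherwise return -1.0. */
double rate_update(long *count, double *windowStart, double now,
                   long bytesReceived, double window)
{
    *count += bytesReceived;
    if (now - *windowStart >= window) {
        double rate = *count / (now - *windowStart);
        *count = 0;          /* reset the count each time the rate is reported */
        *windowStart = now;
        return rate;
    }
    return -1.0;
}
```

Keeping the clock out of the function also makes it easy to swap clock() for a monotonic wall-clock source later.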
To correct some odd comments in this thread about the theoretical maximum:
Around the time 14400-baud modems arrived in the pre-web world, the popular use of "baud" (look it up) shifted to match emerging digital technologies such as 64 kbit ISDN, and "baud" came to be used to mean bits/second.
For serial data in 8N1 format, a common shorthand notation, each byte is framed by one start bit, eight data bits, no parity bit, and one stop bit: ten bits on the wire per byte.
So the theoretical maximum for 8N1 serial at 115200 baud (bits/sec) is 115200 / (1 + 8 + 1) = 11520 bytes/sec.
Similar (but not identical) to watching your download speeds, the rough ballpark way to work out bytes/sec from bits/sec without a calculator is to divide by 10.
Baud rate is a measure of how many times per second the signal can change. In each of those cycles, depending on the modulation you are using, you can send one or more bits (with no modulation, the bit rate is the same as the baud rate).
Say you are using QPSK modulation, so you can transmit/receive 2 bits per symbol. If you are receiving data at a 115200 baud rate with 2 bits per symbol, you are receiving data at 115200 * 2 = 230400 bps.
