ESP8266 hardware timer, uS to ticks strange macro - c

Can someone explain to me why this macro is written the way it is?
I'm trying to understand how the ESP8266 hardware timer works, since the manufacturer doesn't provide much documentation and their examples are just spaghetti code.
Now a timer is a timer: it just counts (down to 0 on the ESP8266) based on its HW clock, which in this case is APB/4, i.e. 20 MHz.
The uS-to-ticks conversion should be as simple as:
ticks = uS * MHz
Yet the Espressif examples show a macro which basically does the same as above, but branches on the magic number 0x35A... Why?
The magic-number path is equivalent to:
ticks = uS/4 (integer division, rounded down) * MHz * 4
      + uS%4 * MHz
Why is that? Am I missing something?
Original one:
#define US_TO_RTC_TIMER_TICKS(t) \
((t) ? \
(((t) > 0x35A) ? \
(((t)>>2) * ((APB_CLK_FREQ>>4)/250000) + ((t)&0x3) * ((APB_CLK_FREQ>>4)/1000000)) : \
(((t) *(APB_CLK_FREQ>>4)) / 1000000)) : \
0)
Now, breaking it down, it turns out that:
#define US_TO_RTC_TIMER_TICKS(t)
(
(t) ? ( // if t, ok for now
// hmmm... magic number for a timer 0x35A or 858 dec
((t) > 0x35A) ?
( // if greater than magic number
((t)>>2) * ((APB_CLK_FREQ>>4)/250000) + ((t)&0x3) * ((APB_CLK_FREQ>>4)/1000000)
// t/4 [uS] * 80 is equivalent to t * 20, but with the remainder handled separately
// Example:
// 103 uS normally would be 103 * 20 = 2060 (as count)
//
// 103/4 = 25; 25 * 80 = 2000; 3 * 20 = 60; 2000 + 60 = 2060
// hmmm.. correct, but why?
) :
(
// if lower than MAGIC
// t / period = count <=> t[uS] * freq[MHz] = count
//
// t * 20 this is understandable :)
//
((t) *(APB_CLK_FREQ>>4)) / 1000000
)
) : 0
)

This is simply to prevent overflow. If the input value is greater than the (not actually magic) number, then the multiplication by clock frequency / 16 (APB_CLK_FREQ >> 4) can overflow a 32-bit integer. To prevent this you can divide the value by four first (by bit-shifting: '>> 2'), but that on its own would discard the lower two bits. So the lower two bits are separated out and accounted for afterwards.
You can get the magic number by dividing the maximum value of a 32-bit unsigned integer by APB_CLK_FREQ >> 4, i.e.
(2^32 - 1) / ((80 * 10^6) / 16) = 858.99
Rounding down, the maximum value we can handle directly is 858, or 0x35A.
I'd also like to correct you on the ticks front. The hardware timer runs off the 80 MHz APB clock, and the hw_timer source uses a prescaler of 16, though you can modify the source (with some effort: read the register definitions in the ESP8266 technical reference) to use a prescaler of 1 or 256 instead of 16.
So the ticks for a given time would be
ticks = uS * (80 * 10^6 / prescaler) / 10^6 = uS * 80 / prescaler
which with the default prescaler of 16 gives 5 ticks per uS, not 20.
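Putting the whole trick together, here is a minimal, self-contained sketch (plain C; the constants are taken from this thread: 80 MHz APB clock, prescaler 16). It shows the overflow-safe split, not the SDK's exact code:

#include <stdint.h>
#include <stdio.h>

#define APB_CLK_FREQ  80000000UL           /* 80 MHz, as in the SDK headers */
#define TICKS_PER_SEC (APB_CLK_FREQ >> 4)  /* timer clock: 5 MHz with prescaler 16 */

/* Overflow-safe uS -> ticks, same idea as US_TO_RTC_TIMER_TICKS:
 * above 858 uS, t * 5000000 would overflow 32 bits, so split t
 * into (t / 4) and (t % 4) and scale each part separately. */
static uint32_t us_to_ticks(uint32_t t)
{
    if (t == 0)
        return 0;
    if (t > 0x35A)                                    /* 858 = (2^32 - 1) / 5000000 */
        return (t >> 2) * (TICKS_PER_SEC / 250000)    /* (t/4) * 20 ticks per 4 uS */
             + (t & 0x3) * (TICKS_PER_SEC / 1000000); /* (t%4) * 5 ticks per uS */
    return t * TICKS_PER_SEC / 1000000;               /* small t: direct multiply is safe */
}

int main(void)
{
    printf("%lu\n", (unsigned long)us_to_ticks(103));   /* -> 515 ticks */
    printf("%lu\n", (unsigned long)us_to_ticks(1000));  /* -> 5000 ticks */
    return 0;
}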

Related

RMS calculation DC offset

I need to implement an RMS calculation of a sine wave on an MCU (microcontroller, resource constrained). The MCU lacks an FPU (floating point unit), so I would prefer to stay in the integer realm. Captures are discrete, via a 10-bit ADC.
Looking for a solution, I've found this great answer by Edgar Bonet: https://stackoverflow.com/a/28812301/8264292
It seems to completely suit my needs. But I have some questions.
The input is 230 VAC mains, 50 Hz. It's transformed and offset by hardware to become a 0-1 V (peak to peak) sine wave which I can capture with the ADC, getting readings of 0-1023. The hardware is calibrated so that a 260 VRMS input (i.e. about -368 to +368 V peak) becomes a 0-1 V peak-to-peak output. How can I "restore" the original wave's RMS value while staying in the integer realm? Units can vary; mV will do fine too.
My first guess was to subtract 512 from each input sample (the DC offset) and then do the "magic" shift as in Edgar Bonet's answer. But I've realized that's wrong, because the DC offset isn't fixed. Instead the signal is biased to start from 0 V, i.e. a 130 VAC input would produce a 0-500 mV peak-to-peak output (not 250-750 mV, which would have worked so far).
For true RMS with the DC offset removed, I need to subtract the square of the mean from the mean of the squares, as in this formula:
Vrms = sqrt( sum(x²)/N - (sum(x)/N)² )
So I've ended up with the following function:
#define INITIAL 512
#define SAMPLES 1024
#define MAX_V 368UL // Maximum input peak in V ( 260*sqrt(2) )
/* K is defined based on the equation, where 64 = 2^6,
 * i.e. 6 bits added to the 10-bit ADC to make it 16-bit,
 * doubled for the whole -peak to +peak range
 */
#define K (MAX_V*64*2)
uint16_t rms_filter(uint16_t sample)
{
    static int16_t rms = INITIAL;
    static uint32_t sum_squares = 1UL * SAMPLES * INITIAL * INITIAL;
    static uint32_t sum = 1UL * SAMPLES * INITIAL;

    sum_squares -= sum_squares / SAMPLES;
    sum_squares += (uint32_t) sample * sample;
    sum -= sum / SAMPLES;
    sum += sample;

    if (rms == 0) rms = 1; /* do not divide by zero */
    rms = (rms + (((sum_squares / SAMPLES) - (sum / SAMPLES) * (sum / SAMPLES)) / rms)) / 2;
    return rms;
}
...
// Somewhere in a loop
getSample(&sample);
rms = rms_filter(sample);
...
// After getting at least N samples (SAMPLES * X?)
uint16_t vrms = (uint32_t)(rms*K) >> 16;
printf("Converted Vrms = %d V\r\n", vrms);
Does it look fine? Or am I doing something wrong?
How does the SAMPLES (window size?) number relate to F (50 Hz) and my ADC capture rate (samples per second)? I.e., how many samples do I need to feed to rms_filter() before I can get a real RMS value, given my capture speed is X sps? At the least, how do I evaluate the required minimum N of samples?
I did not test your code, but it looks to me like it should work fine.
Personally, I would not have implemented the function this way. I would
instead have removed the DC part of the signal before trying to
compute the RMS value. The DC part can be estimated by sending the raw
signal through a low pass filter. In pseudo-code this would be
rms = sqrt(low_pass(square(x - low_pass(x))))
whereas what you wrote is basically
rms = sqrt(low_pass(square(x)) - square(low_pass(x)))
It shouldn't really make much of a difference though. The first formula,
however, spares you a multiplication. Also, by removing the DC component
before computing the square, you end up multiplying smaller numbers,
which may help in allocating bits for the fixed-point implementation.
In any case, I recommend you test the filter on your computer with
synthetic data before committing it to the MCU.
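To make the first formula concrete, here is a minimal integer sketch of that variant (DC removed before squaring), reusing the question's conventions (10-bit samples, SAMPLES-long single-pole filters). The name rms_filter2 and the seeding choices are mine, not from either post:

#include <stdint.h>

#define SAMPLES 1024
#define INITIAL 512

/* rms = sqrt(low_pass(square(x - low_pass(x))))
 * Both low-pass filters are the same single-pole kind as in the
 * question's code, kept as running sums scaled by SAMPLES. */
uint16_t rms_filter2(uint16_t sample)
{
    static uint32_t dc_sum = 1UL * SAMPLES * INITIAL; /* SAMPLES * low_pass(x) */
    static uint32_t sq_sum = 0;        /* SAMPLES * low_pass((x - dc)^2) */
    static uint32_t rms = 1;

    dc_sum -= dc_sum / SAMPLES;        /* low-pass the raw signal: this is the DC */
    dc_sum += sample;

    int32_t ac = (int32_t)sample - (int32_t)(dc_sum / SAMPLES); /* remove DC first */

    sq_sum -= sq_sum / SAMPLES;        /* low-pass the squared AC part */
    sq_sum += (uint32_t)(ac * ac);

    /* one Newton step per sample towards sqrt(mean square) */
    rms = (rms + (sq_sum / SAMPLES) / rms) / 2;
    if (rms == 0)
        rms = 1;                       /* keep the next division safe */
    return (uint16_t)rms;
}

As suggested above, run this against synthetic sine data on a PC, next to a floating-point reference, before trusting it on the MCU.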
How does the SAMPLES (window size?) number relate to F (50 Hz) and my ADC
capture rate (samples per second)?
The constant SAMPLES controls the cut-off frequency of the low pass
filters. This cut-off should be small enough to almost completely remove
the 50 Hz part of the signal. On the other hand, if the mains
supply is not completely stable, the quantity you are measuring will
slowly vary with time, and you may want your cut-off to be high enough
to capture those variations.
The transfer function of these single-pole low-pass filters is
H(z) = z / (SAMPLES * z + 1 − SAMPLES)
where
z = exp(i 2 π f / f₀),
i is the imaginary unit,
f is the signal frequency and
f₀ is the sampling frequency
If f₀ ≫ f (which is desirable for a good sampling), you can approximate
this by the analog filter:
H(s) = 1/(1 + SAMPLES * s / f₀)
where s = i2πf and the cut-off frequency is f₀/(2π*SAMPLES). The gain
at f = 50 Hz is then
1/sqrt(1 + (2π * SAMPLES * f/f₀)²)
The relevant parameter here is (SAMPLES * f/f₀), which is the number of
periods of the 50 Hz signal that fit inside your sampling window.
If you fit one period, you are letting about 15% of the signal through
the filter. Half as much if you fit two periods, etc.
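As a quick check of that 15% figure, with exactly one period in the window (SAMPLES * f/f₀ = 1):
1/sqrt(1 + (2π·1)²) = 1/sqrt(40.48) ≈ 0.157, i.e. about 15%.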
You could get perfect rejection of the 50 Hz signal if you design a
filter with a notch at that particular frequency. If you don't want
to dig into digital filter design theory, the simplest such filter may
be a simple moving average that averages over a period of exactly
20 ms. This has a non-trivial cost in RAM though, as you have to
keep a full 20 ms worth of samples in a circular buffer.
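For illustration, a minimal sketch of that moving-average notch, assuming a hypothetical sample rate of 3200 sps so that one 20 ms mains period is exactly 64 samples:

#include <stdint.h>

#define PERIOD_SAMPLES 64  /* assumed: 3200 sps * 20 ms = 64 samples per mains period */

/* Moving average over exactly one 50 Hz period: unity gain at DC,
 * perfect nulls at 50 Hz and all of its harmonics. */
static uint16_t buf[PERIOD_SAMPLES];
static uint8_t  idx = 0;
static uint32_t running_sum = 0;

uint16_t moving_average(uint16_t sample)
{
    running_sum -= buf[idx];    /* drop the sample from one period ago */
    running_sum += sample;      /* add the new one */
    buf[idx] = sample;
    idx = (idx + 1) % PERIOD_SAMPLES;
    return (uint16_t)(running_sum / PERIOD_SAMPLES);
}

The 64-entry buffer is the RAM cost mentioned above; a power-of-two length also keeps the modulo and the division cheap on an MCU.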

Calculation with preprocessor-defines yields incorrect value

I am pretty sure this is a common beginner mistake, but I don't know the correct words for finding an existing answer.
I have defined some consecutive preprocessor macros:
#define DURATION 10000
#define NUMPIXELS 60
#define NUMTRANSITION 15
#define UNITS_PER_PIXEL 128
#define R ((NUMPIXELS-1)/2 + NUMTRANSITION) * UNITS_PER_PIXEL
Later I use these values to calculate a value and assign it to a variable. Here are two examples:
// example 1
uint16_t temp; // ranges from 500 to 10000
temp = 255 * (temp - 500) / (10000 - 500);
Here, temp is always 0. Since my guess was (and is) a datatype issue, I also tried uint32_t temp. However, temp was then always 255.
// example 2
uint32_t h = millis() + offset - t0 * R / DURATION;
millis() returns an increasing unsigned long value (milliseconds since start). Here, h increases by a factor of 4 too fast. The same happens with unsigned long h. When I tried a workaround of dividing by 4*DURATION, h was always 0.
Is it a datatype issue? Either way, how can I solve it?
This code works as expected on an Arduino Uno and an ESP32:
#define DURATION 10000
#define NUMPIXELS 60
#define NUMTRANSITION 15
#define UNITS_PER_PIXEL 128
#define R ((NUMPIXELS-1)/2 + NUMTRANSITION) * UNITS_PER_PIXEL

uint32_t t0 = millis();

void setup() {
    // put your setup code here, to run once:
    Serial.begin(115200);
    // Later these values are used to calculate a value and assign it to a variable. Two examples:
    // example 1
    randomSeed(721);
}

void loop() {
    // For UNO / ESP use: uint16_t temp = random(500, 10000); // ranges from 500 to 10000
    // for ATtiny85 (DigiSpark):
    uint32_t temp = random(500, 10000); // ranges from 500 to 10000
    temp = 255 * (temp - 500) / (10000 - 500);
    // Here, temp is always 0. Since my guess was/is an issue with the datatype,
    // I also tried uint32_t temp. However, temp was always 255 in this case.
    // example 2
    Serial.println(temp);
    uint16_t offset = random(2000, 5000);
    uint32_t h = millis() + offset - t0 * R / DURATION;
    Serial.println(h);
    delay(5000); // Only to prevent too much Serial data, never use in production
}
Environment: Arduino IDE 1.8.12 / AVR core 1.8.2, and ESP32 core 1.0.4. What hardware are you compiling for? Try the test program; it does (at least on the tested hardware) what it should.
EDIT: For reference, the OP uses an ATtiny85 (Digispark), where variable-size definitions are more critical than on UNO/ESP, and instead of Serial you would use SerialUSB. Tip for future support requesters: always state your environment (HW & SW) with microcontrollers, because of possible hardware-specific issues.
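For completeness, a sketch of the two fixes that usually apply here (assuming the 16-bit int on the ATtiny85 is the culprit; this is not a confirmed resolution from the OP): force 32-bit arithmetic with a UL literal, and fully parenthesize the R macro:

// Fully parenthesized: without outer parentheses, R does not behave
// as a single value when dropped into a larger expression.
#define R ((((NUMPIXELS) - 1) / 2 + (NUMTRANSITION)) * (UNITS_PER_PIXEL))

// 255UL forces the whole expression into 32-bit arithmetic, so the
// intermediate 255 * (temp - 500) (up to ~2.4 million) no longer
// overflows a 16-bit int.
temp = 255UL * (temp - 500) / (10000 - 500);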

Calculation in C for a C2000 device

I'm having some trouble with my C code. I have an ADC which will be used to decide whether to shut down (trip zone) the PWM I'm using. But my calculation doesn't seem to work as intended, because the ADC shuts down the PWM at the wrong voltage levels. I initialize my variables as:
float32 current = 5;
Uint16 shutdown = 0;
and then I calculate as:
// Save the ADC input to a variable
adc_info->adc_result0 = AdcRegs.ADCRESULT0 >> 4; // shift by 4 because ADCRESULT0 is effectively a 12-bit result in a 16-bit (Uint16) register
current = -3.462 * ((adc_info->adc_result0 / 1365) - 2.8);

// Evaluate if too high or too low
if (current > 9 || current < 1)
{
    shutdown = 1;
}
else
{
    shutdown = 0;
}
after which I use this if statement:
if (shutdown == 1)
{
    EALLOW;                       // EALLOW protected register
    EPwm1Regs.TZFRC.bit.OST = 1;  // Force a one-shot trip-zone interrupt
    EDIS;                         // end write to EALLOW protected register
}
So I want to trip the PWM if the current is above 9 or below 1, which should coincide with ADC results of <273 (0x111) and >3428 (0xD64) respectively. These ADC values correspond to 0.2 V and 2.51 V respectively. The ADC measures with 12-bit accuracy between 0 and 3 V.
However, this is not the case. Instead, the trip zone fires at approximately 1 V and 2.97 V. So what am I doing wrong?
adc_info->adc_result0/1365
Did you do integer division here while assuming float? With integer division, adc_info->adc_result0 / 1365 can only take the values 0, 1, 2 or 3, so current moves in steps of 3.462; the steps at ADC counts 1365 (about 1 V) and 4095 (about 3 V) match exactly the levels where you see the trip zone fire.
Try this fix:
adc_info->adc_result0/1365.0
Also, @pmg's suggestion is good. Why spend cycles calculating the voltage when you can compare the ADC value directly against the known bounds?
if (adc_info->adc_result0 < 273 || adc_info->adc_result0 > 3428)
{
    shutdown = 1;
}
else
{
    shutdown = 0;
}
If you don't want to hardcode the calculated bounds (which is totally understandable), then define them as calculated from the values you would want to hardcode literally:
#define VOLTAGE_MIN 0.2
#define VOLTAGE_MAX 2.51
#define AREF 3.0
#define ADC_PER_VOLT (4096 / AREF)
#define ADC_MIN (VOLTAGE_MIN * ADC_PER_VOLT) /* 273 */
#define ADC_MAX (VOLTAGE_MAX * ADC_PER_VOLT) /* 3427 */
/* ... */
shutdown = (adcresult < ADC_MIN || adcresult > ADC_MAX) ? 1 : 0;
/* ... */
Once you've made sure to grasp the integer division rules in C, consider a small addition to your code style: always write constant coefficients and divisors with a decimal point (so they get a floating-point type), e.g. 10.0 instead of 10, unless you specifically mean integer division with truncation. Sometimes it is also a good idea to give float literals an explicit precision suffix.
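A minimal illustration of the difference, with a value assumed for the demo:

uint16_t adc = 3000;
float wrong = adc / 1365;     /* integer division happens first: 2.0 */
float right = adc / 1365.0f;  /* float division: approximately 2.198 */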

Division in C returns 0... sometimes

I'm trying to make a simple RPM meter using an ATmega328.
I have an encoder on the motor which gives 306 interrupts per wheel rotation (the encoder has 3 spokes and interrupts on both rising and falling edges, so 6 transitions per motor revolution; the motor is geared 51:1, hence 6 * 51 = 306 interrupts per wheel rotation). I am using a timer interrupting every 1 ms, but inside the interrupt it is set to recalculate every 1 second.
There seem to be two problems:
1) The RPM never goes below 60; instead it is either 0 or >= 60.
2) Reducing the time interval causes it to always be 0 (as far as I can tell).
Here is the code:
int main(void)
{
    while (1) {
        int temprpm = leftRPM;
        printf("Revs: %d \n", temprpm);
        _delay_ms(50);
    }
    return 0;
}

ISR (INT0_vect)
{
    ticksM1++;
}

ISR (TIMER0_COMPA_vect)
{
    counter++;
    if (counter == 1000) {
        int tempticks = ticksM1;
        leftRPM = ((tempticks - lastM1) / 306) * 1 * 60;
        lastM1 = tempticks;
        counter = 0;
    }
}
Anything not declared in that code is declared globally, as an int; ticksM1 is also volatile.
The macros are AVR macros for the interrupts.
The multiplication by 1 in the leftRPM calculation represents the time interval; ideally I want to use 1 ms without the if statement, in which case the 1 would become 1000.
For a speed between 60 and 120 RPM, the result of ((tempticks - lastM1)/306) will be 1, and below 60 RPM it will be zero. Your output will always be a multiple of 60.
The first improvement I would suggest is not to perform expensive arithmetic in the ISR. It is unnecessary: store the speed in raw counts per second, and convert to RPM only for display.
Second, perform the multiplication before the division to avoid unnecessarily discarding information. Then, for example, at 60 RPM (306 CPS) you have (306 * 60) / 306 == 60. Even as low as about 1 RPM you get (6 * 60) / 306 == 1. In fact this gives you a potential resolution of approximately 0.2 RPM, as opposed to 60 RPM! To keep the parameters easily maintainable, I recommend using symbolic constants rather than magic numbers.
#define ENCODER_COUNTS_PER_REV 306
#define MILLISEC_PER_SAMPLE 1000
#define SAMPLES_PER_MINUTE ((60 * 1000) / MILLISEC_PER_SAMPLE)

ISR (TIMER0_COMPA_vect)
{
    counter++;
    if (counter == MILLISEC_PER_SAMPLE)
    {
        int tempticks = ticksM1;
        leftCPS = tempticks - lastM1;
        lastM1 = tempticks;
        counter = 0;
    }
}
Then in main():
int temprpm = (leftCPS * SAMPLES_PER_MINUTE) / ENCODER_COUNTS_PER_REV ;
If you want better than 1 RPM resolution you might consider
int temprpm_x10 = (leftCPS * SAMPLES_PER_MINUTE) / (ENCODER_COUNTS_PER_REV / 10) ;
then displaying:
printf( "%d.%d", temprpm_x10 / 10, temprpm_x10 % 10 ) ;
Given the potential resolution of 0.2 RPM with this method, a higher-resolution display is unnecessary, though you could use a moving average to improve resolution at the expense of some display lag.
Alternatively, now that the calculation of RPM is no longer in the ISR, you might afford a floating-point operation:
float temprpm = ((float)leftCPS * (float)SAMPLES_PER_MINUTE ) / (float)ENCODER_COUNTS_PER_REV ;
printf( "%f", temprpm ) ;
Another potential issue is that ticksM1++, tempticks = ticksM1, and the reading of leftRPM (or leftCPS in my solution) are not atomic operations, and can result in an incorrect value being read if interrupt nesting is supported (and, for the access from outside the interrupt context, even if it is not). If the maximum rate is below 256 CPS (about 50 RPM) you might get away with an atomic 8-bit counter; alternatively you can reduce your sampling period to ensure the count is always less than 256. Failing that, the simplest solution is to disable interrupts while reading or updating non-atomic variables shared between interrupt and thread contexts.
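On AVR, the usual way to implement that last suggestion is an atomic block. A minimal sketch using avr-libc's <util/atomic.h> (assuming leftCPS is a volatile int updated in the ISR, as above):

#include <util/atomic.h>

volatile int leftCPS;  /* written in TIMER0_COMPA_vect */

int read_left_cps(void)
{
    int copy;
    ATOMIC_BLOCK(ATOMIC_RESTORESTATE)  /* interrupts disabled while copying */
    {
        copy = leftCPS;
    }
    return copy;                       /* a consistent multi-byte snapshot */
}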
It's integer division. You would probably get better results with something like this:
leftRPM = ((tempticks - lastM1)/6);

How does usecs_to_jiffies transform usecs to jiffies if jiffies have a resolution of msecs?

from here:
The value of HZ varies across kernel versions and
hardware platforms. On i386
the situation is as follows: on kernels up to and including 2.4.x, HZ was 100,
giving a jiffy value of 0.01 seconds; starting with 2.6.0, HZ was raised to
1000, giving a jiffy of 0.001 seconds. Since kernel 2.6.13, the HZ value is a
kernel configuration parameter and can be 100, 250 (the default) or 1000,
yielding a jiffies value of, respectively, 0.01, 0.004, or 0.001 seconds.
Since kernel 2.6.20, a further frequency is available: 300, a number that
divides evenly for the common video frame rates (PAL, 25 HZ; NTSC, 30 HZ).
So how can I transform 5 usec to jiffies?
extern unsigned long usecs_to_jiffies(const unsigned int u);
It seems useless, since the jiffies resolution is not high enough to measure microseconds.
When in doubt, read the code!
Here it is (a version of it can be found here):
unsigned long usecs_to_jiffies(const unsigned int u)
{
    if (u > jiffies_to_usecs(MAX_JIFFY_OFFSET))
        return MAX_JIFFY_OFFSET;
#if HZ <= USEC_PER_SEC && !(USEC_PER_SEC % HZ)
    return (u + (USEC_PER_SEC / HZ) - 1) / (USEC_PER_SEC / HZ);
#elif HZ > USEC_PER_SEC && !(HZ % USEC_PER_SEC)
    return u * (HZ / USEC_PER_SEC);
#else
    return (USEC_TO_HZ_MUL32 * u + USEC_TO_HZ_ADJ32)
        >> USEC_TO_HZ_SHR32;
#endif
}
So, it does some stuff to check if there is a shortcut, and if nothing else works, figures it out with some 64-bit math.
But 5 usec will be one jiffy, no matter which branch of the code runs.
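As a worked example, take the common HZ = 250 configuration, so USEC_PER_SEC / HZ = 4000 and the first branch is taken. A tiny standalone check (the function name is made up for the demo):

#include <stdio.h>

#define HZ 250            /* assumed kernel configuration */
#define USEC_PER_SEC 1000000

/* The HZ <= USEC_PER_SEC && !(USEC_PER_SEC % HZ) branch: divide, rounding up */
static unsigned long usecs_to_jiffies_demo(unsigned int u)
{
    return (u + (USEC_PER_SEC / HZ) - 1) / (USEC_PER_SEC / HZ);
}

int main(void)
{
    /* (5 + 3999) / 4000 == 1: any u from 1 to 4000 rounds up to one jiffy */
    printf("%lu\n", usecs_to_jiffies_demo(5));
    return 0;
}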
