Changing the timer frequency dynamically

I'm trying to modify the frequency of a PWM timer at runtime, but I can't tell from the reference manual how exactly the counter overflow is triggered.
The output should be a short pulse of constant width (constant TIMx_CCRx) with a variable frequency (TIMx_ARR) in upcounting, edge-aligned mode (p. 354 and 372 in the reference manual).
I want to be able to adjust the frequency faster than the minimum achievable output frequency would allow.
E.g. the tick time (CK_CNT) is 1 ms, the maximum period is 1000 ms, and I want to be able to update the ARR value every 100 ms.
When the new ARR value is higher than the current counter register value the timer should continue counting up.
When the new ARR value is smaller than the current counter register value the timer should create a counter overflow and restart from 0.
To be able to update the auto-reload register every 100 ms I disabled ARR-preloading (ARPE=0).
What would happen when I write a value into the ARR register smaller than the current counter register value?
There is only an example for when the new ARR value is bigger than the counter value on p.356.
Would a counter overflow be triggered and the timer restart from 0?
Do I have to create an update event (UEV) manually?
Do I have to check the counter value and restart the timer manually if the new ARR value would be lower?

I didn't find anything in the reference manual, but I finally got to test the behaviour on actual hardware:
Setting ARR to a value lower than the current counter value doesn't create an interrupt/update event/etc.; the timer keeps counting up.
Manually creating an update event automatically restarts the counter.
Setting ARR to the current counter value creates an update event.
I don't want to restart the counter when setting the ARR value to something higher than the counter value.
The solution is to create a manual update event when the new ARR value is lower than the counter value.
There shouldn't be a problem with the counter reaching ARR while setting it, because that automatically creates an update event.
This seems to work as expected:
void update_arr(TIM_HandleTypeDef *htim, uint16_t arr)
{
    __HAL_TIM_SET_AUTORELOAD(htim, arr);
    /* If the counter is already at or past the new ARR, force an
       update event so the counter restarts from 0. */
    if (__HAL_TIM_GET_COUNTER(htim) >= __HAL_TIM_GET_AUTORELOAD(htim)) {
        htim->Instance->EGR |= TIM_EGR_UG;
    }
}

Related

Protecting against overflow in a delay function

In a project of mine I have a small delay function that I wrote myself using a timer peripheral of my MCU:
static void delay100Us(void)
{
    uint64_t ctr = TIMER_read(0);  /* counter with 10 ns resolution */
    uint64_t ctr2 = ctr + 10000;   /* 10000 ticks = 100 microseconds */
    while (ctr <= ctr2)
    {
        ctr = TIMER_read(0);
    }
}
The counter is a freerunning hw counter with 10ns resolution so I wrote that function as to give approximately 100us delay.
I think this should work in principle, but there is a corner case: when the timer is less than 10000 ticks from overflowing, ctr2 gets assigned a value that ctr can never reach, so I would end up stuck in an infinite loop.
I need to generate a delay using this timer in my project so I need to somehow make sure that I always get the same delay(100us) while at the same time protect myself from getting stuck there.
Is there any way I can do this, or is this just a limitation that I can't get past?
Thank you!
Edit:
ctr_start = TimerRead(); //get initial value for the counter
interval = TimerRead() - ctr_start;
while (interval <= 10000)
{
    interval = (TimerRead() - ctr_start + countersize) % countersize;
}
Where countersize = 0xFFFFFFFFFFFFFFFF.
It can be dangerous to wait for a specific timer value in case an interrupt happens at just that moment and you miss the required count. So it is better to wait until the counter has reached at least the target value. But as noticed, comparing the timer with a target value creates a problem when the target is lower than the initial value.
One way to avoid this problem is to consider the interval that has elapsed with unsigned variables and arithmetic. Their behaviour is well defined when values wrap.
A hardware counter is almost invariably of size 8, 16, 32 or 64 bits, so choose a variable type to suit. Suppose the counter is 32-bit:
void delay(uint32_t period)
{
    uint32_t mark = TIMER_read(0);
    uint32_t interval;

    do {
        interval = TIMER_read(0) - mark; // wrap-around is well defined for unsigned
    } while (interval < period);
}
Obviously, the required period must be less than the counter's period. If not, either scale the timer's clock, or use another method (such as a counter maintained by interrupt).
Sometimes a one-shot timer is used to count down the required period, but a free-running counter is easy to use, and a one-shot timer can't be shared with another process at the same time.

Reducing Firebase latency by only sending changed values?

I am using a Wemos D1 Mini (Arduino) to send sensor data to Firebase. It is one value I'm sending. I found that this slows the program down, so the sensor can't collect data as fast as the data is being sent (which is kind of obvious).
Anyhow, I want to send the value to Firebase only when this value changed its property. It is an int value, but I'm not sure how to go around this. Should I use a listener? This is a portion of my code:
int n = 0; // will be used to store the count
Firebase.setInt("Reps/Value", n); // sends value to fb
delay(100); // wait 100 ms and scan again
I was hoping that the sensor could scan every second, which it does. But at this rate (pun intended) the value is being pushed every second to FB. This slows down the scanning to every 3 seconds. How can I only use the firebaseSetInt method when n changes its value?
You can check after every new reading whether the value has changed by adding a simple conditional statement.
int n = 0;      // will be used to store the count
int n_old = -1; // last value sent to fb (initialised so the first reading is sent)
if (n != n_old) { // checks whether the value has changed
    Firebase.setInt("Reps/Value", n); // sends value to fb
    n_old = n; // update the old value to the last one sent
}
delay(100); // wait 100 ms and scan again
Or if you want to go for a tolerance approach, you can do something like this:
int n = 0;         // will be used to store the count
int n_old = 0;     // old value saved to fb
int tolerance = 3; // tolerance up to 3%
// compare the change against the mean of both values; compute in floating
// point, otherwise the integer division truncates the ratio to 0
// (guard against n + n_old == 0 in real code)
if (abs(n - n_old) * 100.0 / ((n + n_old) / 2.0) > tolerance) {
    Firebase.setInt("Reps/Value", n); // sends value to fb
    n_old = n; // update the old value to the last one sent
}
delay(100); // wait 100 ms and scan again
Coming from professional use of remote databases: you should go for a moving (gliding) average approach. Create a circular buffer with, say, 30 sensor values and calculate an average over it. As long as a value is within +/- 3% of the average recorded at time0, you do not update. If the value is above or below that band, you send it to Firebase and record a new time0 average. Depending on your precision needs, this eases the stress on the systems. Imho only life savers like current breakers or flow cutting (liquids) have to be real time; hobby applications like measuring wind speed, heating etc. are well served with 20 - 60 s intervals.
The event listener is, by the way, this same approach: only do something if the value is out of the norm. If you have a fixed target value as a reference, it's much easier to check for the +/- difference. If the pricing of FB changes, this will be an issue for devs - so plan ahead.

Average from error prone measurement samples without buffering

I have a µC which measures temperature from a sensor with an ADC. Due to various circumstances it can happen that the reading is 0 (-30°C) or an impossibly large value (500-1500°C). I can't fix the causes of these bad readings (time-critical ISRs and sometimes bad wiring), so I have to fix it with a clever piece of code.
I've come up with this (the code gets called OVERSAMPLENR times in an ISR):
#define OVERSAMPLENR 16        //read the value 16 times
#define TEMP_VALID_CHANGE 0.15 //15% change in reading is possible

//float raw_temp_bed_value = <sum of all readings>;
//ADC = <AVR ADC reading macro>;
if (temp_count > 1) { //temp_count = number of samples read, gets increased elsewhere
    float avgRaw = raw_temp_bed_value / temp_count;
    float diff = (avgRaw > ADC ? avgRaw - ADC : ADC - avgRaw) / (avgRaw == 0 ? 1 : avgRaw); //pulled out to shorten the line for SO
    //cast to float, otherwise the integer division always yields 0 for temp_count >= 1
    if (diff > TEMP_VALID_CHANGE * ((float)(OVERSAMPLENR - temp_count) / OVERSAMPLENR)) //subsequent readings have a smaller tolerance
        raw_temp_bed_value += avgRaw;
    else
        raw_temp_bed_value += ADC;
} else {
    raw_temp_bed_value = ADC;
}
Where raw_temp_bed_value is a static global and gets read and processed later, when the ISR has fired 16 times.
As you can see, I check whether the difference between the current average and the new reading is less than 15%. If so, I accept the reading; if not, I reject it and add the current average instead.
But this breaks horribly if the first reading is something impossible.
One solution I thought of is:
In the last line raw_temp_bed_value is reset to the first ADC reading. It would be better to reset this to raw_temp_bed_value/OVERSAMPLENR, so I don't run into a "first reading error".
Do you have any better solutions? I thought of some approaches featuring a moving average, or using the average of a moving average, but these would require additional arrays/RAM/cycles, which we want to avoid.
I've often used something I call "rate of change" for this kind of sampling. Use a variable that represents how many samples it takes to reach a certain value, like 20. Then keep adding the sample difference, divided by the rate of change, to the filtered value. You can still use a threshold to filter out unlikely values.
float RateOfChange = 20;        // samples it takes to follow a change
float PreviousAdcValue = 0;
float filtered = FILTER_PRESET; // preset to something sensible (see below)

while (1)
{
    //isr stores the latest reading in AdcValue here
    filtered = filtered + ((AdcValue - PreviousAdcValue) / RateOfChange);
    PreviousAdcValue = AdcValue;
    sleep();
}
Please note that this isn't exactly a low-pass filter; it responds quicker, and the last value added has the most significance. But it will not change much if a single value shoots out too far, depending on the rate of change.
You can also preset the filtered value to something sensible. This prevents wild startup behavior.
It takes up to RateOfChange samples to reach a stable value. You may want to make sure the filtered value isn't used before that, for example by counting the number of samples taken: if the counter is lower than RateOfChange, skip the temperature control processing.
For a more advanced (temperature) control routine, I highly recommend looking into PID control loops. These add a plethora of functionality to get a fast, stable response, keep something at a certain temperature efficiently, and keep oscillations to a minimum. I've used the one from the Marlin firmware in my own projects and it works quite well.

MSP430 TI while loop duration

I am programming a simple program on the TI MSP430.
I have a counter set up in C:
while (P1IN & BIT1)
{
counter++;
}
So when the pin is high, it counts up by one. I am wondering how long one loop iteration takes.
I need to do some calculations with counter and need the duration of one while loop iteration. In other words, if counter = 1234 in the end, how can I convert that to seconds?
How can I get this? Should I export the ASM code and work out how long each instruction takes? This seems tedious.
You can try:
1. Toggle any free port pin at the start and end of the loop and measure the duration on an oscilloscope (if you have the necessary equipment).
2. Look into the disassembly listing (ASM code), read the instruction set manual, and calculate the loop time from the CPU clock.

How to get the precise time consumption of a function by using the SysTick timer?

As far as I know, the SysTick timer is a 24-bit down counter. I need to know the precise time consumed by the memcpy function. Suppose I set SysTick->LOAD = 511; then there are two cases, as described below.
Define:
1. One cycle means counting from 511 down to 0 completely.
2. Two or more cycles mean 511 to 0, 511 to 0, ..., 511 to 0, 511 to i, with i in [0, 511].
Case 1: The offset is small or normal, so the memcpy finishes within one cycle.
Case 2: The offset is very big, e.g. 16K, so the memcpy takes two or more cycles.
How do I get the "cycles" ?
See the documentation
SysTick has a 24-bit counter, with the reload register LOAD at 0xE000E014 and the current value register VAL at 0xE000E018. There is nothing easier than: LOAD = 0x00FFFFFF; VAL = 0; call_the_func(); unsigned diff = LOAD - VAL; (writing any value to VAL clears the counter, so it restarts from LOAD).
Enable the interrupt and have SysTick_Handler() increment some other variable to get a bigger range if needed. You can then calculate the total as my_global * (LOAD + 1) + (LOAD - VAL).
I use SysTick for 10 ms timing - just incrementing a global counter every 10 ms (from the interrupt) and observing the difference from the value stored when the measurement started ;)
