Time scheduler in C on STM32

I would like to design a time scheduler that performs specific actions at specific times, using a timer and a state machine.
I would like to do something like this:
As shown in the picture above, I need to issue a request every 4 ms. That request has a turnaround delay of 1 ms, and within the remaining 3 ms of the 4 ms window I have to make 6 other requests, one every 500 µs. This cycle should repeat continuously.
What would be the best way to do this in C on an STM32? Right now I have this:
bool TIM2_Init(uint32_t timeout)
{
    bool retVal = false;

    // Enable the TIM2 clock
    __HAL_RCC_TIM2_CLK_ENABLE();

    // Configure the TIM2 handle: prescale 240 MHz down to a 1 MHz tick, so 'timeout' is in µs
    htim2.Instance = TIM2;
    htim2.Init.Prescaler = (uint32_t)(240000000 / 1000000) - 1;
    htim2.Init.Period = timeout - 1;
    htim2.Init.CounterMode = TIM_COUNTERMODE_UP;
    htim2.Init.RepetitionCounter = 0;
    htim2.Init.AutoReloadPreload = TIM_AUTORELOAD_PRELOAD_DISABLE;

    if (HAL_OK == HAL_TIM_Base_Init(&htim2))
    {
        // Enable update interrupt for TIM2
        __HAL_TIM_ENABLE_IT(&htim2, TIM_IT_UPDATE);
        // Enable the TIM2 interrupt in the NVIC
        HAL_NVIC_EnableIRQ(TIM2_IRQn);
    }

    // Start the TIM2 counter
    if (HAL_OK == HAL_TIM_Base_Start(&htim2))
    {
        retVal = true;
    }

    return retVal;
}
void changeTIM2Timeout(uint32_t timeout)
{
    // Change the TIM2 period
    TIM2->ARR = timeout - 1;
}
I created a timer with a timeout parameter to initialize it. Then I have a function to change the period, because the response timeout is sometimes 500 µs and sometimes 1 ms.
Where I'm stuck is getting the whole thing working: the first request is issued, then 1 ms later it is followed by the second request (2), then (3), and so on. I think what I need help with is the state machine that drives the different requests from the timer.

You can create a queue data structure for the tasks and store each task's time there. If a task is repetitive, it can be appended to the end of the queue again once its current run is done and you fetch the next task. The queue can have a fixed or a dynamic maximum size; that's up to you.
When a task is fetched, the timer is reprogrammed with the delay associated with that task, so there are no wasted ticks. With a bit of effort this also gives you tickless idle, if you need it. You can also add more parameters to the queue entries if you want to customize scheduling somehow, such as a priority level.
Edit: this obviously makes the timer unusable as a time-measurement device for any other purpose, because you change its settings all the time.
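For illustration, a minimal sketch of that queue idea, assuming the changeTIM2Timeout() from the question, a 1 MHz timer tick, and hypothetical Task/enqueue()/scheduler_tick() names (scheduler_tick() would be called from the TIM2 update interrupt):
#include <stdint.h>
#include <stdbool.h>

extern void changeTIM2Timeout(uint32_t timeout);   // from the question above

typedef struct {
    void     (*action)(void);   // work to run when this entry comes due
    uint32_t delay_us;          // time to wait before running it (1 MHz tick assumed)
    bool     repeat;            // re-enqueue after running?
} Task;

#define QUEUE_SIZE 16u          // must be a power of two for the masking below

static Task     queue[QUEUE_SIZE];
static uint32_t head, tail;     // head = next to run, tail = next free slot

static bool enqueue(Task t)
{
    if (((tail + 1u) & (QUEUE_SIZE - 1u)) == head)
        return false;           // queue full
    queue[tail] = t;
    tail = (tail + 1u) & (QUEUE_SIZE - 1u);
    return true;
}

// Called from the TIM2 update interrupt: run the task that just came due,
// re-enqueue it if it repeats, then reload the timer for the next entry.
void scheduler_tick(void)
{
    if (head == tail)
        return;                 // nothing scheduled

    Task current = queue[head];
    head = (head + 1u) & (QUEUE_SIZE - 1u);

    current.action();
    if (current.repeat)
        enqueue(current);

    if (head != tail)
        changeTIM2Timeout(queue[head].delay_us);
}
To start the chain, fill the queue and call changeTIM2Timeout(queue[head].delay_us) once; after that the update interrupt keeps it going. For the schedule in the question, the repeating entries would be the first sub-request with a 1 ms delay (covering the turnaround), then the other five sub-requests and the main request with 500 µs each, which adds up to one 4 ms cycle.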

I would configure the timer with a 500 µs auto-reload period. Then just keep an index from 0 to 7 that increments and wraps. Because 8 is a power of two, you can use an unsigned number and let the modulo handle the wrapping.
void do_stuff(unsigned idx) {
    switch (idx) {
    case 0:
        request(1);         // the 4 ms request
        break;
    case 1:
        break;              // 1 ms turnaround window, nothing to do yet
    case 2: case 3: case 4: case 5: case 6: case 7:
        request(idx);       // the six 500 µs requests
        break;
    }
}

void TIM2_Init(void) {
    // generated by STM32CubeMX: auto-reload set for a 500 µs update event
}

void TIM2_IRQHandler(void) {
    static unsigned idx = 0;
    __HAL_TIM_CLEAR_IT(&htim2, TIM_IT_UPDATE);  // clear the update flag
    do_stuff(idx++ % 8);
}

Related

Avoid Input Latency while reading input from keyboard loop

I'm just starting with STM32 and am trying to write some code that accepts input from user keypresses and varies the blink rate of 2 LEDs, or blinks them alternately, or at the same time.
So, I have 3 keys (2 keys for inc/dec, and one key for mode) and 2 LEDs.
The loop section looks something like this:
/* USER CODE BEGIN WHILE */
const uint32_t step = 50u;
const uint32_t max_interval = 500u;
uint32_t interval = max_interval;
short mode = 0;
while (1)
{
    volatile GPIO_PinState wakeup = HAL_GPIO_ReadPin(WAKEUP_GPIO_Port, WAKEUP_Pin);
    mode = (wakeup == GPIO_PIN_RESET ? 0 : 1);
    if (mode == 1) {
        HAL_GPIO_WritePin(LED2_GPIO_Port, LED2_Pin, GPIO_PIN_RESET);
    }
    HAL_GPIO_TogglePin(LED2_GPIO_Port, LED2_Pin);
    HAL_GPIO_TogglePin(LED2_GPIO_Port, LED3_Pin);
    volatile GPIO_PinState key0 = HAL_GPIO_ReadPin(KEY0_GPIO_Port, KEY0_Pin);
    volatile GPIO_PinState key1 = HAL_GPIO_ReadPin(KEY1_GPIO_Port, KEY1_Pin);
    if (key0 == GPIO_PIN_RESET)
        interval -= (interval == step ? 0 : step);
    if (key1 == GPIO_PIN_RESET)
        interval += (interval == max_interval ? 0 : step);
    HAL_Delay(interval);
    /* USER CODE END WHILE */
    /* USER CODE BEGIN 3 */
}
Now, depending on HAL_Delay(interval), the loop only gets a chance to check the key inputs, which control the blink rate, once per delay. Is there some way I can decouple the key-input latency from the blink delay? The microcontroller in question is the STM32F407VET6, and I'm using CubeIDE. A single-threaded solution would be nice.
Some delay for key input is unavoidable unless you have hardware-debounced keys. Normally, when a key is pressed, the transition is not a single edge but a burst of level changes until the mechanics settle. One way to do the debouncing is to run a periodic timer interrupt (e.g. at a 1 kHz rate) and sample the key level on every tick: if the level is high you count a counter up, if it is low you count it down, and you use two thresholds on the count (hysteresis) to decide when to treat it as a button-down or button-up transition. Since all of that runs in the interrupt, you can push the resulting key event into a FIFO (queue), and in your "main thread" you pull the events at convenient occasions.
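For illustration, a sketch of that scheme for a single key, assuming a hypothetical key_is_pressed_raw() that reads the bouncy pin level and a debounce_tick() called from the 1 kHz timer interrupt; pollKeyEvent() is a non-blocking cousin of the nextKeyEvent() used in the loop further below:
#include <stdint.h>
#include <stdbool.h>

#define PRESS_THRESHOLD    8u   // ~8 ms of stable "pressed" level
#define RELEASE_THRESHOLD  2u   // lower threshold for release (hysteresis)

typedef enum { BUTTON1_DOWN, BUTTON1_UP } KeyEvent_t;

// Tiny single-producer/single-consumer FIFO: the ISR writes, the main loop reads.
#define FIFO_SIZE 8u
static volatile KeyEvent_t fifo[FIFO_SIZE];
static volatile uint8_t fifo_head, fifo_tail;

static void push_event(KeyEvent_t e)
{
    uint8_t next = (uint8_t)((fifo_head + 1u) % FIFO_SIZE);
    if (next != fifo_tail) {            // drop the event if the FIFO is full
        fifo[fifo_head] = e;
        fifo_head = next;
    }
}

bool pollKeyEvent(KeyEvent_t *e)        // called from the main loop
{
    if (fifo_tail == fifo_head)
        return false;                   // no event pending
    *e = fifo[fifo_tail];
    fifo_tail = (uint8_t)((fifo_tail + 1u) % FIFO_SIZE);
    return true;
}

extern bool key_is_pressed_raw(void);   // hypothetical: reads the raw, bouncy pin level

// Call this from the 1 kHz periodic timer interrupt.
void debounce_tick(void)
{
    static uint8_t count = 0;
    static bool pressed = false;

    if (key_is_pressed_raw()) {
        if (count < PRESS_THRESHOLD) count++;
    } else {
        if (count > 0u) count--;
    }

    if (!pressed && count >= PRESS_THRESHOLD) {
        pressed = true;
        push_event(BUTTON1_DOWN);
    } else if (pressed && count <= RELEASE_THRESHOLD) {
        pressed = false;
        push_event(BUTTON1_UP);
    }
}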
If your hardware also has a programmable timer, you could use it to toggle the LED output pins, and your main loop would then simply be along the lines of the code below (a sketch of the timer side follows after it):
void mainloop() {
    while (true) {
        KeyEvent_t key;
        nextKeyEvent(&key);
        switch (key) {
        case BUTTON1_DOWN:
            reduceBlinkRate();
            break;
        case BUTTON2_DOWN:
            increaseBlinkRate();
            break;
        default:
            // skillfully doing nothing (e.g. to save power)
            break;
        }
    }
}
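For the timer side, a sketch of one way to do it with HAL on the F407: toggle the LED from the timer's period-elapsed interrupt and let reduceBlinkRate()/increaseBlinkRate() rewrite the auto-reload register. TIM4, the 1 kHz counter clock, and the STEP/MIN/MAX values are assumptions, not part of the question; hardware output-compare toggle mode would be another option.
#include "stm32f4xx_hal.h"

// Assumption: TIM4 is set up in CubeMX with a prescaler giving a 1 kHz counter
// clock, so the auto-reload value below is the blink half-period in milliseconds.
extern TIM_HandleTypeDef htim4;

#define STEP          50u
#define MIN_INTERVAL  50u
#define MAX_INTERVAL  500u

static uint32_t interval = MAX_INTERVAL;

static void applyInterval(void)
{
    __HAL_TIM_SET_AUTORELOAD(&htim4, interval - 1u);
}

void reduceBlinkRate(void)      // slower blinking: longer period
{
    if (interval < MAX_INTERVAL) interval += STEP;
    applyInterval();
}

void increaseBlinkRate(void)    // faster blinking: shorter period
{
    if (interval > MIN_INTERVAL) interval -= STEP;
    applyInterval();
}

// The LED is toggled here instead of in the main loop.
void HAL_TIM_PeriodElapsedCallback(TIM_HandleTypeDef *htim)
{
    if (htim->Instance == TIM4) {
        HAL_GPIO_TogglePin(LED2_GPIO_Port, LED2_Pin);
    }
}
Start it once with HAL_TIM_Base_Start_IT(&htim4) before entering mainloop().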

ATSAML21 Hardware Timer

I've been trying to configure and run a hardware timer on a SAML21 MCU to generate a 100 ms period, i.e. the ISR is supposed to fire every 100 ms. But after starting the timer the ISR fires every 10 µs, and changing the prescaler and compare register values makes no difference to that 10 µs interval. Please review my code and let me know where I'm going wrong.
I'm trying to configure Timer 1 (TC1) in 16-bit mode, using GCLK_GENERATOR_1 as its clock source, running at 8 MHz (CPU main clock: 16 MHz). The timer is expected to raise an overflow interrupt every 100 ms.
TcCount16 *tc_hw1 = NULL;   /* Pointer to TC1 hardware registers, initialized later */

void init_timer1(void)
{
    struct tc_module tc_inst1;
    struct tc_config conf_tc1;

    tc_get_config_defaults(&conf_tc1);
    conf_tc1.clock_source = GCLK_GENERATOR_1;
    conf_tc1.clock_prescaler = TC_CLOCK_PRESCALER_DIV64;    /* 8MHz/64 = 125KHz */
    conf_tc1.reload_action = TC_RELOAD_ACTION_GCLK;
    conf_tc1.counter_size = TC_COUNTER_SIZE_16BIT;
    conf_tc1.count_direction = TC_COUNT_DIRECTION_UP;
    conf_tc1.counter_16_bit.value = 0x0000;
    /** Rest of the settings are used as defaults **/

    while (tc_init(&tc_inst1, TC1, &conf_tc1) != STATUS_OK) {
    }

    tc_set_top_value(&tc_inst1, 12500);         /* Set counter compare top value */

    /* Enable interrupt & set priority */
    tc_hw1 = &(tc_inst1.hw->COUNT16);           /* Initialize pointer to TC1 hardware registers */
    tc_hw1->INTENSET.reg |= TC_INTFLAG_OVF;     /* Enable overflow interrupt */
    NVIC_SetPriority(TC1_IRQn, 2);
    NVIC_EnableIRQ(TC1_IRQn);

    tc_enable(&tc_inst1);                       /* Start the timer */
}

void TC1_Handler(void)
{
    if ((tc_hw1->INTFLAG.reg) & (TC_INTFLAG_OVF))
    {
        port_pin_toggle_output_level(PIN_PB03);
    }
    system_interrupt_clear_pending(SYSTEM_INTERRUPT_MODULE_TC1);
}
Debugger information: I can see that the timer registers are configured correctly, but the COUNT register is not incrementing; every time I pause to capture the debug info it shows only 0x0000.
Please help. Thanks!
I resolved the issue. It is actually mandatory to clear the timer's OVF bit in the INTFLAG register; otherwise the interrupt stays pending and the handler is re-entered immediately, which is why it appeared to fire every 10 µs. So the interrupt handler should have been like this:
void TC1_Handler(void)
{
    if ((tc_hw1->INTFLAG.reg) & (TC_INTFLAG_OVF))
    {
        tc_hw1->INTFLAG.reg = TC_INTFLAG_OVF;   /* Clear the flag by writing 1 to it */
        port_pin_toggle_output_level(PIN_PB03);
    }
    system_interrupt_clear_pending(SYSTEM_INTERRUPT_MODULE_TC1);    /* Not necessarily needed */
}

STM32 TIM callback to raise flag

I've read multiple times that it is usually good practice to minimize the amount of time spent in a timer interrupt, and the advice to only raise a flag came up several times.
I am using a timer to run a bit of code (conversion of sensor data into usable data). It is important in my application to read and manipulate this data at a fairly high rate (8 kHz).
Here's how I am approaching the problem:
I am using an STM32H743
I am using an RTOS with two threads, with slightly different priority levels
I am using 2 timers (TIM2 and TIM3) in my case
TIM2 is set to trigger a callback at 1 kHz, and is started in my main thread (slightly higher priority than the secondary thread)
TIM3 is set to trigger a callback at 8 kHz, and is started in the secondary thread
The HAL_TIM_PeriodElapsedCallback is used for both timers and looks like this:
void HAL_TIM_PeriodElapsedCallback(TIM_HandleTypeDef *htim)
{
    /* USER CODE BEGIN Callback 0 */
    /* USER CODE END Callback 0 */
    if (htim->Instance == TIM6) {
        HAL_IncTick();
    }
    /* USER CODE BEGIN Callback 1 */
    else if (htim->Instance == TIM2) {
        TIM2_flag = 1;
    }
    else if (htim->Instance == TIM3) {
        TIM3_flag = 1;
    }
    /* USER CODE END Callback 1 */
}
And then each of the 2 threads have a simple test on the flag, here's what it looks like for the secondary thread:
void StartSecondaryThread(void *argument)
{
    /* USER CODE BEGIN StartSecondaryThread */
    HAL_TIM_Base_Start_IT(&htim3);
    /* Infinite loop */
    for (;;)
    {
        if (TIM3_flag == 1) {
            runCALC();
            // MORE USER CODE HERE
            TIM3_flag = 0;
        }
    }
    /* USER CODE END StartSecondaryThread */
}
Per the auto-generated code from CubeMX, both the mainThread and secondaryThread infinite for(;;) loops had an osDelay(1).
Am I supposed to keep these delays? Outside of the if statement for the raised flag?
I have some concerns that if I don't, it will crash the MCU, because there is nothing to do when the flag isn't raised. And I am concerned that keeping the osDelay(1) will be "too long" (1 ms vs 125 µs). Is there a way to apply a shorter delay that would not slow down my 8 kHz polling?
Of course the runCALC() work will take significantly less time than the 125 µs period.
It would make sense to me to remove the delay altogether, but I have a feeling that it will crash severely.
What should I do?
cheers
Flags are not a very good way of doing thread synchronisation when you use an RTOS.
In this case use semaphores, mutexes or direct task notifications.
slightly higher priority than the secondary thread
It does not make any difference in the code you have shown. Here the different-priority RTOS tasks are not preempted by the scheduler, and a context switch happens only when you hand over control yourself. The only task that will actually run is the last one started, as your task never passes control back to the RTOS and the ISR does not either. Your code is not really correct RTOS code.
You can have it in one task.
void StartSecondaryThread(void *argument)
{
    /* USER CODE BEGIN StartSecondaryThread */
    HAL_TIM_Base_Start_IT(&htim3);
    HAL_TIM_Base_Start_IT(&htim2);
    /* Infinite loop */
    for (;;)
    {
        switch (ulTaskNotifyTake(pdTRUE, portMAX_DELAY))
        {
        case 3:
            runCALC();
            // MORE USER CODE HERE for timer 3
            break;
        case 2:
            // MORE USER CODE HERE for timer 2
            break;
        default:
            // MORE USER CODE HERE for other timers
            break;
        }
    }
    /* USER CODE END StartSecondaryThread */
}

void HAL_TIM_PeriodElapsedCallback(TIM_HandleTypeDef *htim)
{
    BaseType_t xHigherPriorityTaskWoken = pdFALSE;
    /* case labels must be integer constants, so use the TIMx base addresses */
    switch ((uint32_t)htim->Instance)
    {
    case TIM6_BASE:
        HAL_IncTick();
        break;
    case TIM2_BASE:
        xTaskNotifyFromISR(xThreadHndl, 2, eSetValueWithOverwrite, &xHigherPriorityTaskWoken);
        break;
    case TIM3_BASE:
        xTaskNotifyFromISR(xThreadHndl, 3, eSetValueWithOverwrite, &xHigherPriorityTaskWoken);
        break;
    }
    portYIELD_FROM_ISR(xHigherPriorityTaskWoken);
}
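For this to work, xThreadHndl must be the handle of the task running StartSecondaryThread. A sketch of one way to obtain it, assuming the native FreeRTOS API rather than the CMSIS-RTOS wrappers that CubeMX generates (createSecondaryThread() and the stack/priority values are made-up examples):
#include "FreeRTOS.h"
#include "task.h"

extern void StartSecondaryThread(void *argument);

TaskHandle_t xThreadHndl = NULL;

void createSecondaryThread(void)
{
    xTaskCreate(StartSecondaryThread,   // task function shown above
                "secondary",            // task name
                256,                    // stack depth in words (example value)
                NULL,                   // argument
                2,                      // priority (example value)
                &xThreadHndl);          // handle later used by xTaskNotifyFromISR()
}
If the task is instead created with osThreadNew(), the returned handle maps onto the underlying FreeRTOS task handle in the CMSIS-RTOS2 FreeRTOS port, but that mapping is port-specific.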

STM32 Using TIMer to break loop

So I've initialized a timer which I want to count to 100.
Here is the main section where the STM32 code runs:
#include "stm32l0xx.h"
#define SYSCLK_FREQ 131072
void timer_tick(uint16_t n_ms);
int main(void)
{
TIM6->PSC = (SYSCLK_FREQ/1000)-1; //100us
TIM6->CR1 = TIM_CR1_CEN | TIM_CR1_OPM; //one-pulse mode
TIM6->EGR = TIM_EGR_UG; //generate update
TIM6->SR=0; //clear update - after few instructions
...
}
The first place I am using the timer is here; it's declared right after main:
void delay(uint16_t n_ms)
{
    //upcounting timer - 16bit
    TIM6->CNT = 65535-n_ms;
    TIM6->CR1 = TIM_CR1_CEN | TIM_CR1_OPM;  //one-pulse mode
    while(!(TIM6->SR & TIM_SR_UIF));        //wait
    TIM6->SR = 0;
}
Then I am using the same timer again (because I only have TIM3, which I cannot interrupt, and TIM6, which is used only for the delay function):
void timer_tick(uint16_t n_ms)
{
    //upcounting timer - 16bit
    TIM6->CNT = 65535-n_ms;
    TIM6->CR1 = TIM_CR1_CEN | TIM_CR1_OPM;      //one-pulse mode
    TIM6->ARR = n_ms-1;                         // Auto reload value
    TIM6->CCR1 = n_ms;                          // Start PWM duty for channel 3
    //TIM6->CCR2 = n_ms;                        // Start PWM duty for channel 4
    TIM6->CCMR1 = TIM_CCMR1_OC1M_2 | TIM_CCMR1_OC1M_1 | TIM_CCMR1_OC2M_2 | TIM_CCMR1_OC2M_1; // PWM mode on channel 1 & 2
    TIM6->CCER = TIM_CCER_CC1E | TIM_CCER_CC2E; // Enable compare on channel 1 & 2
    TIM6->DIER = TIM_DIER_UIE;                  // Enable TIM6 interrupt
    TIM6->SR = 0;
}
And here is where the interrupt does the work. I am calling the function to start the timer counting to 100 (I am assuming that is right); then the interrupt runs a switch that should only work for 100 ms, after which it has to stop working on it and break the loop.
void USART1_IRQHandler(void)
{
    static enum {F00, F01, F02, F03, CRC} next_frame = F00;     // frame construction
    if(!sending_flag)                                           //half-duplex
    {
        if(USART1->ISR & USART_ISR_FE)                          //frame error check&clear
            USART1->ICR = USART_ICR_FECF;
        else
            timer_tick(100);
        do {
            switch(next_frame)
            {
                case F00: { DOSTUFF; next_frame=F01; break; }   //starting marker
                case F01: { DOSTUFF; next_frame=F02; break; }
                case F02: { DOSTUFF; next_frame=F03; break; }
                case F03: { DOSTUFF; next_frame=CRC; break; }
                case CRC: { TIM6->SR = ~TIM_SR_CC2IF; break; }  // clear flag
                default: break;
            }
        } while (!(TIM6->SR & TIM_SR_CC2IF));                   //TIM_SR_UIF
    }
    if(USART1->ISR & USART_ISR_TC)
    {
        USART1->ICR = USART_ICR_TCCF;
        GPIOA->BSRR = GPIO_BSRR_BR_11 | GPIO_BSRR_BR_12;
        sending_flag=0;
    }
}
I don't really understand my STM32's documentation on the timers.
With TIM6->CCR1 = n_ms; // Start PWM duty for channel 3 set like that, I am assuming a flag should appear in TIM6->SR & TIM_SR_CC2IF after the timer reaches TIM6->ARR = n_ms-1; // Auto reload value.
After adding this do-while loop my STM32 stopped responding and I am not able to debug it.
Is the counter set right?
Can I use the declared timer twice and call it like I do?
Is the counter set right?
Not really. There are a lot of problems in your timer configuration; let's try to spot some of them:
#define SYSCLK_FREQ 131072
TIM6->PSC = (SYSCLK_FREQ/1000)-1; //100us
The timer is connected to one of the APB buses (1 or 2), whose clock can also be divided. If you set a new APB divider value, your timer will no longer run at the rate you expect.
Clock frequency = 131072? Regardless of the unit, you will not get a 100 µs period by dividing it by 1000.
void delay(uint16_t n_ms)
{
    //upcounting timer - 16bit
    TIM6->CNT = 65535-n_ms;
    TIM6->CR1 = TIM_CR1_CEN | TIM_CR1_OPM;  //one-pulse mode
    while(!(TIM6->SR & TIM_SR_UIF));        //wait
    TIM6->SR = 0;
}
This is not the way to use a timer. If you want to measure some time, just set ARR to the right value and start the counter.
Your timer_tick(uint16_t n_ms) is totally wrong. The counter counts up from 0 to the ARR value and then stops (if one-pulse mode is set). First set all the timer configuration registers, then start the counter. If you start the counter and only then modify ARR, CCRx or other registers, you can be 100% sure the timer will misbehave.
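A minimal sketch of that configure-first approach for a simple blocking delay, assuming an L0 part where TIM6 is present; TIMER_CLOCK_HZ is a hypothetical placeholder for the actual TIM6 bus clock, and the 1 MHz tick is an example choice:
#define TIMER_CLOCK_HZ 16000000u    // hypothetical: the real TIM6 kernel clock of your part

// One-time setup: configure everything before the counter is ever started.
void timer6_init(void)
{
    RCC->APB1ENR |= RCC_APB1ENR_TIM6EN;             // enable the TIM6 peripheral clock
    TIM6->PSC = (TIMER_CLOCK_HZ / 1000000u) - 1u;   // 1 MHz counter clock -> 1 tick = 1 µs
    TIM6->CR1 = TIM_CR1_OPM;                        // one-pulse mode, counter not started yet
}

// Blocking delay: set ARR, start the counter, wait for the update flag.
void delay_us(uint16_t n_us)
{
    TIM6->ARR = n_us - 1u;          // count 0 .. n_us-1, then update
    TIM6->EGR = TIM_EGR_UG;         // load PSC/ARR and reset CNT
    TIM6->SR = 0;                   // clear the update flag raised by UG
    TIM6->CR1 |= TIM_CR1_CEN;       // start only after everything is configured
    while (!(TIM6->SR & TIM_SR_UIF)) {
        // wait; one-pulse mode clears CEN automatically at the update event
    }
    TIM6->SR = 0;
}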

Interrupt timer stuck when run parallel with while(1)

First, the code:
//------------------------------------------------------------------------------
/// Interrupt handlers for TC interrupts. Toggles the state of LEDs
//------------------------------------------------------------------------------
char token = 0;

void TC0_IrqHandler(void) {
    volatile unsigned int dummy;
    dummy = AT91C_BASE_TC0->TC_SR;  // reading the status register acknowledges the interrupt
    if (token == 1) {
        PIO_Clear(&leds[0]);
        PIO_Set(&leds[1]);
        token = 0;
    }
    else {
        PIO_Set(&leds[0]);
        PIO_Clear(&leds[1]);
        token = 1;
    }
}
//------------------------------------------------------------------------------
/// Configure Timer Counter 0 to generate an interrupt every 250ms.
//------------------------------------------------------------------------------
void ConfigureTc(void) {
    unsigned int div;
    unsigned int tcclks;

    AT91C_BASE_PMC->PMC_PCER = 1 << AT91C_ID_TC0;       // Enable peripheral clock
    TC_FindMckDivisor(1, BOARD_MCK, &div, &tcclks);     // Configure TC for a 4Hz frequency and trigger on RC compare
    TC_Configure(AT91C_BASE_TC0, tcclks | AT91C_TC_CPCTRG);
    AT91C_BASE_TC0->TC_RC = (BOARD_MCK / div) / 1;      // timerFreq / desiredFreq
    IRQ_ConfigureIT(AT91C_ID_TC0, 0, TC0_IrqHandler);   // Configure and enable interrupt on RC compare
    AT91C_BASE_TC0->TC_IER = AT91C_TC_CPCS;
    IRQ_EnableIT(AT91C_ID_TC0);
    printf(" -- timer has started \n\r");
    TC_Start(AT91C_BASE_TC0);
}
It's just an interrupt timer and its event (handler), but when I run something like
while(1) {
    // action
}
after ConfigureTc(), both the loop and the interrupt timer freeze... Why could that be? Should I add another timer and avoid while(1)?
while(1) {
    printf("hello");
}
-- this freezes the loops (yes, if I don't use the timer, it works as it should).
I'll venture an actual answer here. IME, 99% of the time my boards 'go out' with no response on any input and no 'heartbeat' LED-flash from the low-priority 'blinky' thread, the CPU has flown off to a prefetch or data abort handler. These handlers are entered by interrupt and most library-defined default handlers do not re-enable interrupts, so stuffing the entire system. Often, they're just endless loops and, with interrupts disabled, that's the end of the story:(
I have changed my default handlers to output suitable 'CRITICAL ERROR' messages to the UART (by polling it - the OS/interrupts are stuffed!).
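For illustration, a sketch of what such a handler might look like; uart_poll_putc() is a hypothetical helper that busy-waits on the UART transmit-ready flag and writes one character, since nothing interrupt-driven can be relied on at this point:
extern void uart_poll_putc(char c);     // hypothetical polled UART output routine

static void uart_poll_puts(const char *s)
{
    while (*s) {
        uart_poll_putc(*s++);
    }
}

// Replacement for the library's default data-abort handler.
void data_abort_handler(void)
{
    uart_poll_puts("\r\nCRITICAL ERROR: data abort\r\n");
    for (;;) {
        // Interrupts are off and the system state is unknown: stay here,
        // or let a watchdog reset the board instead of looping forever.
    }
}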
