STM32 TIM callback to raise flag - timer

I've read multiple times that it is usually good practice to minimize the amount of time spent in a timer interrupt, and the advice to only raise a flag in the ISR came up several times.
I am using a timer to run a bit of code (conversion of sensor data into usable data). It is important in my application to read and manipulate this data at a fairly high rate (8 kHz).
Here's how I am approaching the problem:
I am using an STM32 H743
I am using an RTOS with two threads at slightly different priority levels
I am using 2 timers (TIM2 and TIM3) in my case
TIM2 is set to trigger a callback at 1 kHz, and is started in my main thread (slightly higher priority than the secondary thread)
TIM3 is set to trigger a callback at 8 kHz, and is started in the secondary thread
The HAL_TIM_PeriodElapsedCallback is used for both timers and looks like this:
void HAL_TIM_PeriodElapsedCallback(TIM_HandleTypeDef *htim)
{
  /* USER CODE BEGIN Callback 0 */
  /* USER CODE END Callback 0 */
  if (htim->Instance == TIM6) {
    HAL_IncTick();
  }
  /* USER CODE BEGIN Callback 1 */
  else if (htim->Instance == TIM2) {
    TIM2_flag = 1;
  }
  else if (htim->Instance == TIM3) {
    TIM3_flag = 1;
  }
  /* USER CODE END Callback 1 */
}
Each of the two threads then has a simple test on the flag; here's what it looks like for the secondary thread:
void StartSecondaryThread(void *argument)
{
  /* USER CODE BEGIN StartSecondaryThread */
  HAL_TIM_Base_Start_IT(&htim3);
  /* Infinite loop */
  for(;;)
  {
    if (TIM3_flag == 1) {
      runCALC();
      //MORE USER CODE HERE
      TIM3_flag = 0;
    }
  }
  /* USER CODE END StartSecondaryThread */
}
Per the autogenerated code from CubeMX, both the mainThread and secondaryThread infinite for(;;) loops had an osDelay(1).
Am I supposed to keep these delays, outside of the if statement for the raised flag?
I'm concerned that if I don't, it will crash the MCU, because there is nothing else to do when the flag isn't raised. And I'm also concerned that keeping the osDelay(1) will be "too long" (1 ms vs 125 µs). Is there a way to apply a shorter delay that would not slow down my 8 kHz polling?
Of course the runCALC() work will take significantly less time than the 125 µs period.
It would make sense to me to remove the delay altogether, but I have a feeling that it would crash badly.
What should I do?
cheers

Flags are not a very good way of doing thread synchronisation when you use an RTOS.
In this case, use semaphores, mutexes, or direct task notifications.
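For instance, a binary-semaphore version of the 8 kHz path might look roughly like this with CMSIS-RTOS2 calls (tim3Sem is an illustrative name; the thread, timer, and runCALC() names come from the question):
osSemaphoreId_t tim3Sem;   /* create once at init: tim3Sem = osSemaphoreNew(1, 0, NULL); */

void HAL_TIM_PeriodElapsedCallback(TIM_HandleTypeDef *htim)
{
  if (htim->Instance == TIM3) {
    osSemaphoreRelease(tim3Sem);               /* may be called from an ISR */
  }
}

void StartSecondaryThread(void *argument)
{
  HAL_TIM_Base_Start_IT(&htim3);
  for (;;)
  {
    if (osSemaphoreAcquire(tim3Sem, osWaitForever) == osOK) {
      runCALC();                               /* runs once per 125 us timer period */
    }
  }
}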
As for "slightly higher priority than the secondary thread": it makes no difference in the code you have shown. A lower-priority task only runs when every higher-priority task is blocked, and neither of your tasks ever blocks; they busy-wait on a flag and never hand control back to the scheduler, and the ISR doesn't either. So one task will simply starve the other. Your code is not really correct RTOS code.
You can have it in one task.
void StartSecondaryThread(void *argument)
{
  /* USER CODE BEGIN StartSecondaryThread */
  HAL_TIM_Base_Start_IT(&htim3);
  HAL_TIM_Base_Start_IT(&htim2);
  /* Infinite loop */
  for(;;)
  {
    switch(ulTaskNotifyTake(pdTRUE, portMAX_DELAY))
    {
      case 3:
        runCALC();
        //MORE USER CODE HERE for timer 3
        break;
      case 2:
        //MORE USER CODE HERE for timer 2
        break;
      default:
        //MORE USER CODE HERE for other timers
        break;
    }
  }
  /* USER CODE END StartSecondaryThread */
}
void HAL_TIM_PeriodElapsedCallback(TIM_HandleTypeDef *htim)
{
  BaseType_t xHigherPriorityTaskWoken = pdFALSE;

  if (htim->Instance == TIM6) {
    HAL_IncTick();
  }
  else if (htim->Instance == TIM2) {
    xTaskNotifyFromISR( xThreadHndl, 2, eSetValueWithOverwrite, &xHigherPriorityTaskWoken );
  }
  else if (htim->Instance == TIM3) {
    xTaskNotifyFromISR( xThreadHndl, 3, eSetValueWithOverwrite, &xHigherPriorityTaskWoken );
  }

  portYIELD_FROM_ISR( xHigherPriorityTaskWoken );
}

Related

Time scheduler in C on STM32

I would like to design a time scheduler that does some specific work at specific times using a timer/state machine.
I would like to do something like this: a request every 4 ms, which has a turnaround delay of 1 ms, and during the remaining 3 ms of that 4 ms window I have to make 6 other requests, one every 500 µs. This pattern should repeat continuously.
What would be the best way to do it in C for an STM32? Right now I did this:
bool TIM2_Init (uint32_t timeout)
{
  bool retVal = false;

  // Enable the TIM2 clock
  __HAL_RCC_TIM2_CLK_ENABLE();

  // Configure the TIM2 handle for a 1 MHz (1 us) counter tick
  htim2.Instance = TIM2;
  htim2.Init.Prescaler = (uint32_t)(240000000 / 1000000) - 1;
  htim2.Init.Period = timeout - 1;
  htim2.Init.CounterMode = TIM_COUNTERMODE_UP;
  htim2.Init.RepetitionCounter = 0;
  htim2.Init.AutoReloadPreload = TIM_AUTORELOAD_PRELOAD_DISABLE;

  if (HAL_OK == HAL_TIM_Base_Init(&htim2))
  {
    // Enable update interrupt for TIM2
    __HAL_TIM_ENABLE_IT(&htim2, TIM_IT_UPDATE);
    // Enable the TIM2 interrupt in the NVIC
    HAL_NVIC_EnableIRQ(TIM2_IRQn);
  }

  // Start the TIM2 counter
  if (HAL_OK == HAL_TIM_Base_Start(&htim2))
  {
    retVal = true;
  }
  return retVal;
}
void changeTIM2Timeout (uint32_t timeout)
{
  // Change the TIM2 period
  TIM2->ARR = timeout - 1;
}
I created a timer with a timeout parameter to initialize it. Then I have a function to change the period, because the response timeout can for example sometimes be 500 µs and at other times 1 ms.
Where I'm stuck is getting the whole thing working: the first request is made, then 1 ms later the second request (2), then (3), and so on. I think what I need help with is the state machine that handles the different requests together with the timer.
You can create a queue data structure for the tasks and store each task's time there. When a repetitive task finishes its current run, it can be appended to the end of the queue while you fetch the next task. The queue can have a fixed or a dynamic maximum size, it's up to you.
When a task is invoked, the timer is set to the value associated with that task, so there are no wasted ticks. With a bit of effort this also gives you tickless idle, if you need it. You can also add more parameters to the queue entries if you want to customize the scheduling somehow, such as a priority level.
Edit: this obviously makes the timer unusable as a time measurement device for any other purpose, because you change its settings all the time.
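A rough sketch of that queue idea might look like this (all names are illustrative; changeTIM2Timeout() is the function from the question):
#include <stdint.h>
#include <stdbool.h>

extern void changeTIM2Timeout(uint32_t timeout);   /* from the question */

typedef struct {
  void     (*run)(void);   /* work to perform when the timer fires            */
  uint32_t timeout_us;     /* period to program into the timer for this task  */
  bool     repetitive;     /* re-queue after it has run?                      */
} SchedTask;

#define QUEUE_SIZE 8
static SchedTask queue[QUEUE_SIZE];
static unsigned  head, tail, count;

static void enqueue(SchedTask t)
{
  if (count < QUEUE_SIZE) {
    queue[tail] = t;
    tail = (tail + 1) % QUEUE_SIZE;
    count++;
  }
}

/* Called from the timer interrupt: run the current task, re-queue it if it is
   repetitive, then load the timer with the timeout of the next task. */
void scheduler_tick(void)
{
  if (count == 0) return;
  SchedTask current = queue[head];
  head = (head + 1) % QUEUE_SIZE;
  count--;
  current.run();
  if (current.repetitive) enqueue(current);
  if (count > 0) changeTIM2Timeout(queue[head].timeout_us);
}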
I would configure the timer for 500 µs with auto-reload. Then just keep an index up to 8 that increments with wrapping. Because 8 is a power of two, you can use an unsigned index and not worry about overflow: idx % 8 stays correct even when the counter wraps.
void do_stuff(unsigned idx) {
  switch (idx) {
    case 0:
      request(1);      // the 4 ms request
      break;
    case 2: case 3: case 4: case 5: case 6: case 7:
      request(idx);    // the 500 us requests
      break;
    default:
      break;           // slot 1: turnaround window, nothing to send
  }
}

void TIM2_Init() {
  // generated from STM32CubeMX: 500 us auto-reload period, update interrupt enabled
}

void TIM2_IRQHandler(void) {
  static unsigned idx = 0;
  __HAL_TIM_CLEAR_IT(&htim2, TIM_IT_UPDATE);   // clear the update flag
  do_stuff(idx++ % 8);
}

STM32- RTOS -Task Notify From ISR

I want to notify my task to run from an ISR. I read the RTOS docs but I could not get it to work. I would really appreciate it if you could tell me what I am supposed to do and give an example if possible. I am using CMSIS-RTOS v2.
Inside the ISR, which I am sure works correctly, I wrote:
void HAL_TIM_PeriodElapsedCallback(TIM_HandleTypeDef *htim)
{
  /* USER CODE BEGIN Callback 0 */
  /* USER CODE END Callback 0 */
  if (htim->Instance == TIM15) {
    HAL_IncTick();
  }
  /* USER CODE BEGIN Callback 1 */
  if (htim == &htim16)
  {
    BaseType_t xHigherPriorityTaskWoken;
    xHigherPriorityTaskWoken = pdFALSE;
    vTaskNotifyGiveFromISR(ADXL_HandlerHandle , &xHigherPriorityTaskWoken);
    portYIELD_FROM_ISR( xHigherPriorityTaskWoken );
  }
  /* USER CODE END Callback 1 */
}
I also use the SysTick timer for FreeRTOS and timer 15 as the system timer. Is it possible that the problem is related to this part? I doubt it, because the task notify give function only counts up and is not a blocking mechanism like a semaphore.
And inside the task, the first lines inside the for loop are:
ulNotifiedValue = ulTaskNotifyTake( pdFALSE, portMAX_DELAY);
if( ulNotifiedValue > 0 ){
//my codes ....
}
Before the for loop I defined:
uint32_t ulNotifiedValue;
But the task is not executed, even once.
I use Nucleo H755ZIQ.
Before the definition of the global variables, the task is defined like this:
/* Definitions for ADXL_Handler */
osThreadId_t ADXL_HandlerHandle;
const osThreadAttr_t ADXL_Handler_attributes = {
.name = "ADXL_Handler",
.priority = (osPriority_t) osPriorityNormal,
.stack_size = 1024 * 4
};
Then inside the main function, the scheduler is initialized as follows:
osKernelInitialize();
ADXL_HandlerHandle = osThreadNew(ADXL_Handler_TaskFun, NULL, &ADXL_Handler_attributes);
osKernelStart();
Then the timers will be started:
HAL_TIM_Base_Start_IT(&htim16);
From a short look, in CMSIS there is no such thing as a task notification. The functions I used inside the ISR routine are from FreeRTOS. Won't there be a contradiction? Should I use only the FreeRTOS task-creation function instead of the CMSIS functions?
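For what it's worth, CMSIS-RTOS2 does offer a comparable mechanism, thread flags, which the FreeRTOS-based port builds on top of task notifications; a minimal sketch of that variant (the flag value 0x01 is an arbitrary choice) would be:
void HAL_TIM_PeriodElapsedCallback(TIM_HandleTypeDef *htim)
{
  if (htim->Instance == TIM15) {
    HAL_IncTick();
  }
  if (htim == &htim16) {
    osThreadFlagsSet(ADXL_HandlerHandle, 0x01);   /* may be called from an ISR */
  }
}

void ADXL_Handler_TaskFun(void *argument)
{
  for (;;) {
    osThreadFlagsWait(0x01, osFlagsWaitAny, osWaitForever);
    /* my codes .... */
  }
}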
Thanks in advance.

Avoid Input Latency while reading input from keyboard loop

I'm just starting with STM32 and am trying to write some code which accepts input from user keypresses and varies the blink rate of 2 LEDs, or blinks them alternately, or at the same time.
So, I have 3 keys (2 keys for inc/dec, and one key for mode), and 2 LEDs.
The loop section looks something like this:
/* USER CODE BEGIN WHILE */
const uint32_t step = 50u;
const uint32_t max_interval = 500u;
uint32_t interval = max_interval;
short mode = 0;
while (1)
{
  volatile GPIO_PinState wakeup = HAL_GPIO_ReadPin(WAKEUP_GPIO_Port, WAKEUP_Pin);
  mode = (wakeup == GPIO_PIN_RESET? 0: 1);
  if(mode == 1) {
    HAL_GPIO_WritePin(LED2_GPIO_Port, LED2_Pin, GPIO_PIN_RESET);
  }
  HAL_GPIO_TogglePin(LED2_GPIO_Port, LED2_Pin);
  HAL_GPIO_TogglePin(LED2_GPIO_Port, LED3_Pin);
  volatile GPIO_PinState key0 = HAL_GPIO_ReadPin(KEY0_GPIO_Port, KEY0_Pin);
  volatile GPIO_PinState key1 = HAL_GPIO_ReadPin(KEY1_GPIO_Port, KEY1_Pin);
  if(key0 == GPIO_PIN_RESET)
    interval -= (interval==step? 0: step);
  if(key1 == GPIO_PIN_RESET)
    interval += (interval==max_interval? 0: step);
  HAL_Delay(interval);
  /* USER CODE END WHILE */
  /* USER CODE BEGIN 3 */
}
Now, depending on HAL_Delay(interval), the loop only gets a chance to check for key input (whose purpose is to control the blink rate) once per delay period. Is there some way I can decouple this latency from the key input? The microcontroller in question is an STM32F407VET6, and I'm using CubeIDE. It would be nice to have a single-threaded solution.
Some delay for key input is unavoidable, unless you have hardware-debounced keys. Normally, when a key is hit, the transition is not a single edge but a burst of level changes until the mechanics settle. One way to do debouncing is to have a periodic interval timer (e.g. at a 1 kHz rate) and check the level of each key on every tick: if it is high, count a counter up; if it is low, count it down. You then use two thresholds (hysteresis) on the count value to decide when to report a button-down or button-up transition. Since all of that runs in the interrupt, you can push the key events into a FIFO (queue) and pull them from your "main thread" at convenient occasions.
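A minimal sketch of that counter-with-hysteresis debounce, assuming it is called from a 1 kHz timer interrupt (the threshold values are arbitrary):
#include <stdint.h>
#include <stdbool.h>

#define PRESS_THRESHOLD    8   /* counts of "pressed" level before a press is accepted */
#define RELEASE_THRESHOLD  2   /* count must fall back below this before a release     */

typedef struct {
  uint8_t count;
  bool    pressed;
} DebounceState;

/* raw_level: true if the key input currently reads "pressed".
   Returns true exactly once per accepted key press. */
bool debounce_update(DebounceState *s, bool raw_level)
{
  if (raw_level) {
    if (s->count < PRESS_THRESHOLD) s->count++;
  } else {
    if (s->count > 0) s->count--;
  }

  if (!s->pressed && s->count >= PRESS_THRESHOLD) {
    s->pressed = true;
    return true;                 /* push a "button down" event into the FIFO here */
  }
  if (s->pressed && s->count <= RELEASE_THRESHOLD) {
    s->pressed = false;          /* a "button up" event could be pushed here      */
  }
  return false;
}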
If there is also a programmable timer on your hardware, you could use that to toggle the output pins for the LEDs, and your main loop would then simply be along these lines:
void mainloop() {
  while (true) {
    KeyEvent_t key;
    nextKeyEvent(&key);
    switch (key) {
      case BUTTON1_DOWN:
        reduceBlinkRate();
        break;
      case BUTTON2_DOWN:
        increaseBlinkRate();
        break;
      default:
        // skillful doing nothing (e.g. to save power)
        break;
    }
  }
}

This simple ARM Cortex-M SysTick-based task scheduler won't work. Should I manage preemption myself?

So, I am implementing a very simple time-triggered pattern on an ARM Cortex-M3.
The idea is: when SysTick is serviced, the task array index is incremented and a function pointer is updated to point at the corresponding task. The PendSV handler is then pended and calls the task. I am using an Atmel-ICE JTAG probe to debug it.
What happens is that it gets stuck at the first task and does not even increment the counter. It does not go anywhere.
Code pattern:
#include <asf.h> // atmel software framework. cmsis and board support package.

#define NTASKS 3

typedef void (*TaskFunction)(void);

void task1(void);
void task2(void);
void task3(void);

TaskFunction run = NULL;

uint32_t count1 = 0; // counter for task1
uint32_t count2 = 0; // 2
uint32_t count3 = 0; // 3

TaskFunction tasks[NTASKS] = {task1, task2, task3};
volatile uint8_t tasknum = 0;

void task1(void)
{
  while(1)
  {
    count1++;
  }
}

void task2(void)
{
  while(1)
  {
    count2++;
  }
}

void task3(void)
{
  while(1)
  {
    count3++;
  }
}

void SysTick_Handler(void)
{
  tasknum = (tasknum == NTASKS-1) ? 0 : tasknum+1;
  run = tasks[tasknum];
  SCB->ICSR |= SCB_ICSR_PENDSVSET_Msk;
}

void PendSV_Handler(void)
{
  run();
}

int main(void)
{
  sysclk_init();
  board_init();
  pmc_enable_all_periph_clk();
  SysTick_Config(1000);
  while(1);
}
This design pattern is fundamentally flawed, I'm afraid.
At the first SysTick event, task1() will be called from within the PendSV handler, which will consequently not return. Further SysTick events will interrupt the PendSV handler and set the PendSV bit again, but unless the running task ends and the PendSV handler is allowed to return it can't possibly be invoked again.
The good news is that a proper context switch on the M3 is only a small amount of assembly language - perhaps 10 lines. You need to do some setup too, to get user mode code to use the process stack pointer and so on, and you need to set up a stack per task, but it's not really all that involved.
If you really want to cancel the running task when a SysTick arrives and launch another, they could all share the same stack; but it would be much easier if this was the process stack so that its stack pointer can be reset from within PendSV without affecting the return from handler mode. You'd also need to do some stack poking to convince PendSV to 'return' to the start of the next task you wanted to run.
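To illustrate the "stack poking", a rough sketch of building an initial exception stack frame so that an exception return lands at the start of a task might look like this (sizes and names are illustrative; saving R4-R11 is left to the PendSV handler itself):
#include <stdint.h>

#define TASK_STACK_WORDS 128
static uint32_t task_stack[TASK_STACK_WORDS];

/* Build the 8-word frame that the Cortex-M hardware expects to unstack on
   exception return: R0-R3, R12, LR, PC, xPSR (pushed here in reverse order). */
uint32_t *init_task_frame(void (*entry)(void))
{
  uint32_t *sp = &task_stack[TASK_STACK_WORDS];   /* full descending stack */

  *(--sp) = 0x01000000u;       /* xPSR: Thumb bit set                 */
  *(--sp) = (uint32_t)entry;   /* PC: task entry point                */
  *(--sp) = 0xFFFFFFFFu;       /* LR: task is not expected to return  */
  *(--sp) = 0;                 /* R12 */
  *(--sp) = 0;                 /* R3  */
  *(--sp) = 0;                 /* R2  */
  *(--sp) = 0;                 /* R1  */
  *(--sp) = 0;                 /* R0  */

  return sp;                   /* value to load into PSP before the exception return */
}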

Simple Interrupt Handling/ Multi-threading program?

I'm new to embedded programming and multi-threading, and I'm trying to understand how interrupt handlers work in different contexts/scenarios. For the current question, I just want to know how an interrupt handler would work in the following scenario.
We have a data stream coming from an RS232 interface that is processed by some microcontroller. An interrupt handler (of void type) has a read() function which reads the incoming data bytes. If a character is detected, the interrupt handler invokes a function called detectString(), which returns TRUE if the string matches the reference string "ON". If detectString() returns TRUE, it invokes a function called LED_ON() which should turn on an LED for 1 minute. If it returns FALSE, it should turn off the LED. Let's say the microcontroller has a clock frequency of 20 MHz and an addition operation takes 5 clock cycles.
My questions are as follows
How do we approach this problem with an FSM?
The RS232 interface keeps transmitting data even after the LED is turned on. So am I correct in assuming that the interrupt handler should work in one thread and the functions that it invokes should run in different threads?
What would a skeletal program implementing this FSM look like? (Some C pseudocode would really help to understand the backbone of the design.)
If you are doing this in an interrupt handler, why would you need different threads? It shouldn't matter what else you're doing, as long as interrupts are enabled.
As for the FSM, I wouldn't use a detect_string() call. RS232 is going to give you one character at a time. It's possible your UART only interrupts you once it has received more than one, but there's usually a time component as well, so it would be unwise to count on that. Make your FSM take one input character at a time. Your states would be something like:
=> new state = [Init] (turn LED off if on)
Init: (Get 'O') => new state = [GotO]
Init: (Get anything else) => new state = [Init]
Init: (Timer expires) => who cares? new state = [Init]
GotO: (Get 'N') => new state = [GotON] (turn on LED, set timer)
GotO: (Get anything else) => new state = [Init]
GotO: (Timer expires) => who cares? new state = [GotO]
GotON: (Get anything) => who cares? new state = [GotON]
GotON: (Timer expires) => turn LED off, new state = [Init]
Obviously there's lots of tinkering you could do with the details, but that's the general idea.
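A bare-bones C sketch of that state machine might look like this (led_on(), led_off() and the one-minute timer hook are placeholders):
void led_on(void);                  /* placeholder: turn the LED on      */
void led_off(void);                 /* placeholder: turn the LED off     */
void start_one_minute_timer(void);  /* placeholder: arm a 1-minute timer */

typedef enum { STATE_INIT, STATE_GOT_O, STATE_GOT_ON } State;

static volatile State state = STATE_INIT;

/* Called from the UART receive interrupt with each received character. */
void fsm_on_char(char c)
{
  switch (state) {
    case STATE_INIT:
      state = (c == 'O') ? STATE_GOT_O : STATE_INIT;
      break;
    case STATE_GOT_O:
      if (c == 'N') {
        led_on();
        start_one_minute_timer();
        state = STATE_GOT_ON;
      } else {
        state = STATE_INIT;
      }
      break;
    case STATE_GOT_ON:
      /* ignore further input until the timer expires */
      break;
  }
}

/* Called when the one-minute timer expires. */
void fsm_on_timer(void)
{
  if (state == STATE_GOT_ON) {
    led_off();
    state = STATE_INIT;
  }
}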
A preemptive kernel will usually provide the ability for an interrupt to set an event that a higher priority thread is pending on.
As for the interrupts, one way of implementing something like a state machine is to use pointers to functions, similar to an asynchronous callback, but with optional nesting. For example:
typedef void (*PFUN)(void);

/* forward declarations so the function pointers below can be initialized */
void IntStep0(void);
void IntStep1(void);
void IntStep2(void);
void IntSeqDone(void);
void UnexpectedInt(void);
/* ... */
PFUN pFunInt = UnexpectedInt; /* ptr to function for interrupt */
PFUN pFunIntSeqDone;
/* ... */
void DoSeq(void)
{
  pFunIntSeqDone = IntSeqDone;
  pFunInt = IntStep0;
  /* enable interrupt, start I/O */
}

void IntStep0(void)
{
  pFunInt = IntStep1;
  /* handle interrupt */
}

void IntStep1(void)
{
  pFunInt = IntStep2;
  /* handle interrupt */
}

void IntStep2(void)
{
  /* done with sequence, disable interrupt */
  pFunInt = UnexpectedInt;
  pFunIntSeqDone(); /* call end action handler */
}

void IntSeqDone(void)
{
  /* interrupt sequence done handling code */
  /* set event for pending thread */
}

void UnexpectedInt(void)
{
  /* ... error handling code */
}
