STM32 RTOS timer interrupt and threads - timer

I am working on a project where I need to execute 2 pieces of code off TIM interrupts. One of them has a slightly higher priority than the other, and both will be running on 2 different timers (of course not at the same time interval). Due to both timers being proportional to one another (one is 1 kHz, one is 8 kHz), both will periodically trigger at the same time.
Since I am already using the RTOS middleware for other purposes (threads of a much lower priority than these two), I was thinking of creating one thread for each of these routines.
However, looking at how CubeMX generates code, I am wondering whether this is even possible.
I can start/stop these timers from any thread, but there is only one HAL_TIM_PeriodElapsedCallback which you usually fill with if statements like so:
if (htim->Instance == TIM2)
Am I correct to assume, regardless of which thread the timers are started from, the TIM callback will always occur "outside" of the RTOS environment?
If so, what would be a better strategy to achieve something close to what I need?
Cheers

Interrupts will trigger. But remember:
Their NVIC priority (not the RTOS priority, as the two are unrelated) must be logically lower than or equal to the RTOS's maximum syscall (SVC) interrupt priority if you want to use any ...FromISR RTOS functions (a minimal configuration sketch follows below)
They will not happen at the same time (as you have only one core)
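As a minimal sketch of that priority constraint on a CubeMX/FreeRTOS project (the family header, TIM3, and the numeric values are assumptions; CubeMX's default configLIBRARY_MAX_SYSCALL_INTERRUPT_PRIORITY is 5, and on Cortex-M a numerically higher value means a logically lower priority):

#include "stm32f4xx_hal.h"   /* family header is an assumption */

/* Called once during init, e.g. from main() before the kernel starts. */
static void timer_irq_priorities_init(void)
{
    /* Priorities 5..15 may call ...FromISR APIs; 0..4 may not. */
    HAL_NVIC_SetPriority(TIM2_IRQn, 6, 0);   /* e.g. the 8 kHz timer, logically higher */
    HAL_NVIC_SetPriority(TIM3_IRQn, 7, 0);   /* e.g. the 1 kHz timer, logically lower  */
    HAL_NVIC_EnableIRQ(TIM2_IRQn);
    HAL_NVIC_EnableIRQ(TIM3_IRQn);
}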

I am working on a project where I need to execute 2 pieces of code off
TIM interrupts. One of them has a slightly higher priority than the
other, and both will be running on 2 different timers...
What exactly do you mean by "one of them has a [..] higher priority"? The HW timer events will occur simply when the timer over-/underflows occur. I think you mean the handler code servicing the timeout events.
... (of course not at the same time interval). Due to both timers being proportional to one another (one is 1 kHz, one is 8 kHz), both will periodically trigger at the same time.
In embedded realtime programming, you should never build on the assumption that IRQ events do not occur at the same time: Your ISR handlers may be suppressed at the moment a trigger event occurs. This way, even if two concurrent events trigger closely after each other, it may look to your software as if they had triggered at the same time. The solution is what your question points at: Context priorities (of tasks (= "threads") and ISRs (= "interrupt handlers")) let you avoid the question of which event came earlier and control which event to treat first.
Since I am already using the RTOS middleware for other purposes (threads of a much lower priority than these two), I was thinking of creating one thread for each of these routines.
You are free to deploy code to an RTOS task or to an ISR, but keep in mind that any ISR will have a higher priority than any task. Your TIM event will trigger an ISR (= interrupt context), but you can (and often should) use the ISR to send a notification (or event, or semaphore, or queue message) to a task in order to have the main part of the timer event processed at the lower priority of a task.
However, looking at how CubeMX generates code, I am wondering whether this is even possible.
CubeMX does not limit you to using or not using tasks. The question is rather how far CubeMX will generate the code you need, and how much you have to add manually. Please note that you don't have to use the CubeMX feature that generates tasks through its configuration; this can be done in your own C code, too.
I can start/stop these timers from any thread, but there is only one HAL_TIM_PeriodElapsedCallback which you usually fill with if statements like so:
if (htim->Instance == TIM2)
Am I correct to assume, regardless of which thread the timers are started from, the TIM callback will always occur "outside" of the RTOS environment?
Yes, you are. The question of which thread started the timer is not relevant to the type of context triggered by the timer. In any case, the TIM will trigger its ISR (at the interrupt priority configured for that interrupt).
If you use the CubeHAL library, it will implement the root of that ISR, check which of the TIMs related to that ISR have elapsed, and invoke the callback you quoted. Here, you can insert your user code for the different TIM instances (like TIM2 in your case).
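For illustration, a minimal sketch of such a dispatch (TIM3 as the second timer and the two handler functions are assumptions, since the question only names TIM2):

void handle_fast_tick(void);   /* hypothetical user function */
void handle_slow_tick(void);   /* hypothetical user function */

void HAL_TIM_PeriodElapsedCallback(TIM_HandleTypeDef *htim)
{
    if (htim->Instance == TIM2)        /* e.g. the 8 kHz timer */
    {
        handle_fast_tick();
    }
    else if (htim->Instance == TIM3)   /* assumed 1 kHz timer */
    {
        handle_slow_tick();
    }
}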
If so, what would be a better strategy to achieve something close to what I need?
Re-check your favourite textbook on RTOS and microcontrollers. No SO answer can include all the theory needed to solve the problem properly.
Decide whether there will be any more urgent reaction in your system than treating the timeout events. If no, you may implement the timeout reaction in the ISR handler. If yes (or in case of doubt), implement the ISR with a task notification that goes to a task where you do what the timeout event requires. This may be the task from which you started the timer, or another one.
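A rough sketch of that deferral with FreeRTOS task notifications (assuming FreeRTOS underneath the CMSIS-RTOS layer; the task handle and the on_tim2_elapsed_from_isr()/do_fast_work() names are assumptions):

#include "FreeRTOS.h"
#include "task.h"

void do_fast_work(void);                       /* hypothetical processing */
static TaskHandle_t fast_task_handle;          /* hypothetical: set when the task is created */

/* Runs in interrupt context, e.g. called from HAL_TIM_PeriodElapsedCallback. */
void on_tim2_elapsed_from_isr(void)
{
    BaseType_t woken = pdFALSE;
    vTaskNotifyGiveFromISR(fast_task_handle, &woken);
    portYIELD_FROM_ISR(woken);                 /* switch now if the task has higher priority */
}

/* The task that does the real work, at task (not interrupt) priority. */
void fast_task(void *arg)
{
    (void)arg;
    for (;;)
    {
        ulTaskNotifyTake(pdTRUE, portMAX_DELAY);   /* block until the ISR notifies */
        do_fast_work();
    }
}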


Why are there many schedule() calls in different places?

I am tracing Linux 0.11
https://mirrors.edge.kernel.org/pub/linux/kernel/Historic/old-versions/
I see there are many schedule() calls in different places, not just the one inside do_timer().
A few questions here:
Will do_timer() (in sched.c) be called every time the timer times out? Is this timer based on an x86 interrupt?
Since there are many schedule() calls outside of do_timer(), can I say that is a kind of preemption? Or what is their purpose?
Any operation that blocks calls schedule() to yield control.
Some task's state has changed, and schedule() needs to take that into account.
Some tasks are still working and have a lot of work left, so schedule() is called to balance CPU time.
Since there are many schedule() calls outside of do_timer(), can I say that is a kind of preemption? Or what is their purpose?
For a real OS; most task switches occur because a task blocks waiting for something (user input, network packet, disk IO, ..) or a task unblocks because something it was waiting for happened (and the unblocked task has higher priority and preempts the currently running lower priority task).
The whole "task switch caused by timer IRQ" thing is mostly just a fallback to guard against malicious CPU hogs (denial of service attacks); and for normal software under normal conditions you could disable it (delete the schedule() from the timer IRQ handler) and nobody would notice or care. Note: Some people will say it's also for "non-malicious" CPU bound tasks, but CPU bound tasks are relatively rare, and (ignoring the fact that the Linux scheduler has never been good for task priorities) for CPU bound tasks it's better to rely on an effective system of task priorities (e.g. give the CPU bound tasks a low priority so that almost everything will preempt them).
Also note that various courses on OS theory start with "so simple it never actually happens in practice" concepts, which is almost always a pure round-robin scheduler with tasks that never block (often with "Hey, we can accurately predict the future and know exactly how long each task will run for" nonsense), which is mostly fine as a first step (in a "learn to walk before you run" way) but sucks big salty dog balls if it's not followed by more realistic and more complex concepts (better scheduling algorithms, task priorities, multiple simultaneous scheduling algorithms/"scheduler policies", multi-CPU, interactive/latency sensitive tasks, ..) because it leaves the student/victim with little more than misinformation (e.g. the ever re-occurring "all tasks switches are caused by timer IRQ" misconception).
Will do_timer() (in sched.c) be called every time the timer times out? Is this timer based on an x86 interrupt?
I'm guessing that the timer was the raw PIT chip's IRQ (given that Linux version 0.11 was "absolute beginner developer with no intention of making it portable" historical memorabilia from before thousands of volunteers fixed half of the worst parts).
Also don't forget that the scheduler uses time for two different things - the "current task has used too much CPU time" thing that almost never matters, and figuring out when tasks that are blocked/sleeping (e.g. because they called sleep()) should unblock/wake up. The do_timer() might be for either of these things and might be for both (I don't know without looking at it).

ARM GIC Interrupt starvation

Not sure if there are similar questions. I tried to search but couldn't find any, so here it is.
In my bare-metal application that uses an ARM Cortex-A9 (dual core with GIC), some of the interrupt sources are 4 FPGA interrupts (let's say IRQ IDs 58, 59, 60, 61) that have the same priority, and the idea is that they all trigger continuously and more or less simultaneously at run time. I would say the interrupt handlers qualify as long, but not very long.
All interrupts fire and are detected by the GIC, and all are flagged as PENDING. The problem is, only the two lower-numbered interrupts (58, 59) get handled by the CPU, starving the other two. Once 58 or 59 is done, its source triggers again and grabs the CPU over and over again. My other interrupts are being starved indefinitely.
I played around with priority, assigning higher priority to 60 and 61. Sure enough, 60 and 61 triggered and got handled by the CPU, but 58 and 59 were starved. So it's really an issue of starvation.
Is there any way out of here, such that the other two will still be processed given their triggering rate?
Assuming the GIC implementation is one of ARM's designs, then the arbitration scheme for multiple interrupts at the same priority is fixed at "dispatch the lowest-numbered one", so if you were hoping it could be changed to some kind of round-robin scheme you're probably out of luck.
That said, if these interrupts are more or less permanently asserted and you're taking them back-to-back then that's a sign that you probably don't need to use interrupts, or at least that the design of your code is inappropriate. Depending on the exact nature of the task, here are some ideas I'd consider:
Just run a continuous polling loop cycling through each device in turn. If there are periods when each device might not need servicing and it's not straightforward to tell, retain a trivial interrupt handler that just atomically sets a flag/sequence number/etc. to inform the loop who's ready.
Handle all the interrupts on one core, and the actual processing on the other. The handler just grabs the necessary data, stuffs it into a queue, and returns as quickly as possible, while the other guy just steadily chews through the queue.
If catching every single interrupt is less important than just getting "enough" of each of them on average, leave each one disabled for a suitable timeout after handling it. Alternatively, hack up your own round-robin scheduling by having only one enabled at a time, and the handler reenables the next interrupt instead of the one just taken.
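As a rough sketch of that last "one enabled at a time" round-robin idea (all names are hypothetical; gic_enable_irq()/gic_disable_irq() stand for writes to the GIC distributor's GICD_ISENABLERn/GICD_ICENABLERn registers):

void gic_enable_irq(unsigned id);        /* hypothetical distributor set-enable   */
void gic_disable_irq(unsigned id);       /* hypothetical distributor clear-enable */
void service_fpga_source(unsigned id);   /* hypothetical device servicing         */

#define NUM_FPGA_IRQS 4
static const unsigned fpga_irq_ids[NUM_FPGA_IRQS] = { 58, 59, 60, 61 };
static unsigned current;                 /* index of the one FPGA interrupt currently enabled */

/* Called from the GIC dispatch code for whichever FPGA ID was taken. */
void fpga_irq_handler(unsigned irq_id)
{
    service_fpga_source(irq_id);

    /* Round-robin: disable the source just taken, enable the next one. */
    gic_disable_irq(fpga_irq_ids[current]);
    current = (current + 1) % NUM_FPGA_IRQS;
    gic_enable_irq(fpga_irq_ids[current]);
}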
In my bare-metal application that uses ARM Cortex-A9 (dual core with GIC)...
Is there any way out of here, such that the other two will still be processed given their triggering rate?
Of course there are many ways.
You have a dual-core CPU, so you can route a set to each core: 58/59 to CPU0 and 60/61 to CPU1. It is not clear how you have configured things in the distributor or the per-CPU interfaces.
A second way is to read the status of 60/61 in the 58/59 handlers and do their work there. I.e., you can always read the status of another interrupt source from within an IRQ handler.
You can also service each and every pending interrupt recorded at the start of the IRQ before acknowledging the original source. A variant of '2' implemented at the IRQ controller layer.
I believe that most of these solutions avoid needless context save/restores and should also be more efficient.
Of course if you are asking the CPU to do more work than it can handle, priorities don't matter. The issue may be your code is not efficient; either the bare metal interrupt infrastructure or your FPGA IRQ handler. It is also quite likely the FPGA to CPU interface is not designed well. You may need to add FIFOs in the FPGA to buffer the data so the CPU can handle more data at a time. I have worked with several FPGA designers. They have a lot of flexibility and usually if you ask for something that will make the IRQ handler more efficient, they can implement it.
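A minimal sketch of the second and third points above, batching all ready FPGA sources in one handler entry (fpga_status_pending() and service_fpga_source() are hypothetical accessors for the FPGA's status register and servicing code):

int  fpga_status_pending(unsigned id);   /* hypothetical FPGA status read  */
void service_fpga_source(unsigned id);   /* hypothetical servicing code    */

/* Entered for whichever of the four IDs the GIC happened to dispatch. */
void fpga_batch_irq_handler(void)
{
    /* Service every FPGA source that is currently ready, not only the one whose
     * ID was dispatched, so 60/61 still make progress while 58/59 keep re-asserting. */
    for (unsigned id = 58; id <= 61; id++)
    {
        if (fpga_status_pending(id))
            service_fpga_source(id);
    }
}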

How to Disable/Delay the watchDog Timer for a certain Task in an embedded system

I'm working on a project for an automotive system where we use the MPC5748 MCU. The application uses an RTOS based on AUTOSAR OS, and this MPC target supports two types of watchdog, software and hardware (the project uses the software WDT).
My mission is to fit an algorithm into this application. The development of the algorithm is done; the problem is that the task in which the algorithm runs is a 1 ms task, and the algorithm needs much more time than the time dedicated to this function.
I'm a newbie to the embedded world. By the way, in the algorithm's main function the program resets itself, and this seems to be a timeout generated by the expiration of the watchdog.
My questions are:
Can I disable the watchdog timer for this specific function (it must not stay disabled, this is just for testing purposes)? Is it possible to use a longer watchdog timeout for that specific function?
Must I develop another task with a big delay in order to run the algorithm? The problem is that the algorithm needs to be synchronised with the 1 ms task, since we are receiving CAN commands.
Can I add a sleep(<1 ms) in the desired function in order to wait a little bit without affecting other tasks?
What are other options to try?
NB: This is a general question about watchdog timers, and any useful information will be very helpful to me. Sorry, I can't share the code.
Can I disable the watchdog timer for this specific function (it must not stay disabled, this is just for testing purposes)? Is it possible to use a longer watchdog timeout for that specific function?
Let's forget that one - it is a really bad idea. If it is possible to defeat the watchdog, then it is possible to do it by error, and then the whole point of the watchdog is defeated. Apart from that, it's an XY question - a question about your proposed solution to a different problem - you should ask about the problem directly.
Must I develop another task with a big delay in order to run the algorithm? The problem is that the algorithm needs to be synchronised with the 1 ms task, since we are receiving CAN commands.
Yes, you need another task, but you should not add a "big delay": it is probably unnecessary and certainly a bad design. If the 1 ms task needs the result of the algorithm, then the algorithm should run in a service task triggered by the 1 ms task and running asynchronously to it; the service task then makes the results available to the 1 ms task when they are ready (by shared memory or message passing perhaps). Alternatively, if the result is not specifically needed by the 1 ms task, the service task can take the necessary action independently of the 1 ms task.
There are many options, but essentially it seems that your task partitioning is inappropriate; your CAN Rx task should be responsible for receiving CAN messages only, and any action required in response to CAN messages should be deferred to one or more other tasks, perhaps fed from a message queue.
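A rough OSEK/AUTOSAR-style sketch of that partitioning (the task names, helpers, and header name are assumptions; the real task attributes belong in the OS configuration):

#include "Os.h"   /* AUTOSAR OS / OSEK header; exact name may vary per vendor */

void copy_can_data_to_shared_buffer(void);   /* hypothetical; protect with GetResource/ReleaseResource */
void run_algorithm(void);                    /* hypothetical long computation */
void publish_result(void);                   /* hypothetical: shared memory or message for the 1 ms task */

DeclareTask(Task_Algo);                      /* lower-priority worker task */

/* The existing 1 ms task: handle CAN (and the watchdog) only, then kick the worker. */
TASK(Task_1ms)
{
    copy_can_data_to_shared_buffer();
    ActivateTask(Task_Algo);                 /* or SetEvent() if Task_Algo is an extended task */
    TerminateTask();
}

/* The worker: runs the long algorithm without blocking the 1 ms task or starving the watchdog. */
TASK(Task_Algo)
{
    run_algorithm();
    publish_result();
    TerminateTask();
}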
What are other options to try?
Software design should not be a matter of trial and error - get the design right, implement the design. However you might consider whether 1ms is appropriate; is it possible that the period can be extended to encompass the worst case execution time without causing a failure to meet deadlines in general? If the answer is "no" then the algorithm does not belong in this task.
I don't think you can disable/delay the watchdog timer, and even if you could, that's not a good option to go for.
The problem, I think, is that the task you are calling has a 1 ms period, which is very little time to read CAN messages and then operate on them. The minimum task period, I think, should be 5 ms, and the optimal one 10 ms.
Can I disable the watchdog timer for this specific function (it must not stay disabled, this is just for testing purposes)? Is it possible to use a longer watchdog timeout for that specific function?
You should never disable the watchdog anywhere in your code.
It might not even be possible; on the MPC5x families you typically set up the watchdog once, and then for safety reasons all watchdog registers become read-only.
Must I develop another task with a big delay in order to run the algorithm? The problem is that the algorithm needs to be synchronised with the 1 ms task, since we are receiving CAN commands.
Ideally you should only service the watchdog from one single location in the program. Your CAN peripheral will be FlexCAN, which has a lot of available "mailboxes" for CAN messages. In most cases you shouldn't need to poll it; instead a flag will be set when the desired message arrives.
So it isn't obvious to me why you would need a delay to wait for them. Simply do:
void the_task (void)
{
    wdog_refresh();
    ... // do other things
    if (can_message_available)
    {
        // do something with the message
    }
    ... // do other things
}
rather than
// BAD:
while (!can_message_available)
    ; // do nothing
Even if you need to use the CAN as FIFO and poll it repeatedly, you would still use the same approach. You'd just have to ensure that the task runs often enough that there will never be an overflow in the FIFO buffer.

what's wrong in using interrupt handlers as event listeners

My system is simple enough that it runs without an OS; I simply use interrupt handlers like I would use event listeners in a desktop program. In everything I read online, people try to spend as little time as they can in interrupt handlers and give control back to the tasks. But I don't have an OS or a real task system, and I can't really find design information for OS-less targets.
I have basically one interrupt handler that reads a chunk of data from the USB and writes the data to memory, and one interrupt handler that reads the data, sends the data out on GPIO, and schedules itself on a hardware timer again.
What's wrong with using the interrupts the way I do, and using the NVIC (I use a Cortex-M3) to manage the work hierarchy?
First of all, in the context of this question, let's refer to the OS as a scheduler.
Now, unlike threads, interrupt service routines are "above" the scheduling scheme.
In other words, the scheduler has no "control" over them.
An ISR enters execution as a result of a HW interrupt, which sets the PC to a different address in the code-section (more precisely, to the interrupt-vector, where you "do a few things" before calling the ISR).
Hence, essentially, the priority of any ISR is higher than the priority of the thread with the highest priority.
So one obvious reason to spend as little time as possible in an ISR, is the "side effect" that ISRs have on the scheduling scheme that you design for your system.
Since your system is purely interrupt-driven (i.e., no scheduler and no threads), this is not an issue.
However, if nested ISRs are not allowed, then interrupts must be disabled from the moment an interrupt occurs and until the corresponding ISR has completed. In that case, if any interrupt occurs while an ISR is in execution, then your program will effectively ignore it.
So the longer you spend inside an ISR, the higher the chances are that you'll "miss out" on an interrupt.
In many desktop programs, events are sent to a queue, and there is some "event loop" that handles this queue. This event loop handles events one by one, so it is not possible for one event to interrupt another. It is also good practice in event-driven programming to keep all event handlers as short as possible, because they are not interruptible.
In bare-metal programming, interrupts are similar to events, but they are not sent to a queue.
Execution of interrupt handlers is not sequential; they can be interrupted by an interrupt with higher priority (numerically lower number on Cortex-M3).
There is no queue of identical interrupts - e.g. you can't detect multiple GPIO interrupts while you are already inside that interrupt - and this is the reason you should keep all routines as short as possible.
It is possible to implement queues yourself, feed these queues from interrupts, and consume them in your super loop (disabling interrupts around the consuming access). With this approach you get sequential processing of interrupts. If you keep your handlers short, this is mostly not needed and you can do the work in the handlers directly.
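A minimal sketch of such an ISR-fed queue (sizes and names are assumptions; __disable_irq()/__enable_irq() are the CMSIS intrinsics on Cortex-M):

#include <stdint.h>

uint8_t read_device(void);             /* hypothetical: reads and clears the IRQ source */
void    process(uint8_t data);         /* hypothetical non-time-critical work */

#define QUEUE_SIZE 32                  /* must be a power of two for the index mask */
static volatile uint8_t  queue[QUEUE_SIZE];
static volatile uint32_t head, tail;   /* head: written by the ISR, tail: by the loop */

void some_isr(void)                    /* hypothetical interrupt handler */
{
    uint8_t data = read_device();
    if ((uint32_t)(head - tail) < QUEUE_SIZE)      /* drop on overflow rather than corrupt */
    {
        queue[head & (QUEUE_SIZE - 1)] = data;
        head++;                        /* single writer, so no lock needed here */
    }
}

void super_loop(void)
{
    for (;;)
    {
        __disable_irq();               /* consume atomically with respect to the ISR */
        int have = (head != tail);
        uint8_t data = queue[tail & (QUEUE_SIZE - 1)];
        if (have)
            tail++;
        __enable_irq();

        if (have)
            process(data);
    }
}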
It is also good practice in OS-based systems to use queues, semaphores, and "interrupt handler tasks" to handle interrupts.
With bare metal it is perfectly fine to design for application-bound or interrupt/event-bound work, so long as you do your analysis. So if you know what events/interrupts are coming at what rate, and you can ensure that you will handle all of them in the desired/designed amount of time, you can certainly take your time in the event/interrupt handler rather than be quick and send a flag to the foreground task.
The common approach of course is to get in and out fast, saving just enough info to handle the thing in the foreground task. The foreground task has to spin its wheels of course looking for event flags, prioritizing, etc.
You could of course make it more complicated: when the interrupt/event comes, save state, and return to the foreground handler in foreground mode rather than interrupt mode.
Now that is all general, but specific to the Cortex-M3, I don't think there are really modes like on the big-brother ARMs. So long as you take a real-time approach, make sure your handlers are deterministic, and do your system engineering to ensure that no situation arises where the events/interrupts stack up such that the response is non-deterministic, too late, too long, or loses data, it is okay.
What you have to ask yourself is whether all events can be serviced in time in all circumstances.
For example:
If your interrupt system were run-to-completion, will the servicing of one interrupt cause unacceptable delay in the servicing of another?
On the other hand, if the interrupt system is priority-based and preemptive, will the servicing of a high priority interrupt unacceptably delay a lower one?
In the latter case, you could use Rate Monotonic Analysis to assign priorities to assure the greatest responsiveness (the shortest-execution-time handlers get the highest priority). In the first case your system may lack a degree of determinism, and performance will be variable under both event load and code changes.
One approach is to divide the handler into real-time critical and non-critical sections: the time-critical code is done in the handler, then a flag is set to prompt the non-critical action to be performed in the "background" non-interrupt context, in a "big-loop" system that simply polls event flags or shared data for work to complete. Often all that is necessary in the interrupt handler is to copy some data and timestamp the event - making the data available for background processing without holding up the processing of new events.
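A rough sketch of that split (all names are assumptions; the shared variables must be volatile so the background loop sees the ISR's writes):

#include <stdint.h>

uint16_t read_adc(void);                                /* hypothetical data source       */
uint32_t read_tick_count(void);                         /* hypothetical timestamp source  */
void     process_sample(uint16_t value, uint32_t t);    /* hypothetical non-critical work */

static volatile int      sample_ready;      /* set by the ISR, cleared by the background loop */
static volatile uint16_t sample_value;
static volatile uint32_t sample_timestamp;

void sample_isr(void)                       /* time-critical part only */
{
    sample_value     = read_adc();
    sample_timestamp = read_tick_count();
    sample_ready     = 1;
}

void background_loop(void)
{
    for (;;)
    {
        if (sample_ready)
        {
            sample_ready = 0;
            process_sample(sample_value, sample_timestamp);
        }
        /* ... poll other event flags here ... */
    }
}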
For more sophisticated scheduling, there are a number of simple, low-cost or free RTOS schedulers that provide multi-tasking, synchronisation, IPC and timing services with very small footprints and can run on very low-end hardware. If you have a hardware timer and 10K of code space (sometimes less), you can deploy an RTOS.
Taking your described problem first:
As I interpret it, your goal is to create a device which, by receiving commands over USB, drives some GPIO, such as LEDs, relays, etc. For this simple task your approach seems to be fine (if the USB layer can work with it adequately).
A prioritization problem exists, though: if you overload the USB side (with data from the other end of the cable), and the interrupt handling it has a higher priority than the one triggered by the timer handling the GPIO, then the GPIO side may miss ticks (as others explained, interrupts don't queue).
In your case, this is about all that needs to be considered.
Some general guidance
For the "spend as little time in the interrupt handler as possible" the rationale is just what others told: an OS may realize a queue, etc., however hardware interrupts offer no such concepts. If the event causing the interrupt happens, the CPU enters your handler. Then until you handle it's source (such as reading a receive holding register in the case of a UART), you lose any further occurrences of that event. After this point, until exiting the handler, you may receive whether the event happened, but not how many times (if the event happened again while the CPU was still processing the handler, the associated interrupt line goes active again, so after you return from the handler, the CPU immediately re-enters it provided nothing higher priority is waiting).
Above I described the general concept observable on 8 bit processors and the AVR 32bit (I have experience with these).
When designing such low-level systems (no OS, one "background" task, and some interrupts) it is fundamental to understand what goes on at each priority level (if you use several). In general, you would give the most real-time-critical work the highest priority, taking the most care to serve it fast, while being more relaxed with the lower priority levels.
From another aspect, at the design phase it can usually be planned how the system should react to missed interrupts, since where there are interrupts, missing one will eventually happen anyway. Critical data going across communication lines should have adequate checksums; an especially critical timer should be sourced from a count register, not from event counting; and the like.
Another nasty part of interrupts is their asynchronous nature. If you fail to design the related locks properly, they will eventually corrupt something, giving nightmares to the poor soul who has to debug it. The "spend as little time in the interrupt handler as possible" advice also encourages you to keep the interrupt code reasonably short, which means less code to consider for this problem as well. If you have also worked with multitasking assisted by an RTOS, you should know this part (there are some differences, though: a higher-priority interrupt handler's code does not need protection against a lower-priority handler's).
If you can properly design your architecture regarding the necessary asynchronous tasks, getting by without an OS (from the no-multitasking aspect) may even prove to be the nicer solution. It needs considerably more thought to design properly, but later there are far fewer locking-related problems. I have been through some mid-sized safety-critical projects designed around a single background "task" with very few, short interrupts, and the experience and maintenance demands of those (especially the tracing of bugs) were quite satisfactory compared to some others in the company built on multitasking concepts.

How do you test your interrupt handling module?

I've got an interrupt handling module which controls the interrupt controller hardware on an embedded processor. Now I want to add more tests to it. Currently, the tests only check that nesting of interrupts works, by making two software interrupts from within an ISR, one with low priority and one with high priority. How can I test this module further?
I suggest that you try to create other stimuli as well.
Often, hardware interrupts can also be triggered by software (for automated testing) or by the debugger setting a flag. Or as an interrupt via I/O. Or a timer interrupt. Or you can just set the interrupt pending bit in the interrupt controller via the debugger while you are single-stepping.
You can add some runtime checks on things which are not supposed to happen. Sometimes I elect to set output pins to monitor externally (nice if you have an oscilloscope or logic analyser...)
volatile int LOW_PRIO_ISR;
volatile int HIGH_PRIO_ISR;

void low_prio_isr(void)
{
    LOW_PRIO_ISR = 1;
    if (1 == HIGH_PRIO_ISR)
    {
        /* this may never happen; dummy statement to allow a breakpoint in the debugger */
    }
}

void high_prio_isr(void)
{
    HIGH_PRIO_ISR = 1;
}
The disadvantage of the software interrupt is that the moment is fixed; it always happens at the same instruction. I believe you would like to see evidence that it always works, deadlock-free.
For interrupt service routines I find code reviews very valuable. In the end you can only test the situations you've imagined and at some point the effort of testing will be very high. ISRs are notoriously difficult to debug.
I think it is useful to provide tests for the following:
- An ISR is not interrupted by a lower-priority interrupt
- An ISR is not interrupted by a same-priority interrupt
- An ISR is interrupted by a higher-priority interrupt
- Maximum nesting count stays within stack limitations
Some of your tests may stay in the code as instrumentation (so you can monitor, for instance, the maximum nesting level).
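For instance, a minimal sketch of nesting-level instrumentation that can stay in the build (names are assumptions; read max_nesting from a debugger or report it over a diagnostic channel):

static volatile unsigned isr_nesting;   /* current nesting depth          */
static volatile unsigned max_nesting;   /* high-water mark across the run */

/* Call at the entry of every ISR under test. */
static inline void isr_enter(void)
{
    /* If an exact count matters, wrap this read-modify-write in a brief interrupt disable. */
    unsigned depth = ++isr_nesting;
    if (depth > max_nesting)
        max_nesting = depth;
}

/* Call at the exit of every ISR under test. */
static inline void isr_exit(void)
{
    --isr_nesting;
}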
Oh, and one more thing: I've generally managed to keep ISRs so short that I can refrain from nesting... if you can, this will gain you additional simplicity and more performance.
[EDIT]
Of course, ISRs need to be tested on hardware in system too. Apart from the bit-by-bit, step-by-step approach you may want to prove:
- stability of the system at maximum interrupt load (preferably several times the predicted maximum load; if your 115 kbps serial driver can also handle 2 Mbps you'll be OK!)
- the correct moments of enabling/disabling ISRs, especially if the system also enters a sleep mode
- the number of interrupts; it can be surprising if you add mechanical switches or a mechanical rotary encoder (hundreds of break/contact events before reaching a steady state)
I recommend real-hardware testing. Interrupt handling is inherently random and unpredictable.
Use a signal generator and feed a square wave into the appropriate interrupt pin. Use multiple generators (or one with multiple outputs) to test multiple IRQ lines and verify priority handling.
Experiment with dialing the frequency up & down on the signal generators (vary the rates between them), and see what happens. Have lots of diagnostic code to verify the state of the interrupt controller in the various states.
Alternative: If your platform has timers that can trigger interrupts, you can use them instead of external hardware.
I'm not an embedded developer, so I don't know if this is possible, but how about decoupling the code that handles the interrupts from the callback-registration mechanism? This would allow you to write simulator code firing interrupt events as you like...
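A rough sketch of that decoupling (all names are assumptions): the handlers sit behind a registration table, so the real vector stubs call the same dispatch function that host-side simulator/test code can call in any order.

#include <stddef.h>

typedef void (*irq_handler_t)(void);

#define NUM_IRQ_LINES 16
static irq_handler_t handlers[NUM_IRQ_LINES];

void irq_register(unsigned line, irq_handler_t handler)
{
    if (line < NUM_IRQ_LINES)
        handlers[line] = handler;
}

/* Called by the real vector stubs on the target... */
void irq_dispatch(unsigned line)
{
    if (line < NUM_IRQ_LINES && handlers[line] != NULL)
        handlers[line]();
}

/* ...and by simulator/test code on the host, e.g.: */
void test_low_then_high(void)
{
    irq_dispatch(3);   /* pretend the low-priority line fired */
    irq_dispatch(1);   /* then the high-priority line fired   */
}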
For stuff like this I highly recommend something like the SPIN model checker. You wind up testing the algorithm, not the code, but the testing is exhaustive. Back in the day, I found a bug in gdb using this technique.
