How do you test your interrupt handling module? - c

I've got an interrupt handling module which controls the interrupt controller hardware on an embedded processor. Now I want to add more tests to it. Currently, the tests only check whether nesting of interrupts works, by raising two software interrupts from within an ISR, one with low priority and one with high priority. How can I test this module further?

I suggest that you try to create other stimuli as well.
Hardware interrupts can often also be triggered from software (handy for automated testing), or from the debugger by setting a flag: as an interrupt via I/O, as a timer interrupt, or simply by setting the interrupt's pending bit in the interrupt controller from the debugger while you are single-stepping.
You can also add runtime checks on things which are not supposed to happen. Sometimes I elect to set output pins so I can monitor them externally (nice if you have an oscilloscope or logic analyser...):
volatile int LOW_PRIO_ISR;
volatile int HIGH_PRIO_ISR;

void low_prio_isr(void)
{
    LOW_PRIO_ISR = 1;
    if (1 == HIGH_PRIO_ISR)
    {
        /* Should never happen: dummy branch so you can set a breakpoint here in the debugger. */
    }
    LOW_PRIO_ISR = 0;
}

void high_prio_isr(void)
{
    HIGH_PRIO_ISR = 1;
    /* ... actual work ... */
    HIGH_PRIO_ISR = 0;
}
The disadvantage of the software interrupt is that the moment it fires is fixed: it is always raised at the same instruction. Presumably you want evidence that nesting works no matter where the interrupt lands, and that it is deadlock-free.
For interrupt service routines I find code reviews very valuable. In the end you can only test the situations you've imagined and at some point the effort of testing will be very high. ISRs are notoriously difficult to debug.
I think it is useful to provide tests for the following:
- the ISR is not interrupted by a lower-priority interrupt
- the ISR is not interrupted by a same-priority interrupt
- the ISR is interrupted by a higher-priority interrupt
- the maximum nesting count stays within stack limitations.
Some of your tests may stay in the code as instrumentation (so you can, for instance, monitor the maximum nesting level).
Oh, and one more thing: I've generally managed to keep ISRs so short that I can refrain from nesting at all. If you can do the same, it will gain you additional simplicity and better performance.
[EDIT]
Of course, ISRs need to be tested on hardware in system too. Apart from the bit-by-bit, step-by-step approach you may want to prove:
- stability of the system at maximum interrupt load (preferably several times the predicted maximum load; if your 115 kbps serial driver can also handle 2 Mbps, you'll be OK!)
- the correct moments of enabling / disabling the ISR, especially if the system also enters a sleep mode
- the number of interrupts. This can be surprising once you add mechanical switches or a mechanical rotary encoder (hundreds of break/contact bounces before reaching a steady state)

I recommend testing on real hardware. Interrupt timing is inherently asynchronous and hard to reproduce, so exercise it with real, unpredictable stimuli.
Use a signal generator and feed a square wave into the appropriate interrupt pin. Use multiple generators (or one with multiple outputs) to test multiple IRQ lines and verify priority handling.
Experiment with dialing the frequency up & down on the signal generators (vary the rates between them), and see what happens. Have lots of diagnostic code to verify the state of the interrupt controller in the various states.
Alternative: If your platform has timers that can trigger interrupts, you can use them instead of external hardware.

I'm not an embedded developer, so I don't know if this is possible, but how about decoupling the code that handles the interrupts from the callback-registration mechanism? This would allow you to write simulator code firing interrupt events however you like...

For stuff like this I highly recommend something like the SPIN model checker. You wind up testing the algorithm, not the code, but the testing is exhaustive. Back in the day, I found a bug in gdb using this technique.

Related

How to know whether an IRQ was served immediately on ARM Cortex M0+ (or any other MCU)

For my application (running on an STM32L082) I need accurate (relative) timestamping of a few types of interrupts. I do this by running a timer at 1 MHz and taking its count as soon as the ISR is run. They are all given the highest priority so they pre-empt less important interrupts. The problem I'm facing is that they may still be delayed by other interrupts at the same priority and by code that disables interrupts, and there seems to be no easy way to know this happened. It is no problem that the ISR was delayed, as long as I know that the particular timestamp is not accurate because of this.
My current approach is to let each ISR and each block of code with interrupts disabled check whether interrupts are pending using NVIC->ISPR[0] and flagging this for the pending ISR. Each ISR checks this flag and, if needed, flags the timestamp taken as not accurate.
Although this works, it feels like it's the wrong way around. So my question is: is there another way to know whether an IRQ was served immediately?
The IRQs in question are EXTI4-15 for a GPIO pin change and RTC for the wakeup timer. Unfortunately I'm not in the position to change the PCB layout and use TIM input capture on the input pin, nor to change the MCU used.
update
The fundamental limit to accuracy in the current setup is determined by the nature of the internal RTC calibration, which periodically adds/removes 32kHz ticks, leading to ~31 µs jitter. My goal is to eliminate (or at least detect) additional timestamping inaccuracies where possible. Having interrupts blocked incidentally for, say, 50+ µs is hard to avoid and influences measurements, hence the need to at least know when this occurs.
update 2
To clarify, I think this is a software question, asking if a particular feature exists and if so, how to use it. The answer I am looking for is one of: "yes it is possible, just check bit X of register Y", or "no it is not possible, but MCU ... does have such a feature, called ..." or "no, such a feature is generally not available on any platform (but the common workaround is ...)". This information will guide me (and future readers) towards a solution in software, and/or requirements for better hardware design.
In general
The ideal solution for accurate timestamping is to use timer capture hardware (built-in to the microcontroller, or an external implementation). Aside from that, using a CPU with enough priority levels to make your ISR always the highest priority could work, or you might be able to hack something together by making the DMA engine sample the GPIO pins (specifics below).
Some microcontrollers have connections between built-in peripherals that allow one peripheral to trigger another (like a GPIO pin triggering timer capture even though it isn't a dedicated timer capture input pin). Manufacturers have different names for this type of interconnection, but a general overview can be found on Wikipedia, along with a list of the various names. Exact capabilities vary by manufacturer.
I've never come across a feature in a microcontroller for indicating if an ISR was delayed by a higher priority ISR. I don't think it would be a commonly-used feature, because your ISR can be interrupted by a higher priority ISR at any moment, even after you check the hypothetical was_delayed flag. A higher priority ISR can often check if a lower priority interrupt is pending though.
For your specific situation
A possible approach is to use a timer and DMA (similar to audio streaming; double-buffered/circular modes are preferred) to continuously sample your GPIO pins into a buffer, and then scan the buffer to determine when the pins changed. Note that this means the CPU must scan the buffer before it is overwritten again by DMA, which means the CPU can only sleep in short intervals and must keep the timer and DMA clocks running. ST's AN4666 is a relevant document and has accompanying example code (an ST account is required to download it). They're using a different microcontroller, but they claim the approach can be adapted to others in their lineup.
Otherwise, with your current setup, I don't think there is a better solution than the one you're using (the flag that's set when you detect a delay). The ARM Cortex-M0+ NVIC does not have a feature to indicate if an ISR was delayed.
A refinement to your current approach might be making the ISRs as short as possible, so they only do the timestamp collection and then put any other work into a queue for processing by the main application at a lower priority (only applicable if the work is more complex than the enqueue operation, and if the work isn't time-sensitive). Eliminating or making the interrupts-disabled regions short should also help.

STM32 RTOS timer interrupt and threads

I am working on a project where I need to execute 2 pieces of code off TIM interrupts. One of them has a slightly higher priority than the other, and both will be running on 2 different timers (of course not at the same interval). Because the timer rates are multiples of one another (one is 1 kHz, one is 8 kHz), both will sometimes trigger at the same time.
Since I am already using the RTOS middleware for other purposes (threads of a much lower priority than these two), I was thinking of creating one thread for each of these routines.
However, looking at how CubeMX generates code, I am wondering whether this is even possible.
I can start/stop these timers from any thread, but there is only one HAL_TIM_PeriodElapsedCallback which you usually fill with if statements like so:
if (htim->Instance == TIM2)
Am I correct to assume, regardless of which thread the timers are started from, the TIM callback will always occur "outside" of the RTOS environment?
if so, what would be a better strategy to achieve something close to what I need?
Cheers
The interrupts will trigger. But remember:
- the interrupt's priority (not the RTOS task priority; the two are unrelated) must be lower than the SVC interrupt priority if you want to use any ...FromISR RTOS functions
- they will not happen at literally the same time (as you have only one core)
I am working on a project where I need to execute 2 pieces of code off
TIM interrupts. One of them has a slightly higher priority than the
other, and both will be running on 2 different timers...
What exactly do you mean by "one of them has a [..] higher priority"? The HW timer events simply occur whenever the timers expire; I think you mean the handler code servicing the timeout events.
... (of course not at the same interval). Because the timer rates are multiples of one another (one is 1 kHz, one is 8 kHz), both will trigger at the same time.
In embedded real-time programming, you should never build on the assumption that IRQ events will not occur at the same time: your ISR handlers may be suppressed at the moment a trigger event occurs, so even if two events trigger closely after each other, to your software it may look as if they had triggered at the same time. The solution is what your question points at: context priorities (of tasks (= "threads") and ISRs (= "interrupt handlers")) let you avoid the question of which event came earlier, and let you control which event to treat first.
Since I am already using the RTOS middleware for other purposes (threads of a much lower priority than these two), I was thinking of creating one thread for each of these routines.
You are free to deploy code to an RTOS task or to an ISR, but keep in mind that any ISR will have a higher priority than any task. Your TIM event will trigger an ISR (= interrupt context), but you can (and often should) use the ISR to send a notification (or event, or semaphore, or queue message) to a task in order to have the main part of the timer event processed at the lower priority of a task.
However, looking at how cubeMX is generating code, I am even wondering if this is possible.
CubeMX does not limit whether or not you use tasks. The question is rather how much of the code you need CubeMX will generate, and how much you have to add manually. Note that you don't have to use the CubeMX configuration feature to create tasks; this can be done from your own C code, too.
I can start/stop these timers from any thread, but there is only one HAL_TIM_PeriodElapsedCallback which you usually fill with if statements like so:
if (htim->Instance == TIM2)
Am I correct to assume, regardless of which thread the timers are started from, the TIM callback will always occur "outside" of the RTOS environment?
Yes, you are. The question of who started the timer is not relevant to the context in which the timer event is handled. In any case, the TIM will trigger its ISR (at the interrupt priority configured for that interrupt).
If you use the CubeHAL library, it implements the root of that ISR, checks which of the TIMs related to that ISR have elapsed, and invokes the callback you showed. Here you can insert your user code for the different TIM instances (like TIM2 in your case).
if so, what would be a better strategy to achieve something close to what I need?
Re-check your favourite textbook on RTOSes and microcontrollers; no SO answer can include all the theory needed to solve the problem properly.
Decide whether there is any reaction in your system more urgent than treating the timeout events. If not, you may implement the timeout reaction in the ISR. If so (or in case of doubt), implement the ISR so it sends a task notification to a task where you do what the timeout event requires. This may be the task from which you started the timer, or another one.

Why not put task context in interrupt

Here is the story.
It's a safety-critical project that needs to run a time-critical functional routine at 20 kHz. The current design puts the functional routine in a 20 kHz FIQ interrupt, while the safety interrupt is also an FIQ. Those are the only two FIQs in the system. (There are, of course, a couple of IRQs enabled in the MCU as well.)
I know that it's not good to put task context in an ISR; the proper way is to set a flag and run the work in an OS task. But the current design seems to harm nobody.
The routine takes about 10 µs (main clock 300 MHz), so it basically will not block IRQ/FIQ for an unacceptable time. It even saves the time of an extra context switch compared with using an OS task to run the functional routine. It feels like the design goes against every principle in the university textbooks, but I cannot find a reason to say no to it.
How could I convince myself to move functional routine from ISR to OS? Should I?
Let's recollect your situation:
- you are coding a safety-critical system
- the software architecture isn't specified, otherwise you wouldn't be asking the question at hand
- the system requirements weren't processed correctly, otherwise the previous point wouldn't be an issue
- someone told you to "use minimum interrupts if possible in a safety critical system"
- you want to use the highest-priority, non-interruptible code for "just some math work"
Sorry for being a bit harsh but I wouldn't want to use/be in your safety critical system.
For your actual problem:
you have to make sure of two things:
- the code in the FIQ must be deterministic and WCET-tested
- the registers of the timer must be protected and supervised. Why? An unwanted/erroneous manipulation of the timer's registers by lower-safety-level code can congest the CPU so much that effectively nothing but the interrupt is processed.
All this under the assumption that your safe state depends entirely on an external hardware watchdog.
PS: What are the hazards for users of your system? Annoyance? Injury? Lethal? Are you in a SIL or ASIL context?
The reason to move complex code away from ISR is precisely to avoid lengthy processing in the ISR and thus timing jitter and delayed interrupt servicing resulting from it.
You are stating that your processing is not lengthy, so do it in the ISR! Otherwise you are just adding bloat.
20 kHz means 50 µs between interrupts; with 10 µs of processing time, that is roughly 20% of CPU time spent on this "task" alone, plus a jitter of up to 10 µs in any other routine running on your CPU, and an extra 10 µs of delay for every 40 µs any other task consumes. If that is OK for your project, and you keep your total CPU load below 70% (a common maximum for critical systems), IMHO it should work without any issue.

What's wrong with using interrupt handlers as event listeners

My system is simple enough that it runs without an OS; I simply use interrupt handlers like I would use event listeners in a desktop program. In everything I read online, people try to spend as little time as they can in interrupt handlers and give control back to the tasks. But I don't have an OS or a real task system, and I can't really find design information for OS-less targets.
I basically have one interrupt handler that reads a chunk of data from the USB and writes it to memory, and one interrupt handler that reads the data, sends it out on GPIO, and schedules itself on a hardware timer again.
What's wrong with using the interrupts the way I do, and using the NVIC (I use a Cortex-M3) to manage the work hierarchy?
First of all, in the context of this question, let's refer to the OS as a scheduler.
Now, unlike threads, interrupt service routines are "above" the scheduling scheme.
In other words, the scheduler has no "control" over them.
An ISR enters execution as a result of a HW interrupt, which sets the PC to a different address in the code-section (more precisely, to the interrupt-vector, where you "do a few things" before calling the ISR).
Hence, essentially, the priority of any ISR is higher than the priority of the thread with the highest priority.
So one obvious reason to spend as little time as possible in an ISR, is the "side effect" that ISRs have on the scheduling scheme that you design for your system.
Since your system is purely interrupt-driven (i.e., no scheduler and no threads), this is not an issue.
However, if nested ISRs are not allowed, then interrupts must be disabled from the moment an interrupt occurs and until the corresponding ISR has completed. In that case, if any interrupt occurs while an ISR is in execution, then your program will effectively ignore it.
So the longer you spend inside an ISR, the higher the chances are that you'll "miss out" on an interrupt.
In many desktop programs, events are sent to a queue and there is some "event loop" that handles this queue. The event loop handles one event at a time, so one event cannot interrupt another. It is also good practice in event-driven programming to keep all event handlers as short as possible, because they are not interruptible.
In bare-metal programming, interrupts are similar to events, but they are not sent to a queue:
- execution of interrupt handlers is not sequential; they can be interrupted by an interrupt with higher priority (numerically lower number on the Cortex-M3)
- there is no queue of identical interrupts - e.g. you can't detect multiple GPIO interrupts of the same kind that arrive while you are already in that interrupt - which is the reason you should keep all handlers as short as possible.
It is possible to implement queues yourself, feed them from the interrupts, and consume them in your super loop (disabling interrupts while you dequeue). With this approach you get sequential processing of interrupts. If you keep your handlers short, this is mostly not needed and you can do the work in the handlers directly.
It is also good practice in OS-based systems to use queues, semaphores, and "interrupt handler tasks" to handle interrupts.
With bare metal it is perfectly fine to design for an application-bound or interrupt/event-bound system, so long as you do your analysis. If you know what events/interrupts are coming at what rate, and you can ensure that you will handle all of them in the designed amount of time, you can certainly take your time in the event/interrupt handler rather than be quick and hand a flag to the foreground task.
The common approach of course is to get in and out fast, saving just enough info to handle the thing in the foreground task. The foreground task has to spin its wheels of course looking for event flags, prioritizing, etc.
You could of course make it more complicated: when the interrupt/event comes, save state and return to the foreground handler in foreground mode rather than interrupt mode.
That is all general; specific to the Cortex-M3, I don't think there are really separate processor modes like on the bigger ARM cores. So long as you take a real-time approach, make sure your handlers are deterministic, and do the system engineering to ensure no situation arises where events/interrupts stack up such that the response becomes non-deterministic, too late, or loses data, it is okay.
What you have to ask yourself is whether all events can be serviced in time in all circumstances.
For example:
If your interrupt system were run-to-completion, will the servicing of one interrupt cause unacceptable delay in the servicing of another?
On the other hand, if the interrupt system is priority-based and preemptive, will the servicing of a high priority interrupt unacceptably delay a lower one?
In the latter case, you could use Rate Monotonic Analysis to assign priorities to assure the greatest responsiveness (the shortest execution-time handlers get the highest priority). In the first case your system may lack a degree of determinism, and performance will be variable under both event load, and code changes.
One approach is to divide the handler into real-time critical and non-critical sections: the time-critical code is done in the handler, then a flag is set to prompt the non-critical action to be performed in the "background" non-interrupt context, in a "big-loop" system that simply polls event flags or shared data for work to complete. Often all that is necessary in the interrupt handler is to copy some data or timestamp some event, making the data available for background processing without holding up the processing of new events.
For more sophisticated scheduling, there are a number of simple, low-cost or free RTOS schedulers that provide multi-tasking, synchronisation, IPC and timing services with very small footprints and can run on very low-end hardware. If you have a hardware timer and 10K of code space (sometimes less), you can deploy an RTOS.
Taking your described problem first:
As I interpret it, your goal is to create a device which, on receiving commands over USB, drives some GPIO outputs such as LEDs, relays, etc. For this simple task your approach seems fine (if the USB layer can work with it adequately).
A prioritisation problem exists, though: if you overload the USB side (with data from the other end of the cable), and the interrupt handling it has higher priority than the timer interrupt handling the GPIO, the GPIO side may miss ticks (as others explained, interrupts can't queue).
In your case this is about all there is to consider.
Some general guidance
For the "spend as little time in the interrupt handler as possible" the rationale is just what others told: an OS may realize a queue, etc., however hardware interrupts offer no such concepts. If the event causing the interrupt happens, the CPU enters your handler. Then until you handle it's source (such as reading a receive holding register in the case of a UART), you lose any further occurrences of that event. After this point, until exiting the handler, you may receive whether the event happened, but not how many times (if the event happened again while the CPU was still processing the handler, the associated interrupt line goes active again, so after you return from the handler, the CPU immediately re-enters it provided nothing higher priority is waiting).
Above I described the general concept observable on 8 bit processors and the AVR 32bit (I have experience with these).
When designing such low-level systems (no OS, one "background" task, and some interrupts), it is fundamental to understand what goes on at each priority level (if you use more than one). In general, you give the most real-time-critical work the highest priority and take the most care to serve it fast, while being more relaxed about the lower priority levels.
From another angle, at the design phase you can usually plan how the system should react to missed interrupts, since where there are interrupts, missing one will eventually happen anyway. Critical data going across communication lines should have adequate checksums, an especially critical timer should be sourced from a count register rather than from counting events, and the like.
Another nasty part of interrupts is their asynchronous nature. If you fail to design the related locks properly, they will eventually corrupt something, giving nightmares to the poor soul who has to debug it. The "spend as little time in the interrupt handler as possible" advice also encourages you to keep the interrupt code reasonably short, which means less code to consider for this problem as well. If you have worked with multitasking assisted by an RTOS, you should know this part (there are some differences, though: a higher-priority interrupt handler's code does not need protection against a lower-priority handler's).
If you can properly design your architecture around the necessary asynchronous tasks, getting by without an OS (in the sense of no multitasking) may even prove to be the nicer solution. It takes more thought to design properly, but later there are far fewer locking-related problems. I went through some mid-sized safety-critical projects designed around a single background "task" with very few, small interrupts, and the experience and maintenance demands (especially the tracing of bugs) were quite satisfactory compared to some other projects in the company built on multitasking concepts.

Embedded Programming, Wait for 12.5 us

I'm programming on the C2000 F28069 Experimenters Kit. I'm toggling a GPIO output every 12.5 microseconds 5 times in a row. I decided I don't want to use interrupts (though I will if I absolutely have to). I want to just wait that amount of times in terms of clock cycles.
My clock is running at 80MHz, so 12.5 us should be 1000 clock cycles. When I use a loop:
for(i=0;i<1000;i++)
I get a result that is way too long (not 12.5 us). What other techniques can I use?
Is sleep(n); something that I can use on a microcontroller? If so, which header file do I need to download and where can I find it? Also, now that I think about it, sleep(n); takes an int input, so that wouldn't even work... any other ideas?
Summary: Use the PWM or Timer peripherals to generate output pulses.
First, the clock speed of the CPU has a complex relationship to actual code execution speed, and in many CPUs there is more than one clock rate involved in different stages of the execution. The chip you reference has several internal clock sources, for instance. Further, each individual instruction will likely take a different number of clocks to execute, and some cores can execute part of (or all of) several instructions simultaneously.
To rigorously create a loop that required 12.5 µs to execute without using a timing interrupt or other hardware device would require careful hand coding in assembly language along with careful accounting of the execution time of each instruction.
But you are writing in C, not assembler.
So the first question you have to ask is what machine code was actually generated for your loop. And the second question is did you enable the optimizer, and to what level.
As written, a decent optimizer will determine that the loop for (i=0; i<1000; i++) ; has no visible side effects, and therefore is just a slow way of writing ;, and can be completely removed.
If it does compile the loop, it could be written naively using perhaps as many as 5 instructions, or as few as one or two. I am not personally familiar with this particular TI CPU architecture, so I won't attempt to guess at the best possible implementation.
All that said, learning about the CPU architecture and its efficiency is important to building reliable and efficient embedded systems. But given that the chip has peripheral devices built-in that provide hardware support for PWM (pulse width modulated) outputs as well as general purpose hardware timer/counters you would be far better off learning to use the hardware to generate the waveform for you.
I would start by collecting every document available on the CPU core and its peripherals, especially app notes and sample code.
The C compiler will have an option to emit and preserve an assembly language source file. I would use that as a guide to study the structure of the code generated for critical loops and other bottlenecks, as well as the effects of the compiler's various optimization levels.
The tool suite should have a mechanism for profiling your running code. Before embarking on heroic measures in pursuit of optimizations, use that first to identify the actual bottlenecks. Even if it lacks decent profiling, you are likely to have spare GPIO pins that can be toggled around critical sections of code and measured with a logic analyzer or oscilloscope.
The chip you refer to has PWM (pulse-width modulation) hardware advertised as one of its major features, and you should rely on it; refer to the appropriate application guide. Generally you cannot guarantee 12.5 µs periods from the application layer (and should not try to). Even if you managed to do so directly from the application layer, it would be a bad idea: any change in your firmware code could break it.
If you use a timer peripheral with PWM output capability, as suggested by @RBerteig already, then you can generate an accurate timing signal with zero software overhead. If you need to do other work synchronously with the clock, you can use the timer interrupt to trigger that too. However, if you process interrupts at an interval of 12.5 µs you may find that your processor spends a great deal of time context switching rather than performing useful work.
If you simply want an accurate delay, you should still use a hardware timer and poll its reload flag rather than process its interrupt. This gives consistent timing independent of the compiler's code generation or the processor speed, and lets you do other work within the polling loop without extending the total loop time. The timing jitter and determinism will depend on what other work you do in the loop, but for an empty loop the reaction to the timer event will probably be faster than the latency of an interrupt handler.
