Cortex-M0+ not responding to PendSV

I'm running on a Raspberry Pi Pico (RP2040, Cortex-M0+ core, debugging via VSCode cortex-debug using JLink SWD), and I'm seeing strange behaviour regarding PendSV.
Immediately prior, the SVCall exception handler requested PendSV via the ICSR register. But on exception return, rather than tail-chaining the PendSV, execution instead returns to the calling code and continues non-exception execution.
All the while the ICSR register shows the pending PendSV, even while thread code instructions are repeatedly stepped. System handler priorities are all zero, IRQ priorities are lower.
According to the ARMv6-M reference manual, PendSV cannot be disabled.
So, what am I missing that would cause this behaviour?
Edited to add:
Perhaps it's a debugger interaction? The JLink software (v4.95d) is still in Beta...
I see that the debugger can actually disable PendSV and Systick - C1.5.1 Debug Stepping: "Optionally, the debugger can set DHCSR.C_MASKINTS to 1 to prevent PendSV, SysTick, and external configurable interrupts from occurring. This is described as masking these interrupts. Table C1-7 on page C1-326 summarizes instruction stepping control."

It turns out that the problem is caused by single-stepping the instruction that writes to the PENDSVSET bit in the ICSR: the bit is set, and the VECTPENDING field shows 0xe, but the PendSV never fires.
Free-running over that instruction to a later breakpoint sees the PendSV fire correctly.
So it is indeed a debugger interaction.
Whether that's to do with interrupts being inhibited as @cooperised suggests isn't clear - the DHCSR's C_MASKINTS bit reads as zero throughout, but how that bit is manipulated during the actual step operation isn't visible at this level.
Which makes me wonder whether the way the JLink is performing the step induces unpredictable/indeterminate behaviour - e.g. as per the warning in the C_MASKINTS description. Or perhaps this is simply what happens in an M0+ under these circumstances, and I've never single-stepped this instruction before.
In any case, the workaround is simply to not single-step the instruction that sets PENDSVSET.
Edited to add:
Finally, @cooperised was correct.
On taking more care to distinguish exactly between stepping (including stepping over function calls) and running (including running to the very next instruction), it's clear that stepping disables interrupts including PendSV.

The same thing happened to me, but I found that the reason was that I was not closing the previous PendSV interrupt by returning through LR containing 0xFFFFFFF9. Instead I was returning via the PC to a previous routine's return address.
Since I did not return via 0xFFFFFFF9 it was not properly closing the previous PendSV and did not recognize subsequent ones.

Related

How Can I save some data before hardware reset of microcontroller?

I'm working on one of the Freescale microcontrollers. This microcontroller has several reset sources (e.g. clock monitor reset, watchdog reset and ...).
Suppose that my microcontroller is reset because of the watchdog. How can I save some data just before the reset happens? I mean, for example, how can I find out where the program counter was just before the watchdog reset? With this method I want to know where I have the error (in other words, the long process) that causes the watchdog reset.
Most Freescale MCUs work like this:
RAM is preserved after watchdog reset. But probably not after LVD reset and certainly not after power-on reset. This is in most cases completely undocumented.
The MCU will either have a status register where you can check the reset cause (for example HCS08, MPC5x, Kinetis), or it will have special reset vectors for different reset causes (for example HC11, HCS12, Coldfire).
There is no way to save anything upon reset. Reset happens and only afterwards can you find out what caused the reset.
It is however possible to reserve a chunk of RAM as a special segment. Upon power-on reset, you can initialize this segment by setting everything to zero. If you get a watchdog reset, you can assume that this RAM segment is still valid and intact. So you don't initialize it, but leave it as it is. This method enables you to save variable values across reset. Probably - this is not well documented for most MCU families. I have used this trick at least on HCS08, HCS12 and MPC56.
As for the program counter, you are out of luck. It is reset with no means to recover it. Meaning that the only way to find out where a watchdog reset occurred is the tedious old school way of moving a breakpoint bit by bit down your code, run the program and check if it reached the breakpoint.
Though in the case of modern MCUs like MPC56 or Cortex M, you simply check the trace buffer and see what code caused the reset. Not only do you get the PC, you get to see the C source code. But you might need a professional, Eclipse-free tool chain to do this.
Depending on your microcontroller you may get Reset Reason, but getting previous program counter (PC/IP) after reset is not possible.
Most modern microcontrollers have provision for a watchdog interrupt instead of a reset.
You can configure the watchdog peripheral to enable the interrupt. In that ISR you can check the stored context on the stack. (You can use a JTAG debugger to check the call stack.)
There are multiple debugging methods available if your microcontroller doesn't support the above method.
E.g.:
In a simple while(1) based architecture you can use a HW timer and restart it after some section of code. In the timer ISR you will know which code section is taking longer than the timer.
Two things:
Write a log! And rotate that log to keep the last 30 min. or whatever reasonable amount of time you think you need to reproduce the error. Where the log stops, you can see what happened just before that. Even in production-level devices there is some level of logging.
(Less practical) You can attach a debugger to nearly every microcontroller and step through the code. Probably put a breakpoint that is hit just before you enter the critical section of code. Some IDEs/uCs allow having "data breakpoints" that get triggered when certain variables contain certain values.
Disclaimer: I am not familiar with the exact microcontroller that you are using.
It is written in your manual.
I don't know that specific processor but in most microprocessors a watchdog reset is a soft reset, meaning that certain registers will keep information about the reset source and sometimes reason.
You need to post more specific information on your Freescale μC for this to be answered properly.
Even if you could get the Program Counter before reset, it wouldn't be advisable to blindly set the program counter to another after reset --- as there would likely have been stack and heap information as well as the data itself may also have changed.
It depends on what you want to preserve after reset, certain behaviour or data? Volatile memory may or may not have been cleared after watchdog (see your uC datasheet) and you will be able to detect a reset after checking reset registers (again see your uC datasheet). By detecting a reset and checking volatile memory you may be able to prepare your uC to restart in a way that you'd prefer after the unlikely event of a reset occurring. You could create a global value and set it to a particular value in global scope, then if it resets, check the value against it when a reset event occurs -- if it is the same, you could assume other memory may also be the same. If volatile memory is not an option you'll need to have a look at the datasheet for non-volatile options, however it is also advisable not to continually write to non-volatile memory due to writing limitations.
The only reliable solution is to use a debugger with trace capability if your chip supports embedded instruction trace.
Some devices have an option to redirect the watchdog timeout to an interrupt rather than a reset. This would allow you to write the watchdog timeout handler much like an exception handler and dump or store the stack information including the return address, which will indicate the location where the interrupt occurred.
However in some cases, neither solution is a reliable method of achieving your aim. In a multi-tasking environment or system with interrupt handlers, the code running when the watchdog timeout occurs may not be the process that is causing the problem.
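The reserved-RAM approach from the answers above, as an added example that is not code from any of the posters: a breadcrumb ring that is kept only when the reset cause is the watchdog and a magic value plus checksum still validate. All names are hypothetical, the reset cause is assumed to come from the MCU's reset status register or reset vector, and placing the object in a section the startup code leaves untouched is toolchain-specific and assumed here.

```c
#include <stdint.h>
#include <stddef.h>
#include <stdio.h>
#include <string.h>

/* Hypothetical reset causes; a real port maps the MCU's status bits to these. */
enum reset_cause { RESET_POWER_ON, RESET_LOW_VOLTAGE, RESET_WATCHDOG };

#define CRUMB_MAGIC 0xC0DEFEEDu
#define CRUMB_SLOTS 8u

struct crumb_log {
    uint32_t magic;              /* marks the segment as initialised  */
    uint32_t head;               /* total number of breadcrumbs written */
    uint16_t slot[CRUMB_SLOTS];  /* ring of checkpoint ids            */
    uint32_t check;              /* checksum over the fields above    */
};

/* On a target this object goes in a linker section that the startup code
 * neither zeroes nor copies (assumption). On a host it is plain static RAM. */
static struct crumb_log g_log;

/* FNV-1a over everything that precedes the checksum field. */
static uint32_t crumb_sum(const struct crumb_log *l)
{
    const uint8_t *p = (const uint8_t *)l;
    uint32_t h = 2166136261u;
    for (size_t i = 0; i < offsetof(struct crumb_log, check); i++)
        h = (h ^ p[i]) * 16777619u;
    return h;
}

/* Call at each checkpoint, e.g. before every long-running section. A reset
 * that lands between the slot write and the checksum update invalidates the
 * log, which crumb_boot() then reports as untrusted. */
void crumb_mark(uint16_t id)
{
    g_log.slot[g_log.head % CRUMB_SLOTS] = id;
    g_log.head++;
    g_log.check = crumb_sum(&g_log);
}

/* Call first thing after reset. Returns the last checkpoint reached before
 * the reset, or -1 when the RAM contents cannot be trusted. The segment is
 * re-initialised in every case so the next run starts clean. */
int crumb_boot(enum reset_cause cause)
{
    int last = -1;
    if (cause == RESET_WATCHDOG &&
        g_log.magic == CRUMB_MAGIC &&
        g_log.check == crumb_sum(&g_log) &&
        g_log.head > 0)
        last = g_log.slot[(g_log.head - 1) % CRUMB_SLOTS];

    memset(&g_log, 0, sizeof g_log);
    g_log.magic = CRUMB_MAGIC;
    g_log.check = crumb_sum(&g_log);
    return last;
}

int main(void)
{
    memset(&g_log, 0xA5, sizeof g_log);  /* constructed: arbitrary RAM at power-on */
    crumb_boot(RESET_POWER_ON);
    crumb_mark(10);
    crumb_mark(20);                      /* pretend the watchdog fires after this */
    printf("last checkpoint before watchdog reset: %d\n",
           crumb_boot(RESET_WATCHDOG));
    return 0;
}
```

The checkpoint id only narrows the search to the section that was running when the watchdog expired; as the last answer notes, that code is not necessarily the cause in a multi-tasking or interrupt-driven system.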

SysTick interrupt does not fire if GDB attached before it is enabled

I have an ATSAMD21E18A micro that I am using with semi-hosting. In order for the semi-hosting to work, GDB needs to be "attached" before the first bkpt instruction. On the other hand, I have inexplicably found that the SysTick interrupt will not fire if GDB was already attached when I configured it. If I want the SysTick interrupt to fire, I have to perform a reset (power-off via a button) and tell GDB to continue when it hasn't yet configured the micro (that is, it hasn't sent breakpoints over or anything else), and then hit Ctrl-C to initialize debugging mode after the SysTick configuration but before we get to initialise_monitor_handles.
I have verified that the start function is only copying over the relocatable data segment, zeroing the zero segment, and setting the right initial stack pointer value. We are writing our code without libraries like CMSIS.
Also I can confirm that I have no issues when the debugger is not attached (JLinkGDBServer through an Atmel SAM-ICE), besides needing to remove the semi-hosting stuff.
Also, the SysTick COUNT does still correctly count even when the interrupts themselves don't fire. Also the SysTick pending interrupt bit PENDSTSET in ICSR, is in fact set when this happens.
My code follows:
int main()
{
    // enable system timer interrupt
    SYS_TICK->STATUS = 0;      // (CSR)
    SYS_TICK->PERIOD = 48000;  // (RVR) fire at 1khz for 48mhz clock
    SYS_TICK->STATUS = 0b111;  // use processor clock, w/ interrupt, and enabled
    SYS_TICK->COUNT = 1;       // (CVR) avoid high unknown value
    // dumb busy loop
    util_idle_ms(2000);        // <<< I hit Ctrl-C to break here!
    initialise_monitor_handles();
    // ... more system initialization and everything else
}
I have seen some similar seeming questions here on StackOverflow, but they seemed to be too vague to get good answers.
Edit:
Here are possibly relevant register values taken during the busy loop for the run that doesn't call the SysTick handler (no hard reset, GDB attached before SysTick configured):
SYS_TICK_CSR/STATUS: 0x10007
SYS_TICK_RVR/PERIOD: 48000
SYS_TICK_CVR/COUNT: 5245 (varies of course)
NVC_ISER: 0 (and we expect this since SysTick is considered an exception, and not an interrupt)
DHCSR: 0x30003/0x1030003 (C_MASKINTS is not set; I've seen both values show up)
ICSR: 0x400f00f (it really wants to run the SysTick handler)
PRIMASK: 0
xPSR: 0x2100000f (IPSR is 0x0f/SysTick)
And for the run that calls the SysTick handler just fine (hard reset with GDB attaching after SysTick configuration):
SYS_TICK_CSR/STATUS: 0x10007
SYS_TICK_RVR/PERIOD: 48000
SYS_TICK_CVR/COUNT: 16892 (varies of course)
NVC_ISER: 0
DHCSR: 0x10003/0x1030003 (I've seen both values show up)
ICSR: 0 (SysTick handler already run)
PRIMASK: 0
xPSR: 0x2100000f
So the register values here do not yet seem to reveal anything new to me... Please help inform me of other potentially relevant registers to check!
Just for interest, the reason this is important to me is because I have gotten gprof to work on this chip, based on https://mcuoneclipse.com/2015/08/23/tutorial-using-gnu-profiling-gprof-with-arm-cortex-m/
And although I do have to hit Ctrl-C at just the right time after a hard reset, it does work like this!
Edit
I have found that I had a misunderstanding where I thought running load in GDB performed a soft reset. I have since found that although it returns execution to reset vector, various peripherals and other registers are not in fact reset. If I perform a soft reset in GDB with monitor reset then I don't need to Ctrl-C during a delay to attach GDB and both SysTick and SemiHosting will work.
The problem occurs when SysTick is configured and then load is run in GDB, without an explicit hard or soft reset. In this case, SysTick does not fire interrupts. Most of my debugging went like this, loading new code and immediately expecting it to work so I could evaluate it. Just running monitor reset is a better workaround than before, but I still would prefer to know the reason for SysTick's misbehavior!
I would visit the ARM® v6-M Architecture Reference Manual and see if you can get some direction from that. https://static.docs.arm.com/ddi0419/d/DDI0419D_armv6m_arm.pdf
Observe that state of the registers related to the Systick that you didn't include in your question. If you can't figure out the problem based on those registers, edit your question and post the register values here (the NVIC ISER, all registers related to systick config, the DHCSR, and any others you think are related). They will be the key to getting more feedback.
The Debug Halting Control and Status Register (DHCSR) has the ability to mask interrupts including the systick. Maybe this is being set by the debugger?
bit 3 of the DHCSR looks relevant
I would also check that the SYST_RVR (Systick reload value register) is being set to something sane.
I don't have the rep to comment on your question, but I'm hoping this can get you going in a productive direction :)
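The register checks discussed in this thread and in the PendSV thread at the top of the page can be done mechanically. The following is an added example, not code from any of the posters: it decodes an ICSR/DHCSR/PRIMASK snapshot using the ARMv6-M field positions and flags the conditions under which a pending PendSV or SysTick is not taken. The function and flag names are this example's own, and the third snapshot is constructed.

```c
#include <stdint.h>
#include <stdio.h>

/* Bit positions as given in the ARMv6-M Architecture Reference Manual. */
#define ICSR_PENDSVSET       (1u << 28)
#define ICSR_PENDSTSET       (1u << 26)
#define ICSR_VECTPENDING(v)  (((v) >> 12) & 0x1FFu)
#define ICSR_VECTACTIVE(v)   ((v) & 0x1FFu)
#define DHCSR_C_STEP         (1u << 2)
#define DHCSR_C_MASKINTS     (1u << 3)

/* Findings returned by diagnose(), OR-ed together. */
enum {
    F_PENDING        = 1 << 0, /* an exception is pending                      */
    F_MASKINTS       = 1 << 1, /* debugger masks PendSV, SysTick and IRQs      */
    F_STEPPING       = 1 << 2, /* C_STEP set: core is being single-stepped     */
    F_PRIMASK        = 1 << 3, /* PRIMASK blocks configurable exceptions       */
    F_SAME_AS_ACTIVE = 1 << 4, /* pending exception is the one already active,
                                  so it waits until that handler returns       */
    F_IN_HANDLER     = 1 << 5  /* VECTACTIVE != 0: core is in Handler mode     */
};

struct snapshot { uint32_t icsr, dhcsr, primask; };

/* A snapshot read while halted may not show the C_MASKINTS value used
 * during the step itself, as the first post on this page observes. */
unsigned diagnose(const struct snapshot *s)
{
    unsigned f = 0;
    unsigned pend = ICSR_VECTPENDING(s->icsr);
    unsigned act  = ICSR_VECTACTIVE(s->icsr);

    if (pend || (s->icsr & (ICSR_PENDSVSET | ICSR_PENDSTSET))) f |= F_PENDING;
    if (s->dhcsr & DHCSR_C_MASKINTS) f |= F_MASKINTS;
    if (s->dhcsr & DHCSR_C_STEP)     f |= F_STEPPING;
    if (s->primask & 1u)             f |= F_PRIMASK;
    if (act)                         f |= F_IN_HANDLER;
    if (pend && pend == act)         f |= F_SAME_AS_ACTIVE;
    return f;
}

static const char *exc_name(unsigned n)
{
    if (n == 0)  return "none";
    if (n == 11) return "SVCall";
    if (n == 14) return "PendSV";
    if (n == 15) return "SysTick";
    return n >= 16 ? "external IRQ" : "other system exception";
}

void report(const char *label, const struct snapshot *s)
{
    unsigned f = diagnose(s);
    printf("%-22s pending=%s active=%s%s%s%s%s\n", label,
           exc_name(ICSR_VECTPENDING(s->icsr)),
           exc_name(ICSR_VECTACTIVE(s->icsr)),
           f & F_MASKINTS       ? " [C_MASKINTS]" : "",
           f & F_STEPPING       ? " [C_STEP]" : "",
           f & F_PRIMASK        ? " [PRIMASK]" : "",
           f & F_SAME_AS_ACTIVE ? " [pending == active]" : "");
}

int main(void)
{
    /* Values quoted in the SysTick question above. */
    struct snapshot no_fire = { 0x400f00f, 0x30003, 0 };
    struct snapshot fires   = { 0x0,       0x10003, 0 };
    /* Constructed: PENDSVSET with VECTPENDING = 14, C_STEP and C_MASKINTS set. */
    struct snapshot stepped = { 0x1000E000, 0x0003000F, 0 };

    report("SysTick not firing:", &no_fire);
    report("SysTick firing:", &fires);
    report("PendSV while stepping:", &stepped);
    return 0;
}
```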

Changing priority of current interrupt in NVIC

I have a conundrum. The part I am using (NXP KL27, Cortex-M0+) has an errata in its I2C peripheral such that during receive there is no flow control. As a result, it needs to be a high priority interrupt. I am also using a UART that, by its asynchronous nature, has no flow control on its receive. As a result, it needs to be a high priority interrupt.
Circular Priority
The I2C interrupt needs to be higher priority than the UART interrupt, otherwise an incoming byte can get demolished in the shift register before being read. It really shouldn't work this way, but that's the errata, and so it needs to be higher priority.
The UART interrupt needs to be higher priority than the I2C interrupt, because to close out an I2C transaction the driver (from NXP's KSDK) needs to set a flag and wait for a status bit. During this wait incoming characters on the UART can overflow the non-FIFO'd shift register.
In trying to solve an issue with the UART, I discovered this circular dependency. The initial issue saw characters disappearing from the UART receive and the overrun flag being set. When swapping priorities, the UART was rock solid, never missing a character, but I2C transactions ended up stalling due to overruns.
Possible Solution
The solution I came up with involves changing interrupt priorities on the fly. When the I2C driver is closing out a transaction, it is not receiving, which means the errata that causes bytes to flow in uncontrolled is not an issue. I would like to demote the I2C interrupt priority in the NVIC during this time so that the UART is able to take priority over it, thus making the UART happy (and not missing any characters).
Question
I haven't been able to find anything from ARM that states whether changing the interrupt priority while executing that interrupt will take effect immediately, or if the priority of the current interrupt was latched in when it started executing. I am hoping someone can definitely say from the depths of their knowledge of the architecture or from experience that changing the priority will take effect immediately, or not.
Other Possible Solutions
There are a number of other possible solutions and reasons why they are undesirable. Refactoring the I2C driver to handle the loop in the process context rather than interrupt context would be a significant effort digging into the vendor code and affects the application code that calls into it. Using DMA for either of these peripherals uses up a non-trivial amount of the DMA channels available and incurs the overhead of setting up DMA for each transaction (and also affects the application code that calls into the drivers).
I am open to other solutions, but hesitant to go down any path that causes significant changes to the vendor code.
Test
I have an idea for an experiment to test how the NVIC works in this regard, but I thought I would check here first. If I get to the experiment, I will post a follow-up answer with the results.
Architecturally, this appears to be UNPREDICTABLE (changing the priority of a currently active exception). There seems to be no logic in place to enforce more consistent behavior (i.e. the registration logic you are concerned about is not obviously present in M0/M0+).
This means that if you test for the effectiveness of your workaround, it will probably appear to work - and in your constrained scenario it might be effective. However, there is no guarantee that the same code will work on M3, or that it works reliably in all scenarios (for example any interaction with debug). You might even observe some completely unpredictable corner case behavior, but the area-constrained
This is specified as unpredictable in section B1.5.4 of the ARM v6-M ARM.
For v7-M (B1.5.4, Exception Priorities and preemption)
This definition of execution priority means that an exception handler
can be executing at a priority that is higher than the priority of the
corresponding exception. In particular, if a handler reduces the
priority of its corresponding exception, the execution priority falls
only to the priority of the highest-priority preempted exception.
Therefore, reducing the priority of the current exception never
permits:
A preempted exception to preempt the current exception handler.
Inversion of the priority of preempted exceptions.
The v7-M aspect clarifies some of the complex scenarios which must be avoided if you attempt to make use of the unpredictable behavior which you have identified as useful with the M0+ part.
Experiment
I coded up a quick experiment today to test this behavior on my particular variant of the Cortex M0+. I am leaving this as an unaccepted answer, and I believe @Sean Houlihane's answer is the most correct (i.e. it is unpredictable). I still wanted to test the behavior and report in under the specific circumstances for while I am using it.
The experiment was performed on a FRDM-KL43Z board. It has a red LED, a green LED, and two push buttons. The application performed some setup of the GPIO and interrupts and then sat in an infinite loop.
Button 1: Button 1's interrupt handler was initialized to midscale priority (0x80). On every falling edge of button 1 it would pend the interrupt. This interrupt would toggle the green LED's state.
Button 2: Button 2's interrupt handler was initialized to midscale priority (0x80), but would be changed as a part of execution. The button 2 interrupt handler would run a loop that lasted approximately 8 seconds (two phases of four), repeating indefinitely. It would turn on the red LED and decrease its own priority below that of button 1. After the four seconds, it would turn off the red LED and increase its own priority above that of button 1. After four seconds it would repeat.
Expected Results
If the hypothesis proves to be true, when the red LED is on, pressing button 1 will toggle the green LED, and when the red LED is off, pressing button 1 will have no effect until the red LED turns off. The button 1 interrupt would not execute until the forever looping button 2 interrupt is of a lower priority.
Results
This is the boring section. Everything I expected in the previous section happened.
Conclusion
For the experimental setup (NXP KL43Z Cortex M0+), changing the interrupt priority of the currently executing interrupt takes effect while the interrupt is running. As a result, my hacky workaround of demoting priority during the busy wait and restoring it after should function for what I need.
Edit:
Later Results
Though the experiment was successful, problems started occurring once the workaround for the original issue was implemented. The interaction between the UART and I2C handlers was relatively consistent, but a third peripheral started having very odd behavior in its interrupt handler. Take heed of the warning of UNPREDICTABLE.
One alternative solution could be to defer to another, lower priority, interrupt for the second half of your processing. A good candidate is the PendSV interrupt (if not already in use), which can (only) be triggered from software.
For a more detailed explanation, see this answer to a similar question and this answer about PendSV in general.

The right way to clear an interrupt flag on STM32

I'm developing a bare-metal project on an STM32L4 and I'm starting from an existing code base.
The ISRs have been implemented the following way:
read interrupt status in the peripheral to know what event(s) provoked the interrupt
do something
clear the flags that have been read at the beginning.
Is it the right way to clear the flag? Shouldn't the flags be cleared at the very beginning of the ISR? My understanding is that, if the same peripheral event is happening a second time during step 2, it will not provoke a second IRQ so it would be lost. On the other hand if you clear the flag as soon as you can, this second event would pulse the interrupt whose state in the CPU would change to "pending and active": a second IRQ would happen.
PS: From STM32 Processor Programming Manual I read: "STM32 interrupts are both level-sensitive and pulse-sensitive".
Definitely at the beginning (unless you have special reasons in the program logic), as some time is needed for the actual write to the flag clear register to propagate through the buses.
If you decide for some reason to put it at the end of the interrupt you should leave some instructions, place the barrier instruction or read back the register before the interrupt routine return to make sure that the clear operation has propagated across the buses. Otherwise you may have a "phantom" duplicate routine calls.

What happens in an interrupt service routine?

Can someone please explain to me what happens inside an interrupt service routine (although it depends upon the specific routine, a general explanation is enough)? This always used to be a black box for me.
There is a good wikipedia page on interrupt handlers.
"An interrupt handler, also known as an interrupt service routine (ISR), is a callback subroutine in an operating system or device driver whose execution is triggered by the reception of an interrupt. Interrupt handlers have a multitude of functions, which vary based on the reason the interrupt was generated and the speed at which the Interrupt Handler completes its task."
Basically when a piece of hardware (a hardware interrupt) or some OS task (software interrupt) needs to run it triggers an interrupt. If these interrupts aren't masked (ignored) the OS will stop what it's doing and call some special code to handle this new event.
One good example is reading from a hard drive. The drive is slow and you don't want your OS to wait for the data to come back; you want the OS to go and do other things. So you set up the system so that when the disk has the data requested, it raises an interrupt. In the interrupt service routine for the disk the CPU will take the data that is now ready and will return it to the requester.
ISRs often need to happen quickly as the hardware can have a limited buffer, which will be overwritten by new data if the older data is not pulled off quickly enough.
It's also important to have your ISR complete quickly as while the CPU is servicing one ISR other interrupts will be masked, which means if the CPU can't get to them quickly enough data can be lost.
Minimal 16-bit example
The best way to understand is to make some minimal examples yourself.
First learn how to create a minimal bootloader OS and run it on QEMU and real hardware as I've explained here: https://stackoverflow.com/a/32483545/895245
Now you can run in 16-bit real mode:
movw $handler0, 0x00
mov %cs, 0x02
movw $handler1, 0x04
mov %cs, 0x06
int $0
int $1
hlt
handler0:
/* Do 0. */
iret
handler1:
/* Do 1. */
iret
This would do in order:
Do 0.
Do 1.
hlt: stop executing
Note how the processor looks for the first handler at address 0, and the second one at 4: that is a table of handlers called the IVT, and each entry has 4 bytes.
Minimal example that does some IO to make handlers visible.
Protected mode
Modern operating systems run in the so called protected mode.
The handling has more options in this mode, so it is more complex, but the spirit is the same.
Minimal example
See also
Related question: What does "int 0x80" mean in assembly code?
While the 8086 is executing a program, an interrupt breaks the normal sequence of execution of instructions and diverts its execution to some other program called an interrupt service routine (ISR). After executing, control returns back again to the main program.
An interrupt is used to cause a temporary halt in the execution of a program. The microprocessor responds with the interrupt service routine, which is a short program or subroutine that instructs the microprocessor on how to handle the interrupt.
