I am working on an ARM Cortex-M microcontroller and have written a function x() for context switching of a thread.
This function consists entirely of __asm() calls, so I used __attribute__((naked)).
After I added a call to another function f() inside x(), x() no longer seemed to run at all, so no context switch happened.
Why is this happening? Should I not use __attribute__((naked))?
Code:
In file1.c
__attribute__((naked)) void SysTick_Handler(void) {
    tick_increment();
    // context switch inline assembly code
}
In file2.c
volatile uint32_t tick_cnt;
volatile uint32_t tick_freq = 1;
void tick_increment(void) {
    tick_cnt += tick_freq;
}
Since I am calling tick_increment() inside this SysTick_Handler, the handler never seems to get called when the CVR hits 0.
Note: SysTick_Handler gets called automatically when the SysTick Current Value Register (CVR) counts down to 0.
The short answer is:
SysTick_Handler is an interrupt handler and has to be defined without __attribute__((naked)).
A bit longer answer:
Each interrupt routine needs to save some registers before the handler body runs and restore them just before returning from the interrupt. This save/restore code is generated automatically by the compiler and surrounds the user code inside every interrupt handler. When you define an interrupt handler as __attribute__((naked)), you get rid of this automatically generated code, and the handler can then mess up the main program's execution.
Example of unexpected behaviour
Let's have a look how __attribute__((naked)) interrupt handler can mess up the main program.
For example, consider one of the possible implementations of if (a == 0) condition:
The value of the variable a is loaded from RAM into a register (let it be r16).
The value of the r16 register is compared with zero.
While comparing, the value of the Status register changes: some bits in this register reflect the comparison result.
The branch instruction examines the comparison result bit of the Status register and jumps to one branch or the other.
The condition therefore works correctly only if there are no unexpected changes to the Status register between steps 2 and 4.
A. Normal interrupt execution
Main program starts execution of if (a == 0) condition:
a is loaded from RAM into the r16 register.
r16 is compared with zero.
Status register updated.
---> INTERRUPT IS HIT
a. Interrupt handler is executed.
b. Save Status register to stack.
c. Execute user code (suppose Status register is changed).
d. Restore Status register from stack.
e. Return from interrupt handler.
<--- INTERRUPT COMPLETED
The branch instruction examines the Status register (from step 3).
Everything works as expected.
B. Naked interrupt execution
Main program starts execution of if (a == 0) condition:
a is loaded from RAM into the r16 register.
r16 is compared with zero.
Status register updated.
---> INTERRUPT IS HIT
a. Interrupt handler is executed.
b. Execute user code (suppose Status register is changed).
c. Return from interrupt handler.
<--- INTERRUPT COMPLETED
The branch instruction examines the Status register (from step b).
The value of the examined Status register no longer reflects the comparison done in step 2, so the branch instruction may jump to the wrong branch.
This is just one example; with __attribute__((naked)) interrupt handlers the corruption can show up in many other ways.
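For the Cortex-M case in the question, here is a minimal sketch of the usual fix, assuming the device's CMSIS header is available and that the inline-assembly context switch is moved into a separate PendSV_Handler (the conventional place for it):

```
/* Assumes the device's CMSIS header is included (it provides SCB and
   SCB_ICSR_PENDSVSET_Msk). Cortex-M hardware already stacks r0-r3, r12,
   lr, pc and xPSR on exception entry, so a plain C handler is fine. */
extern void tick_increment(void);

void SysTick_Handler(void)
{
    tick_increment();                    /* ordinary call: the compiler's prologue/epilogue stays intact */
    SCB->ICSR = SCB_ICSR_PENDSVSET_Msk;  /* defer the pure-assembly context switch to PendSV_Handler     */
}
```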
Related
When I am in handler mode due to an exception, I evaluate some conditions and decide whether to return to thread mode in the same function, or in a different one that returns to the original one once it is done.
Initially I wanted to do it as on the Cortex-R4 and switch from interrupt mode to privileged mode:
mrs R1,CPSR ; Save the Interrupt Mode registers
cps #19 ; Switch to privilege mode
.... ; do your thing
msr CPSR_CXSF, R1 ; Return to interrupt mode
But now I am using a Cortex-M7.
I have tried doing:
OriginalLR = *(int*)(__get_PSP() + 0x18);
ptr_p = (int*)(__get_PSP() + 0x18);
*ptr_p = (int)MyDummyFunction;
Then in MyDummyFunction I push all the registers, "do my thing" and restore all the registers.
asm(" STMFD R13!,{R0-R12}");
... ; doing my thing
asm(" LDMFD R13!,{R0-R12}");
But I have no idea how to return. If I exit or BL, then it pops stuff from the stack. Changing the PC seems dangerous.
Any suggestions? I guess I cannot switch from handler mode to thread mode within the function as on the Cortex-R4?
In the absence of any more clarity about the use-case for this, I can give only a few possibilities for how this might be achieved.
On the Cortex-M family, there are generally three primary differences between thread mode and handler mode (note that differences exist between members of the Cortex-M family):
Handler mode is always privileged. Thread mode can be either privileged or unprivileged, depending on the setting of the nPRIV bit in the CONTROL register. nPRIV is not available on Cortex-M0 and is optional on M0+ but exists on M3, M4 and M4F.
Handler mode has a higher priority than thread mode. Or rather, handlers have a higher priority than non-handler code, which amounts to the same thing.
Handler mode always uses the main stack pointer (MSP) mirrored through r13 (SP). Thread mode can be configured to mirror either the MSP or the process stack pointer (PSP) through SP, by using the SPSEL bit in the CONTROL register. I'm pretty sure all Cortex-M devices have this arrangement - certainly the Cortex-M0, -M3 and -M4F devices I'm familiar with do and I can't find any reference to it being optional. (This facility is designed to help with the implementation of context switches, which it does magnificently.)
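As an aside, this is roughly how privileged thread-mode code can put itself onto the PSP and drop privilege via the CONTROL register. It is a sketch assuming CMSIS intrinsics; process_stack, start_thread_on_psp and thread_entry are hypothetical names, not anything from the question:

```
#include <stdint.h>
/* Device CMSIS core header assumed included for the intrinsics and masks. */

static uint32_t process_stack[256] __attribute__((aligned(8)));  /* hypothetical thread stack */

extern void thread_entry(void);          /* hypothetical: the code to run on the PSP; never returns */

void start_thread_on_psp(void)           /* must run in privileged thread mode; in handler mode
                                            SP always mirrors the MSP regardless of SPSEL        */
{
    __set_PSP((uint32_t)&process_stack[256]);   /* full-descending stack: start at the top */
    __set_CONTROL(__get_CONTROL()
                  | CONTROL_SPSEL_Msk           /* thread mode now mirrors the PSP through SP */
                  | CONTROL_nPRIV_Msk);         /* and runs unprivileged from here on         */
    __ISB();                                    /* barrier required after writing CONTROL     */
    thread_entry();                             /* continue on the new stack                  */
    for (;;) { /* not reached */ }
}
```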
It's not clear exactly why you want to ensure that your 'injected' function runs in thread mode - whether you wish to lower its priority, lower its privilege or have it use a different stack pointer. So, here are some options.
Option 1: None of the above
If you don't need to lower the priority of the injected function, and you don't need to swap stacks, and you don't care about running the injected function with privilege, then just call it before returning from the handler. It'll run in handler mode of course, but who cares?
Option 2: It's about priority
If you want to ensure that your injected function can be interrupted by interrupt requests, for example, but you don't actually care about it being run in thread mode, and you're confident that the PendSV handler isn't being used for another purpose (it's the ideal place to put a context switch, so if you're using an OS then it's likely it is), then you can configure your function to be injected as the PendSV handler.
In your existing handler, setting the PendSV bit in the ICSR (Interrupt Control and Status Register) in the memory-mapped SCB (System Control Block) will cause the PendSV handler to be triggered, and it's generally configured to be the lowest-priority handler so it's executed once all other pending handlers have completed. Note that the PendSV bit is reset automatically when the handler begins execution.
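For reference, pending PendSV with CMSIS is a one-liner; the sketch below assumes the device header is included, that PendSV was given the lowest priority somewhere in your init code (e.g. NVIC_SetPriority(PendSV_IRQn, 0xFF)), and that request_injected_function is a hypothetical helper called from your existing handler:

```
void request_injected_function(void)
{
    SCB->ICSR = SCB_ICSR_PENDSVSET_Msk;  /* pend PendSV; it runs once higher-priority handlers finish */
    __DSB();                             /* ensure the SCB write completes                            */
    __ISB();                             /* before the exception return                               */
}
```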
Option 3: You really need thread mode
Well, this can be done, but it's complicated. Some examples of the hurdles you face are:
It's perfectly possible to poke the stack to force a handler to return to somewhere it didn't come from. That's what a context switch does. However you'd need to ensure that the exception handler could never be invoked from an interrupt handler before attempting this.
At the end of the injected function, control must somehow be restored to the function that was interrupted, along with all of its volatile context including status flags and so on. Assuming you can't write your injected function in a way that guarantees preservation of the status flags (which means no comparisons or tests!) then depending on your privilege configuration this could be tricky; the APSR (the part of the program status register where the status flags are held) is writeable only from privileged code.
I would suggest as a starting point (and I haven't tested this) that if you can be sure your exception handler can only be invoked from thread mode, you could:
Store the value of the thread-mode stack pointer somewhere safe.
Create a dummy stack frame and push it to the thread-mode stack, setting up the 'stacked' PC to force return to your injected function.
Return from the handler, which will force a return to thread mode and the invocation of the injected function.
At the end of the injected function, instead of returning, issue a SVC instruction to invoke a software interrupt handler. This can be done from C using CMSIS. (Note that if you can't control how the injected function is written, when creating the dummy stack frame you could initialise LR to force a branch to an SVC instruction at the end of the injected function.)
Write an SVC handler that restores the saved thread-mode stack pointer. When that handler returns, the original interrupted function will be restored.
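A very rough, untested sketch of those steps follows, assuming CMSIS intrinsics, that the exception was taken from thread mode running on the PSP, and hypothetical names some_handler and injected; the earlier caveats about callee-saved registers and status flags still apply:

```
#include <stdint.h>
/* Device CMSIS header assumed included for __get_PSP() / __set_PSP(). */

static uint32_t saved_psp;                 /* step 1: somewhere safe                                     */
extern void injected(void);                /* hypothetical; must end by executing SVC, not by returning  */

void some_handler(void)                    /* hypothetical exception handler                             */
{
    saved_psp = __get_PSP();               /* the interrupted thread's frame starts here                 */

    /* Step 2: build a dummy 8-word exception frame below the real one. */
    uint32_t *frame = (uint32_t *)(saved_psp - 8u * sizeof(uint32_t));
    frame[0] = 0;                          /* r0                                               */
    frame[1] = 0;                          /* r1                                               */
    frame[2] = 0;                          /* r2                                               */
    frame[3] = 0;                          /* r3                                               */
    frame[4] = 0;                          /* r12                                              */
    frame[5] = 0xFFFFFFFFu;                /* lr: poison, injected() must not return normally  */
    frame[6] = (uint32_t)injected & ~1u;   /* stacked pc: thread mode resumes in injected()    */
    frame[7] = 0x01000000u;                /* xPSR: Thumb bit set, flags cleared               */
    __set_PSP((uint32_t)frame);

    /* Step 3: a normal exception return now unstacks the dummy frame. */
}

void SVC_Handler(void)                     /* steps 5-6: injected() finishes with an SVC       */
{
    __set_PSP(saved_psp);                  /* restore the original thread stack; returning
                                              from here unstacks the original frame            */
}
```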
If you need to work with the possibility of interrupt handlers triggering your exception handler, you could implement the above in the PendSV handler. Unless that's already being used... in which case I'm out of ideas.
I hope this gives you a starting point.
This is sample code from a start-up file for Tiva C. As you can see, the main function is called inside the reset handler, and as I understand it reset is the highest-priority exception, so my question is: how can any other interrupt be handled if we are still inside the reset handler?
```
; Reset Handler
Reset_Handler   PROC
                EXPORT  Reset_Handler       [WEAK]
                IMPORT  SystemInit
                IMPORT  __main
                LDR     R0, =SystemInit
                BLX     R0
                LDR     R0, =__main
                BX      R0
                ENDP
```
The reset is "special". When the reset handler is invoked by a processor reset, instructions are executed in thread mode. Necessarily so, since the reset vector is invoked on a power-on-reset (POR) - if the handler had to "return", where would it return to?
Also, on reset the registers are set to their defined reset state and the stack pointer is loaded from the value at the start of the vector table (in the case of an ARM Cortex-M at least), so there would be nowhere from which to fetch a return address - in fact the reset signal does not cause a return address to be stacked at all.
The whole point of a reset is to restart the processor in a known state.
Returning to the point at which the reset occurred makes little sense, and would not be likely to work given that the reset state of the processor is unlikely to be a suitable run-state for the "interrupted" code.
From the ARM Cortex-M3 User Guide (my emphasis); other ARM processors may differ in the details, but not in the general point:
2.3.2. Exception types

The exception types are:

Reset

Reset is invoked on power up or a warm reset. The exception model treats reset as a special form of exception. When reset is asserted, the operation of the processor stops, potentially at any point in an instruction. When reset is deasserted, execution restarts from the address provided by the reset entry in the vector table. Execution restarts as privileged execution in Thread mode.

[...]
I've found the pseudocode in the ARM architecture reference manuals to be quite helpful for answering this type of question. By "tiva c", I assume you are talking about the TM4C line of microcontrollers which are Cortex-M4 based MCUs. This means we will want to look at the ARMv7-M architecture reference manual.
Section "B1.5.5 Reset Behavior" has the pseudocode we are interested in. Here's a snippet (with the parts not relevant to the question elided out):
Asserting reset causes the processor to abandon the current execution state without saving it. On the deassertion of reset, all registers that have a defined reset value contain that value, and the processor performs the actions described by the TakeReset() pseudocode.
// TakeReset()
// ============
TakeReset()
    CurrentMode = Mode_Thread;
    PRIMASK<0> = '0';            /* priority mask cleared at reset */
    FAULTMASK<0> = '0';          /* fault mask cleared at reset */
    BASEPRI<7:0> = Zeros(8);     /* base priority disabled at reset */
    // [...]
From the description we can note:
If the system is running and a reset is issued, the processor will always "abandon the current execution". So it is the "highest priority" thing that can happen if the MCU is running.
However, after the MCU restarts and the "TakeReset" logic starts to run, the "CurrentMode" the processor enters is actually Thread mode. ARMv7-M has two operation modes known as Thread Mode and Handler Mode. All interrupts/exceptions run in Handler Mode and normal code runs in Thread Mode. This tells us the reset path does not actually start like an interrupt/exception would. It's just running like normal code would.
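To tie this back to the start-up code in the question, here is a minimal C view of the top of a Cortex-M vector table. This is a sketch only; the section name and the _estack symbol are illustrative, and real start-up files (including Keil's assembly one above) differ:

```
#include <stdint.h>

extern uint32_t _estack;                    /* illustrative linker symbol: initial top of stack */
void Reset_Handler(void);

/* The core fetches the initial MSP from offset 0x00 and the reset vector from
   offset 0x04, then starts executing Reset_Handler in Thread mode. */
__attribute__((section(".isr_vector")))
const uint32_t vector_table[] = {
    (uint32_t)&_estack,                     /* 0x00: initial main stack pointer value */
    (uint32_t)Reset_Handler,                /* 0x04: reset vector                     */
    /* ... NMI, HardFault, the other system exceptions and the IRQs follow ... */
};
```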
As we know, in embedded C programming we deal with task management, memory management, ISRs, file systems and so on.
I would like to know: if some task or process is running and an interrupt occurs at the same time, how does the software, process, or system come to know that the interrupt has occurred, so that it pauses the current task execution and starts serving the ISR?
Suppose I write code like the one below:
// Dummy Code
void main()
{
    for(;;)
        printf("\n forever");
}

// Dummy code for ISR for understanding
void ISR()
{
    printf("\n Interrupt occurred");
}
In the above code, if an external interrupt occurs, how does main() come to know that the interrupt occurred, so that the ISR gets served first?
main doesn't know. You have to execute some system-dependent function in your setup code (maybe in main) that registers the interrupt handler with the hardware interrupt routine/vector, etc.
Whether that interrupt code can execute a C function directly varies quite a lot; runtime conventions for interrupt procedures don't always follow runtime conventions for application code. Usually there's some indirection involved in getting a signal from the interrupt routine to your C code.
Your query: "I understood your answer. But I wanted to know, when an interrupt occurs, how does the current task execution get stopped/paused and the ISR start executing?"
Well, Rashmi, to answer your query, read below.
When the microcontroller detects an interrupt, it stops execution of the program after completing the current instruction. It then pushes the PC (program counter) onto the stack and loads the PC with the vector location of that interrupt, so program flow is directed to the interrupt service routine. On completion of the ISR, the microcontroller pops the stored program counter from the stack back into the PC, so program execution resumes from the location where it was stopped.
Does that answer your query?
It depends on your target.
For example, the Atmel ATmega family uses a pre-processor directive to register the ISR with an interrupt vector. When an interrupt occurs, the corresponding interrupt flag is raised in the relevant status register. If the global interrupt enable flag is set, the program counter is stored on the stack before the ISR is called. This all happens in hardware and the main function knows nothing about it.
In order to let main know that an interrupt has occurred, you need to implement a shared data resource between the interrupt routine and your main function, and all the rules from RTOS programming apply here. This means that, as the ISR may be executed at any time, it is not safe to read a shared resource from main without disabling interrupts first.
On an ATMEL target this could look like:
#include <avr/io.h>
#include <avr/interrupt.h>

volatile int shared;

int main(void) {
    uint8_t status_register;
    int buffer;

    while (1) {
        status_register = SREG;   /* save the global interrupt state  */
        cli();                    /* disable interrupts while reading */
        buffer = shared;
        SREG = status_register;   /* restore the interrupt state      */
        /* perform some action on the copied value in buffer here */
    }
    return 0;
}

/* Note: avr-libc reserves the name ISR as a macro, so a different name is
   used here; this handler is still not registered in the vector table. */
void my_isr(void) {
    /* update the shared resource here */
}
Please note that the ISR is not added to the vector table here. Check your compiler documentation for instructions on how to do that.
Also, an important thing to remember is that ISRs should be very short and very fast to execute.
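For completeness, with avr-gcc/avr-libc the pre-processor registration mentioned above is the ISR() macro, which does add the handler to the vector table. A sketch, where TIMER0_OVF_vect is just one example vector and timer_overflows is an illustrative shared variable:

```
#include <avr/io.h>
#include <avr/interrupt.h>

volatile uint8_t timer_overflows;   /* shared with main(), hence volatile */

/* The ISR() macro places the handler in the vector table, and the compiler
   generates the register save/restore code around the body. */
ISR(TIMER0_OVF_vect)
{
    timer_overflows++;
}
```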
On most embedded systems the hardware has some specific memory address that the instruction pointer will move to when a hardware condition indicates an interrupt is required.
When the instruction pointer is at this specific location it will then begin to execute the code there.
On a lot of systems the programmer will place only the address of the ISR at this location, so that when the interrupt occurs and the instruction pointer moves to that specific location, execution then jumps to the ISR.
Try doing a Google search on "interrupt vectoring".
Interrupt handling is transparent to the running program. The processor branches automatically to a previously configured address, depending on the event, and this address is that of the corresponding ISR function. When returning from the interrupt, a special instruction restores the interrupted program.
Actually, most of the time you won't ever want an interrupted program to know it has been interrupted. If you need such information, the program should call a driver function instead.
Interrupts are a hardware thing, not a software thing. When the interrupt signal hits the processor, the processor (generally) completes the current instruction, in some way, shape or form preserves the state (so it can get back to where it was), and in some way, shape or form starts executing the interrupt service routine. The ISR is generally not plain C code; at least the entry point is usually special, because the processor does not conform to the compiler's calling convention there. The ISR might call C code, but then you end up with mistakes like the one you made: calling functions like printf that should not be in an ISR. Once you are in C it is hard to keep from writing general C code in an ISR, rather than the typical get-in-and-get-out kind of thing.
Ideally your application-layer code should never know the interrupt happened; there should be no (hardware-based) residue affecting your program. You may choose to leave something for the application to see, like a counter or other shared data, which you need to mark as volatile so the application and the ISR can share it. It is not uncommon to have the ISR simply flag that an interrupt happened, have the application poll that flag/counter/variable, and do the handling primarily in the application, not the ISR. That way the application can make whatever system calls it wants. So long as the overall bandwidth or performance requirements are met, this can and does work as a solution.
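A minimal sketch of that flag pattern in generic C (the ISR registration itself is target-specific and omitted here; timer_isr and tick_flag are illustrative names):

```
#include <stdint.h>

static volatile uint32_t tick_flag;   /* written by the ISR, read by the application */

/* Hooked to the timer interrupt by target-specific means (vector table entry,
   ISR() macro, etc.).  Keep it short: just record that the event happened. */
void timer_isr(void)
{
    tick_flag = 1;
}

int main(void)
{
    /* target-specific init and interrupt enable would go here */
    for (;;) {
        if (tick_flag) {
            tick_flag = 0;
            /* handle the event at application level; here it is safe to call
               printf() or other code that does not belong in an ISR */
        }
    }
}
```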
Software doesn't recognise the interrupt, to be specific; that is the job of the microprocessor's interrupt controller (INTC) or the microcontroller.
An interrupt routine call is just like a normal function call as far as main() is concerned; the only difference is that main() doesn't know when that routine will be called.
Every interrupt has a specific priority and vector address. Once the interrupt is received (either software or hardware), then depending on the interrupt priority and mask values, program flow is diverted to the specific vector location associated with that interrupt.
Hope it helps.
I have an interrupt function, interrupt_foo() {...}, which sets a flag when 1 second has elapsed, and a user-defined function foo_calling() {...} which calls another function foo_called() {...}. I want to stop the processing in foo_called() when 1 second has elapsed.
The code snippet below may further illustrate what I need:
void interrupt interrupt_foo() {
    ...
    if(1 second has elapsed) {
        flag1s = 1;
    } else {
        flag1s = 0;
    }
}

void foo_calling() {
    // need something here to stop the process of foo_called()
    ...
    (*fptr_called)(); // ptr to function which points to foo_called
    ...
}

void foo_called() {
    // or something here to stop the process of this function
    ...
    // long code
    ...
}
This is a real-time operating system, so polling the 1-second flag at some point inside foo_called() is undesirable. Please help.
If you are willing to write non-portable code, and test the heck out of it before deploying it, and if the processor supports it, there may be a solution.
When the interrupt handler is called, the return address must be stored somewhere. If that is a location your code can query - like a fixed offset down the stack - then you can compare that address to the range occupied by your function to determine if foo_called is executing. You can get the address of the function by storing a dummy address, compiling, parsing the map file, then updating the address and recompiling.
Then, if your processor supports it, you can replace the return address with the address of the last instruction(s) of foo_called (make sure you include the stack cleanup and register restoration code). Then exit the interrupt as normal, and the interrupt handling logic will return control to the end of your interrupted function.
If the return address is not stored on the stack but in an unwritable register, you may still be able to force-quit your function - if the executable code is in writable memory. Just save the instruction at the interrupt's return address, then overwrite it with a jump instruction that jumps to the end of the function. In the caller code, add a detector which restores the overwritten instruction.
I would expect that your RTOS has some kind of timer signal/interrupt that you can use to notify you when one second has passed. For instance, if it is a real-time UNIX/Linux, you would set a signal handler for SIGALRM for one second. On an RT variant of Linux this signal will have finer granularity and better guarantees than on a non-RT variant. But it is still a good idea to set the signal for slightly less than a second and busy-wait (loop) until you reach one second.
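On a POSIX target, that suggestion might look roughly like the following sketch (names and structure are illustrative, not specific to the asker's RTOS):

```
#include <signal.h>
#include <string.h>
#include <unistd.h>

static volatile sig_atomic_t one_second_elapsed;

static void alarm_handler(int sig)
{
    (void)sig;
    one_second_elapsed = 1;        /* async-signal-safe: only set a flag */
}

int main(void)
{
    struct sigaction sa;
    memset(&sa, 0, sizeof sa);
    sa.sa_handler = alarm_handler;
    sigaction(SIGALRM, &sa, NULL);

    alarm(1);                      /* deliver SIGALRM after one second   */

    while (!one_second_elapsed) {
        /* do the long-running work in interruptible chunks here;
           alternatively combine the handler with sigsetjmp/siglongjmp
           if the work really cannot check the flag at all */
    }
    return 0;
}
```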
Can someone please explain to me what happens inside an interrupt service routine (although it depends on the specific routine, a general explanation is enough)? This has always been a black box for me.
There is a good wikipedia page on interrupt handlers.
"An interrupt handler, also known as an interrupt service routine (ISR), is a callback subroutine in an operating system or device driver whose execution is triggered by the reception of an interrupt. Interrupt handlers have a multitude of functions, which vary based on the reason the interrupt was generated and the speed at which the Interrupt Handler completes its task."
Basically when a piece of hardware (a hardware interrupt) or some OS task (software interrupt) needs to run it triggers an interrupt. If these interrupts aren't masked (ignored) the OS will stop what it's doing and call some special code to handle this new event.
One good example is reading from a hard drive. The drive is slow and you don't want your OS to wait for the data to come back; you want the OS to go and do other things. So you set up the system so that when the disk has the data requested, it raises an interrupt. In the interrupt service routine for the disk the CPU will take the data that is now ready and will return it to the requester.
ISRs often need to happen quickly as the hardware can have a limited buffer, which will be overwritten by new data if the older data is not pulled off quickly enough.
It's also important to have your ISR complete quickly as while the CPU is servicing one ISR other interrupts will be masked, which means if the CPU can't get to them quickly enough data can be lost.
Minimal 16-bit example
The best way to understand is to make some minimal examples yourself.
First learn how to create a minimal bootloader OS and run it on QEMU and real hardware as I've explained here: https://stackoverflow.com/a/32483545/895245
Now you can run in 16-bit real mode:
    movw $handler0, 0x00
    mov  %cs, 0x02
    movw $handler1, 0x04
    mov  %cs, 0x06
    int  $0
    int  $1
    hlt

handler0:
    /* Do 0. */
    iret

handler1:
    /* Do 1. */
    iret
This would do in order:
Do 0.
Do 1.
hlt: stop executing
Note how the processor looks for the first handler at address 0, and the second one at 4: that is a table of handlers called the IVT, and each entry has 4 bytes.
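In C terms, each 4-byte IVT entry is just an offset:segment pair, roughly like this (illustrative layout only):

```
#include <stdint.h>

/* One real-mode interrupt vector: the CPU jumps to segment:offset.
   Vector n lives at linear address n * 4. */
struct ivt_entry {
    uint16_t offset;    /* bytes 0-1: handler offset within the segment */
    uint16_t segment;   /* bytes 2-3: handler code segment              */
};
```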
Protected mode
Modern operating systems run in so-called protected mode.
The handling has more options in this mode, so it is more complex, but the spirit is the same.
See also
Related question: What does "int 0x80" mean in assembly code?
While the 8086 is executing a program, an interrupt breaks the normal sequence of execution of instructions and diverts execution to another program called the interrupt service routine (ISR). After the ISR has executed, control returns back to the main program.
An interrupt is used to cause a temporary halt in the execution of a program. The microprocessor responds by running the interrupt service routine, which is a short program or subroutine that instructs the microprocessor on how to handle the interrupt.