What happens in an interrupt service routine? - c

Can someone please explain to me what happens inside an interrupt service routine (although it depends on the specific routine, a general explanation is enough)? This has always been a black box for me.

There is a good wikipedia page on interrupt handlers.
"An interrupt handler, also known as an interrupt service routine (ISR), is a callback subroutine in an operating system or device driver whose execution is triggered by the reception of an interrupt. Interrupt handlers have a multitude of functions, which vary based on the reason the interrupt was generated and the speed at which the Interrupt Handler completes its task."
Basically when a piece of hardware (a hardware interrupt) or some OS task (software interrupt) needs to run it triggers an interrupt. If these interrupts aren't masked (ignored) the OS will stop what it's doing and call some special code to handle this new event.
One good example is reading from a hard drive. The drive is slow and you don't want your OS to wait for the data to come back; you want the OS to go and do other things. So you set up the system so that when the disk has the data requested, it raises an interrupt. In the interrupt service routine for the disk the CPU will take the data that is now ready and will return it to the requester.
ISRs often need to happen quickly as the hardware can have a limited buffer, which will be overwritten by new data if the older data is not pulled off quickly enough.
It's also important for your ISR to complete quickly: while the CPU is servicing one ISR, other interrupts are masked, which means that if the CPU can't get to them quickly enough, data can be lost.
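To make the "get in, get out" idea concrete, here is a minimal C sketch of that pattern; the register name RX_DATA_REG and the function process_byte are hypothetical placeholders for whatever your hardware and application provide. The ISR only copies the byte out of the hardware and defers all real work to the main loop:

#define RING_SIZE 256                        /* power of two so the modulo is cheap */

static volatile unsigned char ring[RING_SIZE];
static volatile unsigned int head;           /* written only by the ISR        */
static unsigned int tail;                    /* written only by the main loop  */

/* Hypothetical receive ISR: grab the byte and leave immediately. */
void uart_rx_isr(void)
{
    unsigned char byte = RX_DATA_REG;        /* reading often also clears the IRQ */
    ring[head % RING_SIZE] = byte;
    head++;                                  /* publish it to the main loop       */
}

/* The main loop drains the ring buffer at its leisure. */
void poll_uart(void)
{
    while (tail != head) {
        unsigned char byte = ring[tail % RING_SIZE];
        tail++;
        process_byte(byte);                  /* slow work happens here, not in the ISR */
    }
}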

Minimal 16-bit example
The best way to understand is to make some minimal examples yourself.
First learn how to create a minimal bootloader OS and run it on QEMU and real hardware as I've explained here: https://stackoverflow.com/a/32483545/895245
Now you can run in 16-bit real mode:
movw $handler0, 0x00  /* IVT entry 0: offset of handler0            */
mov %cs, 0x02         /* IVT entry 0: segment (current code segment) */
movw $handler1, 0x04  /* IVT entry 1: offset of handler1            */
mov %cs, 0x06         /* IVT entry 1: segment                       */
int $0                /* trigger interrupt 0 from software          */
int $1                /* trigger interrupt 1 from software          */
hlt
handler0:
/* Do 0. */
iret
handler1:
/* Do 1. */
iret
This would do in order:
Do 0.
Do 1.
hlt: stop executing
Note how the processor looks for the first handler at address 0, and the second one at 4: that is a table of handlers called the IVT, and each entry has 4 bytes.
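For reference, each IVT entry is just a 16-bit offset followed by a 16-bit segment, which is exactly what the movw/mov pairs above write. A small C sketch of the layout (illustrative only):

#include <stdint.h>

/* One real-mode IVT entry: 4 bytes, offset first, then segment. */
struct ivt_entry {
    uint16_t offset;     /* handler0 / handler1 in the assembly above */
    uint16_t segment;    /* %cs in the assembly above                 */
};

/* The IVT is an array of 256 such entries starting at physical address 0: */
/* entry 0 lives at 0x0000, entry 1 at 0x0004, and so on.                  */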
Minimal example that does some IO to make handlers visible.
Protected mode
Modern operating systems run in the so-called protected mode.
The handling has more options in this mode, so it is more complex, but the spirit is the same.
Minimal example
See also
Related question: What does "int 0x80" mean in assembly code?

While the 8086 is executing a program, an interrupt breaks the normal sequence of instruction execution and diverts execution to another routine called the interrupt service routine (ISR). After the ISR has executed, control returns to the main program.

An interrupt is used to cause a temporary halt in the execution of a program. The microprocessor responds by running the interrupt service routine, which is a short program or subroutine that tells the microprocessor how to handle the interrupt.

Related

How can we sleep while executing a syscall that executes in interrupt mode?

When I execute a system call such as write, the ISR corresponding to the exception runs in interrupt mode (on Cortex-M3 the IPSR register holds a non-zero value, 0xb). What I have learned is that when code executes in interrupt mode we cannot sleep and cannot use functions that might block ...
My question: is there any mechanism by which the ISR could keep executing in interrupt mode and at the same time use functions that might block, or is there some trick that makes this possible?
Caveat: This is more of a comment than an answer but is too big to fit in a comment or series of comments.
TL;DR: Needing to sleep or execute a blocking operation from an ISR is a fundamental misdesign. This seems like an XY problem: https://meta.stackexchange.com/questions/66377/what-is-the-xy-problem
Doing sleep [even in application code] is generally a code smell. Why do you feel you need to sleep [vs. some event/completion driven mechanism]?
Please edit your question and add clarification [i.e. don't just add comments]
When I am executing a system call to do write or something else
What is your application doing? A write to what device? What "something else"?
What is the architecture/board, kernel/distro? (e.g. Raspberry Pi running Raspian? nvidia Jetson? Beaglebone? Xilinx FPGA with petalinux?)
What is the H/W device and where did the device driver come from? Did you write the device driver yourself or is it a standard one that comes with the kernel/distro? If you wrote it, please post it in your question.
Is the device configured properly? (e.g.) Are the DTB entries correct?
Is the device a block device, such as a disk controller? Or, is it a character device, such as a UART? Does the device transfer data via DMA? Or, does it transfer data by reading/writing to/from an IO port?
What do you mean by "exception"? Generally, exception is an abnormal condition (e.g. segfault, bus error, etc.). Please describe the exact context/scenario for which this occurs.
Generally, an ISR does little things. (e.g.) Grab [and save] status from the device. Clear/rearm the interrupt in the interrupt controller. Start the next queued transfer request. Wake up the sleeping base level task (usually the task that executed the syscall [waiting on a completion event in kernel mode]).
More elaborate actions are generally deferred and handled in the interrupt's "bottom half" handler and/or tasklet. Or, the base level is woken up and it handles the remaining processing.
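As a hedged sketch of that top-half/bottom-half split in Linux (the device-specific parts are omitted; request_irq, INIT_WORK and schedule_work are the standard kernel APIs, but check your kernel version for exact signatures):

#include <linux/interrupt.h>
#include <linux/workqueue.h>

static struct work_struct my_work;

/* Bottom half: runs later in process context, so it may sleep or block. */
static void my_work_handler(struct work_struct *w)
{
    /* long-running processing, GFP_KERNEL allocations, etc. */
}

/* Top half (ISR): runs in interrupt mode, must be short and must not sleep. */
static irqreturn_t my_isr(int irq, void *dev_id)
{
    /* read/clear the device status here (device-specific, omitted) */
    schedule_work(&my_work);            /* defer the heavy lifting */
    return IRQ_HANDLED;
}

/* Somewhere in probe/init code:                        */
/*     INIT_WORK(&my_work, my_work_handler);            */
/*     request_irq(irq, my_isr, 0, "mydev", dev_id);    */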
What kernel subsystems are involved? Are you using platform drivers? Are you interfacing from within the DMA device driver framework? Are message buses involved (e.g. I2C, SPI, etc.)?
Interrupt and device handling in the linux kernel is somewhat different than what one might do in a "bare metal" system or RTOS (e.g. FreeRTOS). So, if you're coming from those environments, you'll need to think about restructuring your driver code [and/or application code].
What are your requirements for device throughput and latency?
You may wish to consult a good book on linux device driver design. And, you may wish to consult the kernel's Documentation subdirectory.
If you're able to provide more information, I may be able to help you further.
UPDATE:
A system call is not really in the same class as a hardware interrupt as far as the kernel is concerned, even if the CPU hardware uses the same sort of exception vector mechanisms for handling both hardware and software interrupts. The kernel treats the system call as a transition from user mode to kernel mode. – Ian Abbott
This is a succinct/great explanation. The "mode" or "context" has little to do with how we got/get there from a H/W mechanism.
The CPU doesn't really "understand" interrupt mode [as defined by the kernel]. It understands "supervisor" vs "user" privilege level [sometimes called "mode"].
When executing at user privilege level, an interrupt/exception will cause a transition from "user" level to "supervisor" level. The CPU may have a special register that specifies the address of the [initial] supervisor stack pointer. Atomically, it swaps in that value, pushing the user SP onto the new kernel stack.
If the interrupt is interrupting a CPU that is already at supervisor level, the existing [supervisor] SP will be used unchanged.
Note that x86 has privilege "ring" levels. User mode is ring 3 and the highest [most privileged] level is ring 0. For arm, some arches can have a "hypervisor" privilege level [which is higher privilege than "supervisor" privilege].
The setup of the mode/context is handled in arch/arm/kernel/entry-*.S code.
An svc is a synchronous interrupt [generated by a special CPU instruction]. The resulting context is the context of the currently executing thread. It is analogous to "call function in kernel mode". The resulting context is "kernel thread mode". At that point, it's not terribly useful to think of it as an "interrupt" anymore.
In fact, on some arches, the syscall instruction/mechanism doesn't use the interrupt vector table. It may have a fixed address or use a "call gate" mechanism (e.g. x86).
Each thread has its own stack which is different than the initial/interrupt stack.
Thus, once the entry code has established the context/mode, it is not executing in "interrupt mode". So, the full range of kernel functions is available to it.
An interrupt from a H/W device is asynchronous [may occur at any time the CPU's internal interrupt enable flag is set]. It may interrupt a userspace application [executing in application mode] OR kernel thread mode OR an existing ISR executing in interrupt mode [from another interrupt]. The resulting ISR is executing in "interrupt mode" or "ISR mode".
Because the ISR can interrupt a kernel thread, it may not do certain things. For example, if the CPU were in [kernel] thread mode, and it was in the middle of a kmalloc call [GFP_KERNEL], the ISR would see partial state and any action that tried to adjust the kernel's heap would result in corruption of the heap.
This is a design choice by linux for speed.
Kernel ISRs may be classified as "fast interrupts". The ISR executes with the CPU interrupt enable [IE] flag cleared. No normal H/W interrupt may interrupt the ISR.
So, if another H/W device asserts its interrupt request line [in the external interrupt controller], that request will be "pending". That is, the request line has been asserted but the CPU has not acknowledged it [and the CPU has not jumped via the interrupt table].
The request will remain pending until the CPU allows further interrupts by asserting IE. Or, the CPU may clear the pending interrupt without taking action by clearing the pending interrupt in the interrupt controller.
For a "slow" interrupt ISR, the interrupt entry code will clear the interrupt in the external interrupt controller. It will then rearm interrupts by setting IE and call the ISR. This ISR can be interrupted by other [higher priority] interrupts. The result is a "stacked" interrupt.
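In rough pseudocode (C-flavoured; none of these function names are real kernel functions), the difference between the two cases looks like this:

/* Conceptual flow only - illustrative pseudocode, not actual kernel code. */
void handle_irq(int irq)
{
    if (irq_is_fast(irq)) {
        /* "fast" interrupt: the ISR runs with IE still cleared, */
        /* so no other H/W interrupt can preempt it              */
        isr_table[irq]();
        ack_interrupt_controller(irq);
    } else {
        /* "slow" interrupt: clear it in the controller, re-enable IE, */
        /* then run the ISR; a higher-priority IRQ may now "stack"     */
        ack_interrupt_controller(irq);
        enable_interrupts();
        isr_table[irq]();
    }
}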
I have been searching all over the place, and the conclusion I have come to is that interrupts have a higher priority than some exceptions in the Linux kernel.
An exception [synchronous interrupt] can be interrupted if the IE flag is enabled. An svc is treated differently but after the entry code is executed, the IE flag is set, so the actual syscall code [executing in kernel thread mode] can be interrupted by a H/W interrupt.
Or, in limited circumstances, the kernel code can generate an exception (e.g. a page fault caused by a kernel action [which is usually deemed fatal]).
but I am still looking into how exactly the context switch happens when executing an exception, letting the processor execute in thread mode while the SVCall exception is pending (was preempted and has not returned yet)... I think once I understand that, it will be clearer to me.
I think you have to be very careful with the terminology. In particular, when combining terms from disparate sources. Although user mode, kernel thread mode, or interrupt mode can be considered a context [in the dictionary sense of the word], context switching usually means that the current thread is suspended, the scheduler selects a new thread to run and resumes it. That is separate from the user-to-kernel transition.
And if there is any recommended resources about that for ARM-Cortex-M3/4, it would be nice
Here is something: https://interrupt.memfault.com/blog/arm-cortex-m-exceptions-and-nvic But, be very careful in applying the terminology therein. What it considers "pending" only exists in the kernel during the entry code. What is more relevant is what the kernel does to set up mode/context and the terms are not equivalent.
So, from the kernel's standpoint, it's probably better to not consider an svc as "pending".

The right way to clear an interrupt flag on STM32

I'm developing a bare-metal project on an STM32L4, starting from an existing code base.
The ISRs have been implemented the following way:
read interrupt status in the peripheral to know what event(s) provoked the interrupt
do something
clear the flags that were read at the beginning.
Is this the right way to clear the flags? Shouldn't the flags be cleared at the very beginning of the ISR? My understanding is that, if the same peripheral event happens a second time during step 2, it will not provoke a second IRQ, so it would be lost. On the other hand, if you clear the flag as soon as you can, this second event would pulse the interrupt, whose state in the CPU would change to "pending and active": a second IRQ would happen.
PS: From STM32 Processor Programming Manual I read: "STM32 interrupts are both level-sensitive and pulse-sensitive".
Definitely at the beginning (unless you have special reasons in the program logic), as some time is needed for the actual write to the flag-clear register to propagate through the buses.
If you decide for some reason to put it at the end of the interrupt, you should leave some instructions, place a barrier instruction, or read back the register before the interrupt routine returns, to make sure that the clear operation has propagated across the buses. Otherwise you may get "phantom" duplicate routine calls.
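As a hedged sketch of both options, using the TIM2 update interrupt as an example (the register and macro names come from ST's CMSIS device headers; adapt them to your part):

#include "stm32l4xx.h"                   /* device header, adjust to your MCU */

void TIM2_IRQHandler(void)
{
    /* Clear the flag first: a new update event during the handler will
       simply re-pend the interrupt instead of being lost.                */
    TIM2->SR = ~TIM_SR_UIF;              /* rc_w0 bit: writing 0 clears it */

    /* ... handle the event ... */

    /* If you clear at the end instead, force the write to complete before
       returning, e.g. with a barrier or a read-back, to avoid a phantom
       re-entry of the handler:                                            */
    /*     TIM2->SR = ~TIM_SR_UIF;                                         */
    /*     __DSB();            or:  (void)TIM2->SR;                        */
}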

Semaphore-like synchronization in an ISR (interrupt service routine)

I have a queue where the put and pull functions of the queue are called when different interrupts happen. Is there a way to prevent a race condition in this scenario?
Since we cannot wait on semaphores in interrupt service routines, what is the best way to create similar functionality?
We are using an ARM Cortex-A5 processor on a Zynq FPGA to develop the code.
Assuming that each interrupt causes the "Interrupt Disabled" state of the processor to be turned on, and assuming that the interrupts you are handling have the same priority (that is, one can't interrupt the execution of the other), then there already can be no race condition and your ISRs can just access the shared queue.
(When an interrupt occurs, the processor goes into interrupt disabled mode, pushes all registers onto the stack, jumps to the ISR entry point and continues execution there. Once the ISR is done, the "iret" instruction does the reverse of the entry. This simple description can be implemented differently in different processors and platforms.)
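If, in addition, the queue is also touched from non-interrupt (main/thread) code, or from interrupts that can preempt each other, the usual semaphore substitute is a short critical section that masks interrupts around the queue access. A minimal sketch, where interrupts_disable()/interrupts_enable() are placeholders for whatever your platform provides (CPS instructions or BSP calls on the Cortex-A5):

#define QUEUE_SIZE 32

static volatile int queue[QUEUE_SIZE];
static volatile unsigned int head, tail;

/* Called from an ISR: same-priority interrupts cannot preempt it. */
void queue_put(int value)
{
    queue[head % QUEUE_SIZE] = value;
    head++;
}

/* Called from main/thread code: mask interrupts around the access. */
int queue_pull(int *value)
{
    int ok = 0;

    interrupts_disable();                /* placeholder for your platform's intrinsic */
    if (tail != head) {
        *value = queue[tail % QUEUE_SIZE];
        tail++;
        ok = 1;
    }
    interrupts_enable();
    return ok;
}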

What is the irq latency due to the operating system?

How can I estimate the IRQ latency on an ARM processor?
What is the definition for irq latency?
Interrupt request (IRQ) latency is the time it takes for an interrupt request to travel from the source of the interrupt to the point where it is serviced.
Because there are different interrupts coming from different sources via different paths, their latency obviously depends on the type of the interrupt. You can find a table with very good explanations of latency (both values and causes) for particular interrupts on the ARM site.
You can find more information about it in ARM9E-S Core Technical Reference Manual:
4.3 Maximum interrupt latency
If the sampled signal is asserted at the same time as a multicycle instruction has started
its second or later cycle of execution, the interrupt exception entry does not start until
the instruction has completed.
The longest LDM instruction is one that loads all of the registers, including the PC.
Counting the first Execute cycle as 1, the LDM takes 16 cycles.
• The last word to be transferred by the LDM is transferred in cycle 17, and the abort
status for the transfer is returned in this cycle.
• If a Data Abort happens, the processor detects this in cycle 18 and prepares for
the Data Abort exception entry in cycle 19.
• Cycles 20 and 21 are the Fetch and Decode stages of the Data Abort entry
respectively.
• During cycle 22, the processor prepares for FIQ entry, issuing Fetch and Decode
cycles in cycles 23 and 24.
• Therefore, the first instruction in the FIQ routine enters the Execute stage of the
pipeline in stage 25, giving a worst-case latency of 24 cycles.
and
Minimum interrupt latency
The minimum latency for FIQ or IRQ is the shortest time the request can be sampled
by the input register (one cycle), plus the exception entry time (three cycles). The first
interrupt instruction enters the Execute pipeline stage four cycles after the interrupt is
asserted
There are three parts to interrupt latency:
The interrupt controller picking up the interrupt itself. Modern processors tend to do this quite quickly, but there is still some time between the device signalling its pin [or whatever the method of signalling interrupts is] and the interrupt controller picking it up - even if it's only 1 ns, it's time.
The time until the processor starts executing the interrupt code itself.
The time until the actual code supposed to deal with the interrupt is running - that is, after the processor has figured out which interrupt, and what portion of driver-code or similar should deal with the interrupt.
Normally, the operating system won't have any influence over 1.
The operating system certainly influences 2. For example, an operating system will sometimes disable interrupts [to avoid an interrupt interfering with some critical operation, such as modifying something to do with interrupt handling, or when scheduling a new task, or even when executing in an interrupt handler]. Some operating systems may disable interrupts for several milliseconds, where a good realtime OS will not have interrupts disabled for more than microseconds at the most.
And of course, the time from when the first instruction in the interrupt handler runs until the actual driver code or similar is running can be quite a few instructions, and the operating system is responsible for all of them.
For real-time behaviour, it's often the "worst case" that matters, whereas in non-real-time OSes the overall execution time is much more important; so if it's quicker to not enable interrupts for a few hundred instructions, because it saves several instructions of "enable interrupts, then disable interrupts", a Linux or Windows type OS may well choose to do so.
Mats and Nemanja give some good information on interrupt latency. There are two more issues I would add to the three given by Mats.
Other simultaneous/near-simultaneous interrupts.
OS latency added due to masking interrupts. Edit: This is in Mats' answer, just not explained as much.
If a single core is processing interrupts, then when multiple interrupts occur at the same time, usually there is some resolution priority. However, interrupts are often disabled in the interrupt handler unless priority interrupt handling is enabled. So, for example, if a slow NAND flash IRQ is signaled and running and then an Ethernet interrupt occurs, the Ethernet interrupt may be delayed until the NAND flash IRQ finishes. Of course, if you have priority interrupts and you are concerned about the NAND flash interrupt, things can actually be worse if the Ethernet is given priority.
The second issue is when mainline code clears/sets the interrupt flag. Typically this is done with something like,
mrs r9, cpsr              @ read the current program status register
biceq r9, r9, #PSR_I_BIT  @ clear the IRQ-disable bit in the copy
@ (followed by an msr that writes r9 back to cpsr to actually re-enable IRQs)
Check arch/arm/include/asm/irqflags.h in the Linux source for many macros used by mainline code. A typical sequence looks like this,
lock interrupts;
manipulate some flag in struct;
unlock interrupts;
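In kernel C, that sequence typically looks like the sketch below; the struct and field are hypothetical, while local_irq_save/local_irq_restore are the real kernel macros that expand to the assembly shown above:

#include <linux/irqflags.h>

struct my_state {
    unsigned long flag_word;             /* hypothetical field */
};

static void update_flag(struct my_state *s)
{
    unsigned long flags;

    local_irq_save(flags);               /* lock interrupts                */
    s->flag_word |= 0x1;                 /* manipulate some flag in struct */
    local_irq_restore(flags);            /* unlock interrupts              */
}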
A very large interrupt latency can be introduced if that struct results in a page fault. The interrupts will be masked for the duration of the page fault handler.
The Cortex-A9 has lots of lock-free instructions that can prevent this by never masking interrupts, thanks to better assembler instructions than swp/swpb. This second issue is much like the IRQ latency due to ldm/stm type instructions (these are just the longest instructions to run).
Finally, a lot of the technical discussions will assume zero-wait state RAM. It is likely that the cache will need to be filled and if you know your memory data rate (maybe 2-4 machine cycles), then the worst case code path would multiply by this.
Whether you have SMP interrupt handling, priority interrupts, and lock free main line depends on your kernel configuration and version; these are issues for the OS. Other issues are intrinsic to the CPU/SOC interrupt controller, and to the interrupt code itself.

How does software recognize an interrupt has occurred?

As we know, we write embedded C programs for task management, memory management, ISRs, file systems and so on.
I would like to know: if some task or process is running and at the same time an interrupt occurs, how does the software/process/system come to know that the interrupt has occurred, pause the current task's execution, and start serving the ISR?
Suppose I write the code below:
// Dummy code
#include <stdio.h>

void main()
{
    for (;;)
        printf("\n forever");
}

// Dummy code for the ISR, for understanding
void ISR()
{
    printf("\n Interrupt occurred");
}
In the above code, if an external interrupt occurs, how does main() come to know that the interrupt occurred, so that the ISR gets served first?
main doesn't know. You have to execute some system-dependent function in your setup code (maybe in main) that registers the interrupt handler with the hardware interrupt routine/vector, etc.
Whether that interrupt code can execute a C function directly varies quite a lot; runtime conventions for interrupt procedures don't always follow runtime conventions for application code. Usually there's some indirection involved in getting a signal from the interrupt routine to your C code.
Your query: I understood your answer. But I wanted to know, when an interrupt occurs, how does the current task's execution get stopped/paused and the ISR start executing?
Well Rashmi, to answer your query, read below.
When the microcontroller detects an interrupt, it stops execution of the program after completing the current instruction. It then pushes the PC (program counter) onto the stack and loads the PC with the vector location of that interrupt; hence, program flow is directed to the interrupt service routine. On completion of the ISR, the microcontroller pops the stored program counter from the stack and loads it back into the PC; hence, program execution resumes from where it was stopped.
Does that answer your query?
It depends on your target.
For example, the ATMEL mega family uses a pre-processor directive to register the ISR with an interrupt vector. When an interrupt occurs, the corresponding interrupt flag is raised in the relevant status register. If the global interrupt flag is set, the program counter is stored on the stack before the ISR is called. This all happens in hardware and the main function knows nothing about it.
In order to let main know that an interrupt has occurred, you need to implement a shared data resource between the interrupt routine and your main function, and all the rules from RTOS programming apply here. This means that, as the ISR may be executed at any time, it is not safe to read a shared resource from main without disabling interrupts first.
On an ATMEL target this could look like:
volatile int shared;

int main() {
    char status_register;
    int buffer;

    while (1) {
        status_register = SREG;    /* save the current interrupt state */
        CLI();                     /* disable interrupts               */
        buffer = shared;           /* take a consistent copy           */
        SREG = status_register;    /* restore the interrupt state      */
        // perform some action on the copied value (buffer) here.
    }
    return 0;
}

void ISR(void) {
    // update the shared resource here.
}
Please note that the ISR is not added to the vector table here. Check your compiler documentation for instructions on how to do that.
Also, an important thing to remember is that ISRs should be very short and very fast to execute.
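For completeness, with avr-gcc/avr-libc the registration mentioned above is usually done with the ISR() macro from <avr/interrupt.h>; the vector name below is just an example, check your device's datasheet:

#include <avr/interrupt.h>

/* Handler for the Timer0 overflow vector (example vector name). */
ISR(TIMER0_OVF_vect)
{
    /* update the shared resource here */
}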
On most embedded systems the hardware has some specific memory address that the instruction pointer will move to when a hardware condition indicates an interrupt is required.
When the instruction pointer is at this specific location it will then begin to execute the code there.
On a lot of systems the programmer will place only the address of the ISR at this location, so that when the interrupt occurs and the instruction pointer moves to the specific location, execution then jumps to the ISR.
Try doing a Google search on "interrupt vectoring".
Interrupt handling is transparent to the running program. The processor branches automatically to a previously configured address, depending on the event, and this address is that of the corresponding ISR function. When returning from the interrupt, a special instruction restores the interrupted program.
Actually, most of the time you don't want an interrupted program to know that it has been interrupted. If you need such information, the program should call a driver function instead.
Interrupts are a hardware thing, not a software thing. When the interrupt signal hits the processor, the processor (generally) completes the current instruction, in some way, shape or form preserves the state (so it can get back to where it was), and in some way, shape or form starts executing the interrupt service routine. The ISR is generally not plain C code; at least the entry point is usually special, as the processor does not conform to the compiler's calling convention. The ISR might call C code, but then you end up with mistakes like the one you made here: calling printf, which should not be in an ISR. Once in C it is hard to keep from writing general C code in an ISR, rather than the typical get-in-and-get-out type of thing.
Ideally your application-layer code should never know the interrupt happened; there should be no (hardware-based) residuals affecting your program. You may choose to leave something for the application to see, like a counter or other shared data, which you need to mark as volatile so the application and ISR can share it. It is not uncommon to have the ISR simply flag that an interrupt happened, the application poll that flag/counter/variable, and the handling happen primarily in the application, not the ISR. This way the application can make whatever system calls it wants. So long as the overall bandwidth or performance requirement is met, this can and does work as a solution.
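A minimal sketch of that flag/counter style of sharing (the names are illustrative; volatile makes sure the compiler re-reads the counter in the loop):

static volatile unsigned int rx_count;   /* shared between the ISR and main */

void my_isr(void)
{
    /* acknowledge the hardware and grab its data (device-specific), then: */
    rx_count++;                          /* just note that it happened     */
}

int main(void)
{
    unsigned int handled = 0;

    for (;;) {
        if (handled != rx_count) {       /* poll the shared counter */
            handled++;
            /* do the real work here; system calls are fine in this context */
        }
    }
}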
To be specific, software doesn't recognize the interrupt; that is the job of the microprocessor's interrupt controller (INTC) or the microcontroller.
An interrupt routine call is just like a normal function call as far as main() is concerned; the only difference is that main doesn't know when that routine will be called.
Every interrupt has a specific priority and vector address. Once the interrupt is received (either software or hardware), then depending on the interrupt priority and mask values, program flow is diverted to the specific vector location associated with that interrupt.
hope it helps.
