In Chapter 5 of ULK the author states as follows:
"...each interrupt handler is serialized with respect to itself-that is, it cannot execute more than one concurrently. Thus, accessing the data struct does not require synchronization primitives"
I don't quite understand why interrupt handlers are "serialized" on modern CPUs with multiple cores. I'm thinking it could be possible for the same ISR to run on different cores simultaneously, right? If that's the case, and you don't use a spinlock to protect your data, it can lead to a race condition.
So my question is: on a modern system with multiple CPUs, for every interrupt handler you write that reads and writes some data, is a spinlock always needed?
While executing interrupt handlers, the kernel explicitly disables that particular interrupt line at the interrupt controller, so one interrupt handler cannot be executed more than once concurrently. (The handlers of other interrupts can run concurrently, though.)
Clarification: as per CL.'s remark below, the kernel makes sure not to fire the interrupt handler concurrently for the same interrupt, but if you have multiple registrations of the same interrupt handler for multiple interrupts, then the answer below is, I believe, correct.
You are right that the same interrupt handler can run concurrently on multiple cores and that shared data needs to be protected. However, a spinlock is not the only and certainly not always the recommended way to achieve this.
A multitude of other synchronization methods, from per-CPU data and atomic operations on shared data to Read-Copy-Update (RCU) variants, may be used to protect the shared data.
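For instance, here is a minimal sketch of the lock-free options, assuming the handler only needs to bump some counters (the names below are made up for illustration):

    #include <linux/atomic.h>
    #include <linux/interrupt.h>
    #include <linux/percpu.h>

    /* Hypothetical counters updated from an interrupt handler. */
    static atomic_t shared_events = ATOMIC_INIT(0);      /* one counter, atomic ops */
    static DEFINE_PER_CPU(unsigned long, local_events);  /* one counter per CPU */

    static irqreturn_t my_handler(int irq, void *dev_id)
    {
        /* Atomic read-modify-write: safe even if the handler runs on
         * several CPUs at once, no spinlock required. */
        atomic_inc(&shared_events);

        /* Per-CPU data: each CPU touches only its own copy. */
        this_cpu_inc(local_events);

        return IRQ_HANDLED;
    }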
No, a spinlock is not always needed in an interrupt handler.
Please note one thing first:
When an interrupt from a particular device arrives at the interrupt controller, that interrupt is disabled at the interrupt controller, and hence on all cores, for that particular device. So the same device's interrupt cannot arrive on multiple CPUs simultaneously.
So in the normal case no spinlock is required, as the handler code is not re-entrant.
There are, however, two cases below in which a spinlock is needed in an interrupt handler.
Please note: when an interrupt arrives from a device on an IRQ line, that core disables all other interrupts on that core, and that device's interrupt is also disabled on the other cores. Interrupts from other devices can still arrive on other cores.
The first case is when the same interrupt handler is registered for different devices.
For example:
request_irq(A, func, ..);
request_irq(B, func, ..);
For a device A interrupt, handler func is called.
For a device B interrupt, the same handler func is called.
These two invocations can run on different cores at the same time, so a spinlock should be used in this case to prevent a race condition.
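A rough sketch of that situation, assuming IRQ_A/IRQ_B, dev_a/dev_b, and the shared counter are placeholders rather than real values:

    #include <linux/interrupt.h>
    #include <linux/spinlock.h>

    static DEFINE_SPINLOCK(shared_lock);   /* protects shared_count */
    static unsigned long shared_count;     /* data touched for both devices */

    /* The same handler registered for two different IRQ lines. */
    static irqreturn_t func(int irq, void *dev_id)
    {
        /* IRQ A and IRQ B can fire on different CPUs at the same time,
         * so the shared data must be protected by a spinlock. */
        spin_lock(&shared_lock);
        shared_count++;
        spin_unlock(&shared_lock);

        return IRQ_HANDLED;
    }

    /* In the probe/init code (IRQ_A, IRQ_B, dev_a, dev_b are hypothetical):
     *   request_irq(IRQ_A, func, 0, "dev-a", dev_a);
     *   request_irq(IRQ_B, func, 0, "dev-b", dev_b);
     */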
The second case is when the same resource is used in the interrupt handler and also in some other code that runs in process context.
For example, there is a resource A.
There can be a case in which one core runs in interrupt mode, executing the interrupt handler and modifying resource A, while another core runs in process context and modifies the same resource somewhere else.
So to prevent a race condition on that resource, we should use a spinlock.
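A minimal sketch of that second case, assuming resource A is just a counter: plain spin_lock() is enough inside the handler, while process-context code must also disable local interrupts while holding the lock.

    #include <linux/interrupt.h>
    #include <linux/spinlock.h>

    static DEFINE_SPINLOCK(res_a_lock);    /* protects the shared "resource A" */
    static int resource_a;                 /* placeholder for the shared data */

    /* Interrupt context: the handler modifies resource A. */
    static irqreturn_t my_handler(int irq, void *dev_id)
    {
        spin_lock(&res_a_lock);
        resource_a++;
        spin_unlock(&res_a_lock);
        return IRQ_HANDLED;
    }

    /* Process context: also disable local interrupts while holding the lock,
     * otherwise the handler could fire on this CPU and deadlock on the lock. */
    static void update_resource_from_process(void)
    {
        unsigned long flags;

        spin_lock_irqsave(&res_a_lock, flags);
        resource_a++;
        spin_unlock_irqrestore(&res_a_lock, flags);
    }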
Section 4.6 of Understanding the Linux Kernel, 3rd Edition by Marco Cesati and Daniel P. Bovet gives you the answer.
The actual interrupt handler is run by handle_IRQ_event. irq_desc[irq].lock prevents concurrent access to handle_IRQ_event by any other CPU.
If the critical data is shared between the interrupt handler and your process (which may be a kernel thread), then you need to protect your data, and hence a spinlock is required. A common kernel API for spinlocks is spin_lock().
There are also variants of this API, e.g. spin_lock_irqsave(), which can help avoid the deadlock problems one can face while acquiring/holding spinlocks. Please go through the link below for details on the subject:
http://www.linuxjournal.com/article/5833
When I am executing a system call such as write or something else, the ISR corresponding to the exception is executing in interrupt mode (on Cortex-M3 the IPSR register has a non-zero value, 0xb). And what I have learned is that when we execute code in interrupt mode we cannot sleep and cannot use functions that might block ...
My question is: is there any kind of mechanism by which the ISR could still be executing in interrupt mode and at the same time use functions that might block, or is there some kind of trick that is implemented?
Caveat: This is more of a comment than an answer but is too big to fit in a comment or series of comments.
TL;DR: Needing to sleep or execute a blocking operation from an ISR is a fundamental misdesign. This seems like an XY problem: https://meta.stackexchange.com/questions/66377/what-is-the-xy-problem
Doing sleep [even in application code] is generally a code smell. Why do you feel you need to sleep [vs. some event/completion driven mechanism]?
Please edit your question and add clarification [i.e. don't just add comments]
When I am executing a system call to do write or something else
What is your application doing? A write to what device? What "something else"?
What is the architecture/board, kernel/distro? (e.g. Raspberry Pi running Raspian? nvidia Jetson? Beaglebone? Xilinx FPGA with petalinux?)
What is the H/W device and where did the device driver come from? Did you write the device driver yourself or is it a standard one that comes with the kernel/distro? If you wrote it, please post it in your question.
Is the device configured properly? (e.g.) Are the DTB entries correct?
Is the device a block device, such as a disk controller? Or, is it a character device, such as a UART? Does the device transfer data via DMA? Or, does it transfer data by reading/writing to/from an IO port?
What do you mean by "exception"? Generally, exception is an abnormal condition (e.g. segfault, bus error, etc.). Please describe the exact context/scenario for which this occurs.
Generally, an ISR does little things. (e.g.) Grab [and save] status from the device. Clear/rearm the interrupt in the interrupt controller. Start the next queued transfer request. Wake up the sleeping base level task (usually the task that executed the syscall [waiting on a completion event in kernel mode]).
More elaborate actions are generally deferred and handled in the interrupt's "bottom half" handler and/or tasklet. Or, the base level is woken up and it handles the remaining processing.
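As a rough sketch of that typical top-half shape (the register offsets, struct fields, and the tasklet used for deferral here are all invented, not taken from any particular driver):

    #include <linux/completion.h>
    #include <linux/interrupt.h>
    #include <linux/io.h>
    #include <linux/types.h>

    /* Invented register offsets, purely for illustration. */
    #define DEV_STATUS   0x00
    #define DEV_INT_ACK  0x04

    struct my_dev {
        void __iomem *regs;
        u32 last_status;
        struct completion done;     /* base-level task sleeps on this */
        struct tasklet_struct bh;   /* bottom half (could be a workqueue instead) */
    };

    static irqreturn_t my_isr(int irq, void *dev_id)
    {
        struct my_dev *dev = dev_id;

        /* 1. Grab and save the status from the device. */
        dev->last_status = readl(dev->regs + DEV_STATUS);

        /* 2. Clear/rearm the interrupt in the device. */
        writel(dev->last_status, dev->regs + DEV_INT_ACK);

        /* 3. Wake up the sleeping base-level task ... */
        complete(&dev->done);

        /* 4. ... and defer anything lengthy to the bottom half. */
        tasklet_schedule(&dev->bh);

        return IRQ_HANDLED;
    }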
What kernel subsystems are involved? Are you using platform drivers? Are you interfacing from within the DMA device driver framework? Are message buses involved (e.g. I2C, SPI, etc.)?
Interrupt and device handling in the linux kernel is somewhat different than what one might do in a "bare metal" system or RTOS (e.g. FreeRTOS). So, if you're coming from those environments, you'll need to think about restructuring your driver code [and/or application code].
What are your requirements for device throughput and latency?
You may wish to consult a good book on linux device driver design. And, you may wish to consult the kernel's Documentation subdirectory.
If you're able to provide more information, I may be able to help you further.
UPDATE:
A system call is not really in the same class as a hardware interrupt as far as the kernel is concerned, even if the CPU hardware uses the same sort of exception vector mechanisms for handling both hardware and software interrupts. The kernel treats the system call as a transition from user mode to kernel mode. – Ian Abbott
This is a succinct/great explanation. The "mode" or "context" has little to do with how we got/get there from a H/W mechanism.
The CPU doesn't really "understand" interrupt mode [as defined by the kernel]. It understands "supervisor" vs "user" privilege level [sometimes called "mode"].
When executing at user privilege level, an interrupt/exception triggers the transition from "user" level to "supervisor" level. The CPU may have a special register that specifies the address of the [initial] supervisor stack pointer. Atomically, it swaps in that value, pushing the user SP onto the new kernel stack.
If the interrupt is interrupting a CPU that is already at supervisor level, the existing [supervisor] SP will be used unchanged.
Note that x86 has privilege "ring" levels. User mode is ring 3 and the highest [most privileged] level is ring 0. For arm, some arches can have a "hypervisor" privilege level [which is higher privilege than "supervisor" privilege].
The setup of the mode/context is handled in arch/arm/kernel/entry-*.S code.
An svc is a synchronous interrupt [generated by a special CPU instruction]. The resulting context is the context of the currently executing thread. It is analogous to "call function in kernel mode". The resulting context is "kernel thread mode". At that point, it's not terribly useful to think of it as an "interrupt" anymore.
In fact, on some arches, the syscall instruction/mechanism doesn't use the interrupt vector table. It may have a fixed address or use a "call gate" mechanism (e.g. x86).
Each thread has its own stack which is different than the initial/interrupt stack.
Thus, once the entry code has established the context/mode, it is not executing in "interrupt mode". So, the full range of kernel functions is available to it.
An interrupt from a H/W device is asynchronous [may occur at any time the CPU's internal interrupt enable flag is set]. It may interrupt a userspace application [executing in application mode] OR kernel thread mode OR an existing ISR executing in interrupt mode [from another interrupt]. The resulting ISR is executing in "interrupt mode" or "ISR mode".
Because the ISR can interrupt a kernel thread, it may not do certain things. For example, if the CPU were in [kernel] thread mode, and it was in the middle of a kmalloc call [GFP_KERNEL], the ISR would see partial state and any action that tried to adjust the kernel's heap would result in corruption of the heap.
This is a design choice by linux for speed.
Kernel ISRs may be classified as "fast interrupts". The ISR executes with the CPU interrupt enable [IE] flag cleared. No normal H/W interrupt may interrupt the ISR.
So, if another H/W device asserts its interrupt request line [in the external interrupt controller], that request will be "pending". That is, the request line has been asserted but the CPU has not acknowledged it [and the CPU has not jumped via the interrupt table].
The request will remain pending until the CPU allows further interrupts by asserting IE. Or, the CPU may clear the pending interrupt without taking action by clearing the pending interrupt in the interrupt controller.
For a "slow" interrupt ISR, the interrupt entry code will clear the interrupt in the external interrupt controller. It will then rearm interrupts by setting IE and call the ISR. This ISR can be interrupted by other [higher priority] interrupts. The result is a "stacked" interrupt.
I have been searching all over the place, and the conclusion I have come to is that interrupts have a higher priority than some exceptions in the Linux kernel.
An exception [synchronous interrupt] can be interrupted if the IE flag is enabled. An svc is treated differently but after the entry code is executed, the IE flag is set, so the actual syscall code [executing in kernel thread mode] can be interrupted by a H/W interrupt.
Or, in limited circumstances, the kernel code can generate an exception (e.g. a page fault caused by a kernel action [which is usually deemed fatal]).
but I am still looking at how exactly the context switch happens when executing an exception and letting the processor execute in thread mode while the SVCall exception is pending (was preempted and has not returned yet)... I think when I understand that, it will be clearer to me.
I think you have to be very careful with the terminology. In particular, when combining terms from disparate sources. Although user mode, kernel thread mode, or interrupt mode can be considered a context [in the dictionary sense of the word], context switching usually means that the current thread is suspended, the scheduler selects a new thread to run and resumes it. That is separate from the user-to-kernel transition.
And if there are any recommended resources about that for ARM Cortex-M3/4, it would be nice.
Here is something: https://interrupt.memfault.com/blog/arm-cortex-m-exceptions-and-nvic But, be very careful in applying the terminology therein. What it considers "pending" only exists in the kernel during the entry code. What is more relevant is what the kernel does to set up mode/context and the terms are not equivalent.
So, from the kernel's standpoint, it's probably better to not consider an svc as "pending".
I have a queue where the put and pull functions of the queue are called when different interrupts happen. Is there a way to prevent a race condition in this scenario?
While we cannot wait on semaphores in interrupt service routines, what is the best way to create similar functionality?
We are using an ARM Cortex-A5 processor of a Zynq FPGA to develop the code.
Assuming that each interrupt causes the "Interrupt Disabled" state of the processor to be turned on, and assuming that the interrupts you are handling have the same priority (that is, one can't interrupt the execution of the other), then there already can be no race condition and your ISRs can just access the shared queue.
(When an interrupt occurs, the processor goes into interrupt disabled mode, pushes all registers onto the stack, jumps to the ISR entry point and continues execution there. Once the ISR is done, the "iret" instruction does the reverse of the entry. This simple description can be implemented differently in different processors and platforms.)
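Under those assumptions the ISRs can simply touch the queue directly. If the assumptions do not hold (e.g. one interrupt can preempt the other), the usual fix is to mask interrupts around the queue operations; here is a rough sketch, where disable_irq_save()/restore_irq() stand in for whatever masking primitives your BSP provides:

    #define QUEUE_SIZE 64

    /* Placeholders for the platform's interrupt-masking primitives. */
    extern unsigned int disable_irq_save(void);
    extern void restore_irq(unsigned int state);

    static volatile unsigned int head, tail;
    static volatile int items[QUEUE_SIZE];

    /* Called from ISR A. */
    void queue_put(int value)
    {
        unsigned int state = disable_irq_save();  /* only needed if ISRs can nest */
        items[head % QUEUE_SIZE] = value;
        head++;
        restore_irq(state);
    }

    /* Called from ISR B. Returns 1 if a value was pulled. */
    int queue_pull(int *value)
    {
        int ok = 0;
        unsigned int state = disable_irq_save();
        if (tail != head) {
            *value = items[tail % QUEUE_SIZE];
            tail++;
            ok = 1;
        }
        restore_irq(state);
        return ok;
    }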
I know when an interrupt occurs the process running is put on hold, and the Interrupt Service Routine is called. The current pointer is pointing to the process that was interrupted and I was told that when an interrupt occurs it is not linked to a specific process. So my question is why only another interrupt can preempt an existing interrupt routine?
Also, when a process(p2) preempts another process(p1), who is calling the schedule() method?
The first two answers both show some significant misunderstanding about interrupts and how they work.
Of particular interest, for the CPUs that we usually use (x86, PowerPC, 68xxx, ARM, and many others), each interrupt source has a priority.
Sadly, there are some CPUs, for instance the 68HC11, where all the interrupts, except the reset interrupt and the NMI interrupt, have the same priority, so servicing any one of the other interrupt events will block all the other (same-priority) interrupt events.
For our discussion purposes, a higher-priority interrupt event can/will interrupt a lower-priority interrupt handler.
(An interrupt handler can modify the appropriate hardware register to disable all interrupt events or just certain interrupt events, or even enable lower-priority interrupts by clearing its own interrupt pending flag (usually a bit in a register).)
In general, the scheduler is invoked by an interrupt handler (or by a process willingly giving up the CPU).
That interrupt is normally the result of a hardware timer expiring/reloading and triggering the interrupt event.
An interrupt is really just an event that is waiting to be serviced.
The interrupt event, when allowed, for instance by being the highest priority interrupt that is currently pending, will cause the PC register to load the first address of the related interrupt handler.
The act of diverting the PC register to the interrupt handler will (at a minimum) push the prior PC register value and status register onto the stack. (In some CPUs, there is a special set of save areas for those registers, so they are pushed into the special area rather than onto the stack.)
The act of returning from an interrupt, for instance via the RTI instruction, will 'automatically' cause the prior PC and status register values to be restored.
Note: returning from an interrupt handler does not clear the interrupt event pending indication, so the interrupt handler, before exiting, needs to modify the appropriate register; otherwise the flow of execution will immediately re-enter the interrupt handler.
The interrupt handler has to, upon entry, push any other registers that it modifies and, when ready to exit, restore them.
Only interrupts of a lower priority are blocked by the interrupt event diverting the PC to the appropriate interrupt handler. Blocked, not disabled.
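To make the earlier note about the pending indication concrete, here is a bare-metal style sketch (the register address and bit are invented for illustration):

    #include <stdint.h>

    /* Hypothetical timer status register and pending bit. */
    #define TIMER_STATUS_REG   (*(volatile uint32_t *)0x40001000u)
    #define TIMER_IRQ_PENDING  (1u << 0)

    void timer_interrupt_handler(void)
    {
        /* Acknowledge the event in the device; otherwise the pending flag
         * stays set and the CPU re-enters this handler right after the
         * return-from-interrupt instruction. */
        TIMER_STATUS_REG = TIMER_IRQ_PENDING;

        /* ... handle the timer event (e.g. run the scheduler tick) ... */
    }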
on some CPUs, for instance most DSPs, there are also software interrupts that can be triggered by an instruction execution.
This is usually used by hardware interrupt handlers to trigger the data processing after some amount of data has been input/saved into a buffer. This separates the I/O from the processing, thereby enabling the hardware interrupt event handler to be quick while still having the data processed in a timely manner.
The above contradicts much of what the comments and other answers state. However, those comments and answers are from the misleading view of the 'user' side of the OS, while I normally program right on the bare hardware and so am very familiar with what actually happens.
So my question is why only another interrupt can preempt an existing interrupt routine?
A hardware interrupt usually puts the processor hardware in an interrupt state where all interrupts are disabled. The interrupt-handler can, and often does, explicitly re-enable interrupts of a higher priority. Such an interrupt can then preempt the lower-priority interrupt. That is the only mechanism that can interrupt a hardware interrupt.
Also, when a process (p2) preempts another process (p1), who is calling the schedule() method?
That depends somewhat on whether the preemption is initiated by a syscall from a thread already running, or by a hardware interrupt that causes a handler/driver to run and subsequently enter the kernel to request a reschedule. The exact mechanisms (states, stacks, etc.) used are architecture-dependent.
Regarding your first question: While an interrupt is running, interrupts are disabled on that processor. Therefore, it cannot be interrupted.
Regarding your second question: A process never preempts another process; it is always the OS doing that. The OS calls the scheduler routine regularly, where it decides which process will run next. So p2 doesn't say "I want to run now"; it just has some attributes like a priority, remaining time slot, etc., and the OS then decides whether p2 should run now.
I'm studying operating systems and I encountered both the terms ISR and interrupt handler. Are they two words for the same mechanism? If not, what is the difference?
There is no difference between an interrupt handler and an ISR.
Wiki says that:
In computer systems programming, an interrupt handler, also known as an interrupt service routine or ISR, is a callback function [...]
An ISR is a callback for a specific service pertaining to a device/operation/source. There can be multiple ISRs present in a system, depending on the addresses available in the interrupt vector table. The interrupt handler, in contrast, is a common routine that is triggered whenever any interrupt comes in. Its job is to identify the source of the interrupt and trigger the appropriate ISR mapped in the interrupt vector table.
When an interrupt occurs, the interrupt handler performs the minimal operations required to respond to the device, whereas updating the buffer and all other operations are taken care of by the ISR.
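In that view, the split might look roughly like this (the controller accessors below are hypothetical):

    #define NUM_IRQ_SOURCES 32

    typedef void (*isr_t)(void);

    /* One ISR per interrupt source, filled in by the drivers. */
    static isr_t vector_table[NUM_IRQ_SOURCES];

    /* Placeholders for reading/acknowledging the interrupt controller. */
    extern unsigned int read_pending_irq_source(void);
    extern void ack_irq_source(unsigned int src);

    /* Common interrupt handler: find out who interrupted, then call its ISR. */
    void common_interrupt_handler(void)
    {
        unsigned int src = read_pending_irq_source();

        if (src < NUM_IRQ_SOURCES && vector_table[src])
            vector_table[src]();        /* device-specific ISR does the real work */

        ack_irq_source(src);
    }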
Can I set up the priority of a workqueue?
I am modifying the SPI kernel module "spidev" so it can communicate faster with my hardware.
The external hardware is a CAN controller with a very small buffer, so I must read any incoming data quickly to avoid losing data.
I have configured a GPIO interrupt to inform me of the new data, but I cannot read the SPI hardware in the interrupt handler.
My interrupt handler basically sets up a workqueue that will read the SPI data.
It works fine when there is only one active process in the kernel.
As soon as I open any other process (even the process viewer top) at the same time, I start losing data in bunches, i.e., I might receive 1000 packets of data with no problems and then lose 15 packets in a row, and so on.
I suspect that the cause of my problem is that when the other process (top, in this case) has control over the CPU, the interrupt handler runs, but the work in the workqueue doesn't run until the scheduler is called again.
I tried to increase the priority of my process with no success.
I wonder if there is a way to tell the kernel to execute the work in the workqueue immediately after the interrupt handling function.
Suggestions are welcome.
As an alternative you could consider using a tasklet, which the kernel will execute sooner, but be aware that you are unable to sleep in tasklets.
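A minimal sketch of that approach, with made-up names, scheduling a tasklet from the GPIO interrupt handler (remember that the tasklet itself still must not sleep, so it cannot use blocking SPI calls):

    #include <linux/interrupt.h>

    static struct tasklet_struct spi_read_tasklet;   /* hypothetical name */

    /* Bottom half: runs in softirq context soon after the handler returns,
     * but it must not sleep, so only non-blocking SPI access is allowed here. */
    static void spi_read_bh(unsigned long data)
    {
        /* ... read the CAN controller over SPI without sleeping ... */
    }

    static irqreturn_t gpio_irq_handler(int irq, void *dev_id)
    {
        tasklet_schedule(&spi_read_tasklet);   /* deferred, but sooner than a workqueue */
        return IRQ_HANDLED;
    }

    /* In init code:
     *   tasklet_init(&spi_read_tasklet, spi_read_bh, 0);
     */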
A good IBM article on deferring work in the kernel:
http://www.ibm.com/developerworks/linux/library/l-tasklets/
http://www.makelinux.net/ldd3/chp-7-sect-5
A tasklet is run at the next timer tick as long as the CPU is busy running a process, but it is run immediately when the CPU is otherwise idle. The kernel provides a set of ksoftirqd kernel threads, one per CPU, just to run "soft interrupt" handlers, such as the tasklet_action function. Thus, the final three runs of the tasklet take place in the context of the ksoftirqd kernel thread associated to CPU 0. The jitasklethi implementation uses a high-priority tasklet, explained in an upcoming list of functions.