Does int 80h interrupt a kernel process? - c

First, some background knowledge from the book Linux System Programming: Talking Directly to the Kernel and C Library:
Signals are a mechanism for one-way asynchronous notifications. A
signal may be sent from the kernel to a process, from a process to
another process, or from a process to itself.
The Linux kernel implements about 30 signals.
Signals interrupt an executing process, causing it to stop whatever it
is doing and immediately perform a predetermined action.
Ok moving further, from here I will quote this part:
On the Intel family of microprocessors, such as the Pentium, int 80h
is the assembly language op code for interrupt 80h. This is the
syscall interrupt on a typical Intel-based Unix system, such as
FreeBSD. It allows application programmers to obtain system services
from the Unix kernel.
I can't quite make the connection in my head. So when I, for example, use the write function defined in POSIX, and it is compiled into assembly, then assembled into object code and linked into an executable on a given architecture that runs Linux... a system call is made, right?
I am assuming the compiled code would look something like this:
    mov eax, 4           ; code for sys_write
    mov ebx, 1           ; standard output
    mov ecx, memlocation ; memlocation is just some location where a number of ASCII bytes are present
    mov edx, 20          ; 20 bytes will be written
    int 80h
OK, my question is exactly at this point. Will int 80h send a signal to the kernel / interrupt the kernel? Is the kernel just one process? (Is it the init process?) When the CPU executes int 80h, what exactly happens? The registers are already full of information (eax, ebx, ecx and edx in my example), but how is this information used?
I can't quite make the connection between the CPU, the kernel, and what exactly the CPU does as it executes int 80h.
I can imagine some code resides somewhere in memory that actually sends the required information to the device driver, but to which process does this code belong? (I am assuming the kernel, but is the kernel just one process?) And how does the int 80h instruction jump to that code? Is it something that Linux has to implement somehow?

Is kernel just one process? (Is it the init process?)
The kernel is a magic beast. It's not a process. The kernel doesn't have a PID you can refer to.
First, it's worth stating (even though it's obvious) that instructions run on the processor: therefore, int 80h is executed by the processor.
There is something called an interrupt handler; handlers are somewhat similar to function pointers. The processor has a table of interrupt handlers. This table is called the Interrupt Descriptor Table (aka IDT) and is system-wide (i.e., not every process has its own table).
I believe that this table is populated by the kernel when it first boots.
So, what happens when the int 80h is executed?
The processor was running at ring 3 protection level (the normal level for a process). For more info on ring levels, see this.
The processor will switch to ring 0, aka kernel mode. In this mode, hardware protections are disabled. This means that the code executed from now on can do whatever it wants: write anywhere in physical memory, rewrite the Interrupt Descriptor Table, etc.
The processor will jump to the code located in the Interrupt Descriptor Table for the 80h interrupt. The space available for each interrupt in the IDT is very small, which is why this code will generally jump somewhere else again.
The previous jump lands the processor in the kernel routine dedicated to handling int 80h. The processor is no longer running your process' code; it is now running kernel code.
The kernel can check the registers and memory and determine why the interrupt was triggered. It will understand that you wanted to execute the system call write.
The kernel code will jump again, this time into the routine that handles write. The kernel will run its code for write.
The kernel is done running its code. It tells the processor to go back to ring 3 protection level, and resume your process.
Userspace process (aka your process) resumes.

When the CPU executes an INT 80h instruction, the currently running process on that CPU is an ordinary user process. As a result of processing this instruction, the CPU switches from user mode to kernel mode. The process doesn't change: the current process is still an ordinary user process, it's just now executing in kernel mode. Being in kernel mode gives the system call permission to do things that the program can't do itself. The kernel code then does whatever is necessary to implement the system call and executes an IRET instruction. This causes the CPU to switch back to user mode and start executing the code following the INT 80h instruction.
Note if the kernel mode code takes long enough to execute, in particular if it blocks, then the scheduler may kick in and switch the CPU to running a different process. In this case the kernel mode code has to wait for an opportunity to finish its job.
Most of the CPU time spent in the kernel is like this, executing system calls in the context of the process that made the system call. Most of the rest of the time spent in the kernel is handling hardware interrupts. (Note that INT 80h is a software interrupt.) In that case the interrupt runs in the context of whatever process happens to be running at the time. The interrupt routine does whatever is necessary to service the hardware device that generated the interrupt and then returns.
While the kernel creates some special processes for itself, these processes have very specialized tasks. There's no main kernel process. The init process in particular isn't a kernel process, it's just an ordinary user process.

Your questions are answered as asked. I recommend consulting the book The Linux Programming Interface, page 44. However, short answers are as follows.
Ok my question is exactly at this point. Will int 80h send a signal to kernel / interrupt the kernel?
No, int 80h does not raise any signal to the kernel; instead it indexes an entry in a table of interrupt handlers.
Is kernel just one process? (Is it the init process?)
No. A modern Unix kernel is a set of threads (called native threads), which can have three different types of process-kernel mappings.
When the cpu executes the int 80h , what exactly happens? The registers are full of information already, (eax, ebx, ecx and edx in my example..), but how is this information used?
int 80h is a trap instruction that transitions the environment from user to kernel mode. %eax contains the system call number (for write) to be run in kernel mode. The contents of all other registers are saved in memory so they can be restored on return to user mode.
I can not quite make the connection between the CPU - the kernel and what exactly the CPU does as it executes the int 80h.
int 80h is a trap for the CPU, which changes the environment from user to kernel mode and saves registers to memory. In other words, the CPU helps the kernel do something useful efficiently.
I can imagine some code resides somewhere in the memory that actually sends the required information to the device driver but to which process does this code belong to? (I am assuming kernel but is kernel just one process?) And how does int 80h instruction jump to that code? Is it something that Linux has to implement somehow?
Here you are asking about device drivers. Driver functionality is different from syscall handling; int 80h does not interact with drivers directly.

The answer by @Xaqq is perfect, but in addition to that, understand this.
A CPU is just a slab of silicon with billions of transistors, which hard-code the most basic routines into it using logic gates (AND, OR, XOR, etc.). This landscape of transistors is designed in such a way that when you put:
4 in EAX
1 in EBX
the address in ECX
the length in EDX
and then execute int 80h, which is basically a routine that understands the above values and what to do with them, execution is handed to the CPU, and as if by magic these transistors flip to bring about the action you intended: printing the message at that address to the console.

Related

How could we sleep when we are executing a syscall that executes in interrupt mode

When I am executing a system call to do write or something else, the ISR corresponding to the exception executes in interrupt mode (on Cortex-M3 the IPSR register holds a non-zero value, 0xb). And what I have learned is that when we execute code in interrupt mode we cannot sleep and we cannot use functions that might block...
My question is: is there any mechanism by which the ISR could still execute in interrupt mode and at the same time use functions that might block, or is there some trick that is implemented?
Caveat: This is more of a comment than an answer but is too big to fit in a comment or series of comments.
TL;DR: Needing to sleep or execute a blocking operation from an ISR is a fundamental misdesign. This seems like an XY problem: https://meta.stackexchange.com/questions/66377/what-is-the-xy-problem
Doing sleep [even in application code] is generally a code smell. Why do you feel you need to sleep [vs. some event/completion driven mechanism]?
Please edit your question and add clarification [i.e. don't just add comments]
When I am executing a system call to do write or something else
What is your application doing? A write to what device? What "something else"?
What is the architecture/board, kernel/distro? (e.g. Raspberry Pi running Raspian? nvidia Jetson? Beaglebone? Xilinx FPGA with petalinux?)
What is the H/W device and where did the device driver come from? Did you write the device driver yourself or is it a standard one that comes with the kernel/distro? If you wrote it, please post it in your question.
Is the device configured properly? (e.g.) Are the DTB entries correct?
Is the device a block device, such as a disk controller? Or, is it a character device, such as a UART? Does the device transfer data via DMA? Or, does it transfer data by reading/writing to/from an IO port?
What do you mean by "exception"? Generally, exception is an abnormal condition (e.g. segfault, bus error, etc.). Please describe the exact context/scenario for which this occurs.
Generally, an ISR does little things. (e.g.) Grab [and save] status from the device. Clear/rearm the interrupt in the interrupt controller. Start the next queued transfer request. Wake up the sleeping base level task (usually the task that executed the syscall [waiting on a completion event in kernel mode]).
More elaborate actions are generally deferred and handled in the interrupt's "bottom half" handler and/or tasklet. Or, the base level is woken up and it handles the remaining processing.
What kernel subsystems are involved? Are you using platform drivers? Are you interfacing from within the DMA device driver framework? Are message buses involved (e.g. I2C, SPI, etc.)?
Interrupt and device handling in the linux kernel is somewhat different than what one might do in a "bare metal" system or RTOS (e.g. FreeRTOS). So, if you're coming from those environments, you'll need to think about restructuring your driver code [and/or application code].
What are your requirements for device throughput and latency?
You may wish to consult a good book on linux device driver design. And, you may wish to consult the kernel's Documentation subdirectory.
If you're able to provide more information, I may be able to help you further.
UPDATE:
A system call is not really in the same class as a hardware interrupt as far as the kernel is concerned, even if the CPU hardware uses the same sort of exception vector mechanisms for handling both hardware and software interrupts. The kernel treats the system call as a transition from user mode to kernel mode. – Ian Abbott
This is a succinct/great explanation. The "mode" or "context" has little to do with how we got/get there from a H/W mechanism.
The CPU doesn't really "understand" interrupt mode [as defined by the kernel]. It understands "supervisor" vs "user" privilege level [sometimes called "mode"].
When executing at user privilege level, an interrupt/exception will cause the transition from "user" level to "supervisor" level. The CPU may have a special register that specifies the address of the [initial] supervisor stack pointer. Atomically, it swaps in that value, pushing the user SP onto the new kernel stack.
If the interrupt is interrupting a CPU that is already at supervisor level, the existing [supervisor] SP will be used unchanged.
Note that x86 has privilege "ring" levels. User mode is ring 3 and the highest [most privileged] level is ring 0. For arm, some arches can have a "hypervisor" privilege level [which is higher privilege than "supervisor" privilege].
The setup of the mode/context is handled in arch/arm/kernel/entry-*.S code.
An svc is a synchronous interrupt [generated by a special CPU instruction]. The resulting context is the context of the currently executing thread. It is analogous to "call function in kernel mode". The resulting context is "kernel thread mode". At that point, it's not terribly useful to think of it as an "interrupt" anymore.
In fact, on some arches, the syscall instruction/mechanism doesn't use the interrupt vector table. It may have a fixed address or use a "call gate" mechanism (e.g. x86).
Each thread has its own stack which is different than the initial/interrupt stack.
Thus, once the entry code has established the context/mode, it is not executing in "interrupt mode". So, the full range of kernel functions is available to it.
An interrupt from a H/W device is asynchronous [may occur at any time the CPU's internal interrupt enable flag is set]. It may interrupt a userspace application [executing in application mode] OR kernel thread mode OR an existing ISR executing in interrupt mode [from another interrupt]. The resulting ISR is executing in "interrupt mode" or "ISR mode".
Because the ISR can interrupt a kernel thread, it may not do certain things. For example, if the CPU were in [kernel] thread mode, and it was in the middle of a kmalloc call [GFP_KERNEL], the ISR would see partial state and any action that tried to adjust the kernel's heap would result in corruption of the heap.
This is a design choice by linux for speed.
Kernel ISRs may be classified as "fast interrupts". The ISR executes with the CPU interrupt enable [IE] flag cleared. No normal H/W interrupt may interrupt the ISR.
So, if another H/W device asserts its interrupt request line [in the external interrupt controller], that request will be "pending". That is, the request line has been asserted but the CPU has not acknowledged it [and the CPU has not jumped via the interrupt table].
The request will remain pending until the CPU allows further interrupts by asserting IE. Or, the CPU may clear the pending interrupt without taking action by clearing the pending interrupt in the interrupt controller.
For a "slow" interrupt ISR, the interrupt entry code will clear the interrupt in the external interrupt controller. It will then rearm interrupts by setting IE and call the ISR. This ISR can be interrupted by other [higher priority] interrupts. The result is a "stacked" interrupt.
I have been searching all over the place, and the conclusion I have come to is that interrupts have a higher priority than some exceptions in the Linux kernel.
An exception [synchronous interrupt] can be interrupted if the IE flag is enabled. An svc is treated differently but after the entry code is executed, the IE flag is set, so the actual syscall code [executing in kernel thread mode] can be interrupted by a H/W interrupt.
Or, in limited circumstances, the kernel code can generate an exception (e.g. a page fault caused by a kernel action [which is usually deemed fatal]).
but I am still looking at how exactly the context switching happens when executing an exception and letting the processor execute in thread mode while the SVCall exception is pending (was preempted and has not returned yet)... I think when I understand that, it will be clearer to me.
I think you have to be very careful with the terminology. In particular, when combining terms from disparate sources. Although user mode, kernel thread mode, or interrupt mode can be considered a context [in the dictionary sense of the word], context switching usually means that the current thread is suspended, the scheduler selects a new thread to run and resumes it. That is separate from the user-to-kernel transition.
And if there is any recommended resources about that for ARM-Cortex-M3/4, it would be nice
Here is something: https://interrupt.memfault.com/blog/arm-cortex-m-exceptions-and-nvic But, be very careful in applying the terminology therein. What it considers "pending" only exists in the kernel during the entry code. What is more relevant is what the kernel does to set up mode/context and the terms are not equivalent.
So, from the kernel's standpoint, it's probably better to not consider an svc as "pending".

Is there any way to make a call to the Linux kernel with my own softirq

Similar to how a system call works via int 0x80, is it possible to implement my own ISR inside the kernel so that on a software interrupt, say int 0x120, the program counter can jump from user space to kernel space?
Is entering the kernel in privileged mode associated only with int 0x80, or can I enter privileged mode automatically with any software-interrupt implementation? Or do we have to disable protected mode and enter privileged mode manually by writing its associated flag?
And one more thing: if it is possible to implement this type of ISR, is the best possible way to exchange data with the registers EBX, ECX, EDX, ESI, EDI and EBP, or is there still another way?
I already saw How to define and trigger my own new softirq in linux kernel? but didn't get the solution I was looking for.
I'll make it a bit clearer why I need this:
I have implemented a few kernel functions that talk directly to hardware peripherals, and I want to trigger them from user space using a software interrupt. I can't use system calls with the available driver architecture because I need to reduce execution time.
First, software interrupts and softirq are completely different:
A software interrupt is the assembly instruction that switches from user mode to privileged mode; this is what you're looking for.
A softirq is a mechanism to split a hardware interrupt handler into top and bottom halves.
For your question, you'll need to write assembly code and modify platform-specific code.
You need to define the int number in Linux arch/x86/include/asm/irq_vectors.h:
#define MY_SYSCALL_VECTOR 0x120
Change the function trap_init in Linux arch/x86/kernel/traps.c:
set_system_trap_gate(MY_SYSCALL_VECTOR, entry_INT120_32);
Now you need to write the assembly function entry_INT120_32. You can see an example in the file arch/x86/entry/entry_32.S, starting at ENTRY(entry_INT80_32).
You'll need to take care of the CPU registers as documented at the beginning of entry_32.S file.

Why does clearing the interrupt flag cause a segmentation fault in C?

I am learning some basics about assembly and C. For learning purposes I decided to write a simple program that disables interrupts, so that when the user wants to type something in the console he/she can't:
#include <stdio.h>

int main() {
    int a;
    printf("enter your number : ");
    asm ("cli");
    scanf("%d", &a);
    printf("your number is %d\n", a);
    return 0;
}
But when I compile this with GCC and run it, I get a segmentation fault:
Segmentation fault (core dumped)
And when I debug it with gdb I get this message when the program reaches the asm("cli"); line:
Program received signal SIGSEGV, Segmentation fault.
main () at cli.c:6
6 asm ("cli");
This is happening because you can't disable interrupts from a user-space program. All interrupts are under the control of the kernel, so you need to do it from kernel space. Before you do, you need to learn kernel internals first; playing with interrupts is very critical and requires deeper kernel knowledge, as far as I know.
You need to write a kernel module that can interact with user space through /dev/ (or some other) interface. User space code should request kernel module to disable interrupts.
cli is a privileged instruction. It raises a #GP(0) exception "If the CPL is greater (has less privilege) than the IOPL of the current program or procedure". This #GP is what causes Linux to deliver a SIGSEGV to your process.
Under Linux, you could make an iopl(3) system call to raise your IO priv level to match your ring 3 CPL, and then you could disable interrupts from user-space. (But don't do this, it's not supported AFAIK. The intended use-case for iopl is to use in and out instructions from user-space with high port numbers, not cli/sti. x86 just happens to use the same permissions for both.)
You'll probably crash your system if you don't re-enable interrupts right away, or maybe even if you do. Or at least screw up that CPU on a multi-core system. Basically don't do this unless you're ready to press the reset button, i.e. shut down X11, saved your files and run sync. Also remount your filesystems read-only.
Or try it in a virtual machine or simulator like BOCHS that will let you break in with a debugger even while interrupts are disabled. Or try it while booted from a USB stick.
Note that disabling interrupts only disables external interrupts. Software-generated interrupts like int $0x80 are still taken, but making system calls with interrupts disabled is probably an even worse idea. (It might work, though. The kernel saves/restores EFLAGS, so it probably won't return to user-space with interrupts re-enabled. Still, leaving interrupts disabled for a long time is a Bad Thing for interrupt latency.)
If you want to play around with disabling interrupts as a beginner, you should probably do it from a toy boot-sector program that uses BIOS calls for I/O. Or just look in the Linux kernel source for some places where it disables/enables interrupts if you're curious why it might do that.
IMO, "normal" asm in user-space is plenty interesting. With performance counters, you can see the details of how the CPU decodes and executes instructions. See links in the x86 tag wiki for manuals, guides, and performance tuning info.

How does software recognize an interrupt has occurred?

As we know, we write embedded C programs for task management, memory management, ISRs, file systems and so on.
I would like to know: if some task or process is running and at the same time an interrupt occurs, how does the software, process or system come to know that the interrupt has occurred, pause the current task's execution, and start serving the ISR?
Suppose I write code like the below:
// Dummy code
void main()
{
    for (;;)
        printf("\n forever");
}

// Dummy code for an ISR, for understanding
void ISR()
{
    printf("\n Interrupt occurred");
}
In the above code, if an external interrupt occurs, how does main() come to know that the interrupt occurred, so that it would start serving the ISR first?
main doesn't know. You have to execute some system-dependent function in your setup code (maybe in main) that registers the interrupt handler with the hardware interrupt routine/vector, etc.
Whether that interrupt code can execute a C function directly varies quite a lot; runtime conventions for interrupt procedures don't always follow runtime conventions for application code. Usually there's some indirection involved in getting a signal from the interrupt routine to your C code.
Your query: I understood your answer, but I wanted to know how, when an interrupt occurs, the current task's execution gets stopped/paused and the ISR starts executing.
Well, Rashmi, to answer your query, read below.
When the microcontroller detects an interrupt, it stops execution of the program after executing the current instruction. Then it pushes the PC (program counter) onto the stack and loads the PC with the vector location of that interrupt; hence, program flow is directed to the interrupt service routine. On completion of the ISR, the microcontroller pops the stored program counter from the stack and loads it back into the PC; hence, program execution resumes from the next location where it was stopped.
Does that answer your query?
It depends on your target.
For example, the ATMEL mega family uses a pre-processor directive to register the ISR with an interrupt vector. When an interrupt occurs, the corresponding interrupt flag is raised in the relevant status register. If the global interrupt flag is raised, the program counter is stored on the stack before the ISR is called. This all happens in hardware, and the main function knows nothing about it.
In order to allow main to know if an interrupt has occurred, you need to implement a shared data resource between the interrupt routine and your main function, and all the rules from RTOS programming apply here. This means that, as the ISR may be executed at any time, it is not safe to read from a shared resource in main without disabling interrupts first.
On an ATMEL target this could look like:
volatile int shared;

int main() {
    char status_register;
    int buffer;
    while (1) {
        status_register = SREG;
        CLI();
        buffer = shared;
        SREG = status_register;
        // perform some action on the shared resource here.
    }
    return 0;
}

void ISR(void) {
    // update shared resource here.
}
Please note that the ISR is not added to the vector table here. Check your compiler documentation for instructions on how to do that.
Also, an important thing to remember is that ISRs should be very short and very fast to execute.
On most embedded systems the hardware has some specific memory address that the instruction pointer will move to when a hardware condition indicates an interrupt is required.
When the instruction pointer is at this specific location, it will begin to execute the code there.
On a lot of systems the programmer will place only the address of the ISR at this location, so that when the interrupt occurs and the instruction pointer moves to the specific location, it will then jump to the ISR.
Try doing a Google search on "interrupt vectoring".
Interrupt handling is transparent to the running program. The processor branches automatically to a previously configured address, depending on the event, that address being the corresponding ISR function. When returning from the interrupt, a special instruction restores the interrupted program.
Actually, most of the time you won't ever want an interrupted program to know it has been interrupted. If you need to know such info, the program should call a driver function instead.
Interrupts are a hardware thing, not a software thing. When the interrupt signal hits the processor, the processor (generally) completes the current instruction, in some way, shape or form preserves the state (so it can get back to where it was), and in some way, shape or form starts executing the interrupt service routine. The ISR is generally not C code; at least the entry point is usually special, as the processor does not conform to the compiler's calling convention. The ISR might call C code, but then you end up with mistakes like the one you made: making calls like printf that should not be in an ISR. Once in C, it is hard to keep from writing general C code in an ISR rather than the typical get-in-and-get-out type of thing.
Ideally your application-layer code should never know the interrupt happened; there should be no (hardware-based) residuals affecting your program. You may choose to leave something for the application to see, like a counter or other shared data, which you need to mark as volatile so the application and ISR can share it. It is not uncommon to have the ISR simply flag that an interrupt happened, with the application polling that flag/counter/variable and the handling happening primarily in the application, not the ISR. This way the application can make whatever system calls it wants. As long as the overall bandwidth or performance requirement is met, this can and does work as a solution.
To be specific, software doesn't recognize the interrupt; that is the job of the microprocessor's interrupt controller (INTC) or the microcontroller.
An interrupt routine call is just like a normal function call as far as main() is concerned; the only difference is that main doesn't know when the routine will be called.
Every interrupt has a specific priority and vector address. Once the interrupt is received (either software or hardware), then depending on the interrupt priority and mask values, program flow is diverted to the specific vector location associated with that interrupt.
Hope it helps.

What happens in an interrupt service routine?

Can someone please explain to me what happens inside an interrupt service routine (although it depends upon the specific routine, a general explanation is enough)? This has always been a black box for me.
There is a good wikipedia page on interrupt handlers.
"An interrupt handler, also known as an interrupt service routine (ISR), is a callback subroutine in an operating system or device driver whose execution is triggered by the reception of an interrupt. Interrupt handlers have a multitude of functions, which vary based on the reason the interrupt was generated and the speed at which the Interrupt Handler completes its task."
Basically when a piece of hardware (a hardware interrupt) or some OS task (software interrupt) needs to run it triggers an interrupt. If these interrupts aren't masked (ignored) the OS will stop what it's doing and call some special code to handle this new event.
One good example is reading from a hard drive. The drive is slow and you don't want your OS to wait for the data to come back; you want the OS to go and do other things. So you set up the system so that when the disk has the data requested, it raises an interrupt. In the interrupt service routine for the disk the CPU will take the data that is now ready and will return it to the requester.
ISRs often need to happen quickly as the hardware can have a limited buffer, which will be overwritten by new data if the older data is not pulled off quickly enough.
It's also important to have your ISR complete quickly as while the CPU is servicing one ISR other interrupts will be masked, which means if the CPU can't get to them quickly enough data can be lost.
Minimal 16-bit example
The best way to understand is to make some minimal examples yourself.
First learn how to create a minimal bootloader OS and run it on QEMU and real hardware as I've explained here: https://stackoverflow.com/a/32483545/895245
Now you can run in 16-bit real mode:
    movw $handler0, 0x00
    mov %cs, 0x02
    movw $handler1, 0x04
    mov %cs, 0x06
    int $0
    int $1
    hlt
handler0:
    /* Do 0. */
    iret
handler1:
    /* Do 1. */
    iret
This would do in order:
Do 0.
Do 1.
hlt: stop executing
Note how the processor looks for the first handler at address 0, and the second one at 4: that is a table of handlers called the IVT, and each entry has 4 bytes.
Minimal example that does some IO to make handlers visible.
Protected mode
Modern operating systems run in the so called protected mode.
The handling has more options in this mode, so it is more complex, but the spirit is the same.
Minimal example
See also
Related question: What does "int 0x80" mean in assembly code?
While the 8086 is executing a program, an interrupt breaks the normal sequence of instruction execution and diverts execution to another routine called the interrupt service routine (ISR). After it executes, control returns back to the main program.
An interrupt is used to cause a temporary halt in the execution of a program. The microprocessor responds by running the interrupt service routine, which is a short program or subroutine that instructs the microprocessor on how to handle the interrupt.
