Interrupt and DMA, what happens in the background? [closed]

I'm trying to understand how interrupts are handled by the system, and how it works when there is a DMA controller integrated in the system.
I will lay out what I have understood so far, and I would like some feedback on whether I'm right or not.
In order for the system to catch I/O actions performed by some device, the system uses what are called interrupts.
The system sets up interrupts for given actions (for example, typing on a keyboard), and once the action is performed the system catches it.
Now I have some doubts: once we catch an interrupt, what happens in the background? What are the overheads? What does the CPU need to set up? Is there a context switch? How does the interrupt handler work?
The CPU has to do some work in order to handle the interrupt: does it read the device's registers and write the "message" into memory, so that the user can see it?
If we have a DMA controller instead, once the CPU catches the interrupt it doesn't need to handle the memory access for the device, so it can perform other things until the DMA controller interrupts the CPU, telling it that the transfer is completed and that the CPU can safely finish the handling?
As you can see, there is some stuff I need to clarify. I would really appreciate your help. I know that an answer to all these questions could fill a book, but all I need is to know how things are connected, to get an intuition of what's going on behind the scenes so that I can reason about it more easily.

Interrupts are handled by something called Interrupt Service Routines (ISRs). These are functions implemented by the kernel and registered with the hardware. Each type of interrupt is registered with a separate handler.
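In Linux, for instance, a driver registers its ISR with request_irq(). A minimal sketch (the IRQ number and device name here are invented for illustration; real drivers obtain the IRQ from their bus or platform code):

    #include <linux/interrupt.h>

    /* Hypothetical handler: runs in interrupt context when line 42 fires. */
    static irqreturn_t my_isr(int irq, void *dev_id)
    {
        /* acknowledge the device, grab its status, etc. */
        return IRQ_HANDLED;
    }

    static int my_setup_example(void)
    {
        /* Register my_isr for IRQ line 42 under the name "mydev". */
        return request_irq(42, my_isr, 0, "mydev", NULL);
    }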
When the hardware receives an interrupt, it halts the execution of the running process (on that processor), pushes the state of the process (registers, flags, segments) onto the stack, and executes the ISR.
Apart from saving the context, the hardware also does one more important thing. It changes the processor context to privileged mode (a lower ring). This happens, of course, only if the processor is not already in Ring 0 and if privileged operations are required by the ISR. There is a flag in the Interrupt Descriptor Table (IDT) which tells the processor whether it is a user mode exception or a privileged mode exception.
Since these ISRs are written by the kernel, they are trusted. Each ISR performs whatever is required; for example, in the case of a keyboard interrupt, it moves the bytes read into the input stream of the foreground process.
After the ISR is done (signaled by an iret instruction on x86), the state of the program is popped off and the execution of the process continues.
Yes, this can be thought of as a context switch, but it really isn't one, since no other process is loaded. It can be thought of as a pause until a more important job is done.
While this has some overhead, it is not much in a case like keyboard interrupts: the ISRs are very small, and the interrupts themselves are relatively infrequent.
But say there is a device that does jobs at a very regular interval, like a disk or a network card. In this case, interrupting again and again would be very costly.
So what we use is DMA (direct memory access). The processor allocates some physical memory to such devices. They can access this part of the RAM without halting the processor, since the processor's intervention is not required.
They keep doing all the I/O they need to, and in the end, when the job is done (or if it fails), they signal the processor with a single interrupt.
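As a rough sketch of that flow (the register names and addresses below are invented for illustration; real DMA controllers have their own layouts):

    #include <stdint.h>

    /* Hypothetical memory-mapped DMA controller registers. */
    #define DMA_SRC   (*(volatile uint32_t *)0x40001000)
    #define DMA_DST   (*(volatile uint32_t *)0x40001004)
    #define DMA_LEN   (*(volatile uint32_t *)0x40001008)
    #define DMA_CTRL  (*(volatile uint32_t *)0x4000100C)

    void start_transfer(uint32_t src, uint32_t dst, uint32_t len)
    {
        DMA_SRC  = src;
        DMA_DST  = dst;
        DMA_LEN  = len;
        DMA_CTRL = 1;               /* go; the CPU is now free for other work */
    }

    void dma_complete_isr(void)     /* fires once, when the whole block is done */
    {
        /* check status, hand the finished buffer to the waiting code */
    }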

Related

How could we sleep when we are executing a syscall that executes in interrupt mode

When I am executing a system call such as write, the ISR corresponding to the exception executes in interrupt mode (on Cortex-M3 the IPSR register has a non-zero value, 0xb). And what I have learned is that when we execute code in interrupt mode we cannot sleep and we cannot use functions that might block ...
My question is: is there any mechanism by which the ISR could still execute in interrupt mode and at the same time use functions that might block, or is there some kind of trick that is implemented?
Caveat: This is more of a comment than an answer but is too big to fit in a comment or series of comments.
TL;DR: Needing to sleep or execute a blocking operation from an ISR is a fundamental misdesign. This seems like an XY problem: https://meta.stackexchange.com/questions/66377/what-is-the-xy-problem
Doing sleep [even in application code] is generally a code smell. Why do you feel you need to sleep [vs. some event/completion driven mechanism]?
Please edit your question and add clarification [i.e. don't just add comments]
When I am executing a system call to do write or something else
What is your application doing? A write to what device? What "something else"?
What is the architecture/board, kernel/distro? (e.g. Raspberry Pi running Raspbian? Nvidia Jetson? Beaglebone? Xilinx FPGA with petalinux?)
What is the H/W device and where did the device driver come from? Did you write the device driver yourself or is it a standard one that comes with the kernel/distro? If you wrote it, please post it in your question.
Is the device configured properly? (e.g.) Are the DTB entries correct?
Is the device a block device, such as a disk controller? Or, is it a character device, such as a UART? Does the device transfer data via DMA? Or, does it transfer data by reading/writing to/from an IO port?
What do you mean by "exception"? Generally, exception is an abnormal condition (e.g. segfault, bus error, etc.). Please describe the exact context/scenario for which this occurs.
Generally, an ISR does little things. (e.g.) Grab [and save] status from the device. Clear/rearm the interrupt in the interrupt controller. Start the next queued transfer request. Wake up the sleeping base level task (usually the task that executed the syscall [waiting on a completion event in kernel mode]).
More elaborate actions are generally deferred and handled in the interrupt's "bottom half" handler and/or tasklet. Or, the base level is woken up and it handles the remaining processing.
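(As an aside: in Linux this top/bottom split can be expressed directly with a threaded IRQ, where the thread function runs in a context that is allowed to sleep. A minimal sketch, with made-up device names:)

    #include <linux/interrupt.h>

    /* Hard (top-half) handler: runs in interrupt mode, must not sleep. */
    static irqreturn_t mydev_hard(int irq, void *dev)
    {
        /* grab/clear device status ... */
        return IRQ_WAKE_THREAD;      /* defer the rest to the thread */
    }

    /* Threaded (bottom-half) handler: runs in a kernel thread, may sleep. */
    static irqreturn_t mydev_thread(int irq, void *dev)
    {
        /* blocking operations are allowed here */
        return IRQ_HANDLED;
    }

    /* Registered with:
     * request_threaded_irq(irq, mydev_hard, mydev_thread,
     *                      IRQF_ONESHOT, "mydev", NULL);
     */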
What kernel subsystems are involved? Are you using platform drivers? Are you interfacing from within the DMA device driver framework? Are message buses involved (e.g. I2C, SPI, etc.)?
Interrupt and device handling in the linux kernel is somewhat different than what one might do in a "bare metal" system or RTOS (e.g. FreeRTOS). So, if you're coming from those environments, you'll need to think about restructuring your driver code [and/or application code].
What are your requirements for device throughput and latency?
You may wish to consult a good book on linux device driver design. And, you may wish to consult the kernel's Documentation subdirectory.
If you're able to provide more information, I may be able to help you further.
UPDATE:
A system call is not really in the same class as a hardware interrupt as far as the kernel is concerned, even if the CPU hardware uses the same sort of exception vector mechanisms for handling both hardware and software interrupts. The kernel treats the system call as a transition from user mode to kernel mode. – Ian Abbott
This is a succinct/great explanation. The "mode" or "context" has little to do with how we got/get there from a H/W mechanism.
The CPU doesn't really "understand" interrupt mode [as defined by the kernel]. It understands "supervisor" vs "user" privilege level [sometimes called "mode"].
When executing at user privilege level, an interrupt/exception will cause a transition from "user" level to "supervisor" level. The CPU may have a special register that specifies the address of the [initial] supervisor stack pointer. Atomically, it swaps in that value, pushing the user SP onto the new kernel stack.
If the interrupt is interrupting a CPU that is already at supervisor level, the existing [supervisor] SP will be used unchanged.
Note that x86 has privilege "ring" levels. User mode is ring 3 and the highest [most privileged] level is ring 0. For arm, some arches can have a "hypervisor" privilege level [which is higher privilege than "supervisor" privilege].
The setup of the mode/context is handled in arch/arm/kernel/entry-*.S code.
An svc is a synchronous interrupt [generated by a special CPU instruction]. The resulting context is the context of the currently executing thread. It is analogous to "call function in kernel mode". The resulting context is "kernel thread mode". At that point, it's not terribly useful to think of it as an "interrupt" anymore.
In fact, on some arches, the syscall instruction/mechanism doesn't use the interrupt vector table. It may have a fixed address or use a "call gate" mechanism (e.g. x86).
Each thread has its own stack which is different than the initial/interrupt stack.
Thus, once the entry code has established the context/mode, it is not executing in "interrupt mode". So, the full range of kernel functions is available to it.
An interrupt from a H/W device is asynchronous [may occur at any time the CPU's internal interrupt enable flag is set]. It may interrupt a userspace application [executing in application mode] OR kernel thread mode OR an existing ISR executing in interrupt mode [from another interrupt]. The resulting ISR is executing in "interrupt mode" or "ISR mode".
Because the ISR can interrupt a kernel thread, it may not do certain things. For example, if the CPU were in [kernel] thread mode, and it was in the middle of a kmalloc call [GFP_KERNEL], the ISR would see partial state and any action that tried to adjust the kernel's heap would result in corruption of the heap.
This is a design choice by linux for speed.
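(This is also why, for example, an allocation made from interrupt context must use the GFP_ATOMIC flag, which fails rather than sleeps; a one-line illustration:)

    /* In an ISR: must not sleep, so GFP_KERNEL is forbidden here. */
    buf = kmalloc(len, GFP_ATOMIC);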
Kernel ISRs may be classified as "fast interrupts". The ISR executes with the CPU interrupt enable [IE] flag cleared. No normal H/W interrupt may interrupt the ISR.
So, if another H/W device asserts its interrupt request line [in the external interrupt controller], that request will be "pending". That is, the request line has been asserted but the CPU has not acknowledged it [and the CPU has not jumped via the interrupt table].
The request will remain pending until the CPU allows further interrupts by asserting IE. Or, the CPU may clear the pending interrupt without taking action by clearing the pending interrupt in the interrupt controller.
For a "slow" interrupt ISR, the interrupt entry code will clear the interrupt in the external interrupt controller. It will then rearm interrupts by setting IE and call the ISR. This ISR can be interrupted by other [higher priority] interrupts. The result is a "stacked" interrupt.
I have been searching all over the place, and the conclusion I have come to is that interrupts have a higher priority than some exceptions in the Linux kernel.
An exception [synchronous interrupt] can be interrupted if the IE flag is enabled. An svc is treated differently but after the entry code is executed, the IE flag is set, so the actual syscall code [executing in kernel thread mode] can be interrupted by a H/W interrupt.
Or, in limited circumstances, the kernel code can generate an exception (e.g. a page fault caused by a kernel action [which is usually deemed fatal]).
but I am still looking at how exactly the context switching happens when executing an exception, and how the processor is allowed to execute in thread mode while the SVCall exception is pending (it was preempted and has not returned yet)... I think when I understand that, things will be clearer to me.
I think you have to be very careful with the terminology. In particular, when combining terms from disparate sources. Although user mode, kernel thread mode, or interrupt mode can be considered a context [in the dictionary sense of the word], context switching usually means that the current thread is suspended, the scheduler selects a new thread to run and resumes it. That is separate from the user-to-kernel transition.
And if there are any recommended resources about that for ARM Cortex-M3/4, that would be nice.
Here is something: https://interrupt.memfault.com/blog/arm-cortex-m-exceptions-and-nvic But, be very careful in applying the terminology therein. What it considers "pending" only exists in the kernel during the entry code. What is more relevant is what the kernel does to set up mode/context and the terms are not equivalent.
So, from the kernel's standpoint, it's probably better to not consider an svc as "pending".

What's wrong with using interrupt handlers as event listeners?

My system is simple enough that it runs without an OS; I simply use interrupt handlers like I would use event listeners in a desktop program. In everything I read online, people try to spend as little time as they can in interrupt handlers and give control back to the tasks. But I don't have an OS or a real task system, and I can't really find design information on OS-less targets.
I have basically one interrupt handler that reads a chunk of data from the USB and writes the data to memory, and one interrupt handler that reads the data, sends it out on GPIO, and schedules itself on a hardware timer again.
What's wrong with using interrupts the way I do, and using the NVIC (I use a Cortex-M3) to manage the work hierarchy?
First of all, in the context of this question, let's refer to the OS as a scheduler.
Now, unlike threads, interrupt service routines are "above" the scheduling scheme.
In other words, the scheduler has no "control" over them.
An ISR enters execution as a result of a HW interrupt, which sets the PC to a different address in the code-section (more precisely, to the interrupt-vector, where you "do a few things" before calling the ISR).
Hence, essentially, the priority of any ISR is higher than the priority of the thread with the highest priority.
So one obvious reason to spend as little time as possible in an ISR, is the "side effect" that ISRs have on the scheduling scheme that you design for your system.
Since your system is purely interrupt-driven (i.e., no scheduler and no threads), this is not an issue.
However, if nested ISRs are not allowed, then interrupts must be disabled from the moment an interrupt occurs until the corresponding ISR has completed. In that case, if any interrupt occurs while an ISR is in execution, your program will effectively ignore it.
So the longer you spend inside an ISR, the higher the chances are that you'll "miss out" on an interrupt.
In many desktop programs, events are sent to a queue and there is some "event loop" that handles this queue. This event loop handles events one by one, so it is not possible for one event to interrupt another. It is also good practice in event-driven programming to keep all event handlers as short as possible, because they are not interruptible.
In bare-metal programming, interrupts are similar to events, but they are not sent to a queue:
execution of interrupt handlers is not sequential; they can be interrupted by an interrupt with higher priority (numerically lower number on Cortex-M3)
there is no queue of identical interrupts - e.g. you can't detect multiple GPIO interrupts while you are inside that interrupt's handler - and this is the reason you should keep all routines as short as possible.
It is possible to implement queues yourself, feed these queues from interrupts, and consume them in your super loop (briefly disabling interrupts while consuming). With this approach, you get sequential processing of interrupts. If you keep your handlers short, this is mostly not needed and you can do the work directly in the handlers. A sketch of the queue pattern follows.
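A minimal sketch of that queue pattern in generic C (the event source and the interrupt enable/disable calls are placeholders; supply whatever your platform provides):

    #include <stdint.h>

    extern uint8_t read_event(void);        /* placeholder: read the device */
    extern void handle_event(uint8_t ev);   /* placeholder: the real work   */
    extern void disable_interrupts(void);   /* e.g. __disable_irq() on ARM  */
    extern void enable_interrupts(void);    /* e.g. __enable_irq() on ARM   */

    #define QSIZE 64u
    static volatile uint8_t queue[QSIZE];
    static volatile uint8_t head, tail;     /* ISR writes head, loop reads tail */

    void some_isr(void)                     /* hypothetical interrupt handler */
    {
        uint8_t next = (uint8_t)((head + 1u) % QSIZE);
        if (next != tail)                   /* drop the event if queue is full */
        {
            queue[head] = read_event();
            head = next;
        }
    }

    void super_loop(void)
    {
        for (;;)
        {
            disable_interrupts();           /* keep this critical section short */
            while (tail != head)
            {
                uint8_t ev = queue[tail];
                tail = (uint8_t)((tail + 1u) % QSIZE);
                enable_interrupts();
                handle_event(ev);           /* runs with interrupts enabled */
                disable_interrupts();
            }
            enable_interrupts();
        }
    }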
It is also good practice in OS-based systems to use queues, semaphores, and "interrupt handler tasks" to handle interrupts.
With bare metal it is perfectly fine to design to be application-bound or interrupt/event-bound, so long as you do your analysis. So if you know what events/interrupts are coming at what rate, and you can ensure that you will handle all of them in the desired/designed amount of time, you can certainly take your time in the event/interrupt handler rather than be quick and send a flag to the foreground task.
The common approach of course is to get in and out fast, saving just enough info to handle the thing in the foreground task. The foreground task has to spin its wheels, of course, looking for event flags, prioritizing, etc.
You could of course make it more complicated: when the interrupt/event comes, save state and return to the foreground handler in foreground mode rather than interrupt mode.
Now that is all general, but specific to the Cortex-M3, I don't think there are really modes like on the big-brother ARMs. So long as you take a real-time approach, make sure your handlers are deterministic, and do your system engineering to ensure that no situation arises where the events/interrupts stack up such that the response is non-deterministic, too late, too long, or loses data, it is okay.
What you have to ask yourself is whether all events can be serviced in time under all circumstances.
For example:
If your interrupt system were run-to-completion, will the servicing of one interrupt cause unacceptable delay in the servicing of another?
On the other hand, if the interrupt system is priority-based and preemptive, will the servicing of a high priority interrupt unacceptably delay a lower one?
In the latter case, you could use Rate Monotonic Analysis to assign priorities to assure the greatest responsiveness (the shortest execution-time handlers get the highest priority). In the first case your system may lack a degree of determinism, and performance will be variable under both event load, and code changes.
One approach is to divide the handler into time-critical and non-critical sections: the time-critical code is done in the handler, then a flag is set to prompt the non-critical action to be performed in the "background" non-interrupt context, in a "big-loop" system that simply polls event flags or shared data for work to complete. Often all that might be necessary in the interrupt handler is to copy some data or timestamp some event, making data available for background processing without holding up the processing of new events. A sketch of this split follows.
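A minimal sketch of that split, with invented names (a timer read in the ISR, with the rest deferred to the big loop):

    #include <stdint.h>

    extern uint32_t read_timer(void);       /* placeholder: per-platform   */
    extern void process_event(uint32_t t);  /* the non-critical work       */

    static volatile uint32_t event_time;
    static volatile int event_flag;         /* set by ISR, cleared by loop */

    void event_isr(void)                    /* time-critical part only */
    {
        event_time = read_timer();          /* timestamp immediately */
        event_flag = 1;                     /* prompt the background work */
    }

    void background_loop(void)
    {
        for (;;)
        {
            if (event_flag)
            {
                event_flag = 0;
                process_event(event_time);  /* non-critical, interruptible */
            }
        }
    }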
For more sophisticated scheduling, there are a number of simple, low-cost or free RTOS schedulers that provide multi-tasking, synchronisation, IPC and timing services with very small footprints and can run on very low-end hardware. If you have a hardware timer and 10K of code space (sometimes less), you can deploy an RTOS.
Taking your described problem first:
As I interpret it, your goal is to create a device which, by receiving commands from the USB, drives some GPIO outputs, such as LEDs, relays, etc. For this simple task, your approach seems to be fine (if the USB layer can work with it adequately).
A prioritization problem exists, though: it may be that if you overload the USB side (with data from the other end of the cable), and the interrupt handling it has higher priority than the timer interrupt handling the GPIO, the GPIO side may miss ticks (as others explained, interrupts can't queue).
In your case this is about all that needs to be considered.
Some general guidance
For the "spend as little time in the interrupt handler as possible" the rationale is just what others told: an OS may realize a queue, etc., however hardware interrupts offer no such concepts. If the event causing the interrupt happens, the CPU enters your handler. Then until you handle it's source (such as reading a receive holding register in the case of a UART), you lose any further occurrences of that event. After this point, until exiting the handler, you may receive whether the event happened, but not how many times (if the event happened again while the CPU was still processing the handler, the associated interrupt line goes active again, so after you return from the handler, the CPU immediately re-enters it provided nothing higher priority is waiting).
Above I described the general concept as observable on 8-bit processors and the 32-bit AVR (I have experience with these).
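On an 8-bit AVR, for instance, "dealing with the source" looks like this (ATmega-style register names; a sketch, not a complete driver):

    #include <avr/io.h>
    #include <avr/interrupt.h>

    static volatile uint8_t last_byte;

    /* Reading UDR0 is what clears the receive-complete condition; until
     * we do so, further received bytes would be lost (the hardware keeps
     * only a flag, not a count). */
    ISR(USART_RX_vect)
    {
        last_byte = UDR0;   /* handle the source as quickly as possible */
    }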
When designing such low-level systems (no OS, one "background" task, and some interrupts) it is fundamental to understand what goes on at each priority level (if you use such levels). In general, you would make the most real-time-critical tasks the highest priority, taking the most care to serve those fast, while being more relaxed about the lower priority levels.
From another aspect, it can usually be planned at the design phase how the system should react to missed interrupts, since where there are interrupts, missing one will eventually happen anyway. Critical data going across communication lines should have adequate checksums, an especially critical timer should be sourced from a count register rather than from event counting, and the like.
Another nasty part of interrupts is their asynchronous nature. If you fail to design the related locks properly, they will eventually corrupt something, giving nightmares to the poor soul who has to debug it. The "spend as little time in the interrupt handler as possible" statement also encourages you to keep the interrupt code reasonably short, which means less code to consider for this problem as well. If you have also worked with multitasking assisted by an RTOS, you should know this part (there are some differences, though: a higher-priority interrupt handler's code does not need protection against a lower-priority handler's).
If you can properly design your architecture regarding the necessary asynchronous tasks, getting along without an OS (from the no-multitasking aspect) may even prove to be the nicer solution. It needs much more thinking to design properly, but later there are far fewer locking-related problems. I went through some mid-sized safety-critical projects designed around a single background "task" with very few, very small interrupts, and the experience and maintenance demands regarding those (especially the tracing of bugs) were quite satisfactory compared to some other projects in the company built on multitasking concepts.

Is the abrupt ending of a process using Control + C a trap or an interrupt?

It seems as though the difference between a trap and an interrupt is clear: a trap is a software-invoked call to the kernel (such as through an exception) and an interrupt is pertinent to the hardware (the disk, I/O and peripheral devices such as the mouse and the keyboard...) (learn more about the difference here).
Knowing this, under what category should pressing Control + C to end a process be classified? Is it a software-invoked call and thus a trap since it can be executed from the Shell, etc. or is it an interrupt since it's a signal that the CPU receives from the keyboard? Or are interrupts wholly outside users' domain, meaning that it's the hardware interacting with the CPU at a level that the user cannot reach?
Thank you!
It's first and foremost a signal — pressing Control-C causes the kernel to send a signal (of type SIGINT) to the current foreground process. If that process hasn't set up a handler for that signal (using one of the system calls from the signal() family), it causes the process to be killed.
The signal is, I suppose, the "interrupt" signal, but this is unrelated to hardware interrupts. Those are only used internally by the kernel.
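For example, a process can install its own SIGINT handler so that Control-C no longer kills it. A small self-contained sketch in portable C (the flag-plus-pause structure is just one common pattern):

    #include <signal.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    static volatile sig_atomic_t got_sigint;

    static void on_sigint(int signo)
    {
        (void)signo;
        got_sigint = 1;     /* just set a flag; do the real work outside */
    }

    int main(void)
    {
        struct sigaction sa;
        memset(&sa, 0, sizeof sa);
        sa.sa_handler = on_sigint;
        sigaction(SIGINT, &sa, NULL);

        while (!got_sigint)
            pause();        /* sleep until a signal arrives */

        printf("caught Control-C, cleaning up\n");
        return 0;
    }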
The difference between a trap and an interrupt is not as you described in your question (thanks for the reference) but lies in the asynchronous nature of the events producing them. A trap means an interruption of the code execution due to a normally incorrect/internal operation (like dividing by zero or a page fault, or, as you noted, a software interrupt), but it always occurs at the same place in the code (synchronously with the code execution), while an interrupt occurs because of external hardware, when some device signals the CPU to interrupt what it is doing because, say, it is ready to send some data. By nature, traps are synchronous and interrupts aren't.
That said, both are anomalous events that change the normal course of execution of the CPU. Both are produced by hardware, but for different reasons: the first occurs synchronously (you always know when, at which instruction, it will be produced, if produced at all) and the second does not (you don't know in advance which instruction will be executing when the external hardware asserts the interrupt line). Also, there are two kinds of traps (depending on the event that triggered them): one leaves the instruction pointer pointing to the next instruction to be executed (for example, a divide-by-zero trap), and the other leaves it pointing to the same instruction that caused the trap (for example, a page fault, where the instruction has to be re-executed once the cause of the trap has been corrected). Of course, software interrupts are, by their nature, always traps, since the exact point in the program flow where the CPU will be interrupted can be predicted.
So, with this explanation, you can probably answer your question yourself: a Ctrl-C interrupt is an interrupt, as you cannot predict in advance when it will interrupt the CPU's execution, and you cannot mark that point in your code.
Remember: interrupts occur asynchronously, traps do not.
Pressing Ctrl+C on Linux systems kills a process with the signal SIGINT, which can be intercepted by the program so it can clean itself up before exiting, or not exit at all.
Had it been a trap, the process would have died instantly!
Hence, it is a kind of software interrupt!
Control-C is not an interrupt... at least not on the PC (and now MAC) hardware. In other words, the keyboard controller doesn't generate a specific interrupt for the key combination "control" and "C".
The keyboard uses only one interrupt vector, which is triggered on a key down and a key up. The keyboard is an extremely slow hardware device. With the key repeat rate set to the fastest, holding down a key generates 33 interrupts per second.
If the designers of the operating system believe that Control-C is extremely important, they may include the test "is this the key down for 'C', AND did the 'control' key trigger a keyboard interrupt some billions of machine cycles ago?" Then, while still processing the keyboard interrupt, they would generate a trap using a software interrupt instruction.
A better operating system would reduce the processing time of the keyboard interrupt to the strict minimum. It would just append to a circular buffer (ring buffer) the key code, which includes the pressed/released bit, and immediately terminate the interrupt.
The operating system would then, whenever it has time, notice the change in the ring buffer pointer. That would trigger the code which extracts the key code from the ring buffer, verifies whether that code represents the "Ctrl-C" combination, and sets a flag saying "Ctrl-C detected".
Finally, when the scheduler is ready to run a thread that belongs to the current process, it checks the "Ctrl-C detected" flag. If it is set, the scheduler sets the PC to point to the SIGINT routine instead of resuming at the previous execution address.
No matter the details, "Ctrl-C" cannot be an interrupt. It is either a trap, if invoked from the keyboard interrupt, or a synchronization object tested asynchronously by the scheduler.

Context switching in function vs interrupt call? [closed]

I understand the basic difference between a function call and an interrupt (ISR) jump from the SE question below.
difference between function call & ISR
But I am still not clear about which registers will be pushed to and popped from the stack in each case. How does context switching happen in each case? As we don't know when an interrupt will occur, what do we need to save (variables, PC, flags (PSW), registers, context) before entering the ISR?
How can we resume the original context without any data loss in a multi-threaded environment?
I tried to google it & I found the needed information from this:
Interrupts
Context Switch
Thanks @Drew McGowen
So to sum up, the general sequence for an interrupt is as follows:
Foreground code is running, interrupts are enabled
Interrupt event sends an interrupt request to the CPU
After completing the current instruction(s), the CPU begins the interrupt response
automatically saves current program counter
automatically saves some status (depending on CPU)
jump to correct interrupt service routine for this request
ISR code saves any registers and flags it will modify
ISR services the interrupt and re-arms it if necessary
ISR code restores any saved registers and flags
ISR executes a return-from-interrupt instruction or sequence
return-from-interrupt instruction restores automatically-saved status
return-from-interrupt instruction recovers saved program counter
Foreground code continues to run from the point it responded to the interrupt
As usual, the details of this process will depend on the CPU design. Many devices use the hardware stack for all saved data, but RISC designs typically save the PC in a register (the link register). Many designs also have separate duplicate registers that can be used for interrupt processing, thus reducing the amount of state data that must be saved and restored.
Note that saving and restoring the foreground code state is generally a two-step process for reasons of efficiency. The hardware response to the interrupt automatically saves the most essential state, but the first lines of ISR code are usually dedicated to saving additional state (usually in the form of saving condition flags, if not saved by the hardware, along with saving additional registers). This two-step process is used because every ISR has different requirements for the number of registers it needs, and thus every ISR may need to save different registers, and different numbers of registers, ensuring all appropriate state data is saved without wasting time saving registers unnecessarily (that is, saving registers that are not modified in the ISR and thus didn't need to be saved). A very simple ISR may not need to use any registers, another ISR may need to use only one or two registers, while a more complicated ISR may need to use a large number of registers. In every case, the ISR should only save and restore those registers it actually uses.
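On ARM Cortex-M parts, for example, the hardware step stacks r0-r3, r12, lr, pc and xPSR automatically, which is why a plain C function can serve as an ISR; the compiler then saves only the extra callee-saved registers that the particular handler actually uses. A sketch using a common CMSIS handler name:

    #include <stdint.h>

    volatile uint32_t tick;   /* shared with foreground code */

    /* The hardware has already stacked r0-r3, r12, lr, pc and xPSR on
     * entry; this tiny handler needs nothing more saved by the compiler. */
    void SysTick_Handler(void)
    {
        tick++;
    }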
I'm sure there are different implementations based on the CPU you're using. On a general level, function calls store the input parameters in the out registers (%o0-%o5) on SPARC, and they are available in the in registers (%i0-%i5) in the callee's function. The callee function then places the return value in the %i0 register, to be available in the %o0 register for the caller function. According to the SPARC Manual, each interrupt is:
Accompanied by data, referred to as an “interrupt packet”.
An interrupt packet is 64 bytes long, consisting of eight 64-bit doublewords.
According to this source, the data in your current executing thread is:
Saved either on a stack (PDP-11, VAX, MIPS, x86_64)
or in a single set of dedicated registers (ARM, PowerPC)
The above source also mentions how interrupts are handled on a couple of different architectures.
Please let me know if you have any questions!

How to create an interrupt table

I have a homework assignment for my Operating Systems class where I need to write an interrupt table for a simulated OS. I already have, from a previous assignment, the appropriate drivers all set up:
My understanding is that I should have an array of interrupt types, along the lines of interrupt_table[x], where x = 0 for a trap, x = 1 for a clock interrupt, etc. The interrupt_table should contain pointers to the appropriate handlers for each type of interrupt, which should then call the appropriate driver? Am I understanding this correctly? Could anyone point me in the right direction for creating those handlers?
Thanks for the help.
Most details about interrupt handlers vary with the OS. The only thing that's close to universal is that you typically want to do as little as you can reasonably get away with in the interrupt handler itself. Typically, you just acknowledge the interrupt, record enough about the input to be able to deal with it when you're ready, and return. Everything else is done separately.
Your understanding sounds pretty good.
Just how simulated is this simulated OS? If it runs entirely on a 'machine' of your professor's own design, then doubtless she's given some specifications about what interrupts are provided, how to probe for interrupts that may be there, and what sorts of tasks interrupt handlers should do.
If it is for a full-blown x86 computer or something similar, perhaps the Linux arch/x86/pci/irq.c can provide you with tips.
What you do upon receiving an interrupt depends on the particular interrupt. The rule of thumb is to find out what is critical and needs to be attended to for that particular interrupt, then do "just" that (nothing more, nothing less) and get out of the handler as soon as possible. Also, interrupt handlers are just a small part of your driver (that is how you should design it). For example, if you receive an interrupt for an incoming byte on some serial port, then you just read the byte off the in-register, store it in some "volatile" variable, wind things up, and get out of the handler. The rest (like what you will do with the incoming byte from the serial port) can be handled in the driver code.
The rule of thumb remains: "nothing more, nothing less".
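Putting the pieces together, a table of handler pointers for the simulated OS might look like this (the handler names and the numbering are illustrative, following the x = 0 trap / x = 1 clock scheme from the question):

    #include <stddef.h>

    typedef void (*isr_t)(void);

    /* Illustrative handlers; each would call into the matching driver. */
    static void trap_handler(void)  { /* dispatch to the trap driver  */ }
    static void clock_handler(void) { /* dispatch to the clock driver */ }
    static void disk_handler(void)  { /* dispatch to the disk driver  */ }

    #define NUM_INTERRUPTS 3
    static isr_t interrupt_table[NUM_INTERRUPTS] = {
        trap_handler,     /* x = 0: trap            */
        clock_handler,    /* x = 1: clock interrupt */
        disk_handler,     /* x = 2: disk interrupt  */
    };

    /* The simulator calls this with the number of the interrupt it raised. */
    void dispatch_interrupt(unsigned x)
    {
        if (x < NUM_INTERRUPTS && interrupt_table[x] != NULL)
            interrupt_table[x]();
    }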
