What is the difference between FIQ and IRQ interrupt system? - arm

I want to know the difference between the FIQ and IRQ interrupt systems in
any microprocessor, e.g. the ARM926EJ.

ARM calls FIQ the fast interrupt, with the implication that IRQ is normal priority. In any real system, there will be many more sources of interrupts than just two devices and there will therefore be some external hardware interrupt controller which allows masking, prioritization etc. of these multiple sources and which drives the interrupt request lines to the processor.
To some extent, this makes the distinction between the two interrupt modes redundant and many systems do not use nFIQ at all, or use it in a way analogous to the non-maskable (NMI) interrupt found on other processors (although FIQ is software maskable on most ARM processors).
So why does ARM call FIQ "fast"?
FIQ mode has its own dedicated banked registers, r8-r14. R14 is the link register, which holds the return address (+4) from the FIQ. If your FIQ handler can be written so that it only uses r8-r13, it can take advantage of these banked registers in two ways:
One is that it does not incur the overhead of pushing and popping any registers that are used by the interrupt service routine (ISR). This can save a significant number of cycles on both entry and exit to the ISR.
Also, the handler can rely on values persisting in registers from one call to the next, so that for example r8 may be used as a pointer to a hardware device and the handler can rely on the same value being in r8 the next time it is called.
FIQ location at the end of the exception vector table (0x1C) means that if the FIQ handler code is placed directly at the end of the vector table, no branch is required - the code can execute directly from 0x1C. This saves a few cycles on entry to the ISR.
FIQ has higher priority than IRQ. This means that when the core takes an FIQ exception, it automatically masks out IRQs. An IRQ cannot interrupt the FIQ handler. The opposite is not true - the IRQ does not mask FIQs and so the FIQ handler (if used) can interrupt the IRQ. Additionally, if both IRQ and FIQ requests occur at the same time, the core will deal with the FIQ first.
So why do many systems not use FIQ?
FIQ handler code typically cannot be written in C - it needs to be written directly in assembly language. If you care sufficiently about ISR performance to want to use FIQ, you probably wouldn't want to leave a few cycles on the table by coding in C in any case, but more importantly the C compiler will not produce code that follows the restriction on using only registers r8-r13. Code produced by a C compiler compliant with ARM's ATPCS procedure call standard will instead use registers r0-r3 for scratch values and will not produce the correct cpsr restoring return code at the end of the function.
All of the interrupt controller hardware is typically on the IRQ pin. Using FIQ only makes sense if you have a single highest priority interrupt source connected to the nFIQ input and many systems do not have a single permanently highest priority source. There is no value connecting multiple sources to the FIQ and then having software prioritize between them as this removes nearly all the advantages the FIQ has over IRQ.

FIQ or fast interrupt is often referred to as Soft DMA in some ARM references. Features of the FIQ are:
Separate mode with banked registers, including the stack pointer, link register and R8-R12.
Separate FIQ enable/disable bit.
Position at the tail of the vector table (which is always in cache and mapped by the MMU).
The last feature also gives a slight advantage over an IRQ which must branch.
A speed demo in 'C'
Some have cited the difficulty of coding in assembler to handle the FIQ. gcc has annotations for coding a FIQ handler. Here is an example:
void __attribute__ ((interrupt ("FIQ"))) fiq_handler(void)
{
    /* registers set previously by FIQ setup. */
    register volatile char *src asm ("r8");  /* A source buffer to transfer. */
    register char *uart asm ("r9");          /* Pointer to uart tx register. */
    register int size asm ("r10");           /* Size of buffer remaining.    */

    if (size--) {
        *uart = *src++;
    }
}
This translates to the following almost good assembler,
00000000 <fiq_handler>:
0: e35a0000 cmp sl, #0
4: e52d3004 push {r3} ; use r11, r12, etc as scratch.
8: 15d83000 ldrbne r3, [r8]
c: 15c93000 strbne r3, [r9]
10: e49d3004 pop {r3} ; same thing.
14: e25ef004 subs pc, lr, #4
The assembler routine at 0x1c might look like,
    cmp    r10, #0      ; counter zero?
    ldrbne r11, [r8]    ; get character.
    subne  r10, #1      ; decrement count.
    strbne r11, [r9]    ; write to uart.
    subs   pc, lr, #4   ; return from FIQ.
A real UART probably has a ready bit, but the code to make a high-speed soft DMA with the FIQ would only be 10-20 instructions. The main code needs to poll the FIQ-banked r10 to determine when the buffer is finished. Main (non-interrupt) code can set up the banked FIQ registers by using the msr instruction to switch to FIQ mode and then transferring values from the non-banked R0-R7 into the banked R8-R13 registers.
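For example, a minimal sketch of that setup sequence, assuming ARM state and a privileged caller; the function name fiq_setup and the choice of r8-r10 simply mirror the handler above and are not from any particular vendor API:
static void fiq_setup(volatile char *src, volatile char *uart, int size)
{
    register volatile char *s asm ("r0") = src;   /* keep inputs in unbanked r0-r2 */
    register volatile char *u asm ("r1") = uart;
    register int n asm ("r2") = size;

    asm volatile(
        "mrs  r4, cpsr        \n"   /* save the current mode/flags             */
        "msr  cpsr_c, #0xD1   \n"   /* FIQ mode (0x11) with IRQ and FIQ masked */
        "mov  r8, r0          \n"   /* banked r8  = source pointer             */
        "mov  r9, r1          \n"   /* banked r9  = uart tx register           */
        "mov  r10, r2         \n"   /* banked r10 = remaining byte count       */
        "msr  cpsr_c, r4      \n"   /* restore the original mode               */
        : : "r" (s), "r" (u), "r" (n)
        : "r4", "r8", "r9", "r10", "memory");
}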
Typically RTOS interrupt latency will be 500-1000 instructions. For Linux, it may be 2000-10000 instructions. Real DMA is always preferable; however, for high-frequency simple interrupts (like a buffer transfer), the FIQ can provide a solution.
As the FIQ is about speed, you shouldn't consider it if you aren't confident coding in assembler (or willing to dedicate the time). Hand-written assembler from a programmer with unlimited time will beat compiler output, and having GCC assist can help a novice.
Latency
As the FIQ has a separate mask bit, it can be left enabled almost everywhere. On earlier ARM CPUs (such as the ARM926EJ), some atomic operations had to be implemented by masking interrupts. Even with the most advanced Cortex CPUs, there are occasions where an OS will mask interrupts. Often what matters for an interrupt is not the service time but the time between signalling and servicing, and here the FIQ also has an advantage.
Weakness
The FIQ is not scalable. In order to use multiple FIQ sources, the banked registers must be shared among interrupt routines. Also, code must be added to determine what caused the interrupt/FIQ. The FIQ is generally a one trick pony.
If your interrupt is highly complex (network driver, USB, etc.), then the FIQ probably makes little sense; this is basically the same statement as multiplexing the interrupts. The banked registers give 6 free variables to use which never load from memory. Registers are faster than memory. Registers are faster than L2 cache. Registers are faster than L1 cache. Registers are fast. If you cannot write a routine that runs with 6 variables, then the FIQ is not suitable. Note: you can double up some registers with shifts and rotates, which are free on the ARM, if you use 16-bit values.
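As a small, hypothetical illustration of that last point, two 16-bit counters can share one 32-bit banked register, with the packing and unpacking costing only barrel-shifter operations; the handler name and the use of r11 are made up for the sketch:
void __attribute__ ((interrupt ("FIQ"))) fiq_count_handler(void)
{
    register unsigned int packed asm ("r11");  /* counter A in the low half, B in the high half */

    if (packed & 0xFFFFu)                      /* counter A still non-zero?  */
        packed -= 1u;                          /* decrement A                */
    else
        packed += 1u << 16;                    /* otherwise bump counter B   */
}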
Obviously the FIQ is more complex. OS developers want to support multiple interrupt sources. Customer requirements for a FIQ vary, and often OS vendors conclude that they should just let the customer roll their own. Usually support for a FIQ is limited, as any generic support is likely to detract from its main benefit: SPEED.
Summary
Don't bash my friend the FIQ. It is a system programmer's one trick against stupid hardware. It is not for everyone, but it has its place. When all other attempts to reduce latency and increase ISR service frequency have failed, the FIQ can be your only choice (or a better hardware team).
It is also possible to use it as a panic interrupt in some safety-critical applications.

A feature of modern ARM CPUs (and some others).
From the patent:
A method of performing a fast interrupt in a digital data processor having the capability of handling more than one interrupt is provided. When a fast interrupt request is received a flag is set and the program counter and condition code registers are stored on a stack. At the end of the interrupt servicing routine the return from interrupt instructions retrieves the condition code register which contains the status of the digital data processor and checks to see whether the flag has been set or not. If the flag is set it indicates that a fast interrupt was serviced and therefore only the program counter is unstacked.
In other words, an FIQ is just a higher-priority interrupt request, which is prioritized by disabling IRQ and other FIQ handlers during request servicing. Therefore, no other interrupt can occur during the processing of the active FIQ interrupt.

Chaos has already answered well, but an additional point not covered so far is that FIQ is at the end of the vector table, so it's common/traditional to just start the routine right there, whereas the IRQ vector usually really is just a vector (i.e. a jump to somewhere else). Avoiding that extra branch immediately after a full stash and context switch is a slight speed gain.

Another reason is that in the case of FIQ, fewer registers need to be pushed onto the stack, since FIQ mode has its own banked R8 to R14_fiq registers.

FIQ is higher priority and can be raised while another IRQ is being handled. The most critical resource(s) are handled by FIQs; the rest are handled by IRQs.

I believe this is what you are looking for:
http://newsgroups.derkeiler.com/Archive/Comp/comp.sys.arm/2005-09/msg00084.html
Essentially, FIQ will be of the highest priority with multiple, lower priority IRQ sources.

FIQs are higher priority, no doubt; about the remaining points I am not sure. FIQs support high-speed data transfer or channel processing: where high-speed data processing is required we use FIQs, while IRQs are used for normal interrupt handling.

There is no magic about FIQ. The FIQ can simply interrupt any other IRQ that is being served, which is why it is called 'fast'. The system reacts faster to these interrupts, but the rest is the same.

It depends on how we design the interrupt handlers. As the FIQ vector is last, it may not need a branch instruction, and FIQ mode has its own set of r8-r14 registers, so the next time we enter the FIQ we do not need to push/pop the stack. Of course this saves some cycles, but it is not wise to have multiple handlers serving one FIQ. FIQ does have higher priority, but that alone is no reason to say it handles the interrupt faster; both IRQ and FIQ run at the same CPU frequency, so they execute at the same speed.

This may be wrong. All I know is that FIQ stands for Fast Interrupt Request and that IRQ stands for Interrupt Request. Judging from these names, I would guess that a FIQ is handled (taken?) faster than an IRQ. It probably has something to do with the design of the processor, where an FIQ interrupts execution faster than an IRQ does. I apologize if I'm wrong, but I normally do higher-level programming; I'm just guessing right now.

Related

DSB on ARM Cortex M4 processors

I have read the ARM documentation and it appears that in some places it says the Cortex M4 can reorder memory writes, while other places indicate that the M4 will not.
Specifically, I am wondering whether the DSB instruction is needed, as in:
volatile int flag = 0;
char buffer[10];

void foo(char c)
{
    __ASM volatile ("dsb" : : : "memory");
    __disable_irq();   /* disable IRQ as we use flag in ISR */
    buffer[0] = c;
    flag = 1;
    __ASM volatile ("dsb" : : : "memory");
    __enable_irq();
}
Uh, it depends on what your flag is, and it also varies from chip to chip.
In case that flag is stored in memory:
DSB is not needed here. An interrupt handler that would access flag would have to load it from memory first. Even if your previous write is still in progress the CPU will make sure that the load following the store will happen in the correct order.
If your flag is stored in peripheral memory:
Now it gets interesting. Let's assume flag is in some hardware peripheral. A write to it may make an interrupt pending or acknowledge an interrupt (i.e. clear a pending interrupt). Contrary to the memory example above, this effect happens without the CPU having to read the flag first, so the automatic ordering of stores and loads won't help you. Also, writes to flag may take effect with a surprisingly long delay due to different clock domains between the CPU and the peripheral.
So the following scenario can happen:
you write flag=1 to clear a handled interrupt.
you enable interrupts by calling __enable_irq()
interrupts get enabled, write to flag=1 is still pending.
wheee, an interrupt is pending and the CPU jumps to the interrupt handler.
flag=1 takes effect. You're now in an interrupt handler without anything to do.
Executing a DSB in front of __enable_irq() will prevent this problem because whatever is triggered by flag=1 will be in effect before __enable_irq() executes.
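A hedged sketch of the question's foo() with the barrier placed where it matters; CMSIS-style __disable_irq/__enable_irq/__DSB intrinsics are assumed:
void foo(char c)
{
    __disable_irq();   /* the ISR must not see a half-updated state            */
    buffer[0] = c;
    flag = 1;          /* may take effect late if flag lives in a peripheral   */
    __DSB();           /* wait until the write has actually taken effect       */
    __enable_irq();    /* only now is it safe to take the interrupt            */
}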
If you think that this case is purely academic: Nope, it's real.
Just think about a real-time clock. These usually run at 32 kHz. If you write into its peripheral space from a CPU running at 64 MHz, it can take a whopping 2000 cycles before the write takes effect. For real-time clocks the data sheet usually shows specific sequences that make sure you don't run into this problem.
The same thing can however happen with slow peripherals.
My personal anecdote happened when implementing power saving late in a project. Everything was working fine. Then we reduced the peripheral clock speed of the I²C and SPI peripherals to the lowest speed we could get away with; this can save lots of power and extend battery life. What we found was that interrupts suddenly started to do unexpected things: they seemed to fire twice each time, wreaking havoc. Putting a DSB at the end of each affected interrupt handler fixed this because - you can guess - the lower clock speed meant we left the interrupt handlers before the clearing of the interrupt source had taken effect, due to the slow peripheral clock.
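In code, the fix from that anecdote looks roughly like the following; the peripheral and bit names are hypothetical placeholders:
void slow_peripheral_irq_handler(void)
{
    PERIPH->INTCLR = PERIPH_INT_PENDING;  /* clear the interrupt source in the peripheral */
    /* ... handle the data ... */
    __DSB();                              /* make sure the clear has reached the slow     */
                                          /* peripheral before returning from the handler */
}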
This section of the Cortex M4 generic device user guide enumerates the factors which can affect reordering.
the processor can reorder some memory accesses to improve efficiency, providing this does not affect the behavior of the instruction sequence.
the processor has multiple bus interfaces
memory or devices in the memory map have different wait states
some memory accesses are buffered or speculative.
You should also bear in mind that both DSB and ISB are often required (in that order), and that C does not make any guarantees about the ordering (except in-thread volatile accesses).
You will often observe that the short pipeline and instruction sequences can combine in such a way that the race conditions seem unreachable with a specific compiled image, but this isn't something you can rely on. Either the timing conditions might be rare (but possible), or subsequent code changes might change the resulting instruction sequence.
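As a generic sketch of that DSB-then-ISB pattern (the register written here, SCB->VTOR, is just one common CMSIS example; new_vector_table is hypothetical):
SCB->VTOR = (uint32_t)&new_vector_table;  /* change something that affects later fetches         */
__DSB();                                  /* drain the write                                     */
__ISB();                                  /* refetch following instructions with the new setting */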

What is the irq latency due to the operating system?

How can I estimate the IRQ latency on an ARM processor?
What is the definition of IRQ latency?
Interrupt request (IRQ) latency is the time it takes for an interrupt request to travel from the source of the interrupt to the point where it is serviced.
Because different interrupts come from different sources via different paths, their latency obviously depends on the type of the interrupt. You can find a table with very good explanations of latency (both values and causes) for particular interrupts on the ARM site.
You can find more information about it in ARM9E-S Core Technical Reference Manual:
4.3 Maximum interrupt latency
If the sampled signal is asserted at the same time as a multicycle instruction has started its second or later cycle of execution, the interrupt exception entry does not start until the instruction has completed.
The longest LDM instruction is one that loads all of the registers, including the PC. Counting the first Execute cycle as 1, the LDM takes 16 cycles.
• The last word to be transferred by the LDM is transferred in cycle 17, and the abort status for the transfer is returned in this cycle.
• If a Data Abort happens, the processor detects this in cycle 18 and prepares for the Data Abort exception entry in cycle 19.
• Cycles 20 and 21 are the Fetch and Decode stages of the Data Abort entry respectively.
• During cycle 22, the processor prepares for FIQ entry, issuing Fetch and Decode cycles in cycles 23 and 24.
• Therefore, the first instruction in the FIQ routine enters the Execute stage of the pipeline in stage 25, giving a worst-case latency of 24 cycles.
and
Minimum interrupt latency
The minimum latency for FIQ or IRQ is the shortest time the request can be sampled by the input register (one cycle), plus the exception entry time (three cycles). The first interrupt instruction enters the Execute pipeline stage four cycles after the interrupt is asserted.
There are three parts to interrupt latency:
The interrupt controller picking up the interrupt itself. Modern processors tend to do this quite quickly, but there is still some time between the device signalling its pin (or whatever the method of signalling interrupts is) and the interrupt controller picking it up - even if it's only 1 ns, it's time.
The time until the processor starts executing the interrupt code itself.
The time until the actual code supposed to deal with the interrupt is running - that is, after the processor has figured out which interrupt, and what portion of driver-code or similar should deal with the interrupt.
Normally, the operating system won't have any influence over 1.
The operating system certainly influences 2. For example, an operating system will sometimes disable interrupts (to avoid an interrupt interfering with some critical operation, such as modifying something to do with interrupt handling, when scheduling a new task, or even when executing in an interrupt handler). Some operating systems may disable interrupts for several milliseconds, where a good realtime OS will not have interrupts disabled for more than microseconds at most.
And of course, the time from when the first instruction in the interrupt handler runs until the actual driver code or similar is running can be quite a few instructions, and the operating system is responsible for all of them.
For real-time behaviour, it's often the "worst case" that matters, whereas in non-real-time OSes the overall execution time is much more important; so if it's quicker not to enable interrupts for a few hundred instructions, because it saves the several instructions of "enable interrupts, then disable interrupts", a Linux or Windows type OS may well choose to do so.
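A rough way to estimate the combined latency empirically, as a sketch only: read a free-running hardware timer, make an interrupt pending, and read the timer again on entry to the handler. TIMER_COUNT and PEND_SW_IRQ are hypothetical placeholders for whatever your SoC provides:
#include <stdint.h>

volatile uint32_t t_trigger, t_entry;

void measured_isr(void)
{
    t_entry = TIMER_COUNT;        /* capture the timer first thing in the handler */
    /* ... acknowledge the interrupt source ... */
}

void measure_irq_latency(void)
{
    t_entry = 0;
    t_trigger = TIMER_COUNT;      /* timestamp just before raising the interrupt  */
    PEND_SW_IRQ();                /* make the interrupt pending in software       */
    while (t_entry == 0)
        ;                         /* wait for the handler to run                  */
    uint32_t latency_ticks = t_entry - t_trigger;
    (void)latency_ticks;          /* convert to time using the timer's clock rate */
}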
Mats and Nemanja give some good information on interrupt latency. There are two more issues I would add to the three given by Mats:
Other simultaneous/near-simultaneous interrupts.
OS latency added due to masking interrupts. Edit: this is in Mats' answer, just not explained as much.
If a single core is processing interrupts, then when multiple interrupts occur at the same time, there is usually some priority resolution. However, interrupts are often disabled in the interrupt handler unless priority (nested) interrupt handling is enabled. So if, for example, a slow NAND flash IRQ is signaled and running and then an Ethernet interrupt occurs, the Ethernet interrupt may be delayed until the NAND flash IRQ finishes. Of course, if you have priority interrupts and you are concerned about the NAND flash interrupt, then things can actually be worse if the Ethernet is given priority.
The second issue is when mainline code clears/sets the interrupt flag. Typically this is done with something like,
mrs   r9, cpsr             ; read the current program status register
biceq r9, r9, #PSR_I_BIT   ; conditionally clear the IRQ mask bit
Check arch/arm/include/asm/irqflags.h in the Linux source for many macros used by main line code. A typical sequence is like this,
lock interrupts;
manipulate some flag in struct;
unlock interrupts;
A very large interrupt latency can be introduced if accessing that struct results in a page fault, because interrupts stay masked for the duration of the page fault handler.
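In Linux the lock/unlock pair above is typically local_irq_save()/local_irq_restore(); a sketch of the pattern, with made-up struct and flag names:
unsigned long flags;

local_irq_save(flags);            /* mask IRQs, remembering the previous state     */
dev->status |= DEV_FLAG_READY;    /* if touching dev faults here, IRQs stay masked */
                                  /* for the whole page-fault handler              */
local_irq_restore(flags);         /* restore the previous IRQ mask state           */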
The Cortex-A9 has many lock-free primitives (built on ldrex/strex, which are better assembler instructions than swp/swpb) that can prevent this by never masking interrupts. This second issue is much like the IRQ latency due to ldm/stm type instructions (these are just the longest instructions to run).
Finally, a lot of the technical discussions will assume zero-wait-state RAM. It is likely that the cache will need to be filled, and if you know your memory data rate (maybe 2-4 machine cycles per access), then the worst-case code path would multiply by this.
Whether you have SMP interrupt handling, priority interrupts, and lock free main line depends on your kernel configuration and version; these are issues for the OS. Other issues are intrinsic to the CPU/SOC interrupt controller, and to the interrupt code itself.

fiq & irq handler -- arm

I am new to ARM and have some doubts related to IRQ & FIQ. Please help clarify these:
How many FIQ & IRQ channels does ARM have?
How many handlers can we write for each channel?
Also, if we can register multiple handlers for a single interrupt channel, how does ARM know which handler to run?
The distinction between IRQ and FIQ goes right the way back to the early days of ARM, when it was designed by Acorn. It was always the case that the IRQ line was attached to an interrupt controller that multiplexed a large number of interrupt sources together. This is precisely what happens in all modern ARMs.
The rationale behind the FIQ was to provide an extremely low latency response with maximum priority (it can safely pre-empt the IRQ handler). The comparatively large number of shadow registers facilitate writing handlers that store the handler's state in CPU registers and not hitting the stack.
The shadow registers are almost the opposite set to those commonly used by the APCS for function calls, so writing handlers in C would cause a push and eventual pop of up to 8 non-shadowed registers. Having any kind of interrupt demultiplexing wipes out any performance advantage that FIQ might have given.
All of this means that there is only really any benefit in using FIQ for very specialised applications where really hard-real time interrupt response is required for one interrupting device, and you're willing to write your handler in assembler. You'll also be left with working out how to synchronise with the rest of the system - some of which would rely on disabling IRQ to keep data synchronised.
Traditionally the ARM has one interrupt line, which you can send to one of two handlers, FIQ or IRQ. FIQ has a larger bank of FIQ-mode-only registers, so you have fewer that you need to store on the stack. From there you read the vendor-specific registers, if any, to determine the source of the interrupt, and then branch into separate handlers.
More recently there have been ARM architectures with many interrupts (128, 256), each with a separate handler. So asking generically about ARM is about like asking something generic about x86.
All of this information is easily available in the ARM architectural reference manuals for the different architectures and the pinouts to the core (what the vendor builds its chip around) is documented in the technical reference manuals for the various cores (also very easy to obtain). infocenter.arm.com has the architecture and technical reference manuals as well as amba/axi (the data bus that the vendor connects to). Your question is completely answered in those documents.
The ARM processor directly supports only ONE IRQ and ONE FIQ. ARM supports multiple interrupts through a peripheral called Interrupt Controller. ARM standard interrupt controllers are called GIC (Generic Interrupt Controller).
The GIC has a number of inputs for peripherals to connect their interrupt lines and two output lines that connect to IRQ and FIQ. Basically it acts as a MUX. A GIC driver will setup configurations such as interrupt priority, type (IRQ/FIQ), masking etc.
In traditional ARM systems there is one entry each for IRQ and FIQ in the Exception Vectors. Depending on which line the interrupt fired, IRQ or FIQ handler is called. The interrupt handler queries the GIC (GIC CPU interface registers, to be specific) to get the interrupt number. Based on this interrupt number, corresponding device handler is invoked.
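A hedged sketch of that dispatch for a GICv2-style CPU interface; GICC_BASE is a platform-specific placeholder, while the IAR/EOIR offsets are the standard GICv2 ones:
#include <stdint.h>

#define GICC_BASE  0x2C002000u   /* placeholder: CPU interface base for your SoC */
#define GICC_IAR   (*(volatile uint32_t *)(GICC_BASE + 0x0C))
#define GICC_EOIR  (*(volatile uint32_t *)(GICC_BASE + 0x10))

extern void (*device_handlers[1020])(void);

void irq_dispatch(void)
{
    uint32_t iar = GICC_IAR;         /* acknowledge; low bits hold the interrupt ID */
    uint32_t id  = iar & 0x3FFu;

    if (id < 1020 && device_handlers[id])
        device_handlers[id]();       /* invoke the registered device handler        */

    GICC_EOIR = iar;                 /* signal end of interrupt back to the GIC     */
}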
Number of interrupts depends on the specific GIC implementation. So you would have to check the manual for the interrupt controller in your system to get those specifics.
Note: The interrupt handling is slightly different depending on which specific ARM core you are coding for.
Actually the question is a bit tricky: you must specify which ARM architecture you are working with. The ARMv7-A and ARMv7-R Architecture Reference Manual (ARM ARM) specifies one FIQ and one IRQ, as many have already answered. But ARMv7-M (used in Cortex-M processors) integrates an interrupt controller into the processor, and thus offers one NMI (instead of FIQ) and up to 240 IRQ lines.
For more information: ARMv7-A and ARMv7-R Architecture Reference Manual: http://infocenter.arm.com/help/index.jsp?topic=/com.arm.doc.ddi0406c/index.html
ARMv7-M Architecture Reference Manual: http://infocenter.arm.com/help/index.jsp?topic=/com.arm.doc.ddi0403e.b/index.html
As an example, Cortex M4 specs sheet: http://www.arm.com/products/processors/cortex-m/cortex-m4-processor.php

Can an ARM interrupt occur in mid-instruction?

This question will be short and sweet.
I know an interrupt can occur between instructions, but can an interrupt happen during an instruction? Can a load multiple instruction be interrupted before it has loaded all the values into the registers?
mov r0, r1
< interrupt can happen here
ldm r0, {r1-r4} < can an interrupt happen **during** a load multiple instruction?
The load multiple instructions are explicitly not atomic. See section A3.5.3 of the ARMv7 Architecture Reference Manual.
LDM, LDC, LDC2, LDRD, STM, STC, STC2, STRD, PUSH, POP, RFE, SRS, VLDM, VLDR, VSTM, and VSTR instructions are executed as a sequence of word-aligned word accesses. Each 32-bit word access is guaranteed to be single-copy atomic. The architecture does not require subsequences of two or more word accesses from the sequence to be single-copy atomic.
If you read on, you'll find out that the LDM/STM instructions can be aborted by an interrupt (and restarted from the beginning on interrupt return). LDM and STM instructions can always be interrupted by a data abort, so they're non atomic in that sense. Otherwise, the ARMv7-A architecture does its best to help you out. For interrupts, they can only be interrupted if low interrupt latency is enabled, AND normal memory is being accessed. So at the very least, you won't get repeated accesses to device memory. You don't want to do anything that expects atomic read/writes of normal memory though.
On v7-M, LDM and STM can be interrupted at any time (see section B1.5.10 of the ARMv7-M Architecture Reference Manual). It's implementation defined whether or not the instruction is restarted from the beginning of the list of loads/stores, or whether it's restarted from where it left off. As the ARM says:
The ARMv7-M architecture supports continuation of, or restarting from the beginning, an abandoned LDM or STM instruction as outlined below. Where an LDM or STM is abandoned and restarted (ICI bits are not supported), the instructions should not be used with volatile memory.
In other words, don't rely on LDM or STM being atomic if you're trying to write portable code.
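A sketch of the portable alternative: since a multi-word copy (which the compiler may well emit as an LDM/STM pair) is not atomic, protect it explicitly. CMSIS-style __disable_irq()/__enable_irq() are assumed here:
#include <stdint.h>

typedef struct { uint32_t lo; uint32_t hi; } sample_t;

volatile sample_t shared;            /* also written from an interrupt handler */

sample_t read_sample(void)
{
    sample_t copy;
    __disable_irq();                 /* no interrupt can land mid-copy */
    copy.lo = shared.lo;
    copy.hi = shared.hi;
    __enable_irq();
    return copy;
}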

Does anyone know how to enable ARM FIQ?

Does anyone know how to enable ARM FIQ?
Other than enabling or disabling the IRQ/FIQ while you're in supervisor mode, there's no special setup you should have to do on the ARM to use it, unless the system (that the ARM chip is running in) has disabled it in hardware (based on your comment, this is not the case since you're seeing the FIQ input pin driven correctly).
For those unaware of the acronyms, FIQ is simply the last interrupt vector in the list which means it's not limited to a branch instruction as the other interrupts are. That means it can execute faster than the other IRQ handlers.
Normal IRQs are limited to a branch instruction since each entry in the vector table is only a single word. FIQ, because there are no further vectors after it to overwrite, can just run its code directly without a branch instruction (hence the "fast").
The FIQ input line is just a way for external elements to kick the ARM chip into FIQ mode and start executing the correct exception. There's nothing on the ARM itself which prevents that from happening except the CPSR.
To enable FIQ in supervisor mode:
MRS r1, cpsr ; get the cpsr.
BIC r1, r1, #0x40 ; enable FIQ (ORR to disable).
MSR cpsr_c, r1 ; copy it back, control field bit update.
A similar thing can be done for normal IRQs, but using #0x80 instead of #0x40.
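The same CPSR manipulation wrapped in C with GCC inline assembly, as a sketch (ARM state and a privileged mode assumed):
static inline void enable_fiq(void)
{
    unsigned long cpsr;
    asm volatile ("mrs %0, cpsr" : "=r" (cpsr));
    cpsr &= ~0x40ul;                              /* clear the F bit */
    asm volatile ("msr cpsr_c, %0" : : "r" (cpsr) : "memory");
}

static inline void enable_irq(void)
{
    unsigned long cpsr;
    asm volatile ("mrs %0, cpsr" : "=r" (cpsr));
    cpsr &= ~0x80ul;                              /* clear the I bit */
    asm volatile ("msr cpsr_c, %0" : : "r" (cpsr) : "memory");
}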
The FIQ can be closed off to you by the chip manufacturer using the TrustZone extensions.
TrustZone creates a Secure world and a Normal world. The Secure world has its own supervisor, user and memory space. The idea is for secure operations to be routed so they never leave the chip and cannot be traced even if you scan the pins on the bus. I think in OMAP it is used for some cryptography operations.
On reset the core starts in secure mode. It sets up the secure monitor (the gateway between the Secure and Normal worlds), and at this time the FIQ can be set up to be routed to the monitor. I think it is the SCR.FIQ bit that may be set, and then all FIQs ignore the value of CPSR.F and go to monitor mode. Check the ARM ARM, but if I remember correctly, if this is happening there is no way for you to know from non-secure OS code. The monitor then resets the Normal world registers and does an exception return with the PC set to the reset exception vector.
The core will take an interrupt to monitor mode, do its thing and return.
