What are legacy interrupts? - arm

I am working on a project where I am trying to figure out how an interrupt is processed in the generic interrupt controller (GIC) for an ARM architecture. I am working with the PL390 interrupt controller. I see there is a line which is described as a legacy interrupt, which bypasses the distributor logic. It is stated that two interrupts can be programmed as legacy interrupts. Can anyone help with some explanation of what exactly a legacy interrupt is? I tried searching online without any luck.

Legacy interrupts are the two interrupt signals that existed on ARM before the GIC arrived: nIRQ, the normal interrupt request, and nFIQ, the fast interrupt request.
Since legacy interrupts were designed for single-core processors and have no notion of multiple cores, the reason they bypass the distributor logic should be fairly clear: the legacy interrupts are hardwired to one particular core.
In short, this allows the CPU to stay backwards compatible with the older ARM interrupt scheme. For example, a four-core ARM CPU will have 4 nIRQs and 4 nFIQs, separate for each of the cores. When you have an old piece of ARM-compatible hardware (which doesn't know about the GIC), you connect it to one core's nIRQ/nFIQ just as if you were connecting it to an old single-core CPU, and its interrupts will always be taken on that one core.
More information can be found here - http://infocenter.arm.com/help/index.jsp?topic=/com.arm.doc.ddi0407e/CCHDBEBE.html

Related

Difference between SysTick and Timer in ARM Cortex-M4

I completed a basic microprocessor course with the 8051. In that course I learned to use a timer to trigger an event. A semester later, I learned embedded systems programming with the ARM Cortex-M4 (Tiva C LaunchPad) and started to use SysTick to trigger events (as is commonly done in FreeRTOS), and sometimes it is used as a timer.
I wonder what the difference is between a timer and SysTick, because sometimes SysTick's behaviour looks the same as a timer's. I have searched for the difference and know this much: SysTick is in the ARM core and the timers come from the chip vendor. In which situations should we use SysTick instead of a timer?
Please let me know. Thank you.
You basically have it. The SysTick timer is part of the ARM core, and the other timer(s) come from the chip vendor. You, the programmer, are free to use them however you wish.
They most likely have different features. The SysTick timer is pretty much only for polling or for interrupts at simple intervals. The chip vendor timers can usually do those things and much more: sometimes they can generate clocks for other timers, sometimes they can generate clocks or signals that go out a pin, sometimes they can time inputs. Sometimes a vendor will have multiple timers in a chip whose features differ from each other. It varies widely.
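To make that concrete, here is a minimal bare-metal sketch of using SysTick to generate a periodic interrupt. The SysTick register addresses are architecturally fixed on Cortex-M parts that have the timer; the 16 MHz core clock and the SysTick_Handler name (it has to match whatever your startup code puts in the vector table) are assumptions for illustration.

```c
#include <stdint.h>

/* SysTick registers sit at fixed addresses on every Cortex-M that has one. */
#define SYST_CSR   (*(volatile uint32_t *)0xE000E010u)  /* control and status */
#define SYST_RVR   (*(volatile uint32_t *)0xE000E014u)  /* reload value       */
#define SYST_CVR   (*(volatile uint32_t *)0xE000E018u)  /* current value      */

#define CORE_CLOCK_HZ 16000000u   /* assumption: 16 MHz core clock */

static volatile uint32_t tick_count;

/* Name must match the SysTick entry in your vector table (startup code). */
void SysTick_Handler(void)
{
    tick_count++;               /* 1 ms tick, usable for delays or an RTOS */
}

void systick_init_1ms(void)
{
    SYST_RVR = (CORE_CLOCK_HZ / 1000u) - 1u;       /* reload for a 1 ms period   */
    SYST_CVR = 0u;                                 /* clear the current value    */
    SYST_CSR = (1u << 2) | (1u << 1) | (1u << 0);  /* core clock, irq, enable    */
}
```

That is about the extent of what SysTick offers: a down-counter, a reload value, and an interrupt. Anything fancier comes from the vendor timers.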
Note that some ARM cores do not have a SysTick timer, or, let's say, the chip vendor has the option to compile the core without it. In those situations your only choice is the vendor-supplied timers.
There is no magic here: you are the programmer, and you are free to use the peripherals as you wish.
Now if you use an RTOS like FreeRTOS or others, then your freedom is limited to what the RTOS does not consume for itself (it will likely consume the SysTick timer if present, but leave the others).
The reasoning behind this is that any OS developer can write code for any Cortex-M which has SysTick and not need to worry about the vendor-specific details. There is a guarantee that SysTick always works the same way across a wide range of devices, so there is less low-level porting to be done.
The same goes for your course: if you are writing bare metal, you don't need to worry about the device vendor until you use their peripherals (timer, UART, watchdog).

Can the kernel set the interval of the "hardware timer" of the CPU, and does the CPU have a dedicated hardware timer for scheduling?

Based on my understanding, the CPU has a "hardware timer" that fires an interrupt when its interval expires.
The kernel uses this hardware timer to implement the scheduling mechanism for processes, so if the hardware timer fires an interrupt with, say, number 123, the kernel will map this interrupt number to an interrupt handler that executes the scheduler code (which decides which process to execute next).
I have two questions:
Can the kernel set the interval of the hardware timer, or is the interval a fixed number that can't be changed programmatically?
Does the CPU have a dedicated hardware timer for scheduling, or are there many hardware timers from which the kernel can choose whichever it wants to use for scheduling?
Edit: The hardware architecture I am most interested in is a PC, but I would like to know whether other architectures (for example, a mobile phone, a Raspberry Pi, etc.) work in a similar way.
Details are hardware specific (they might differ across motherboards, chipsets, and processors; read about the southbridge). Read about the High Precision Event Timer (HPET) and the APIC.
See also the OSDev wiki, notably the Programmable Interval Timer page.
(So the answer is usually yes to both questions.)
From early on, IBM-compatible PCs had PITs (Programmable Interval Timers): IBM PC and IBM PC XT had the Intel 8253, the IBM PC AT introduced the Intel 8254.
From the IBM PC Technical Reference from April 1984, page 1-11:
System Timers
Three programmable timer/counters are used by the system as follows: Channel 0 is a general-purpose timer providing a constant time base for implementing a time-of-day clock, Channel 1 times and requests refresh cycles from the Direct Memory Access (DMA) channel, and Channel 2 supports the tone generation for the speaker. [...]
Channel 0 is exactly the "constant time base," the "interval" you are asking for. And, to answer your 1st question, it is changeable; it is the Programmable Interval Timer.
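As a hedged illustration of what "programmable" means here, this is roughly how a kernel would reprogram channel 0 of the 8253/8254 through I/O ports 0x43 and 0x40. The outb helper and the choice of mode 3 at a given rate are assumptions for the sketch; real kernels wrap this in their own I/O and locking primitives.

```c
#include <stdint.h>

/* Port-mapped I/O helper (x86, GCC inline assembly). */
static inline void outb(uint16_t port, uint8_t value)
{
    __asm__ volatile ("outb %0, %1" : : "a"(value), "Nd"(port));
}

#define PIT_CHANNEL0_DATA 0x40      /* channel 0 data port            */
#define PIT_COMMAND       0x43      /* mode/command register          */
#define PIT_INPUT_HZ      1193182u  /* PIT input clock, ~1.193182 MHz */

/* Program channel 0 to fire IRQ0 at (approximately) the requested rate. */
void pit_set_frequency(uint32_t hz)
{
    uint16_t divisor = (uint16_t)(PIT_INPUT_HZ / hz);  /* e.g. 100 Hz -> 11931 */

    outb(PIT_COMMAND, 0x36);                  /* channel 0, lobyte/hibyte, mode 3 */
    outb(PIT_CHANNEL0_DATA, divisor & 0xFF);  /* low byte of divisor              */
    outb(PIT_CHANNEL0_DATA, divisor >> 8);    /* high byte of divisor             */
}
```

Calling pit_set_frequency(100) would make the channel 0 interrupt fire roughly every 10 ms, which is exactly the kind of knob a kernel turns for its scheduling tick.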
However, the CPU built into the original IBM PC was the Intel 8088, basically an Intel 8086 with an 8-bit data bus. Real Mode was the state of the art back then; Protected Mode was introduced some years later with the Intel 80286, so effective multitasking, let alone preemptive multitasking or multithreading, was of no concern in those days when DOS ruled the market.
Fast-forwarding to the IBM PC AT, the world was blessed with a Protected Mode-capable CPU, the Intel 80286, and the Intel 8254 was introduced, a "[...] superset of the 8253." (from the 8254 PIT datasheet). If you really want an in-depth understanding of the PITs, read the 8253/8254 datasheets linked at the bottom. It might also be worth looking at Linux. Since the latest kernels are way too complicated to really understand the particular parts in a matter of twenty minutes, I suggest you look at Linux 0.01, the very first release. _timer_interrupt in kernel/system_calls.s might be interesting and from there you can go wherever you want.
Regarding your 2nd question: there are multiple timer sources, but only one is suitable for interval timing, that is, channel 0. IBM-compatibles still comply with the system timer layout shown above. They retain the same functionality, but might add more on top of that or change how the hardware works and how it's packaged. Nowadays, additional timers do exist like high-resolution timers, but using them for interrupt timing instead would break compatibility.
Intel 8253 Datasheet
Intel 8254 Datasheet
IBM PC Technical Reference
IBM PC AT Technical Reference
Can the kernel set the interval of the hardware timer, or is the interval a fixed number that can't be changed programmatically?
Your questions are ENTIRELY processor specific. Some processors have controllable timers. Others have timers that go off at fixed intervals. Most processors you are likely to encounter have adjustable timers, however.
Does the CPU have a dedicated hardware timer for scheduling or is there many hardware timers, and the kernel can choose whichever timer it wants to use for scheduling?
Some processors have only one timer. Most processors these days have multiple timers.

Is every input port of some microcontroller an interrupt?

I was learning about interrupts and came here to see if someone can help me!
A) I understand that the interrupt is the electrical signal sent by some external hardware to the processor, via one of the input ports.
B) I understand that in some MCUs, more than one input port is "attached" to only one interrupt.
Can a useful input port exist in an MCU that is not linked to any interrupt at all?
A) I understand that the interrupt is the electrical signal sent by some external hardware to the processor, via one of the input ports.
That is surely one class of interrupt, as long as you understand that 'external hardware to the processor' can mean 'internal to the controller chip': many MCUs have extensive integrated peripherals.
B) I understand that in some MCUs, more than one input port is "attached" to only one interrupt.
Yes, that is not uncommon. The interrupt handler then has to poll the port to find out which GPIO (or other) pin generated the interrupt.
Can a useful input port exist in an MCU that is not linked to any interrupt at all?
Sure, especially on 'trivial' controllers that do not require high-performance I/O and have no RTOS.
Even higher-performance MCU applications may poll for sundry reasons. One common example is reading keypads. The input rate is very low and the mechanical switches need to be debounced. Fastening every keyboard read line to an interrupt line may cause unwanted multiple interrupts. Such inputs are better polled, though even then, a timer interrupt often drives the polling.
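As a sketch of that polled-keypad idea, driven from a periodic timer interrupt: every register name and address below is a hypothetical placeholder for whatever GPIO peripheral your MCU actually provides, and the debounce constants are arbitrary.

```c
#include <stdint.h>
#include <stdbool.h>

/* Hypothetical memory-mapped GPIO input register holding the key lines. */
#define KEYPAD_IN_REG   (*(volatile uint32_t *)0x40020010u)  /* placeholder address */
#define KEY_MASK        0x0Fu       /* four key lines on the low bits (assumption)  */
#define DEBOUNCE_TICKS  5u          /* key must be stable for 5 timer ticks         */

static volatile uint32_t stable_keys;  /* debounced key state, read by the application */
static uint32_t last_sample;
static uint32_t stable_count;

/* Called from a periodic timer interrupt, e.g. every 10 ms. */
void keypad_poll_from_timer_isr(void)
{
    uint32_t sample = KEYPAD_IN_REG & KEY_MASK;

    if (sample == last_sample) {
        if (++stable_count >= DEBOUNCE_TICKS) {
            stable_keys = sample;           /* accept the debounced value        */
            stable_count = DEBOUNCE_TICKS;  /* clamp so the counter never wraps  */
        }
    } else {
        last_sample = sample;
        stable_count = 0;                   /* input changed, restart debounce   */
    }
}

/* Application-level query: returns true if the given key bit is pressed. */
bool keypad_is_pressed(uint32_t key_bit)
{
    return (stable_keys & key_bit) != 0;
}
```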
The answer is probably "yes," but it depends on the microcontroller architecture. There's no guarantee that one vendor's MCU will behave the same as any other (with respect to interrupts, ports, or anything else). If you're tasked with learning a particular MCU, then learn it, live it.
Your house may have only one doorbell button, but pretty much anyone can use it for whatever reason: UPS with a package, the neighbour's kid wanting to play with your kid, someone trying to sell something, and so on. A processor is no different. To reduce latency, newer designs may have multiple interrupt signals on the core so that the handler doesn't have to do as much work, if any, to figure out who caused the interrupt. It is like having a ringtone for every person on your phone, so you can tell who is calling without looking, versus one ringtone for everyone so that you have to look.
Do not confuse the external GPIO ports on the chip with interrupt lines; they are not interrupt lines, they are general-purpose I/O. They might have a way to be used as interrupts or not; that depends on the design of the chip. Again, as with the doorbell on your house, there are many things, technically all of them within the chip (microcontroller), that create interrupts. Because software has to set up handlers before it can... handle... interrupts, all sources of interrupts are disabled at first, and only the ones software has enabled have the ability to actually reach the core and cause an interrupt. It is logic in the chip. So you may have an interrupt signal tied to the UART receiver, and you might enable that. You might have one for the TX buffer, a when-it-is-empty interrupt. But you have to enable those before the processor can get an interrupt. There is a small section of logic that does fire an interrupt every time one of those events occurs, but that signal is gated and cannot reach the core, blocked by logic you control.
You can have timers in the MCU that interrupt you when they roll over or count down to zero. But you not only have to set up the timer to do that with software, you also have to enable the interrupt's path across the chip from the timer to the processor core.
And yes, sometimes the GPIO peripheral has a way to interrupt the processor as well. As with everything else, you have to set up the peripheral with software, define which interrupts you want, and enable them across the chip.
There are more different ways of doing this than there are companies making chips, as they don't always do it the same way across their own product lines. But generally, at a minimum, there is an interrupt enable on the peripheral end (one or many, depending on the peripheral and its features) that you have to enable in order for that signal to leave the peripheral on its way to the core. And there is often an interrupt controller peripheral, or something built into the core or near it, that takes all the dozens or hundreds of individual interrupt connections in the chip, prioritizes them, and ORs them into the one or few interrupt lines into the core. You generally also have to enable the corresponding interrupt that matches the signal coming out of your peripheral so it can reach the processor core. And then there is sometimes an interrupt enable in the core itself, so that even if you have the peripheral enabled and the interrupt controller enabled for that one peripheral's interrupt, you still cannot interrupt the processor unless the interrupt enable in the processor core is set. That is the simple case; it can get more complicated if there are more layers of interrupt controllers along the way. Really, the simplest case is something like a Cortex-M with dozens or hundreds of individual interrupt signals: you still have interrupt enables on both ends and in the core, it is just easier to manage because you have dozens to hundreds of interrupt handlers instead of one mega-handler for everything.
So don't confuse the pins on the chip as being interrupts. On older dedicated processors, like the 8088/86, sure, there was the one interrupt pin. But general-purpose I/O, sometimes called GPIO, sometimes called ports, are just a peripheral; they are just pins you can make go high or low. They are not there to be interrupts, although there may be a feature in that peripheral for that (or maybe there isn't). And again, interrupt signals go through logic gates and have to be enabled, by software, at a minimum on both ends of that signal: at the peripheral and at the interrupt controller.
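To make that chain of enables concrete, here is a hedged Cortex-M-flavoured sketch. The NVIC set-enable register address is architectural, but the timer peripheral registers, its interrupt number, and the period value are made-up placeholders; your chip's datasheet defines the real ones.

```c
#include <stdint.h>

/* Architectural: NVIC interrupt set-enable registers on Cortex-M. */
#define NVIC_ISER(n)   (*(volatile uint32_t *)(0xE000E100u + 4u * (n)))

/* Hypothetical vendor timer peripheral (placeholders, not a real part). */
#define TIMERX_CTRL    (*(volatile uint32_t *)0x40001000u)
#define TIMERX_LOAD    (*(volatile uint32_t *)0x40001004u)
#define TIMERX_INTEN   (*(volatile uint32_t *)0x40001008u)
#define TIMERX_IRQ_NUM 17u          /* assumed position in the vector table */

void timer_interrupt_setup(void)
{
    /* 1. Enable the interrupt at the peripheral end. */
    TIMERX_LOAD  = 48000u - 1u;     /* some period, entirely made up       */
    TIMERX_INTEN = 1u;              /* let the event leave the peripheral  */
    TIMERX_CTRL  = 1u;              /* start the timer                     */

    /* 2. Enable the matching line in the interrupt controller (NVIC). */
    NVIC_ISER(TIMERX_IRQ_NUM / 32u) = 1u << (TIMERX_IRQ_NUM % 32u);

    /* 3. Enable interrupts at the core itself (PRIMASK on Cortex-M). */
    __asm__ volatile ("cpsie i");
}
```

If any one of those three enables is missing, the event still happens inside the peripheral, but the core never sees an interrupt.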

Low interrupt latency via dedicated architectures and operating systems

This question may seem slightly vague; however, I am researching how interrupt systems work and their latency times. I am trying to understand how architectural facilities such as FIQ on ARM help decrease latency. How does this differ from using an operating system that does not have, or cannot provide, access to these facilities? For example, Windows RT is made for ARM, and this operating system cannot be ported to other architectures.
Simply put - how is interrupt latency different in dedicated architectures that have dedicated operating systems as compared to operating systems that can be ported across many different architectures (Linux for example)?
Sorry for the rant - I'm pretty confused as you can probably tell.
I'll start with your Windows RT example. Windows RT is a port of Windows to the ARM architecture; it is not a 'dedicated operating system'. There are (probably) many OSes that run on only one architecture, but that is more a matter of nobody having bothered to port them, for one reason or another.
What does 'port' really mean though?
Windows has a kernel (we'll call it NT here, it doesn't matter) and that NT kernel has a bunch of concepts that need to be implemented. These concepts are things like timers, memory virtualisation, exceptions, etc.
These concepts are implemented differently between architectures, so the port of the kernel and drivers (I will ignore the rest of the OS here, as that is often only a recompile) is a matter of using the available pieces of silicon to implement the required concepts. This implementation is called a 'port'.
Let's zoom in on interrupts (AKA exceptions) on an ARM that has FIQ and IRQ.
In general an interrupt can occur asynchronously, by that I mean at any time. The CPU is generally busy doing something when an IRQ is asserted so that context (we'll call it UserContext1) needs to be stored before the CPU can use any resources in use by UserContext1. Generally this means storing registers on the stack before using them.
On ARM, when an IRQ occurs the CPU will switch to IRQ mode. Registers r13 and r14 have their own copies for IRQ mode; the rest will need to be saved if they are used, so that is what happens. Those stores to memory take some time. The IRQ is handled, UserContext1 is popped back off the stack, then IRQ mode is exited.
So the latency in this case might be the time from IRQ assertion to the time the IRQ vector starts executing. That is going to be some set number of clock cycles based upon what the CPU was doing when the IRQ happened.
The latency before the IRQ handling can occur is the time from the IRQ assert to the time the CPU has finished storing the context.
The latency before user mode code can execute depends on too much stuff in the OS/Kernel to explain here, but the minimum boils down to the time from the IRQ assertion to the return after restoring UserContext1 + the time for the OS context switch.
FIQ - If you are a hard-as-nails programmer you might only need seven registers to completely handle your interrupt servicing. I mentioned that IRQ mode has its own copy of 2 registers; well, FIQ mode has its own copy of 7 registers. Yup, that's 28 bytes of context that doesn't need to be pushed out onto the stack (actually one of them is the link register, so it's really 6 you have). That can remove the need to store and then restore UserContext1, so the latency can be reduced by up to the length of time needed to do that save/restore.
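As a small aside, not something the answer above depends on: arm-none-eabi-gcc lets you mark a C function as an FIQ handler, and if its working set is small the compiler can keep it in the FIQ-banked registers instead of stacking the interrupted context. The device registers below are hypothetical placeholders.

```c
#include <stdint.h>

/* Hypothetical device registers: a FIFO and a status/acknowledge register. */
#define DEV_FIFO   (*(volatile uint32_t *)0x1000A000u)  /* placeholder address */
#define DEV_STATUS (*(volatile uint32_t *)0x1000A004u)  /* placeholder address */

static volatile uint32_t fiq_events;

/* GCC (ARM targets): generate FIQ entry/exit sequences for this function.
 * Because FIQ mode banks r8-r14, a handler with a small working set can get
 * away with little or no stacking of the interrupted context. */
void __attribute__((interrupt("FIQ"))) fiq_handler(void)
{
    uint32_t data = DEV_FIFO;   /* drain one word from the device         */
    (void)data;
    DEV_STATUS = 1u;            /* acknowledge/clear the interrupt source */
    fiq_events++;               /* tiny bookkeeping for the main loop     */
}
```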
None of this has much to do with the OS. The OS can choose to use or not use these features. The OS can choose to make guarantees regarding how long it will take to execute the OSes concept of an interrupt handler, or it may not. This is one of the basic concepts of an RTOS, the contract about how long before the handler will run.
The OS is designed for some purpose (and that purpose may be 'general'); that target design goal will have a lot more effect on latency than how many targets the OS has been ported to.
Go have a read about something like FreeRTOS, then buy some hardware and try it. Instrument the code to figure out the latencies you really want to look at. It will likely be the best way to get your head around it.
(Multi-CPU systems do it the same way, but with some synchronization and barrier functions and a sprinkling of complexity.)

Send Inter-Processor Interrupts in Zynq (arm-v7 / cortex-a9)

I am trying to add multiprocessor support for an embedded operating system (DNA-OS) on the Zynq platform in the ZedBoard.
The OS is actually flawlessly functional with CPU_0 alone. The OS architecture requires the implementation of a cpu_send_ipi function in order to activate multiprocessing support: basically, this function would interrupt a processor and hand it a new thread to process.
I looked for an IPI register in the ug585 (Technical Reference Manual for Zynq) but couldn't find any.
I tried digging further in the Cortex-A9 spec for an IPI register, and found out that software generated interrupts could be used as IPI.
After adding software interrupt support to my OS, the problem is that CPU_0 can interrupt itself, but cannot interrupt CPU_1!
PS: for my OS to handle SGIs, I used the register spec from the ug585 on page 1486.
So is there any other special configuration needed to permit the CPUs to interrupt each other, or any other way to implement IPIs?
Regards,
Your reference documentation describes a form of the GIC (generic interrupt controller). The Cortex-A9 MP cores include an integrated GIC. Each CPU includes an interrupt interface, and there is also a system-wide distributor. In order to receive the IPI (also known as an SGI, or software-generated interrupt), you need to enable the CPU interface to receive SGI interrupts on the 2nd CPU. This entails several steps:
1. Configuring the GIC CPU interface registers on CPU2.
2. Setting the CP15 vector table base for CPU2.
3. Enabling IRQs in the CPSR (clearing the I-bit) on CPU2.
4. Possibly setting up some banked PPI distributor registers. note1
Note1: While most distributor registers are system global, some are banked per CPU as well. For instance, see section 3.3.8, PPI Status Register, in the Cortex-A9 MPCore TRM. I don't see any needed from a cursory investigation, but I would not rule it out.
Test that an unused SPI (shared peripheral interrupt) works by handling its vector on CPU2 after setting the GIC distributor GICD_ISPENDR register from CPU1. This should verify that you have steps 2 and 3 covered. You may also need to set the interrupt type/group to ensure that they are delivered as IRQs and not FIQs, especially if you have security support. You need to use the GICD_ITARGETSR register to include CPU2.
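Once the receiving CPU interface is set up, sending the SGI itself is a single write to the distributor's software-generated interrupt register (GICD_SGIR, called ICDSGIR in ug585). A minimal sketch, assuming the Zynq-7000 private-peripheral map from ug585 (distributor at 0xF8F01000) and the GICv1 register layout:

```c
#include <stdint.h>

/* Zynq-7000: GIC distributor base in the MPCore private peripheral region. */
#define GIC_DIST_BASE   0xF8F01000u
#define GICD_SGIR       (*(volatile uint32_t *)(GIC_DIST_BASE + 0xF00u))

/* Send software-generated interrupt sgi_id (0-15) to the CPUs in cpu_mask
 * (bit 0 = CPU0, bit 1 = CPU1). Target-list filter 0b00 means "use the
 * CPU target list field". */
static void cpu_send_ipi(uint32_t cpu_mask, uint32_t sgi_id)
{
    __asm__ volatile ("dsb sy" ::: "memory");  /* make prior writes visible first */
    GICD_SGIR = (0u << 24)                     /* TargetListFilter = use list     */
              | ((cpu_mask & 0xFFu) << 16)     /* CPUTargetList                   */
              | (sgi_id & 0xFu);               /* SGI interrupt ID 0-15           */
}

/* Example: kick CPU1 with SGI 1 so it picks up a newly queued thread. */
void wake_cpu1(void)
{
    cpu_send_ipi(1u << 1, 1u);
}
```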
GIC reference list
ARM Generic GIC document - registration needed, GICv1 (ignore GICv2 info).
ARM Cortex-A9 MPcore TRM - chapter 3, for specific info.
PL390 TRM - it is not spelled out anywhere, but I think this is the integrated GIC. It may be worth looking at if you use the more esoteric features.
Appendix B of the Generic GIC manual is especially useful; for some reason, ARM likes to keep changing the register names in each and every document it publishes.
