I was writing some drivers and found myself stuck on the IRQ pins. My kernel uses the IOAPIC, and I don't know how this interrupt mechanism (IRQ pins) works, how to get them, or how to use them.
Can anyone give a detailed answer on how to use them to make interrupts work?
A PCI device can potentially use up to 4 interrupt pins: INTA#, INTB#, INTC# and INTD#. These signals are wired to the interrupt controller. They are level-sensitive and may be shared with other PCI devices. A device with a single PCI function will typically use INTA#, if it uses an interrupt pin at all. If it has multiple PCI functions, the different functions (up to 8) may use different interrupt pins or share the same one.
The read-only "Interrupt Pin" register at offset 3Dh in the PCI function's Type 00h configuration header says which interrupt pin the PCI function is using: 0 = none, 1 = INTA#, 2 = INTB#, 3 = INTC#, 4 = INTD#.
The read-write "Interrupt Line" register at offset 3Ch in the Type 00h configuration header defines which IRQ number has been assigned to the PCI function by the system firmware (BIOS) or operating system. This IRQ number may be shared with other devices in the system.
Drivers don't usually care much about the "Interrupt Pin" register. They are more interested in the "Interrupt Line" value set up by the firmware or operating system. Operating systems usually provide this information in a friendlier way than having the driver retrieve it directly from PCI configuration space.
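For instance, in a Linux PCI driver the two registers can be read like this (a minimal sketch; the probe function name is illustrative, and in practice you would usually just use pdev->irq rather than read the configuration space yourself):
#include <linux/pci.h>

static int my_probe(struct pci_dev *pdev, const struct pci_device_id *id)
{
    u8 pin, line;

    pci_read_config_byte(pdev, PCI_INTERRUPT_PIN, &pin);   /* offset 0x3D */
    pci_read_config_byte(pdev, PCI_INTERRUPT_LINE, &line); /* offset 0x3C */

    dev_info(&pdev->dev, "pin INT%c#, line %u (kernel IRQ %u)\n",
             pin ? 'A' + pin - 1 : '?', line, pdev->irq);
    return 0;
}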
I want to understand how the registers of various peripherals/IPs are mapped into the ARM processor's memory map in a microcontroller.
Say I have a CONTROL register for a UART block. When I do a write access to address 0x40005008, this register gets configured. Where does this mapping happen: within the peripheral block code itself, or while integrating the peripheral into the SoC/microcontroller?
For a simple peripheral like a UART it's straightforward - taking the ARM PL011 UART as an example (since I know where its documentation lives):
The programmer's model defines a bunch of registers at word-aligned offsets in a 4k block.
In terms of the actual hardware, we see the bus interface matches what the programmer's model suggests - PADDR[11:2] means only bits 11:2 of the address are connected, meaning it can only understand word-aligned addresses from 0x000 to 0xffc (similarly, note that only 16 bits of read/write data are connected, since no register is wider than that).
The memory-mapping between the UART's 12-bit address and the full 32-bit address that the CPU core spits out happens in the interconnect hardware between them. At design time, the interconnect address map will be configured to say "this 4k region at 0x40005000 is assigned to the UART0 block", etc., and the resulting bus circuitry will be generated for that.
More complex things, such as DMA-capable devices, typically have separate interfaces for configuration and data access, so the registers can still be mapped in a small relocatable block on a low-speed peripheral bus, much like the UART.
The most significant bits are defined by your ASIC design; the least significant bits are defined by the IP design. Your IP has several registers. The number of registers and their order are defined by the IP design; here, your register is at offset 8. Then, when designing the ASIC, the peripherals are connected to the memory bus, and the way they are connected defines their base address. Your UART is at 0x40005000. You may have another instance of the same IP at, for instance, 0x40006000. The two UARTs would be strictly identical, and you would be able to access the CONTROL register of your second UART at address 0x40006008.
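A minimal bare-metal sketch of that idea (the base addresses and the CONTROL offset are the ones from the example above; the names are illustrative):
#include <stdint.h>

#define UART0_BASE  0x40005000u   /* base address chosen by the SoC/interconnect design */
#define UART1_BASE  0x40006000u   /* second instance of the same IP */
#define UART_CTRL   0x08u         /* register offset chosen by the IP design */

static inline void uart_write_ctrl(uint32_t base, uint32_t value)
{
    *(volatile uint32_t *)(uintptr_t)(base + UART_CTRL) = value;
}

/* uart_write_ctrl(UART0_BASE, cfg) writes address 0x40005008,
   uart_write_ctrl(UART1_BASE, cfg) writes address 0x40006008. */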
I wonder how the major number is allocated for a platform device driver.
For example, in the driver code, I don't see any function calls like
alloc_chrdev_region()
or
register_chrdev_region()
Could somebody please help me understand this?
Thank you.
The kernel creates a great many devices attached to various virtual buses (which may or may not represent a physical one). Only some of those devices can be meaningfully accessed directly from user space, and only a subset of those rely on the "device node" interface to do so (as plenty of other options exist in modern kernels). If this particular interface is not used by a driver, then there's no need whatsoever to allocate device node numbers.
Inside the kernel, devices are located by their affiliation with particular buses (using internal device names and bus IDs). For example, the mcspi driver registers as a "device" on the "platform bus" and as a "bus master" on the "spi bus". Upon seeing that a bus master has registered, the spi subsystem will trigger a "bus rescan" on the newly connected bus.
The spidev driver is rigged in such a way as to always "match" an imaginary device present on every spi bus, so it will get instantiated for every "bus master" registration. It will create the user space device node which can be used for direct communication with its "bus master" (spi bus controller, mcspi in this particular case).
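As a rough illustration (the names here are made up, and the exact callback signatures vary a little between kernel versions), a controller driver like that registers on the platform bus roughly like this; note there is no alloc_chrdev_region()/register_chrdev_region() anywhere, because no device node is created at this level:
#include <linux/module.h>
#include <linux/platform_device.h>

static int my_spi_probe(struct platform_device *pdev)
{
    /* map the registers, register as an SPI bus master, etc. */
    return 0;
}

static struct platform_driver my_spi_driver = {
    .probe  = my_spi_probe,
    .driver = {
        .name = "my-spi-controller",   /* matched against the platform device name */
    },
};
module_platform_driver(my_spi_driver);

MODULE_LICENSE("GPL");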
The controller itself doesn't need to be exposed. Hence, no device numbers.
On the other hand, SPI devices do require a MAJOR/MINOR number, defined in spidev.c; that is where the device is registered. At the top of the same file, there's a macro for the major number:
#define SPIDEV_MAJOR   153   /* assigned */
#define N_SPI_MINORS   32    /* ... up to 256 */
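For contrast, claiming that fixed major looks roughly like this (a simplified sketch with illustrative names; the real spidev.c also registers a class and creates the per-bus-master device nodes):
#include <linux/fs.h>
#include <linux/module.h>

#define SPIDEV_MAJOR 153   /* fixed, officially assigned major number */

static const struct file_operations spidev_fops = {
    .owner = THIS_MODULE,
    /* .open, .read, .write, ... */
};

static int __init my_spidev_init(void)
{
    /* claims major 153, with all its minors, under the name "spi" */
    return register_chrdev(SPIDEV_MAJOR, "spi", &spidev_fops);
}
module_init(my_spidev_init);

MODULE_LICENSE("GPL");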
How do I map register addresses, specifically UART registers, into the kernel when writing a device driver for the UART?
I have gone through omap-serial.c, but I did not find the mapping of the registers defined in it.
Is it different from the mapping in a standalone UART driver?
As a device driver writer, it is your job to read the hardware documentation. The serial port documentation will specify the bits in the control and status registers and provide guidance on how to determine their addresses. Usually that guidance is in a system integrator's document.
Let's say your research determines that the UART's registers are at 0x31080220. Your code would then have:
struct resource *uart_res;  // resource handle
void __iomem *uart;         // pointer to the mapped control/status registers
                            // (needs <linux/ioport.h> and <linux/io.h>)

uart_res = request_mem_region (0x31080220, 4*4, "my_uart"); // reserve 16 bytes
if (!uart_res)
{
    pr_err ("my_uart: unable to reserve memory region\n");
    return -EBUSY;
}

uart = ioremap (0x31080220, 4*4);
if (!uart)
{
    release_mem_region (0x31080220, 4*4);
    pr_err ("my_uart: unable to map registers\n");
    return -ENOMEM;
}
Then you can use the uart pointer to access the registers.
status = ioread32 (uart + 0);   // read the status register (byte offset 0)
iowrite32 (0xf0f0, uart + 4);   // write 0xf0f0 to the control register (byte offset 4)
Give precise target information (manufacturer, model, and options, just like for an automobile) and someone will help you find the specifics.
The UART mapping in the kernel may be defined as a UART device (not driver) somewhere under kernel/arch/arm/<machine>/ (devices, serial, or similar).
Usually there is no need to do the mapping yourself. When the UART driver probes, it connects to the device and creates a tty character driver. To operate the tty from the kernel, you may add your own line discipline to the tty. Then a user-space program can open the needed ttySX port and attach it to your line discipline, and your code in the kernel will handle communication through the UART port (tty->driver).
I am new to ARM and have some doubts related to IRQ and FIQ. Please help me clarify these.
How many FIQ and IRQ channels does ARM have?
How many handlers can we write for each channel?
Also, if we can register multiple handlers for a single interrupt channel, how does ARM know which handler to run?
The distinction between IRQ and FIQ goes right back to the early days of ARM, when it was designed by Acorn. It was always the case that the IRQ line was attached to an interrupt controller that multiplexed a large number of interrupt sources together, and this is precisely what happens in all modern ARMs.
The rationale behind the FIQ was to provide an extremely low-latency response with maximum priority (it can safely pre-empt the IRQ handler). The comparatively large number of shadow registers facilitates writing handlers that keep their state in CPU registers rather than touching the stack.
The shadow registers are almost the opposite of the set commonly used by the APCS for function calls, so writing handlers in C would cause a push and an eventual pop of up to 8 non-shadowed registers. Having any kind of interrupt demultiplexing wipes out any performance advantage that the FIQ might have given.
All of this means that there is only really any benefit in using the FIQ for very specialised applications where a hard real-time interrupt response is required for one interrupting device, and you're willing to write your handler in assembler. You'll also be left with working out how to synchronise with the rest of the system, some of which may rely on disabling the IRQ to keep data synchronised.
Traditionally the ARM has one interrupt line which you can send to one of two handlers, FIQ or IRQ. FIQ has a larger bank of FIQ-mode-only registers, so you have fewer registers that you need to store on the stack. From there you read the vendor-specific registers, if any, to determine the source of the interrupt and then branch into separate handlers.
More recently there have been ARM architectures with many interrupts (128, 256), each with a separate handler. So asking something generic about ARM is about like asking something generic about x86.
All of this information is easily available in the ARM Architecture Reference Manuals for the different architectures, and the pinout of the core (what the vendor builds its chip around) is documented in the Technical Reference Manuals for the various cores (also very easy to obtain). infocenter.arm.com has the architecture and technical reference manuals as well as the AMBA/AXI documentation (the bus the vendor connects to). Your question is completely answered in those documents.
The ARM processor directly supports only ONE IRQ and ONE FIQ. ARM supports multiple interrupts through a peripheral called an interrupt controller. ARM's standard interrupt controllers are called GICs (Generic Interrupt Controllers).
The GIC has a number of inputs, to which peripherals connect their interrupt lines, and two output lines that connect to IRQ and FIQ. Basically it acts as a MUX. A GIC driver will set up configuration such as interrupt priority, type (IRQ/FIQ), masking, etc.
In traditional ARM systems there is one entry each for IRQ and FIQ in the Exception Vectors. Depending on which line the interrupt fired, IRQ or FIQ handler is called. The interrupt handler queries the GIC (GIC CPU interface registers, to be specific) to get the interrupt number. Based on this interrupt number, corresponding device handler is invoked.
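As a bare-metal sketch of that flow for a GICv2 CPU interface (the base address is purely illustrative, only the IAR/EOIR offsets come from the GICv2 spec; irq_handler here stands for the code reached from the IRQ exception vector, and dispatch_device_handler is a hypothetical per-device dispatcher):
#include <stdint.h>

#define GICC_BASE  0x2C002000u                                   /* platform specific, illustrative */
#define GICC_IAR   (*(volatile uint32_t *)(GICC_BASE + 0x0C))    /* Interrupt Acknowledge Register */
#define GICC_EOIR  (*(volatile uint32_t *)(GICC_BASE + 0x10))    /* End Of Interrupt Register */

static void dispatch_device_handler(uint32_t irq)
{
    /* look up and call the driver handler registered for this interrupt number */
    (void)irq;
}

void irq_handler(void)           /* reached from the IRQ exception vector */
{
    uint32_t iar = GICC_IAR;     /* acknowledge: returns the interrupt ID */
    uint32_t irq = iar & 0x3FF;  /* interrupt number is in bits [9:0] */

    if (irq == 1023)             /* 1023 means spurious: nothing to handle or acknowledge */
        return;

    dispatch_device_handler(irq);
    GICC_EOIR = iar;             /* tell the GIC we are done with this interrupt */
}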
Number of interrupts depends on the specific GIC implementation. So you would have to check the manual for the interrupt controller in your system to get those specifics.
Note: The interrupt handling is slightly different depending on which specific ARM core you are coding for.
Actually, the question is a bit tricky. You must specify in the question which ARM architecture you are working with. The ARMv7-A and ARMv7-R Architecture Reference Manual (ARM ARM) specifies one FIQ and one IRQ, as many have already answered. But ARMv7-M (used in Cortex-M processors) integrates an interrupt controller into the processor, and thus offers one NMI (instead of FIQ) and up to 240 IRQ lines.
For more information: ARMv7-A and ARMv7-R Architecture Reference Manual: http://infocenter.arm.com/help/index.jsp?topic=/com.arm.doc.ddi0406c/index.html
ARMv7-M Architecture Reference Manual: http://infocenter.arm.com/help/index.jsp?topic=/com.arm.doc.ddi0403e.b/index.html
As an example, the Cortex-M4 spec sheet: http://www.arm.com/products/processors/cortex-m/cortex-m4-processor.php
I'm having a hard time trying to understand how interrupts work.
The code below initializes the Programmable Interrupt Controller:
#define PIC0_CTRL 0x20 /* Master PIC control register address. */
#define PIC0_DATA 0x21 /* Master PIC data register address. */
/* Mask all interrupts*/
outb (PIC0_DATA, 0xff);
/* Initialize master. */
outb (PIC0_CTRL, 0x11); /* ICW1: cascade mode, edge triggered, expect ICW4. */
outb (PIC0_DATA, 0x20); /* ICW2: line IR0...7 -> irq 0x20...0x27. */
outb (PIC0_DATA, 0x04); /* ICW3: slave PIC on line IR2. */
outb (PIC0_DATA, 0x01); /* ICW4: 8086 mode, normal EOI, non-buffered. */
/* Unmask all interrupts. */
outb (PIC0_DATA, 0x00);
Can someone explain to me how it works:
- the role of outb (I didn't understand the Linux man page)
- the addresses and their meaning
Another unrelated question: I read that outb and inb are for port-mapped I/O; can we use memory-mapped I/O for doing input/output communication?
Thanks.
outb() writes the byte specified by its second argument to the I/O port specified by its first argument. In this context, a "port" is a means for the CPU to communicate with another chip.
The specific C code that you present relates to the 8259A Programmable Interrupt Controller (PIC).
You can read about the PIC here and here.
If that doesn't provide enough details to understand the commands and the bit masks, you could always refer to the chip's datasheet.
Device specific code is best read in conjunction with the corresponding datasheet. For example, the "8259A Programmable Interrupt Controller" datasheet (http://pdos.csail.mit.edu/6.828/2005/readings/hardware/8259A.pdf) clearly (but concisely) explains almost everything.
However, this datasheet will only explain how the chip can be used (in any system), and won't explain how the chip is used in a specific system (e.g. in "PC compatible" 80x86 systems). For that you need to rely on "implied de facto standards" (as a lot of the features of the PIC chips aren't used on "PC compatible" 80x86 systems, and may not be supported on modern/integrated chipsets).
Normally (for historical reasons) the PIC chip's IRQs are mapped to interrupts in a strange/bad way. For example, IRQ 0 is mapped to interrupt 8 and conflicts with the CPU's double fault exception. The specific code you've posted remaps the PIC chips so that IRQ 0 is mapped to interrupt 0x20 (IRQ 1 to interrupt 0x21, ..., IRQ 15 to interrupt 0x2F). It's something an OS typically does to avoid the conflicts (e.g. so that each interrupt is used for an IRQ or an exception and never both).
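A sketch of that conventional remap for both PICs, using the same outb(port, value) convention and ICW values as the code in the question (pic_remap is a made-up name; the slave PIC sits at the standard PC ports 0xA0/0xA1):
#define PIC0_CTRL 0x20   /* master PIC */
#define PIC0_DATA 0x21
#define PIC1_CTRL 0xa0   /* slave PIC */
#define PIC1_DATA 0xa1

static void pic_remap (void)
{
  outb (PIC0_CTRL, 0x11);  /* ICW1: cascade mode, edge triggered, expect ICW4. */
  outb (PIC0_DATA, 0x20);  /* ICW2: master lines IR0...7 -> interrupts 0x20...0x27. */
  outb (PIC0_DATA, 0x04);  /* ICW3: a slave PIC is attached on line IR2. */
  outb (PIC0_DATA, 0x01);  /* ICW4: 8086 mode, normal EOI. */

  outb (PIC1_CTRL, 0x11);  /* ICW1: cascade mode, edge triggered, expect ICW4. */
  outb (PIC1_DATA, 0x28);  /* ICW2: slave lines IR0...7 (IRQ 8...15) -> interrupts 0x28...0x2F. */
  outb (PIC1_DATA, 0x02);  /* ICW3: this slave's cascade identity is 2. */
  outb (PIC1_DATA, 0x01);  /* ICW4: 8086 mode, normal EOI. */
}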
To understand "outb()", look at the "OUT" instruction in Intel's manuals. It's as if there are two address spaces - one for normal physical addresses and a completely separate one for I/O ports. Normal instructions (indirectly) access normal physical memory, while the I/O port instructions (IN, OUT, INSB/W/D, OUTSB/W/D) access the separate "I/O address space".
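To make that concrete, here is a sketch of the two mechanisms side by side: a hypothetical outb-style wrapper that compiles down to the x86 OUT instruction, and a memory-mapped write that is just an ordinary store to a special address (this also answers the memory-mapped I/O part of the question; the helper names are made up):
#include <stdint.h>

/* port-mapped I/O: uses the separate I/O address space via the OUT instruction
   (same (port, value) argument order as the question's outb) */
static inline void port_write8 (uint16_t port, uint8_t value)
{
    __asm__ volatile ("outb %0, %1" : : "a"(value), "Nd"(port));
}

/* memory-mapped I/O: the device registers live in the normal physical address
   space, so an ordinary volatile store does the job */
static inline void mmio_write32 (uintptr_t addr, uint32_t value)
{
    *(volatile uint32_t *)addr = value;
}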
The traditional 8088/86 had (and has) a memory control signal that is essentially another address bit tied directly to the instruction. That control signal separates the accesses into I/O and memory, creating two separate address spaces - not unlike CS, DS, etc. creating separate memory spaces inside the chip (before hitting the external memory space). Other processor families use what is called memory-mapped I/O.
These days the memory controllers/system are chopped up inside and outside the chip in all different ways, sometimes for example with many control signals that indicate instruction vs data, cache line fills, write-through vs write-back, etc. To save on external circuitry, the memory mapping happens inside the chip, and, for example, dedicated ROM interfaces, separate from RAM, are found at the edge - far more complicated and separate than the I/O space vs memory space of the old 8088/86.
The OUT and IN instructions (and a few family members) select whether you are doing an I/O access or a memory access, and traditionally the interrupt controller was a chip that decoded the bus looking for I/O accesses to the addresses allocated to that device. Decades of backwards compatibility later, and you have the code you are looking at.
If you really want to understand it, you need to find the datasheets for the device that contains the interrupt controller, which is likely combined with a bunch of other logic on a big support chip. Other datasheets may be required as well.