If I have a memory-mapped I/O device and I want to write to one of its registers located at address 0x16D34, then 0x16D34 is actually a virtual address: the CPU will translate it to a physical address first and then write the data to that physical address.
But what about port-mapped I/O devices (for example, a serial port)? If I want to write to a serial port register located at address 0x3F8, is 0x3F8 a physical address or a virtual address?
Edit: I am on x86 architecture.
Port-mapped I/O on x86/x86-64 (most other modern architectures don't even support it) happens in an entirely separate address space. This address space is not subject to memory mapping, so there are no virtual port addresses, only physical ones. Special in and out instructions must be used to perform port I/O; simple memory access (e.g. with mov) can't reach this separate address space. Access protection based on privilege level is possible; most modern OSes prevent user space processes from accessing I/O ports by default.
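For illustration, this is roughly what the wrappers around those instructions look like in C with GCC-style inline assembly (a sketch only; it has to run in ring 0, or in user space with I/O privilege explicitly granted by the OS):

    #include <stdint.h>

    /* Port I/O must go through the dedicated in/out instructions; a plain
     * pointer dereference (i.e. a mov) cannot reach the I/O address space. */
    static inline void outb(uint16_t port, uint8_t value)
    {
        __asm__ volatile ("outb %0, %1" : : "a"(value), "Nd"(port));
    }

    static inline uint8_t inb(uint16_t port)
    {
        uint8_t value;
        __asm__ volatile ("inb %1, %0" : "=a"(value) : "Nd"(port));
        return value;
    }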
For details, you can for example check the chapter "INPUT/OUTPUT" of Intel's "Intel® 64 and IA-32 Architectures Software Developer's Manual, Vol. 1" (chapter 18 as of this writing).
Note that in the early days of x86, port addresses were hardwired in each device, including ISA add-in cards. If you were lucky, the card had a set of jumpers for selecting one of a limited set of possible port ranges for the device, in order to avoid range clashes between devices. Later, Plug & Play was introduced to make the selection dynamically during system boot. PCI further refined this, so that I/O BARs can pretty much be mapped anywhere within the 0x0000-0xffff address space by the operating system and/or firmware. Port-mapped I/O is now strongly discouraged when designing new hardware due to its many inherent limitations.
It seems your question is really about the difference between memory-mapped I/O and port-mapped I/O. There are normally two methods for a processor to connect to external devices: memory-mapped I/O and port-mapped I/O.
Memory-mapped I/O
Memory-mapped I/O uses the same address space to address both memory and I/O devices. So when an address is accessed by the CPU, it may refer to a portion of physical RAM, but it can also refer to memory of the I/O device (Based on Memory-mapped I/O on Wiki).
The value 0x16D34 in your first example would be a virtual address and would be mapped to a physical address. The I/O device is assigned a range in that same physical address space, which is what allows the CPU to access it.
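As a rough sketch of that path in a Linux driver (the physical address and register offset below are placeholders, not values taken from the question): the driver maps the device's physical register range into kernel virtual address space, and from then on ordinary memory accesses, translated by the MMU, reach the device.

    #include <linux/io.h>
    #include <linux/errno.h>

    #define DEV_REGS_PHYS 0x10000000UL   /* placeholder physical base of the device registers */
    #define DEV_REGS_SIZE 0x1000

    static void __iomem *regs;

    static int example_mmio_write(void)
    {
        /* Create a virtual mapping for the MMIO region; later accesses through
         * this pointer are translated back to DEV_REGS_PHYS by the MMU. */
        regs = ioremap(DEV_REGS_PHYS, DEV_REGS_SIZE);
        if (!regs)
            return -ENOMEM;

        writel(0x1, regs + 0x34);   /* register offset 0x34 is made up for illustration */
        return 0;
    }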
Port-mapped I/O
Port-mapped I/O uses a separate, dedicated address space and is accessed via a dedicated set of microprocessor instructions. The 0x3F8 in your second example is an address in that separate I/O address space, dedicated to I/O devices; it is not part of the address space shared between memory and I/O devices that we described above for memory-mapped I/O. You can find more detail in Memory-mapped IO vs Port-mapped IO.
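To make the contrast concrete, here is a small user-space sketch for Linux/x86 using the legacy COM1 base 0x3F8 from your example; the process has to ask the kernel for port access first (ioperm), precisely because ports live in that separate, privileged address space:

    /* Sketch only: user-space port I/O on Linux/x86 (needs root / CAP_SYS_RAWIO). */
    #include <stdio.h>
    #include <sys/io.h>

    int main(void)
    {
        /* Request access to the 8 ports starting at 0x3F8 (COM1). */
        if (ioperm(0x3F8, 8, 1) != 0) {
            perror("ioperm");
            return 1;
        }

        outb('A', 0x3F8);                    /* write a byte to the UART data register */
        unsigned char lsr = inb(0x3F8 + 5);  /* read the line status register */
        printf("LSR = 0x%02x\n", lsr);
        return 0;
    }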
Related
I would like to understand how MMIO works on the ARM architecture.
I realized that ARM provides a 1:1 mapping from physical addresses to specific peripherals.
For example, to manage GPIOx on ARM, say on a Raspberry Pi, the processor accesses specific physical addresses (which seem to be preconfigured by the manufacturer?) without configuring any registers beforehand.
I thought there would be some specific BAR-like register that arbitrates read/write requests to specific physical addresses belonging to peripherals. However, when I checked the spec for the BCM2835 (Raspberry Pi 3), I saw that the physical addresses are translated by another MMU called the VC/ARM MMU. Is it a common design in the ARM architecture to have another MMU that translates physical addresses to bus addresses?
Also, I was wondering how the SMMU (the IOMMU in x86 terms) fits into this picture. I found one article saying that the VC/ARM MMU is an example of an SMMU, but I think that is not true? When I checked some monitor code and kernel driver implementations (not Raspberry Pi), it seems that the SMMU itself is also mapped at specific physical addresses, and the monitor/kernel uses those addresses to initialize and communicate with the SMMU. If the ARM architecture used the VC/ARM MMU as an SMMU, how could the physical address mapping for the SMMU itself be accessed in order to initialize the SMMU?
Lastly, I thought that all peripherals are managed by the SMMU. If some peripherals are always mapped to fixed physical addresses, what is the role of the SMMU? Why do some peripherals communicate with the PE (CPU) through fixed physical addresses, while others go through the SMMU? How exactly do peripherals and PEs (processors) communicate in the ARM architecture?
I am just answering your questions. There are many things wrong with some assumptions you have, which Old Timer tries to clarify.
Is it a common design in the ARM architecture to have another MMU that translates physical addresses to bus addresses?
No. Broadcom has an architecture license. They designed their own HDL code for the ARM CPU, and system details can differ. In fact, even with a direct license (using ARM's HDL), it is quite common for systems to differ from vendor to vendor.
If the ARM architecture used the VC/ARM MMU as an SMMU, how could the physical address mapping for the SMMU itself be accessed in order to initialize the SMMU?
If some peripherals are always mapped to fixed physical addresses, what is the role of the SMMU? Why do some peripherals communicate with the PE (CPU) through fixed physical addresses, while others go through the SMMU? How exactly do peripherals and PEs (processors) communicate in the ARM architecture?
The typical solution to this is TrustZone. Here, each access is classified as either 'secure' or 'normal'. The master (CPU) tags the access, and either the peripheral or a bus access controller permits or prohibits access to the peripheral. These mappings are best set statically at boot time and locked.
By defining permitted use cases, the peripheral/master access patterns can be defined for the system. The complication is 'dynamic' peripherals and masters. The ARM TrustZone CPU is dynamic. A 'world switch' changes the CPU access and there are attacks on the communication interface between 'normal' and 'secure' worlds.
The ARM AXI/AHB buses were originally designed for embedded devices. PCs, in contrast, have dynamic buses (ISA -> EISA -> PCMCIA -> PCI, etc.), and those addresses are typically dynamic. Note, however, that this bus structure comes at a cost. So part of your question is like asking why ARM isn't just like an x86 PC: they had different goals and they are different. You can't put a new graphics card into your cell phone.
Reference:
Handling ARM Trustzones
Explanation of arm Bus architecture
ARM Differences from PC
Note: In my opinion, the modular nature of the PC system was part of why the PC dominated Apple, Commodore, Atari, etc. (the other aspect being piracy), contrary to others who think the answer is Microsoft. The hardware matters.
Wikipedia says:
To address a PCI device, it must be enabled by being mapped into the system's I/O port address space or memory-mapped address space. The system's firmware, device drivers or the operating system program the Base Address Registers (commonly called BARs) to inform the device of its address mapping by writing configuration commands to the PCI controller.
Does this mean that a PCI device gets initialized when an address is written to the BAR? I'm asking because I'm trying to initialize the Bochs VGA card on QEMU AArch64 from bare metal. Thanks!
Writing to the BAR simply tells the device what address range it should respond to. (It doesn't even enable the device to respond to the address; for that you need to set MSE [memory space enable].) There are many steps typically needed to initialize a device. Some of the steps are common for different PCI devices and others are completely device specific.
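As a very rough bare-metal sketch of those first two steps, assuming a memory-mapped ECAM configuration space (the ECAM base below is a placeholder you would take from your platform's memory map, and a real driver would normally size the BAR by writing all-ones and reading it back before assigning an address):

    #include <stdint.h>

    #define ECAM_BASE 0x40000000UL   /* placeholder: platform-specific PCIe ECAM base */

    /* ECAM layout: one 4KB config space per function, indexed by bus/device/function. */
    static volatile void *pci_cfg(uint32_t bus, uint32_t dev, uint32_t fn, uint32_t off)
    {
        return (volatile void *)(ECAM_BASE + ((uintptr_t)bus << 20) +
                                 ((uintptr_t)dev << 15) + ((uintptr_t)fn << 12) + off);
    }

    static void pci_enable_device(uint32_t bus, uint32_t dev, uint32_t fn, uint32_t mmio_base)
    {
        /* Tell the device what address range it should respond to (BAR0, offset 0x10). */
        *(volatile uint32_t *)pci_cfg(bus, dev, fn, 0x10) = mmio_base;

        /* Set Memory Space Enable (bit 1) in the 16-bit command register at offset 0x04;
         * bit 2 (Bus Master Enable) would also be needed if the device does DMA. */
        *(volatile uint16_t *)pci_cfg(bus, dev, fn, 0x04) |= (uint16_t)(1u << 1);
    }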
I'm developing a new Kernel-Mode driver, that should run on Windows 10 (64-bit).
This driver should allocate 48GB of contiguous physical memory and map it (its base address) to a virtual address in the user space of the Windows application that will use it. The system actually has 64GB of RAM installed, so it might be necessary to dedicate a segment of memory for this use, perhaps by changing a boot entry setting.
In addition, the driver should somehow reveal this base address to an FPGA-based device located in a PCIe slot. The purpose is to use this 48GB of contiguous, physically allocated memory as a DMA (Direct Memory Access) buffer. Namely, the FPGA-based device will generate data and write it to the appropriate location in the buffer. The host software will read from that location in a cyclic fashion; that is, the FPGA will overwrite the data in the buffer, and the host will try to keep pace and read the data before it is overwritten.
Please note that this question only deals with the host side (the driver), and not with the FPGA side.
So, basically my questions are:
How do I make such an allocation (as described above)?
How do I map the base address from the virtual address in the Kernel-Space, to the appropriate virtual address in the User-Space?
How do I reveal that base address to the FPGA (located on the PCIe slot), so it will know where to perform its write operations?
What other callback functions should this driver implement, in terms of event handling?
Thanks a lot!
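A rough sketch of the kind of WDK calls such a driver might use, assuming MmAllocateContiguousMemorySpecifyCache for the contiguous allocation and an MDL mapped into the requesting process with MmMapLockedPagesSpecifyCache; these are real WDK routines, but the overall approach is an assumption on my part rather than a confirmed answer, and a single MDL cannot describe anywhere near 48GB, so a real driver would have to split the buffer into chunks:

    /* Rough sketch only, not a complete or tested driver. Error paths, cleanup,
     * IOCTL plumbing and the 48GB case are simplified away; a single MDL is
     * limited to well under 4GB, so a huge buffer needs multiple MDLs/mappings. */
    #include <ntddk.h>

    static PVOID            g_KernelVa;   /* kernel virtual address of the buffer   */
    static PHYSICAL_ADDRESS g_PhysBase;   /* physical base to program into the FPGA */
    static PMDL             g_Mdl;
    static PVOID            g_UserVa;     /* user-mode view, valid in the caller    */

    NTSTATUS AllocateAndMapBuffer(SIZE_T length)  /* call at PASSIVE_LEVEL, in the app's context */
    {
        PHYSICAL_ADDRESS low, high, boundary;
        low.QuadPart = 0;
        high.QuadPart = MAXLONGLONG;
        boundary.QuadPart = 0;

        /* Physically contiguous allocation; for tens of GB this realistically only
         * succeeds if the memory has been kept free since boot. */
        g_KernelVa = MmAllocateContiguousMemorySpecifyCache(length, low, high,
                                                            boundary, MmCached);
        if (g_KernelVa == NULL)
            return STATUS_INSUFFICIENT_RESOURCES;

        /* The address the FPGA would be given (assuming no IOMMU remapping). */
        g_PhysBase = MmGetPhysicalAddress(g_KernelVa);

        /* Describe the buffer with an MDL and map it into the requesting process. */
        g_Mdl = IoAllocateMdl(g_KernelVa, (ULONG)length, FALSE, FALSE, NULL);
        if (g_Mdl == NULL)
            return STATUS_INSUFFICIENT_RESOURCES;
        MmBuildMdlForNonPagedPool(g_Mdl);

        __try {
            g_UserVa = MmMapLockedPagesSpecifyCache(g_Mdl, UserMode, MmCached,
                                                    NULL, FALSE, NormalPagePriority);
        } __except (EXCEPTION_EXECUTE_HANDLER) {
            g_UserVa = NULL;
        }
        if (g_UserVa == NULL)
            return STATUS_INSUFFICIENT_RESOURCES;

        /* g_PhysBase.QuadPart would then be written, through the FPGA's BAR mapping,
         * into whatever device register the FPGA uses as its DMA base address. */
        return STATUS_SUCCESS;
    }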
I've gathered some knowledge about several components (both software and hardware) involved in general DMA transactions on ARM-based boards, but I don't understand how it all fits together; I haven't found a full, coherent description of this.
I'll write down the high-level knowledge I already have, and I hope someone can correct me where I'm wrong and fill in the missing parts so the whole picture becomes clear. My description starts with the user-space software and drills down to the hardware components. The parts I don't understand are in italic-bold format.
The user-mode application requests a read/write from some device, i.e. it performs an I/O operation.
The operating system receives the request and hands it to the appropriate driver (every OS has its own mechanism for doing this; I don't need further detail here, but if you want to share insights you are welcome to).
The driver in charge of handling the I/O request has to know the address to which the device is mapped (since I'm interested in ARM-based boards, AFAIK there is only memory-mapped I/O and no port I/O). In most cases (if we consider smartphone-like boards) there is a Linux kernel that parses the device addresses from the device tree, which is provided by the bootloader at boot time (the modern approach), or Linux is precompiled for the specific model family and board with the device addresses hardcoded in its source code (the older and obsolete? approach). In some cases (this happens a lot in smartphones), parts of the drivers are precompiled and simply packaged into the kernel, i.e. their source is closed, so the addresses corresponding to those devices are unknown. Is that correct?
Given that the driver knows the addresses of the relevant registers of the device it wants to communicate with, it allocates a buffer (usually in kernel space) into which the device will write its data (with the help of DMA). The driver needs to inform the device of the location of that buffer, but the addresses the device works with (to access memory) are different from the addresses the driver (CPU) works with; hence, the driver needs to give the device the 'bus address' of the buffer it has just allocated. How does the driver inform the device of that address? How common is it to use an IOMMU? When using an IOMMU, is there one hardware component that manages addressing, or one per device?
Then the driver commands the device to do its job (by manipulating its registers), and the device transfers the output data directly to the allocated buffer in memory. Here I'm a bit confused about the relationship device-driver : bus : bus-controller : actual-device. Take, for example, some imaginary device that communicates using the I2C protocol; the SoC specifies an I2C bus interface - what is this actually? Does the I2C bus have some kind of bus controller? Does the CPU communicate with the I2C bus interface or directly with the device (i.e. is the I2C bus interface seamless)? I guess someone with some experience in device drivers could answer this easily.
The device populates a DMA channel. Since the device is not connected directly to the memory but rather through some bus to the DMA controller (which masters the bus), it interacts with the DMA controller to transfer the required data to the allocated buffer in memory. When the board vendor uses ARM IP cores and bus specifications, this step involves transactions over a bus from the AMBA spec (i.e. AHB/multi-AHB/AXI), with some protocol between the device and a DMAC on top of it. I would like to know more about this step: what actually happens? There are many DMA controller specifications from ARM; which ones are popular? Which are obsolete?
When the device is done, it sends an interrupt, which travels to the OS through the interrupt controller, and the OS's interrupt handler directs it to the appropriate driver, which now knows that the DMA transfer is complete.
You've slightly conflated two things here - there are some devices (e.g. UARTs, MMC controllers, audio controllers, typically lower-bandwidth devices) which rely on an external DMA controller ("DMA engine" in Linux terminology), but many devices are simply bus masters in their own right and perform their own DMA directly (e.g. GPUs, USB host controllers, and of course the DMA controllers themselves). The former involves a bunch of extra complexity with the CPU programming the DMA controller, so I'm going to ignore it and just consider straightforward bus-master DMA.
In a typical ARM SoC, the CPU clusters and other master peripherals, and the memory controller and other slave peripherals, are all connected together with various AMBA interconnects, forming a single "bus" (generally all mapped to the "platform bus" in Linux), over which masters address slaves according to the address maps of the interconnect. You can safely assume that the device drivers know (whether by device tree or hardcoded) where devices appear in the CPU's physical address map, because otherwise they'd be useless.
On simpler systems, there is a single address map, so the physical addresses used by the CPU to address RAM and peripherals can be freely shared with other masters as DMA addresses. Other systems are more complex - one of the more well-known is the Raspberry Pi's BCM2835, in which the CPU and GPU have different address maps; e.g. the interconnect is hard-wired such that where the GPU sees peripherals at "bus address" 0x7e000000, the CPU sees them at "physical address" 0x20000000. Furthermore, in LPAE systems with 40-bit physical addresses, the interconnect might need to provide different views to different masters - e.g. in the TI Keystone 2 SoCs, all the DRAM is above the 32-bit boundary from the CPUs' point of view, so the 32-bit DMA masters would be useless if the interconnect didn't show them a different address map. For Linux, check out the dma-ranges device tree property for how such CPU→bus translations are described. The CPU must take these translations into account when telling a master to access a particular RAM or peripheral address; Linux drivers should be using the DMA mapping API which provides appropriately-translated DMA addresses.
IOMMUs provide more flexibility than fixed interconnect offsets - typically, addresses can be remapped dynamically, and for system integrity masters can be prevented from accessing any addresses other than those mapped for DMA at any given time. Furthermore, in an LPAE or AArch64 system with more than 4GB of RAM, an IOMMU becomes necessary if a 32-bit peripheral needs to be able to access buffers anywhere in RAM. You'll see IOMMUs on a lot of the current 64-bit systems for the purpose of integrating legacy 32-bit devices, but they are also increasingly popular for the purpose of device virtualisation.
IOMMU topology depends on the system and the IOMMUs in use - the system I'm currently working with has 7 separate ARM MMU-401/400 devices in front of individual bus-master peripherals; the ARM MMU-500 on the other hand can be implemented as a single system-wide device with a separate TLB for each master; other vendors have their own designs. Either way, from a Linux perspective, most device drivers should be using the aforementioned DMA mapping API to allocate and prepare physical buffers for DMA, which will also automatically set up the appropriate IOMMU mappings if the device is attached to one. That way, individual device drivers need not care about the presence of an IOMMU or not. Other drivers (typically GPU drivers) however, depend on an IOMMU and want complete control, so manage the mappings directly via the IOMMU API. Essentially, the IOMMU's page tables are set up to map certain ranges of physical addresses* to ranges of I/O virtual addresses, those IOVAs are given to the device as DMA (i.e. bus) addresses, and the IOMMU translates the IOVAs back to physical addresses as the device accesses them. Once the DMA operation is finished, the driver typically removes the IOMMU mapping, both to free up IOVA space and so that the device no longer has access to RAM.
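For reference, a minimal sketch of what the DMA mapping API looks like from an individual driver's point of view (the struct device pointer, buffer and length are placeholders); the same calls work whether or not an IOMMU sits in front of the device:

    #include <linux/dma-mapping.h>
    #include <linux/gfp.h>

    /* Streaming mapping: take an existing CPU buffer, get a DMA (bus/IOVA) address
     * for the device. Behind the scenes this applies any fixed interconnect offset
     * and/or sets up an IOMMU mapping. */
    static int example_streaming_dma(struct device *dev, void *buf, size_t len)
    {
        dma_addr_t dma_handle = dma_map_single(dev, buf, len, DMA_FROM_DEVICE);

        if (dma_mapping_error(dev, dma_handle))
            return -EIO;

        /* ... program dma_handle into the device and start the transfer ... */

        dma_unmap_single(dev, dma_handle, len, DMA_FROM_DEVICE);  /* when it's done */
        return 0;
    }

    /* Coherent mapping: allocate a buffer visible to both CPU and device. */
    static int example_coherent_dma(struct device *dev, size_t len)
    {
        dma_addr_t dma_handle;
        void *cpu_addr = dma_alloc_coherent(dev, len, &dma_handle, GFP_KERNEL);

        if (!cpu_addr)
            return -ENOMEM;
        /* cpu_addr is for the driver, dma_handle is for the device. */
        dma_free_coherent(dev, len, cpu_addr, dma_handle);
        return 0;
    }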
Note that in some cases the DMA transfer is cyclic and never "finishes". With something like a display controller, the CPU might just map a buffer for DMA, pass that address to the controller and trigger it to start, and it will then continuously perform DMA reads to scan out whatever the CPU writes to that buffer until it is told to stop.
Other peripheral buses beyond the SoC interconnect, like I2C/SPI/USB/etc. work as you suspect - there is a bus controller (which is itself a device on the AMBA bus, so any of the above might apply to it) with its own device driver. In a crude generalisation, the CPU doesn't communicate directly with devices on the external bus - where a driver for an AMBA device says "write X to register Y", that just happens by the CPU performing a store to a memory-mapped address; where an I2C device driver says "write X to register Y", the OS usually has some bus abstraction layer which the bus controller driver implements, whereby the CPU programs the controller with a command saying "write X to register Y on device Z", the bus controller hardware will go off and do that, then notify the OS of the peripheral device's response via an interrupt or some other means.
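As a concrete, Linux-flavoured illustration of that split: an I2C client driver's "write X to register Y" is just a call into the I2C core; the adapter driver for the SoC's I2C controller then does the memory-mapped register programming and the hardware performs the bus transaction. The register number and value below are placeholders.

    #include <linux/i2c.h>

    /* "Write X to register Y on device Z": the client driver never touches the
     * bus controller's registers itself; the I2C core hands the request to the
     * adapter (controller) driver, which programs the controller over MMIO. */
    static int example_write_reg(struct i2c_client *client)
    {
        return i2c_smbus_write_byte_data(client, 0x10 /* register */, 0xAB /* value */);
    }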
* technically, the IOMMU itself, being more or less "just another device", could have a different address map in the interconnect as previously described, but I would doubt the sanity of anyone actually building a system like that.
I know there are two types of addresses: virtual and physical. Printing the address of an integer variable will print its virtual address. Is there a function that makes it possible to print the physical address of that variable?
Does virtual memory mean a section on hard disk that is treated as RAM by the OS?
No, there is no such (portable) function. In modern operating systems implementing memory protection, user-space (as opposed to kernel-space, i.e. parts of the OS) cannot access physical addresses directly, that's simply not allowed. So there would be little point.
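That said, on Linux specifically a sufficiently privileged process can still look the translation up through /proc/self/pagemap; a sketch (the PFN field is only readable as root on recent kernels, and the result is informational at best):

    /* Sketch: resolve a virtual address to a physical one via /proc/self/pagemap
     * (Linux-specific; needs root/CAP_SYS_ADMIN to see the PFN on modern kernels). */
    #include <stdio.h>
    #include <stdint.h>
    #include <unistd.h>

    int main(void)
    {
        int x = 42;
        uintptr_t vaddr = (uintptr_t)&x;
        long page_size = sysconf(_SC_PAGESIZE);

        FILE *f = fopen("/proc/self/pagemap", "rb");
        if (!f) { perror("pagemap"); return 1; }

        /* One 64-bit entry per virtual page; bits 0-54 hold the page frame number,
         * bit 63 says whether the page is present in RAM. */
        uint64_t entry;
        fseek(f, (long)((vaddr / page_size) * sizeof(entry)), SEEK_SET);
        if (fread(&entry, sizeof(entry), 1, f) != 1) { perror("read"); return 1; }
        fclose(f);

        if (entry & (1ULL << 63))
            printf("physical address: 0x%llx\n",
                   (unsigned long long)((entry & ((1ULL << 55) - 1)) * page_size
                                        + (vaddr % page_size)));
        else
            printf("page not present\n");
        return 0;
    }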
No, virtual memory does not need to involve hard disks, that's "swapping" or "paging". You can implement that once you have virtual memory, since it gives the OS a chance to intervene and manage which pages are kept in physical memory, thus making it possible to "page out" memory to other storage media.
For a very in-depth look at how the Linux kernel manages memory, this blog post is fantastic.
This is a complicated subject.
Physical memory addresses point to a real location in a hardware memory device, whether it be your system memory, graphics card memory, or network card buffer.
Virtual memory is the memory model presented to user-mode processes. Most devices on the system have some physical address range assigned to them, which the processor can write to. When those physical addresses are mapped to virtual addresses, the OS recognises that read/write requests to those addresses need to be serviced by a particular device and delegates the request to it.