I want to understand how the registers of various peripherals/IPs are mapped to the ARM processor memory map in a microcontroller.
Say I have a CONTROL register for a UART block. When I do a write access to address 0x40005008, this register gets configured. Where does this mapping happen: within the peripheral block code itself, or while integrating the peripheral into the SoC/microcontroller?
For a simple peripheral like a UART it's straightforward - taking the ARM PL011 UART as an example (since I know where its documentation lives):
The programmer's model defines a bunch of registers at word-aligned offsets in a 4k block.
In terms of the actual hardware, we see the bus interface matches what the programmer's model suggests - PADDR[11:2] means only bits 11:2 of the address are connected, meaning it can only understand word-aligned addresses from 0x000 to 0xffc (similarly, note that only 16 bits of read/write data are connected, since no register is wider than that).
The memory-mapping between the UART's 12-bit address and the full 32-bit address that the CPU core spits out happens in the interconnect hardware between them. At design time, the interconnect address map will be configured to say "this 4k region at 0x40005000 is assigned to the UART0 block", etc., and the resulting bus circuitry will be generated for that.
More complex things like e.g. DMA-capable devices typically have separate interfaces for configuration and data access, so the registers can be mapped in a small relocatable block on a low-speed peripheral bus much like the UART.
The most significant bits are defined by your ASIC design; the least significant bits are defined by the IP design. Your IP has several registers. The number of registers and their order are defined by the IP design; here, your register is at offset 8. Then, when designing the ASIC, the peripherals are connected to the memory bus, and the way they are connected defines their base addresses. Your UART is at 0x40005000. You may have another instance of the same IP at (for instance) 0x40006000. The two UARTs would be strictly identical, and you would be able to access the CONTROL register of the second UART at address 0x40006008.
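As a minimal C sketch of that split (the base addresses and the CONTROL offset come from the example above; the macro names, struct-free access style and the "enable" bit are illustrative assumptions, not taken from any real UART IP):

#include <stdint.h>

#define UART0_BASE   0x40005000u   /* chosen by the SoC integrator */
#define UART1_BASE   0x40006000u   /* second instance of the same IP */
#define CONTROL_OFS  0x008u        /* chosen by the IP designer */

#define UART_CONTROL(base)  (*(volatile uint32_t *)((base) + CONTROL_OFS))

void uart_enable(uint32_t base)
{
    /* The same offset works for every instance; only the base differs. */
    UART_CONTROL(base) |= 1u;      /* illustrative "enable" bit */
}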
I was creating some drivers and I found myself stuck on the IRQ pins. My kernel uses the IOAPIC and I don't know how this interrupt mechanism (IRQ pins) works, nor how to get the pins and use them.
Can anyone give a detailed answer on how to use them to make interrupts work?
A PCI device can potentially use up to 4 interrupt pins INTA#, INTB#, INTC# and INTD#. These signals will be wired to the interrupt controller. They are level-sensitive and may be shared by other PCI devices. If it has a single PCI function it will typically use INTA# if it uses one at all. If it has multiple PCI functions, the different PCI functions (up to 8) may use different interrupt pins or share the same one.
The read-only "Interrupt Pin" register at offset 3Dh in the PCI function's Type 00h configuration header says which interrupt pin the PCI function is using: 0 = none, 1 = INTA#, 2 = INTB#, 3 = INTC#, 4 = INTD#.
The read-write "Interrupt Line" register at offset 3Ch in the Type 00h configuration defines which IRQ number has been assigned to the PCI function by the system firmware (BIOS) or operating system. This IRQ number may be shared by other devices in the system.
Drivers don't usually care much about the "Interrupt Pin" register. They are more interested in the "Interrupt Line" register value set up by the firmware or operating system. Operating systems usually provide this information in a more friendly way than the driver having to retrieve the information directly from the PCI configuration memory.
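As a minimal sketch of reading those two registers via the legacy x86 port 0xCF8/0xCFC configuration mechanism (this assumes you already have port-I/O access, e.g. in kernel code; the outl/inl helpers are assumed to exist in your environment and follow the port-first argument order used elsewhere in this document):

#include <stdint.h>

/* Assumed port-I/O helpers: outl(port, value) and inl(port). */
uint32_t inl(uint16_t port);
void outl(uint16_t port, uint32_t value);

#define PCI_CONFIG_ADDRESS 0xCF8
#define PCI_CONFIG_DATA    0xCFC

static uint32_t pci_cfg_read32(uint8_t bus, uint8_t dev, uint8_t fn, uint8_t offset)
{
    uint32_t addr = (1u << 31)                 /* enable bit */
                  | ((uint32_t)bus << 16)
                  | ((uint32_t)dev << 11)
                  | ((uint32_t)fn  << 8)
                  | (offset & 0xFCu);          /* dword-aligned offset */
    outl(PCI_CONFIG_ADDRESS, addr);
    return inl(PCI_CONFIG_DATA);
}

/* Interrupt Line (3Ch) and Interrupt Pin (3Dh) live in the same dword. */
void read_irq_info(uint8_t bus, uint8_t dev, uint8_t fn,
                   uint8_t *line, uint8_t *pin)
{
    uint32_t dw = pci_cfg_read32(bus, dev, fn, 0x3C);
    *line = dw & 0xFFu;          /* IRQ number assigned by firmware/OS */
    *pin  = (dw >> 8) & 0xFFu;   /* 0 = none, 1 = INTA#, ... 4 = INTD# */
}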
I am reading manuals about controlling a device with C, and in general it's just playing with addresses; however, when we are connected through a UART we have the baud rate present.
So what does putting a value into some address have to do with the baud rate?
Is this necessary in embedded programming?
Those addresses are not memory. They are memory-mapped I/O registers.
The address of your UART's baud rate divisor register refers to a hardware register. The values in hardware registers directly control the hardware. The value written to the baud rate divisor register is typically a counter reload value, and one bit period is the time it takes to count up to (or down from) the value in the divisor, given a specific peripheral clock source. So, for example, if the UART peripheral clock were 12 MHz and you wanted a baud rate of 19200, you would set the divisor register to 12 x 10^6 / 19200 = 625.
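A minimal sketch of what that looks like in C, assuming a hypothetical UART whose baud rate divisor register sits at offset 0x08 from a made-up base address (check your device's reference manual for the real address, register width and rounding rules):

#include <stdint.h>

#define UART_BASE      0x40005000u                 /* hypothetical base address */
#define UART_BAUD_DIV  (*(volatile uint32_t *)(UART_BASE + 0x08u))

#define PERIPH_CLK_HZ  12000000u                   /* 12 MHz peripheral clock */

void uart_set_baud(uint32_t baud)
{
    /* One bit period corresponds to 'divisor' counts of the peripheral clock. */
    UART_BAUD_DIV = PERIPH_CLK_HZ / baud;          /* 19200 -> 625 */
}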
Although you can read and write hardware registers as if they were memory, they do not necessarily behave like memory. Some registers may be read-only, others write-only, and for some, writing may have a different effect than reading, such that if you write a value, the value read back will not be what was written. This often works at the bit level, so that each bit in a register may exhibit different behaviour.
For example, on many UART implementations the register to which you write data to be sent has the same address as the one you read for received data - however, they are not the same register, but rather a read-only register and a write-only register mapped to the same address.
It is not specifically an embedded programming thing, but rather an I/O hardware thing; it is simply that outside of embedded systems you are not typically writing directly to the hardware unless you happen to be writing a kernel device driver, where you will encounter the same thing.
As well as the device manuals, which necessarily assume existing knowledge and expertise, perhaps you should consult a more general reference. Now that you know the key term, "memory-mapped I/O" or MMIO, you are in a better position to Google it. Examples:
http://www.cs.uwm.edu/classes/cs315/Bacon/Lecture/HTML/ch14s03.html
https://en.wikipedia.org/wiki/Memory-mapped_I/O
I enabled DMA peripheral-to-memory transfer for ADC1 in CubeMX and generated the code. However, I'm confused as to where the data from the ADC will be written. Should I explicitly define a variable to contain this data? How can I retrieve the data in the DMA Channel 1 ISR?
The DMA does not manage memory, nor does it choose a valid address to put the data at. Generally speaking, the DMA allows data transfers without using the CPU, but no more.
The STM32 microcontrollers provide transfers from:
memory to memory
memory to peripheral
peripheral to memory
In all of them, the developer has to be aware of their purpose in order to configure (besides the DMA itself) the source and destination, such as the address of the peripheral, the memory to reserve (and what kind of memory), etc.
In your particular case (check the RM, AN, docs, etc.), the main actors in an ADC-to-memory (peripheral-to-memory) transfer are:
Source: the ADC peripheral. The developer has to know where the ADC peripheral is located and configure the DMA (besides the ADC itself) with the ADC as the source of the data.
Destination: memory. The developer has to reserve a chunk of memory (heap/stack/global/etc.) and configure the DMA according to that allocated memory. Having done that, the DMA will let you store the values in different ways (depending on the device), such as a continuous ring buffer, a single cycle, a ping-pong buffer (STM32 uses the term "circular double buffer"), etc.
DMA and ADC configuration: there is a vast number of factors which, for the sake of simplicity, I am not going to include; these are usually simplified by the manufacturer's HAL (it is up to you whether to use it).
You instruct the HAL DMA ADC driver where to put the sample data when you start the conversion:
volatile uint32_t adcBuffer[SAMPLE_COUNT];

/* The cast drops volatile to match the HAL prototype; the DMA fills adcBuffer. */
HAL_ADC_Start_DMA( &hadc, (uint32_t*)adcBuffer, SAMPLE_COUNT );
Note that some STM32 parts have SRAM divided across multiple buses with one section very much smaller than others. There are performance benefits to be had in reserving this section for DMA buffers since it reduces bus contention with normal software data fetches. So you may want to customise your linker script to create sections and explicitly place DMA buffers in one while excluding placement of application data there.
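For example, a buffer can be pinned to a dedicated section with a compiler attribute, and the section itself placed in the desired RAM bank by the linker script (the section name ".dma_buffer" and region name "RAM_D2" below are made up for illustration; your linker script defines its own):

/* C source: place the DMA buffer in a dedicated section. */
__attribute__((section(".dma_buffer")))
volatile uint32_t adcBuffer[SAMPLE_COUNT];

/* Linker script fragment: map that section into a specific RAM bank. */
.dma_buffer (NOLOAD) :
{
    *(.dma_buffer)
} > RAM_D2      /* region name depends on your linker script */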
If you have a look at the HAL documentation and examples, you will find an example of how to use the ADC with DMA.
In short :
To start the conversion you use the function:
HAL_StatusTypeDef HAL_ADC_Start_DMA(ADC_HandleTypeDef* hadc, uint32_t* pData, uint32_t Length);
Where pData is your variable / array where the DMA should put the data.
The DMA and the microcontroller do not know anything about variables. The DMA peripheral has two configuration registers in which you store the peripheral address and the memory address. If you start by reading the microcontroller's documentation instead of the HAL, everything will become clear instantly.
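As a register-level sketch of what the HAL call boils down to, assuming an STM32F1-class device where ADC1 is served by DMA1 channel 1 (check your part's reference manual; register and bit names follow the CMSIS device header, and clock enables are omitted for brevity):

#include "stm32f1xx.h"                                /* CMSIS device header (assumed) */

#define SAMPLE_COUNT 64
volatile uint16_t adcBuffer[SAMPLE_COUNT];

void dma_adc_setup(void)
{
    DMA1_Channel1->CPAR  = (uint32_t)&ADC1->DR;       /* peripheral address  */
    DMA1_Channel1->CMAR  = (uint32_t)adcBuffer;       /* memory address      */
    DMA1_Channel1->CNDTR = SAMPLE_COUNT;              /* number of transfers */
    DMA1_Channel1->CCR   = DMA_CCR_MINC               /* increment memory    */
                         | DMA_CCR_PSIZE_0            /* 16-bit peripheral   */
                         | DMA_CCR_MSIZE_0            /* 16-bit memory       */
                         | DMA_CCR_CIRC               /* circular buffer     */
                         | DMA_CCR_EN;                /* enable the channel  */
}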
The base address for the THR and RHR registers is the same. So is it possible to transmit and receive at the same time?
It is specific to your particular UART hardware implementation, but it is unlikely that they are in fact the same register. They are two registers that have the same address - one is read-only (RHR), the other write-only (THR), so they do not need separate addresses.
In the hardware logic, the correct register will be selected depending on the state of the read/write signal, as if it were an additional address line.
So yes, full duplex operation will be supported. You should read the user manual and/or data sheet for your particular part.
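A minimal sketch of how a driver typically treats that shared address, assuming a hypothetical UART whose THR/RHR pair sits at offset 0x00 from a made-up base address:

#include <stdint.h>

#define UART_BASE  0x10000000u                        /* hypothetical base address */
#define UART_DATA  (*(volatile uint8_t *)(UART_BASE + 0x00u))

void uart_putc(uint8_t c)
{
    UART_DATA = c;        /* a write goes to the THR (transmit holding register) */
}

uint8_t uart_getc(void)
{
    return UART_DATA;     /* a read comes from the RHR (receive holding register) */
}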
I'm having a hard time trying to understand how interrupts work.
The code below initializes the Programmable Interrupt Controller.
#define PIC0_CTRL 0x20 /* Master PIC control register address. */
#define PIC0_DATA 0x21 /* Master PIC data register address. */
/* Mask all interrupts*/
outb (PIC0_DATA, 0xff);
/* Initialize master. */
outb (PIC0_CTRL, 0x11); /* ICW1: single mode, edge triggered, expect ICW4. */
outb (PIC0_DATA, 0x20); /* ICW2: line IR0...7 -> irq 0x20...0x27. */
outb (PIC0_DATA, 0x04); /* ICW3: slave PIC on line IR2. */
outb (PIC0_DATA, 0x01); /* ICW4: 8086 mode, normal EOI, non-buffered. */
/* Unmask all interrupts. */
outb (PIC0_DATA, 0x00);
Can someone explain to me how it works:
- the role of outb (I didn't understand the Linux man page)
- the addresses and their meaning
Another, unrelated question: I read that outb and inb are for port-mapped I/O; can we use memory-mapped I/O for input/output communication?
Thanks.
outb() writes the byte specified by its second argument to the I/O port specified by its first argument. In this context, a "port" is a means for the CPU to communicate with another chip.
The specific C code that you present relates to the 8259A Programmable Interrupt Controller (PIC).
You can read about the PIC here and here.
If that doesn't provide enough details to understand the commands and the bit masks, you could always refer to the chip's datasheet.
Device specific code is best read in conjunction with the corresponding datasheet. For example, the "8259A Programmable Interrupt Controller" datasheet (http://pdos.csail.mit.edu/6.828/2005/readings/hardware/8259A.pdf) clearly (but concisely) explains almost everything.
However, this datasheet will only explain how the chip can be used (in any system), and won't explain how the chip is used in a specific system (e.g. in "PC compatible" 80x86 systems). For that you need to rely on "implied de facto standards" (as a lot of the features of the PIC chips aren't used on "PC compatible" 80x86 systems, and may not be supported on modern/integrated chipsets).
Normally (for historical reasons) the PIC chip's IRQs are mapped to interrupts in a strange/bad way. For example, IRQ 0 is mapped to interrupt 8, which conflicts with the CPU's double fault exception. The specific code you've posted remaps the PIC chips so that IRQ 0 is mapped to interrupt 0x20 (and IRQ 1 to interrupt 0x21, ..., IRQ 15 to interrupt 0x2F). It's something an OS typically does to avoid the conflicts (e.g. so that each interrupt number is used for either an IRQ or an exception, and never both).
To understand "outb()", look at the "OUT" instruction in Intel's manuals. It's like there's 2 address spaces - one for normal physical addresses and a completely separate one for IO ports, where normal instructions (indirectly) access normal physical memory; and the IO port instructions (IN, OUT, INSB/W/D, OUTSB/W/D) access the separate "IO address space".
The traditional 8088/86 had/has a memory control signal that is essentially another address bit tied directly to the instruction. That control signal separates accesses into I/O and memory, creating two separate address spaces - not unlike CS, DS, etc. creating separate memory spaces inside the chip (before hitting the external memory space). Other processor families use what is called memory-mapped I/O.
These days the memory controllers/system is chopped up inside and outside the chip in all sorts of ways, sometimes with many control signals that indicate instruction vs data, cache line fills, write-through vs write-back, etc. To save on external circuitry, the memory mapping happens inside the chip; for example, dedicated ROM interfaces, separate from RAM, are found at the edge - far more complicated and separate than the I/O space vs memory space of the old 8088/86.
The OUT and IN instructions (and a few family members) select whether you are doing an I/O access or a memory access, and traditionally the interrupt controller was a chip that decoded the bus looking for I/O accesses with the address allocated to that device. Decades of backward compatibility later, and you have the present code you are looking at.
If you really want to understand it you need to find the datasheets for the device that contains the interrupt controller, likely to be combined with a bunch of other logic on a big support chip. Other datasheets may be required as well.