What is the link between baud rate and embedded C pointers? - c

I am reading manuals about controlling a device with C, and in general it's just playing with addresses; however, when we are connected through UART we have the baud rate present.
So what does putting a value into some address have to do with the baud rate?
Is it necessary in embedded programming?

Those addresses are not memory. They are memory-mapped I/O registers.
The address of your UART's baud rate divisor register refers to a hardware register. The values in hardware registers directly control the hardware. The value written to the baud rate divisor register is typically a counter reload value, and one bit period is the time it takes to count up to (or down from) the value in the divisor given a specific peripheral clock source. So for example, if the UART peripheral clock were 12 MHz and you wanted a baud rate of 19200, you would set the divisor register to 12×10^6 / 19200 = 625.
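As a minimal sketch of what "putting a value into some address" looks like in C, assuming a made-up divisor register address of 0x40005008 and a 12 MHz peripheral clock (both are placeholders - take the real address and clock from your device's reference manual):

#include <stdint.h>

/* Placeholder address and clock - check the reference manual for your part. */
#define UART_BRD_ADDR  0x40005008u   /* baud rate divisor register */
#define PCLK_HZ        12000000u     /* 12 MHz peripheral clock */

static void uart_set_baud(uint32_t baud)
{
    /* A store through a volatile pointer reaches the memory-mapped
       register: the write goes to the UART hardware, not to RAM. */
    volatile uint32_t *brd = (volatile uint32_t *)UART_BRD_ADDR;
    *brd = PCLK_HZ / baud;           /* 12000000 / 19200 = 625 */
}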
Although you can read and write hardware registers as if they were memory, they do not necessarily behave like memory. Some registers may be read-only, others write-only, and for some, writing may have a different effect than reading, such that if you write a value, the value read back will not be what was written. This often works at the bit level, so that each bit in a register may exhibit different behaviour.
For example, on many UART implementations the register to which you write data to be sent is at the same address you read for received data - however, they are not the same register, but rather a read-only register and a write-only register mapped to the same address.
It is not specifically an embedded programming thing, but rather an I/O hardware thing; it is simply that outside of embedded systems you are not typically writing directly to the hardware unless you happen to be writing a kernel device driver, where you will encounter the same thing.
As well as the device manuals, which necessarily assume existing knowledge and expertise, perhaps you should consult a more general reference. Now that you know the key term, "memory-mapped I/O" or MMIO, you are in a better position to Google it. Examples:
http://www.cs.uwm.edu/classes/cs315/Bacon/Lecture/HTML/ch14s03.html
https://en.wikipedia.org/wiki/Memory-mapped_I/O

Related

When is a Cortex write to a device realised

When writing to device registers on a Cortex M0 (in my case, on an STM32L073), a question arises as to how careful one should be in a) ordering accesses to device memory and b) deciding that a change to a peripheral configuration has actually completed to the point that any dependencies become valid.
Take a specific example: changing the internal voltage regulator to a different voltage. You write the change to PWR->CR and read the status from PWR->CSR. I see code that does something like this:
Write to PWR->CR to set the voltage range
Spin until (PWR->CSR & voltage flag) becomes zero
In my mind there are three issues here:
Access ordering. This is Device Memory so transaction order is preserved relative to other Device access transactions. I would assume this means a DSB is not required between the write to CR and the read from CSR. A linked question and the answer to this is: [ARM CortexA]Difference between Strongly-ordered and Device Memory Type
Device memory can be buffered. Is there a possibility that a write to CR could still be in progress when the read from CSR occurs? This would mean that the voltage flag would read as clear and the code would proceed, when in actual fact the flag hasn't gone high yet!
Hardware response time. Is there a latency between the write and the effects becoming final? In actuality this should always be documented - for the STM32 the docs definitively say that the flag is set when the CR register changes.
Are there any race condition possibilities here? It's really the buffering that worries me - that a peripheral write is still in progress when a peripheral read takes place.
Access ordering.
Accesses are strongly ordered and you do not need barrier instructions to read back the same register.
Device memory can be buffered. Is there a possibility that a write to CR
Yes, it is possible, but not because of buffering - because of bus propagation time. It may take several clocks before a particular operation goes through all the bridges.
Hardware response time. Is there a latency between the write and the effects becoming final?
Even if there is a latency, it is not important from your point of view: you set a bit in the CR register and wait for the result in the status register. Simply wait for the status bit to have the expected value.
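Putting the write-then-poll pattern from the question into code, here is a sketch assuming the STM32L0 CMSIS device header and the bit names PWR_CR_VOS, PWR_CR_VOS_0 and PWR_CSR_VOSF (verify these against your device header and reference manual before relying on them):

#include "stm32l0xx.h"   /* assumed CMSIS device header */

static void set_voltage_range_1(void)
{
    /* Select voltage range 1 in the VOS field of PWR->CR. */
    PWR->CR = (PWR->CR & ~PWR_CR_VOS) | PWR_CR_VOS_0;

    /* VOSF in PWR->CSR stays set while the regulator is changing;
       spin until it clears before depending on the new range. */
    while (PWR->CSR & PWR_CSR_VOSF)
    {
    }
}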

Register mapping in a ARM based SoC

I want to understand how the registers of various peripherals/IPs are mapped to the ARM processor memory map in a microcontroller.
Say, I have a CONTROL register for a UART block. When I do a write access to address 0x40005008, this register gets configured. Where does this mapping happen: within the peripheral block code itself, or while integrating this peripheral into the SoC/microcontroller?
For a simple peripheral like a UART it's straightforward - taking the ARM PL011 UART as an example (since I know where its documentation lives):
The programmer's model defines a bunch of registers at word-aligned offsets in a 4k block.
In terms of the actual hardware, we see the bus interface matches what the programmer's model suggests - PADDR[11:2] means only bits 11:2 of the address are connected, meaning it can only understand word-aligned addresses from 0x000 to 0xffc (similarly, note that only 16 bits of read/write data are connected, since no register is wider than that).
The memory-mapping between the UART's 12-bit address and the full 32-bit address that the CPU core spits out happens in the interconnect hardware between them. At design time, the interconnect address map will be configured to say "this 4k region at 0x40005000 is assigned to the UART0 block", etc., and the resulting bus circuitry will be generated for that.
More complex things, such as DMA-capable devices, typically have separate interfaces for configuration and data access, so the registers can be mapped in a small relocatable block on a low-speed peripheral bus much like the UART.
The most significant bits are defined by your ASIC design; the least significant bits are defined by the IP design. Your IP has several registers. The number of registers and their order are defined by the IP design. Here, your register is at offset 8. Then, when designing the ASIC, the peripherals are connected to the memory bus, and the way they are connected defines their addresses. Your UART is at 0x40005000. You may have another instance of the same IP at (for instance) 0x40006000. The two UARTs would be strictly identical, and you would be able to access the CONTROL register of your second UART at address 0x40006008.
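A hedged sketch of how this looks from the C side, with an illustrative register layout and the base addresses from the question (a real UART IP has more registers; only the CONTROL offset of 0x08 is taken from the question):

#include <stdint.h>

/* Register layout fixed by the IP design: offsets within the block. */
typedef struct {
    volatile uint32_t STATUS;    /* offset 0x00 (illustrative) */
    volatile uint32_t DATA;      /* offset 0x04 (illustrative) */
    volatile uint32_t CONTROL;   /* offset 0x08 */
} uart_regs_t;

/* Base addresses fixed by the SoC integration. */
#define UART0 ((uart_regs_t *)0x40005000u)
#define UART1 ((uart_regs_t *)0x40006000u)

int main(void)
{
    UART0->CONTROL = 1u;   /* write lands at 0x40005008 */
    UART1->CONTROL = 1u;   /* same IP, second instance: 0x40006008 */
    for (;;) { }
}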

How is full duplex transmission possible in OMAP4460 UART?

The base addresses for the THR and RHR registers are the same. So is it possible to transmit and receive at the same time?
It is specific to your particular UART hardware implementation, but it is unlikely that they are in fact the same register. They are two registers that have the same address - one is read-only (RHR), the other write-only (THR), so they do not need separate addresses.
In the hardware logic the correct register will be selected depending on the state of the read/write signal, as if it were an additional address line.
So yes, full duplex operation will be supported. You should read the user manual and/or data sheet for your particular part.
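To make that concrete, here is a minimal sketch; the address is a placeholder rather than the real OMAP4460 register map, and real code would check the UART status flags before each access:

#include <stdint.h>

/* One address, two registers: a write goes to the write-only THR,
   a read comes from the read-only RHR. Address is illustrative. */
#define UART_DATA (*(volatile uint8_t *)0x48020000u)

static void uart_putc(uint8_t c)
{
    UART_DATA = c;        /* store selects the transmit holding register */
}

static uint8_t uart_getc(void)
{
    return UART_DATA;     /* load selects the receive holding register */
}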

Writing Flash on STM32

I am implementing an emulated EEPROM in flash memory on an STM32 microcontroller, mostly based on the application note by ST (AN2594 - EEPROM emulation in STM32F10x microcontrollers).
The basics outlined there and in the respective datasheet and programming manual (PM0075) are quite clear. However, I am unsure about the implications of a power-out/system reset on flash programming and page erasure operations. The app note considers this case too, but does not clarify what exactly happens when a programming (write) operation is interrupted:
Does the address have an arbitrary (random) value? OR
Are only part of the bits written? OR
Does it have the default erase value 0xFF?
Thanks for hints or pointers to the relevant documentation.
Arne
This is not really a software question (much less C++). It belongs on electronics.se, but there does not seem to be an option to migrate questions there… only to sites such as superuser or webmasters.se.
The short answer is that hardware is inherently unreliable. Something can always in theory go wrong that interrupts the write process or causes the wrong bit to be written.
The long answer is that Flash circuits are usually designed for maximum reliability. A sudden power loss on write will probably not cause corruption because the driver circuit may have enough capacitance or the capability to operate under a low-voltage condition long enough to finish draining the charge as necessary. A power loss on erasure might be trickier. You really need to consult the manufacturer.
For a "soft" system reset with no power interruption, it would be pretty surprising if the hardware didn't always completely erase whatever bytes it was immediately working on. Usually the bytes are erased in a predefined order, so you can use the first or last ones to indicate whether a page is full or empty.
#include "stm32f10x.h"
#define FLASH_KEY1 ((uint32_t)0x45670123)
#define FLASH_KEY2 ((uint32_t)0xCDEF89AB)
#define Page_127 0x0801FC00
uint16_t i;
int main()
{
//FLASH_Unlock
FLASH->KEYR = FLASH_KEY1;
FLASH->KEYR = FLASH_KEY2;
//FLASH_Erase Page
while((FLASH->SR&FLASH_SR_BSY));
FLASH->CR |= FLASH_CR_PER; //Page Erase Set
FLASH->AR = Page_127; //Page Address
FLASH->CR |= FLASH_CR_STRT; //Start Page Erase
while((FLASH->SR&FLASH_SR_BSY));
FLASH->CR &= ~FLASH_CR_PER; //Page Erase Clear
//FLASH_Program HalfWord
FLASH->CR |= FLASH_CR_PG;
for(i=0; i<1024; i+=2)
{
while((FLASH->SR&FLASH_SR_BSY));
*(__IO uint16_t*)(Page_127 + i) = i;
}
FLASH->CR &= ~FLASH_CR_PG;
FLASH->CR |= FLASH_CR_LOCK;
while(1);
}
If you are using the EEPROM emulation driver, you shouldn't worry too much about flash corruption issues, as the emulation driver always keeps a shadow copy in another page. Worst comes to worst, you will lose the most recent values being written into the flash. If you look closely at the emulation driver, you will notice that it is essentially nothing but a wrapper around stm32fxx_flash.c in the standard peripheral library.
If you look at the application note, you will see the times the emulation library takes for the flash operations. Erasing a page typically takes the longest (tens of milliseconds on an M0 core - this depends on the clock frequency).
If you are using the EEPROM emulation driver, you had better add a function to verify the data after a write has finished.
For example, if you have 10 bytes of data to save, write 11 bytes to flash, with the last byte being a checksum. Then check the data against the checksum after reading it back from flash.
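A rough sketch of that record-plus-checksum idea (this is an extra layer you add yourself, not part of ST's emulation driver; the record size of 10 data bytes plus 1 checksum byte follows the example above):

#include <stdint.h>
#include <stddef.h>

/* Two's-complement additive checksum: all 11 bytes sum to zero when intact. */
static uint8_t checksum(const uint8_t *data, size_t len)
{
    uint8_t sum = 0;
    for (size_t i = 0; i < len; i++)
        sum += data[i];
    return (uint8_t)(0u - sum);
}

/* Validate a record read back from flash: 10 data bytes + 1 checksum byte. */
static int record_is_valid(const uint8_t record[11])
{
    uint8_t sum = 0;
    for (size_t i = 0; i < 11; i++)
        sum += record[i];
    return sum == 0;
}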

Low level I/O access using outb and inb

I'm having a hard time trying to understand how interrupts work.
The code below initializes the Programmable Interrupt Controller:
#define PIC0_CTRL 0x20 /* Master PIC control register address. */
#define PIC0_DATA 0x21 /* Master PIC data register address. */
/* Mask all interrupts*/
outb (PIC0_DATA, 0xff);
/* Initialize master. */
outb (PIC0_CTRL, 0x11); /* ICW1: single mode, edge triggered, expect ICW4. */
outb (PIC0_DATA, 0x20); /* ICW2: line IR0...7 -> irq 0x20...0x27. */
outb (PIC0_DATA, 0x04); /* ICW3: slave PIC on line IR2. */
outb (PIC0_DATA, 0x01); /* ICW4: 8086 mode, normal EOI, non-buffered. */
/* Unmask all interrupts. */
outb (PIC0_DATA, 0x00);
Can someone explain to me how it works:
- the role of outb (I didn't understand the Linux man page)
- the addresses and their meaning
Another unrelated question: I read that outb and inb are for port-mapped I/O; can we use memory-mapped I/O for doing input/output communication?
Thanks.
outb() writes the byte specified by its second argument to the I/O port specified by its first argument. In this context, a "port" is a means for the CPU to communicate with another chip.
The specific C code that you present relates to the 8259A Programmable Interrupt Controller (PIC).
You can read about the PIC here and here.
If that doesn't provide enough details to understand the commands and the bit masks, you could always refer to the chip's datasheet.
Device specific code is best read in conjunction with the corresponding datasheet. For example, the "8259A Programmable Interrupt Controller" datasheet (http://pdos.csail.mit.edu/6.828/2005/readings/hardware/8259A.pdf) clearly (but concisely) explains almost everything.
However, this datasheet will only explain how the chip can be used (in any system), and won't explain how the chip is used in a specific system (e.g. in "PC compatible" 80x86 systems). For that you need to rely on "implied de facto standards" (as a lot of the features of the PIC chips aren't used on "PC compatible" 80x86 systems, and may not be supported on modern/integrated chipsets).
Normally (for historical reasons) the PIC chip's IRQs are mapped to interrupts in a strange/bad way. For example, IRQ 0 is mapped to interrupt 8 and conflicts with the CPU's double fault exception. The specific code you've posted remaps the PIC chips so that IRQ 0 is mapped to interrupt 0x20 (and IRQ 1 to interrupt 0x21, ..., IRQ 15 to interrupt 0x2F). It's something an OS typically does to avoid the conflicts (e.g. so that each interrupt is used for either an IRQ or an exception, and never both).
To understand outb(), look at the OUT instruction in Intel's manuals. It's as if there are two address spaces - one for normal physical addresses and a completely separate one for I/O ports - where normal instructions (indirectly) access normal physical memory, and the I/O port instructions (IN, OUT, INSB/W/D, OUTSB/W/D) access the separate "I/O address space".
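A small sketch contrasting the two, assuming an x86 freestanding/kernel context where port I/O is allowed; my_outb and the memory-mapped address are illustrative, not a real API (the argument order here matches the code in the question, with the value second):

#include <stdint.h>

/* Port-mapped I/O: the OUT instruction targets the separate I/O address space. */
static inline void my_outb(uint16_t port, uint8_t value)
{
    __asm__ volatile ("outb %0, %1" : : "a"(value), "Nd"(port));
}

/* Memory-mapped I/O: an ordinary store through a volatile pointer to an
   address that the hardware decodes (placeholder address). */
#define MMIO_REG (*(volatile uint32_t *)0xFEC00000u)

void example(void)
{
    my_outb(0x21, 0xff);   /* e.g. mask all interrupts on the master PIC */
    MMIO_REG = 1u;         /* same idea, reached through the memory bus */
}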
The traditional 8088/86 had a memory control signal that is essentially another address bit tied directly to the instruction. That control signal separates accesses into I/O and memory, creating two separate address spaces - not unlike CS, DS, etc. creating separate memory spaces inside the chip (before hitting the external memory space). Other processor families use what is called memory-mapped I/O.
These days the memory controllers/systems are chopped up inside and outside the chip in all sorts of ways, sometimes for example with many control signals that indicate instruction vs data, cache line fills, write-through vs write-back, etc. To save on external circuitry the memory mapping happens inside the chip, and, for example, dedicated ROM interfaces, separate from RAM, are found on the edge - far more complicated and separate than the I/O space vs memory space of the old 8088/86.
The OUT and IN instructions and a few family members change whether you are doing an I/O access or a memory access, and traditionally the interrupt controller was a chip that decoded the bus looking for I/O accesses to the address allocated for that device. Decades of backward compatibility later, and you have the present code you are looking at.
If you really want to understand it you need to find the datasheets for the device that contains the interrupt controller, likely to be combined with a bunch of other logic on a big support chip. Other datasheets may be required as well.
