According to some tutorials, we disable the MMU and the I/D-caches at the beginning of the bootloader. If I understand correctly, this is so the program can use physical addresses directly; please correct me if I'm wrong. Thank you!
Secondly, we use this code to disable the MMU and caches:
mrc P15, 0, R0, C1, C0, 0
bic R0, R0, #0x00002300    @ clear bits 13, 9:8
bic R0, R0, #0x00000087    @ clear bits 7, 2:0
orr R0, R0, #0x00000002    @ set bit 1 (A) alignment fault checking
orr R0, R0, #0x00001000    @ set bit 12 (I) I-cache
mcr P15, 0, R0, C1, C0, 0
The D-cache, MMU, and data address alignment fault checking have been disabled by clearing bits 2:0, but why do we set the alignment-check (A) bit again immediately in the following instruction? To make sure this manipulation is valid?
My last question is: why is the D-cache disabled but the I-cache enabled? To speed up instruction fetches?
The MMU has settings that determine which memory regions are cacheable or not. If you do not have the MMU on but you have the data cache on (if that is even possible), then you cannot safely talk to peripherals. If you read the UART status register, for example, that read goes through the cache just like any other data operation, and whatever that status is stays in the cache for subsequent reads until that cache line is evicted and you get one more shot at the actual register.

Let's say, for example, you have some code that polls the UART status register waiting for a character in the RX buffer. If that first read shows there is no character, that status goes in the cache, and you will remain in the loop forever, since you never get to talk to the status register again; you simply get the cached copy of the register. If there was a character in there, then that status also gets cached; you read the RX register and perhaps do something, and if, when you come back, the status has not been evicted from the data cache, you get the stale status which says there is a character. The RX buffer read may or may not also be cached, so you may get a stale value from the cache, or whatever the peripheral returns when you read it and there is no new value, or you might get a new value; but what you don't get in these situations is proper access to the peripheral.

When the MMU is on, you use the MMU to mark the address space used by that peripheral as non-(data)-cacheable, and you don't have this problem. With the MMU off, you need the data cache off on ARM systems.
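To make that concrete, here is a rough sketch of such a polling loop; the UART address and the "character received" bit are made up for illustration, check your chip's datasheet:

        ldr r1, =0x101f1000      @ hypothetical UART status register address (an assumption)
poll:   ldr r0, [r1]             @ with the D-cache on and no MMU, this read can be satisfied
                                 @ from the cache instead of the real register
        tst r0, #0x10            @ hypothetical "character received" bit
        beq poll                 @ if the first (now cached) status said empty, this never exits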
Leaving the I-cache on is okay because instruction fetches only read instructions... Well, for a bare metal application that is okay; it helps, for example, if you are using a flash that has the potential for read disturb (SPI or I2C flashes). The problem is that this application is a bootloader, so you must take some extra care. For example, your bootloader has some code at address 0x8000 that it runs through at least once; then you choose to use it as a bootloader. The bootloader might be at, say, address 0x10000000, allowing you to load a new program at 0x8000. This load uses data accesses, so it does not go through the instruction cache. So there is a potential that the instruction cache still holds some or all of the code from the last time you were in the 0x8000 area, and when you branch to the bootloaded code at 0x8000 you will get either the old program from cache or a nasty mixture of old program and new program for the parts that are cached and not cached. So if your bootloader allows the I-cache to be on, you need to invalidate the cache before branching to bootloaded code.
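A minimal sketch of that invalidate, using the CP15 operation for ARMv5/v6-class cores (check the TRM for your core; newer cores also want barriers around this, and the register holding the entry point is an assumption here):

        mov r0, #0
        mcr p15, 0, r0, c7, c5, 0    @ invalidate the entire instruction cache
        mcr p15, 0, r0, c7, c5, 6    @ invalidate the branch target buffer, if the core has one
        bx  r1                       @ r1 = entry point of the freshly loaded program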
Lastly, if you or anyone using this bootloader wants to use JTAG, then you have that same problem, but worse: data cycles that do not go through the I-cache are used to write the new program to RAM, and when you tell the JTAG debugger to run the new program you will get 1) only the new program, 2) a mixture of the new program and old program fragments from cache, or 3) the old program from cache.
So the D-cache is bad without an MMU because of things that are not in RAM: peripherals, etc. The I-cache is a use-at-your-own-risk kind of thing which you can mitigate, except for the times that JTAG is used for debugging.
If you have concerns, or have confirmed read disturb in your (external) flash, then I recommend you turn on the I-cache, use a tight loop to copy your application to RAM, branch to the RAM copy and run there, then turn off the I-cache (or use it at your own risk) and don't touch the flash again; certainly avoid heavy read accesses to small areas. A tight UART polling loop, like you might have for a command line parser, is a really good place to get hit with read disturb.
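Something like this for the copy-and-jump; the symbols are placeholders you would normally get from your linker script:

        ldr r0, =_flash_start      @ placeholder: where the image sits in flash
        ldr r1, =_ram_start        @ placeholder: where it should run from
        ldr r2, =_image_size       @ placeholder: size in bytes, assumed a multiple of 4
copy:   ldr r3, [r0], #4
        str r3, [r1], #4
        subs r2, r2, #4
        bne  copy
        ldr r0, =_ram_start
        bx   r0                    @ now running from RAM; the I-cache can be turned off here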
You did not specify which ARM you are working on. Capabilities may vary from one ARM to another (there is a huge gap between an ARM9 and an ARM Cortex-A15).
In the given code, the A bit is cleared and then set again, but it does not matter, as those changes are made in R0. There is no change in the ARM behavior until the write to the CP15 register (done by the instruction mcr P15, 0, R0, C1, C0, 0).
Concerning D-cache/I-cache enabling, it is only a matter of choice; there is no requirement. On the products I work on, the bootloader enables the L1 I-cache, D-cache, L2 cache, and MMU (and it disables all of that before jumping to Linux). Be sure to follow the ARM documentation about cache invalidation and memory barriers (according to your actual ARM core) if you use the caches and MMU in your bootloader.
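For example, on an ARMv7-A core, the SCTLR write that turns the MMU/caches on or off is typically followed by barriers; this is only a sketch, and the full maintenance sequence depends on your core:

        mcr p15, 0, r0, c1, c0, 0    @ write SCTLR with the new MMU/cache settings held in r0
        dsb                          @ wait for the write and outstanding memory accesses to complete
        isb                          @ flush the pipeline so later instructions see the new state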
I am trying to figure out how to start cores other than core 0 on a quad-core Allwinner H5. The C_RST_CTRL register (a.k.a. CPU2 Reset Control Register) has four bits at the bottom that imply they are four reset controls. The lsbit is one and the other three are zero, implying that setting those releases reset on the other cores, but I don't see that happening (nothing is running the code I have left at address zero); at the same time, zeroing that lsbit does stop core 0, implying that it is a reset control. So I assume there are clock gates somewhere, but I cannot find them.
The PRCM registers, which are not documented in the H5 docs but are on a sunxi wiki page for older Allwinners, do show what seem to be real PLL settings, but the CPU enable registers are marked as A31-only and the CPU0 register(s) are not set up, so that would imply this is not how you enable any CPU, including 0, on this chip.
What am I missing?
For a pure bare metal solution look at sunxi_cpu_ops.c from the plat/sun50iw1p1 directory of https://github.com/apritzel/arm-trusted-firmware.git
You need to deactivate various power clamps as well as clock gates.
Alternatively, include the Arm Trusted Firmware code and enable a core by an SMC call:
ldr  x2, =entry_point        // x2 = address the released core should start executing at
mov  x1, #corenumber         // x1 = target core (its MPIDR affinity value)
mov  x0, #0x03
movk x0, #0x8400, lsl #16    // x0 = 0x84000003, the PSCI CPU_ON function ID
smc  #0                      // trap to the secure monitor (ATF), which powers the core on
I've now confirmed this works on an H5.
Does C_CPU_STATUS STANDBY_WFI=0x0E suggest that the secondary cores are sitting in WFI?
Not an answer; I don't have enough rep to comment, but I'm just starting the same exercise myself.
As an aside, how did you put code at address 0? Isn't that BROM? I was going to play with the RVBARADDR registers.
I want to try cache clean and cache invalidate on a Raspberry Pi.
Can someone guide me on this? I just want to do some DMA transfers and try some stuff.
Also, if someone can give me a code snippet showing how to check whether the cache has been cleaned on ARM, it would be really helpful.
For example, I want to see the state of the cache memory, issue a cache invalidate instruction, and then see the state of memory again.
For the Raspberry Pi's ARM1176, cache maintenance is performed via the c7 group of System Control Coprocessor (CP15) operations. Data cache clean+invalidate is mcr p15, 0, r0, c7, c14, 0, and the Cache Dirty Status Register (which simply tells you if the cache is clean or has been written to) can be read with mrc p15, 0, <Rd>, c7, c10, 6.
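Putting those two together, a minimal sketch for the ARM1176 (the barrier in the middle is the ARMv6 CP15 form of DSB):

        mov r0, #0
        mcr p15, 0, r0, c7, c14, 0   @ clean and invalidate the entire data cache
        mcr p15, 0, r0, c7, c10, 4   @ data synchronisation barrier (ARMv6 CP15 encoding)
        mrc p15, 0, r0, c7, c10, 6   @ read the Cache Dirty Status Register into r0
                                     @ bit 0 clear means the D-cache holds no dirty data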
There's too much information to regurgitate here, so I'd recommend referring to the TRM for the details - if the Raspberry Pi TRM doesn't cover it, you can find the full ARM1176 TRM here (which contains a few example code snippets). As always, the Linux source also provides a handy real-world usage reference.
As for inspecting the actual cache contents, AFAIK you're going to need a JTAG hardware debugger for that, if at all.
This question will be short and sweet.
I know an interrupt can occur between instructions, but can an interrupt happen during an instruction? Can a load multiple instruction be interrupted before it has loaded all the values into the registers?
mov r0, r1
@ <-- an interrupt can happen here
ldm r0, {r1-r4}   @ <-- can an interrupt happen DURING a load multiple instruction?
The load multiple instructions are explicitly not atomic. See section A3.5.3 of the ARMv7-A/R Architecture Reference Manual.
LDM, LDC, LDC2, LDRD, STM, STC, STC2, STRD, PUSH, POP, RFE, SRS, VLDM,
VLDR, VSTM, and VSTR instructions are executed as a sequence of
word-aligned word accesses. Each 32-bit word access is guaranteed to
be single-copy atomic. The architecture does not require subsequences
of two or more word accesses from the sequence to be single-copy
atomic.
If you read on, you'll find out that the LDM/STM instructions can be aborted by an interrupt (and restarted from the beginning on interrupt return). LDM and STM instructions can always be interrupted by a data abort, so they're non-atomic in that sense. Otherwise, the ARMv7-A architecture does its best to help you out. For interrupts, they can only be interrupted if low interrupt latency is enabled AND normal memory is being accessed. So at the very least, you won't get repeated accesses to device memory. You don't want to do anything that expects atomic reads/writes of normal memory, though.
On v7-M, LDM and STM can be interrupted at any time (see section B1.5.10 of the ARMv7-M Architecture Reference Manual). It's implementation-defined whether the instruction is restarted from the beginning of the list of loads/stores, or whether it resumes from where it left off. As the ARM ARM says:
The ARMv7-M architecture supports continuation of, or restarting from
the beginning, an abandoned LDM or STM instruction as outlined below.
Where an LDM or STM is abandoned and restarted (ICI bits are not
supported), the instructions should not be used with volatile memory.
In other words, don't rely on LDM or STM being atomic if you're trying to write portable code.
Can you explain how the ARM mode gets changed in the case of system call handling?
I heard that an ARM mode change can happen only in a privileged mode, but when a system call is made while the ARM is in user mode (which is a non-privileged mode), how does the ARM mode change?
Can anybody explain the whole flow of actions for the user-mode case, and system call handling more generally (especially how the ARM mode changes)?
Thanks in advance.
In the case of system calls on ARM, normally the system call causes an SWI instruction to be executed. Any time the processor executes an SWI (software interrupt) instruction, it goes into SVC mode, which is privileged, and jumps to the SWI exception handler. The SWI handler then looks at the cause of the interrupt (embedded in the instruction) and does whatever the OS programmer decided it should do. The other exceptions - reset, undefined instruction, prefetch abort, data abort, interrupt, and fast interrupt - also all cause the processor to enter privileged modes.
How file handling works is entirely up to whoever wrote your operating system - there's nothing ARM specific about that at all.
You need to get a copy of the ARM ARM (Architectural Reference Manual).
http://infocenter.arm.com -> ARM Architecture -> Reference Manuals -> ARMv5 Architectural Reference Manual then download the pdf.
There used to be a single ARM ARM for the whole ARM world, but there are now too many cores and they are starting to diverge, so the old one was split off as the ARMv5 ARM and new Architectural Reference Manuals were made for each of the major ARM processor families.
The Programmer's Model chapter talks about the modes; it says that you can change freely among the modes other than User. ARM startup code will often go through a series of mode changes so that the stack pointers, etc., can be configured, then go back to System mode or User mode as needed.
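For example, a typical (simplified) startup sequence on an ARMv4/v5-class core looks something like this; the stack symbols are placeholders, and the mode numbers come from that chapter:

        mov r0, #0xD2               @ IRQ mode (0x12) with IRQ and FIQ masked
        msr cpsr_c, r0
        ldr sp, =irq_stack_top      @ placeholder symbol for the IRQ-mode stack
        mov r0, #0xD3               @ Supervisor mode (0x13), interrupts still masked
        msr cpsr_c, r0
        ldr sp, =svc_stack_top      @ placeholder symbol for the Supervisor-mode stack
        mov r0, #0xDF               @ System mode (0x1F), which shares its registers with User mode
        msr cpsr_c, r0
        ldr sp, =sys_stack_top      @ placeholder symbol for the System/User stack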
In that same chapter, look at the Exceptions section; it describes the exceptions and which mode the processor switches to for each one.
The software interrupt exception, which happens when an SWI instruction is executed, is a way to implement system calls. The processor is put into Supervisor mode and, if it was in Thumb mode, switches to ARM mode.
There needs to be code to support that exception handler, of course. You need to verify with the operating system you are running, if any, what is supported, what the calling convention is, etc.
Not all ARM processors work this way. The Cortex-M (ARMv7-M) does not have the same modes, the same exception table, etc. As with any time you are using an ARM at this level, you need to get the ARM ARM for the family you are using and the TRM (Technical Reference Manual) for the core(s) you are using, ideally the exact revision. Even if ARM marks that TRM as having been replaced by a newer version, the chip manufacturer purchased and uses a specific rev of the core, and there can be enough differences between revs that you will want the correct manual.
When an SVC instruction is executed, the following behaviour takes place:
The current status (CPSR) is saved (to the supervisor SPSR)
The mode is switched to supervisor mode
Normal (IRQ) interrupts are disabled
ARM mode is entered (if not already in use)
The address of the following instruction (the return address) is saved into the link register (R14); it's worth noting that this is the link register that belongs to Supervisor mode
The PC is altered to jump to address 0x00000008
An exception vector (just a branch instruction) should be at address 0x00000008, which will branch the program to another area of code used to determine which supervisor call has been made.
Determining which supervisor call has been made is usually accomplished by loading the SVC instruction into a register (by offsetting the LR by one word, since the LR is still pointing to the instruction after the supervisor call), clearing the top 8 bits, and using the value in the remaining 24 bits of the register to calculate an offset into a jump table, to branch to the corresponding SVC code.
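A minimal sketch of that extraction, assuming the caller was in ARM state (a Thumb SVC would instead need a halfword load at LR-2 and an 8-bit mask):

LDR R0, [LR, #-4]        ; load the SVC/SWI instruction that caused the exception
BIC R0, R0, #0xFF000000  ; clear the top 8 bits, leaving the 24-bit immediate
; R0 now holds the supervisor call number, ready to index a jump table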
When the supervisor call code wishes to return to the user application, the processor needs to context switch back into user mode and return to the address contained within the LR (which is only available in supervisor mode, since certain registers are banked for both modes). This problem is overcome using the MOVS instruction, as illustrated below:
(consider this to also be your explanation on how to change mode)
MRS R0, CPSR ; load CPSR into R0
BIC R0, R0, #&1F ; clear mode field
ORR R0, R0, #&10 ; user mode code
MSR SPSR, R0 ; store modified CPSR into SPSR
MOVS PC, LR ; context switch and branch
The MRS and MSR instructions are used to transfer content between an ARM register and the CPSR or SPSR.
The MOVS instruction is a special instruction, which operates as a standard MOV instruction, but also sets the CPSR equal to the SPSR upon branching. This allows the processor to branch back (since we're moving the LR into the PC) and change mode to the mode specified by the SPSR.
I quote from the ARM documentation available here:
http://infocenter.arm.com/help/index.jsp?topic=/com.arm.doc.dui0471c/BABDCIEH.html
When an exception is generated, the processor performs the following actions:
1. Copies the CPSR into the appropriate SPSR. This saves the current mode, interrupt mask, and condition flags.
2. Switches state automatically if the current state does not match the instruction set used in the exception vector table.
3. Changes the appropriate CPSR mode bits to:
   - Change to the appropriate mode, and map in the appropriate banked out registers for that mode.
   - Disable interrupts. IRQs are disabled when any exception occurs. FIQs are disabled when an FIQ occurs and on reset.
4. Sets the appropriate LR to the return address.
5. Sets the PC to the vector address for the exception.
Here, CPSR refers to the Current Program Status Register and SPSR to the Saved Program Status Register, which is used to restore the state of the process that was interrupted. Thus, as seen in point 3, the processor circuitry is designed in such a way that the hardware itself changes the mode when user mode executes a supervisor call instruction.
"I heard ARM mode change can happen only in privileged mode". You are partly right here. By partly I mean the control field of the CPSR register can be manually modified (manually modified means through code) in the privileged modes only not in the unprivileged mode (i.e. user mode). When a system call happens in the user mode it happens because of SWI instruction. An SWI instruction has inbuilt mechanism to change the mode to supervisor mode.
So to conclude, there are two ways to change the mode:
1) Explicitly through code (see the sketch after this list). This is only allowed in a privileged mode.
2) Implicitly through IRQ, FIQ, SWI, reset, an undefined instruction, a data abort, or a prefetch abort. This is allowed in all modes.
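A sketch of case 1, switching to System mode from another privileged mode (the mode encodings are from the ARM ARM; the write is ignored if attempted from User mode):

MRS R0, CPSR          ; read the current program status register
BIC R0, R0, #0x1F     ; clear the mode field
ORR R0, R0, #0x1F     ; 0x1F = System mode (privileged, shares its registers with User mode)
MSR CPSR_c, R0        ; write it back; only privileged modes can change the mode this way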
Does anyone know how to enable ARM FIQ?
Other than enabling or disabling the IRQ/FIQ while you're in supervisor mode, there's no special setup you should have to do on the ARM to use it, unless the system (that the ARM chip is running in) has disabled it in hardware (based on your comment, this is not the case since you're seeing the FIQ input pin driven correctly).
For those unaware of the acronyms, FIQ (fast interrupt request) is simply the last interrupt vector in the list, which means it's not limited to a branch instruction as the other interrupts are. That means it can execute faster than the other IRQ handlers.
Normal IRQs are limited to a branch instruction since they have to ensure their code fits into a single word. FIQ, because it won't overwrite any other IRQ vectors, can just run the code directly without a branch instruction (hence the "fast").
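To illustrate, a classic ARM vector table looks roughly like this (the handler names are made up):

vectors:
        b reset_handler        ; 0x00 reset
        b undef_handler        ; 0x04 undefined instruction
        b swi_handler          ; 0x08 software interrupt
        b pabt_handler         ; 0x0C prefetch abort
        b dabt_handler         ; 0x10 data abort
        nop                    ; 0x14 reserved
        b irq_handler          ; 0x18 IRQ: only room here for a branch
fiq_handler:                   ; 0x1C FIQ: the last entry, so the handler can simply start here
        ; ... FIQ handler code continues past the end of the table ...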
The FIQ input line is just a way for external elements to kick the ARM chip into FIQ mode and start executing the correct exception. There's nothing on the ARM itself which prevents that from happening except the CPSR.
To enable FIQ in supervisor mode:
MRS r1, cpsr ; get the cpsr.
BIC r1, r1, #0x40 ; enable FIQ (ORR to disable).
MSR cpsr_c, r1 ; copy it back, control field bit update.
A similar thing can be done for normal IRQs, but using #0x80 instead of #0x40.
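Or, to enable both at once with a combined mask (the bit values are the I and F bits of the CPSR):

MRS r1, cpsr          ; get the cpsr.
BIC r1, r1, #0xC0     ; clear both F (0x40) and I (0x80): enable FIQ and IRQ together.
MSR cpsr_c, r1        ; copy it back, control field bit update.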
The FIQ can be closed off to you by the chip manufacturer using the TrustZone extensions.
TrustZone creates a Secure world and a Normal world. The Secure world has its own supervisor and user modes and its own memory space. The idea is for secure operations to be routed so they never leave the chip and cannot be traced even if you probe the pins on the bus. I think on OMAP it is used for some cryptography operations.
On reset, the core starts in Secure mode. It sets up the secure monitor (the gateway between the Secure and Non-secure worlds), and at this time FIQ can be set up to be routed to the monitor. I think it is the SCR.FIQ bit that may be set, and then all FIQs ignore the value of CPSR.F and go to Monitor mode. Check out the ARM ARM, but if I remember correctly, if this is happening there is no way for you to know from non-secure OS code. The monitor will then reset the Normal-world registers and do an exception return with the PC set to the reset exception vector.
The core will take an interrupt to monitor mode, do its thing and return.
Sorry I can't answer you in the comments, I don't have enough reputation, you could always fix that ;), but I hope you see this