Cortex-M3 bare metal execution - ARM

I am working on an STM32 project in the Keil IDE. It contains a start-up file named startup_stm32f10x_xl.s, which has the following code in it:
; Reset handler
Reset_Handler PROC
EXPORT Reset_Handler [WEAK]
IMPORT __main
IMPORT SystemInit
LDR R0, =SystemInit
BLX R0
LDR R0, =__main
BX R0              ; hangs after executing this instruction
ENDP
After the BX R0 instruction executes, the target system hangs and execution never enters the main function.
Could anyone suggest what might be causing this?
Thank you.


DMB instructions in an interrupt-safe FIFO

Related to this thread, I have a FIFO which should work across different interrupts on a Cortex M4.
The head index must be:
- atomically written (modified) by multiple interrupts (not threads)
- atomically read by a single (lowest-priority) interrupt
The function for moving the FIFO head looks similar to this (the actual code also checks whether the head overflowed, but this is the main idea):
#include <stdatomic.h>
#include <stdint.h>
#define FIFO_LEN 1024
extern _Atomic int32_t _head;
int32_t acquire_head(void)
{
    while (1)
    {
        int32_t old_h = atomic_load(&_head);
        int32_t new_h = (old_h + 1) & (FIFO_LEN - 1);

        if (atomic_compare_exchange_strong(&_head, &old_h, new_h))
        {
            return old_h;
        }
    }
}
GCC will compile this to:
acquire_head:
    ldr     r2, .L8
.L2:
    // int32_t old_h = atomic_load(&_head);
    dmb     ish
    ldr     r1, [r2]
    dmb     ish

    // int32_t new_h = (old_h + 1) & (FIFO_LEN - 1);
    adds    r3, r1, #1
    ubfx    r3, r3, #0, #10

    // if (atomic_compare_exchange_strong(&_head, &old_h, new_h))
    dmb     ish
.L5:
    ldrex   r0, [r2]
    cmp     r0, r1
    bne     .L6
    strex   ip, r3, [r2]
    cmp     ip, #0
    bne     .L5
.L6:
    dmb     ish
    bne     .L2
    bx      lr
.L8:
    .word   _head
This is a bare metal project without an OS/threads. This code is for a logging FIFO which is not time critical, but I don't want acquiring the head to impact the latency of the rest of my program, so my questions are:
- do I need all these dmbs?
- will there be a noticeable performance penalty with these instructions, or can I just ignore this?
- if an interrupt happens during a dmb, how many additional cycles of latency does it create?
TL;DR: yes, LL/SC (STREX/LDREX) can be good for interrupt latency compared to disabling interrupts, by making an atomic RMW interruptible with a retry.
This may come at the cost of throughput, because apparently disabling / re-enabling interrupts on ARMv7 is very cheap (like maybe 1 or 2 cycles each for cpsid if / cpsie if), especially if you can unconditionally enable interrupts instead of saving the old state. (Temporarily disable interrupts on ARM).
The extra throughput costs are: if LDREX/STREX are any slower than LDR / STR on Cortex-M4, a cmp/bne (not-taken in the successful case), and any time the loop has to retry the whole loop body runs again. (Retry should be very rare; only if an interrupt actually comes in while in the middle of an LL/SC in another interrupt handler.)
C11 compilers like gcc don't have a special-case mode for uniprocessor systems or single-threaded code, unfortunately. So they don't know how to do code-gen that takes advantage of the fact that anything running on the same core will see all our operations in program order up to a certain point, even without any barriers.
(The cardinal rule of out-of-order execution and memory reordering is that it preserves the illusion of a single-thread or single core running instructions in program order.)
The back-to-back dmb instructions separated only by a couple ALU instructions are redundant even on a multi-core system for multi-threaded code. This is a gcc missed-optimization, because current compilers do basically no optimization on atomics. (Better to be safe and slowish than to risk ever being too weak. It's hard enough to reason about, test, and debug lockless code without worrying about possible compiler bugs.)
Atomics on a single-core CPU
You can vastly simplify it in this case by masking after an atomic_fetch_add, instead of simulating an atomic add with earlier rollover using CAS. (Then readers must mask as well, but that's very cheap.)
And you can use memory_order_relaxed. If you want reordering guarantees against an interrupt handler, use atomic_signal_fence to enforce compile-time ordering without asm barriers against runtime reordering. User-space POSIX signals are asynchronous within the same thread in exactly the same way that interrupts are asynchronous within the same core.
// readers must also mask _head & (FIFO_LEN - 1) before use

// Uniprocessor but with an atomic RMW:
int32_t acquire_head_atomicRMW_UP(void)
{
    atomic_signal_fence(memory_order_seq_cst); // zero asm instructions, just compile-time
    int32_t old_h = atomic_fetch_add_explicit(&_head, 1, memory_order_relaxed);
    atomic_signal_fence(memory_order_seq_cst);

    int32_t new_h = (old_h + 1) & (FIFO_LEN - 1);
    return new_h;
}
On the Godbolt compiler explorer
## gcc8.2 -O3 with your same options.
acquire_head_atomicRMW:
    ldr     r3, .L4         ## load the static address from a nearby literal pool
.L2:
    ldrex   r0, [r3]
    adds    r2, r0, #1
    strex   r1, r2, [r3]
    cmp     r1, #0
    bne     .L2             ## LL/SC retry loop, not load + inc + CAS-with-LL/SC
    adds    r0, r0, #1      ## add again: missed optimization to not reuse r2
    ubfx    r0, r0, #0, #10
    bx      lr
.L4:
    .word   _head
Unfortunately there's no way I know of in C11 or C++11 to express a LL/SC atomic RMW that contains an arbitrary set of operations, like add and mask, so we could get the ubfx inside the loop and part of what gets stored to _head. There are compiler-specific intrinsics for LDREX/STREX, though: Critical sections in ARM.
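With those intrinsics, though, the add-and-mask can be folded inside the retry loop so the wrapped value is what actually gets stored to _head. A sketch under the assumption that the CMSIS exclusive-access intrinsics __LDREXW / __STREXW (e.g. from core_cm4.h) are available; it won't compile on a non-ARM host:

```c
#include <stdint.h>

#define FIFO_LEN 1024

/* Assumes CMSIS-style intrinsics:
   uint32_t __LDREXW(volatile uint32_t *addr);
   uint32_t __STREXW(uint32_t value, volatile uint32_t *addr);  // 0 = success */
static volatile uint32_t _head;

int32_t acquire_head_ldrex(void)
{
    uint32_t old_h, new_h;
    do {
        old_h = __LDREXW(&_head);              /* exclusive load sets the monitor */
        new_h = (old_h + 1) & (FIFO_LEN - 1);  /* mask inside the LL/SC loop */
    } while (__STREXW(new_h, &_head) != 0);    /* retry if the exclusive store failed */
    return (int32_t)new_h;
}
```

Because the masked value is stored, readers no longer need to mask, at the cost of an extra instruction or two inside the (rarely retried) loop.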
This is safe because _Atomic integer types are guaranteed to be 2's complement with well-defined overflow = wraparound behaviour. (int32_t is already guaranteed to be 2's complement because it's one of the fixed-width types, but the no-UB-wraparound is only for _Atomic). I'd have used uint32_t, but we get the same asm.
Safely using STREX/LDREX from inside an interrupt handler:
ARM® Synchronization Primitives (from 2009) has some details about the ISA rules that govern LDREX/STREX. Running an LDREX initializes the "exclusive monitor" to detect modification by other cores (or by other non-CPU things in the system? I don't know). Cortex-M4 is a single-core system.
You can have a global monitor for memory shared between multiple CPUs, and local monitors for memory that's marked non-shareable. That documentation says "If a region configured as Shareable is not associated with a global monitor, Store-Exclusive operations to that region always fail, returning 0 in the destination register." So if STREX seems to always fail (so you get stuck in a retry loop) when you test your code, that might be the problem.
An interrupt does not abort a transaction started by an LDREX. If you were context-switching to another context and resuming something that might have stopped right before a STREX, you could have a problem. ARMv6K introduced clrex for this, otherwise older ARM would use a dummy STREX to a dummy location.
See When is CLREX actually needed on ARM Cortex M7?, which makes the same point I'm about to, that CLREX is often not needed in an interrupt situation, when not context-switching between threads.
(Fun fact: a more recent answer on that linked question points out that Cortex M7 (or Cortex M in general?) automatically clears the monitor on interrupt, meaning clrex is never necessary in interrupt handlers. The reasoning below can still apply to older single-core ARM CPUs with a monitor that doesn't track addresses, unlike in multi-core CPUs.)
But for this problem, the thing you're switching to is always the start of an interrupt handler. You're not doing pre-emptive multi-tasking. So you can never switch from the middle of one LL/SC retry loop to the middle of another. As long as STREX fails the first time in the lower-priority interrupt when you return to it, that's fine.
That will be the case here because a higher-priority interrupt will only return after it does a successful STREX (or didn't do any atomic RMWs at all).
So I think you're ok even without using clrex from inline asm, or from an interrupt handler before dispatching to C functions. The manual says a Data Abort exception leaves the monitors architecturally undefined, so make sure you CLREX in that handler at least.
If an interrupt comes in while you're between an LDREX and STREX, the LL has loaded the old data in a register (and maybe computed a new value), but hasn't stored anything back to memory yet because STREX hadn't run.
The higher-priority code will LDREX, getting the same old_h value, then do a successful STREX of old_h + 1. (Unless it is also interrupted, but this reasoning works recursively). This might possibly fail the first time through the loop, but I don't think so. Even if so, I don't think there can be a correctness problem, based on the ARM doc I linked. The doc mentioned that the local monitor can be as simple as a state-machine that just tracks LDREX and STREX instructions, letting STREX succeed even if the previous instruction was an LDREX for a different address. Assuming Cortex-M4's implementation is simplistic, that's perfect for this.
Running another LDREX for the same address while the CPU is already monitoring from a previous LDREX looks like it should have no effect. Performing an exclusive load to a different address would reset the monitor to open state, but for this it's always going to be the same address (unless you have other atomics in other code?)
Then (after doing some other stuff), the interrupt handler will return, restoring registers and jumping back to the middle of the lower-priority interrupt's LL/SC loop.
Back in the lower-priority interrupt, STREX will fail because the STREX in the higher-priority interrupt reset the monitor state. That's good, we need it to fail because it would have stored the same value as the higher-priority interrupt that took its spot in the FIFO. The cmp / bne detects the failure and runs the whole loop again. This time it succeeds (unless interrupted again), reading the value stored by the higher-priority interrupt and storing & returning that + 1.
So I think we can get away without a CLREX anywhere, because interrupt handlers always run to completion before returning to the middle of something they interrupted. And they always begin at the beginning.
Single-writer version
Or, if nothing else can be modifying that variable, you don't need an atomic RMW at all: just a pure atomic load, then a pure atomic store of the new value. (_Atomic for the benefit of any readers.)
Or if no other thread or interrupt touches that variable at all, it doesn't need to be _Atomic.
// If we're the only writer, and other threads can only observe:
// again using uniprocessor memory order: relaxed + signal_fence
int32_t acquire_head_separate_RW_UP(void) {
    atomic_signal_fence(memory_order_seq_cst);

    int32_t old_h = atomic_load_explicit(&_head, memory_order_relaxed);
    int32_t new_h = (old_h + 1) & (FIFO_LEN - 1);
    atomic_store_explicit(&_head, new_h, memory_order_relaxed);

    atomic_signal_fence(memory_order_seq_cst);
    return new_h;
}
acquire_head_separate_RW_UP:
    ldr     r3, .L7
    ldr     r0, [r3]        ## Plain atomic load
    adds    r0, r0, #1
    ubfx    r0, r0, #0, #10 ## zero-extend low 10 bits
    str     r0, [r3]        ## Plain atomic store
    bx      lr
This is the same asm we'd get for non-atomic head.
Your code is written in a way that is not very "bare metal". Those generic atomic functions do not know whether the value read or stored is in internal memory, or is perhaps a hardware register located far from the core and connected via buses and sometimes write/read buffers.
That is why the generic atomic functions have to place so many DMB instructions. Because you read or write an internal memory location, they are not needed at all (and the M4 has no internal cache, so such strong precautions are not needed either).
IMO it is enough to disable interrupts when you want to access the memory location atomically.
PS: stdatomic is rarely used in bare-metal uC development. The fastest way to guarantee exclusive access on an M4 uC is to disable and re-enable interrupts:
__disable_irq();
x++;
__enable_irq();
__ASM volatile ("cpsid i" : : : "memory");
 080053e8:   cpsid   i
x++;
 080053ea:   ldr     r2, [pc, #160]  ; (0x800548c <main+168>)
 080053ec:   ldrb    r3, [r2, #0]
 080053ee:   adds    r3, #1
 080053f0:   strb    r3, [r2, #0]
__ASM volatile ("cpsie i" : : : "memory");
which costs only 2 to 4 additional clocks for both instructions.
It guarantees atomicity without adding unnecessary overhead.
dmb is required in situations like
p1:
    str r5, [r1]
    str r0, [r2]
and
p2:
    wait([r2] == 0)
    ldr r5, [r1]
(from http://infocenter.arm.com/help/topic/com.arm.doc.genc007826/Barrier_Litmus_Tests_and_Cookbook_A08.pdf, section 6.2.1 "Weakly-Ordered Message Passing problem").
In-CPU optimizations can reorder the instructions on p1, so you have to insert a dmb between both stores.
In your example there are too many dmbs, probably caused by the expansion of atomic_xxx(), which might have a dmb both at the start and the end.
It should be enough to have
acquire_head:
    ldr     r2, .L8
    dmb     ish
.L2:
    // int32_t old_h = atomic_load(&_head);
    ldr     r1, [r2]
    ...
    bne     .L5
.L6:
    bne     .L2
    dmb     ish
    bx      lr
and no other dmb between.
Performance impact is difficult to estimate (you would have to benchmark the code with and without dmb). dmb does not consume CPU cycles by itself; it stalls the pipeline until outstanding memory accesses complete.

Atomic access to ARM peripheral registers

I want to use the overflow, compare-match, and capture functionality of a general-purpose timer on an STM32F103REY Cortex-M3 at the same time. CC1 is configured as compare match and CC3 is configured as capture. The IRQ handler looks as follows:
void TIM3_IRQHandler(void) {
    if (TIM3->SR & TIM_SR_UIF) {
        TIM3->SR &= ~TIM_SR_UIF;
        // do something on overflow
    }
    if (TIM3->SR & TIM_SR_CC1IF) {
        TIM3->SR &= ~TIM_SR_CC1IF;
        // do something on compare match
    }
    if (TIM3->SR & TIM_SR_CC3IF) {
        TIM3->SR &= ~TIM_SR_CC3IF;
        // do something on capture
    }
}
In principle it works well, but it sometimes seems to skip a part. My theory is that this happens because resetting the IRQ flags, e.g. TIM3->SR &= ~TIM_SR_UIF, is not atomic*, so a TIM_SR_CC1IF set between the load and the store might be overwritten.
* The disassembly of the instruction is as follows:
8012e02: 8a13 ldrh r3, [r2, #16]
8012e06: f023 0301 bic.w r3, r3, #1
8012e0a: 041b lsls r3, r3, #16
8012e0c: 0c1b lsrs r3, r3, #16
8012e0e: 8213 strh r3, [r2, #16]
Is this plausible? Can the content of the TIM3->SR register change during the execution of the IRQ handler?
Is there a possibility to do an atomic read and write to the TIM3->SR register?
Is there another suitable solution?
By the way: there is a similar question, but that one is about protecting access by multiple processes or cores, not about protecting simultaneous access by software and hardware.
Section 15.4.5 of the reference manual (CD00171190) states that all bits in TIMx->SR work in rc_w0 mode (or are reserved).
According to the programming manual (PM0056):
read/clear (rc_w0): Software can read as well as clear this bit by writing 0. Writing ‘1’ has no effect on the bit value.
This means that you can avoid the read-modify-write cycle entirely and simply use TIM3->SR = ~TIM_SR_UIF.
Many application notes use a read-modify-write to clear interrupts, such as examples by Keil, but this is unnecessary and potentially dangerous, as you have experienced.
In the ST application note DM00236305 (section 1.3.2), only a write operation is used.
Note, however, that when working with the NVIC, the register used for resetting is rc_w1.

Temporarily disable interrupts on ARM

I am starting to work with the ARM platform (specifically the TI TMS570 family).
I have some code with critical regions where I don't want an exception to occur. So I want to save the IRQ and FIQ enable flags on entering the regions and restore them on exiting.
How do I do that?
To temporarily mask IRQs and FIQs at the CPU, the nicest option for ARMv7 is to use cps:
// assembly code assuming interrupts unmasked on entry
cpsid if // mask IRQ and FIQ
... // do critical stuff
cpsie if // unmask
Some compilers provide a set of __disable_irq() etc. intrinsics usable from C code, but for others (like GCC) it's going to be a case of dropping to assembly.
If you want critical sections to be nested, reentrant, taken in interrupt handlers, or anything else which requires restoring the previous state rather than just unconditionally unmasking at the end, then you'll need to copy that state out of the CPSR before masking anything, then restore it on exit. At that point the unmasking is probably simpler to handle the old-fashioned way: a direct read-modify-write of the CPSR. Here's one idea off the top of my head:
// int enter_critical_section(void);
enter_critical_section:
    mrs     r0, cpsr
    cpsid   if              // mask IRQ and FIQ
    and     r0, r0, #0xc0   // leave just the old I and F flags
    bx      lr

// void leave_critical_section(int flags);
leave_critical_section:
    mrs     r1, cpsr
    bic     r1, r1, #0xc0   // clear the current I and F flags...
    orr     r1, r1, r0      // ...and put back the saved ones
    msr     cpsr_c, r1
    bx      lr
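For GCC, the same save/mask/restore pattern can be wrapped in C with inline asm instead of separate assembly routines. A sketch assuming ARMv7-A and the same 0xc0 I/F mask as above; it only compiles for an ARM target:

```c
#include <stdint.h>

/* Save the current I/F mask bits and mask both IRQ and FIQ.
   Returns the old bits so they can be restored later. */
static inline uint32_t irq_fiq_save(void)
{
    uint32_t cpsr;
    __asm volatile ("mrs %0, cpsr\n\t"
                    "cpsid if"
                    : "=r" (cpsr)
                    :
                    : "memory");
    return cpsr & 0xc0;
}

/* Restore the saved I/F bits without disturbing the rest of the CPSR. */
static inline void irq_fiq_restore(uint32_t flags)
{
    uint32_t cpsr;
    __asm volatile ("mrs %0, cpsr" : "=r" (cpsr));
    cpsr = (cpsr & ~0xc0u) | flags;
    __asm volatile ("msr cpsr_c, %0" : : "r" (cpsr) : "memory");
}
```

Usage: uint32_t flags = irq_fiq_save(); ... irq_fiq_restore(flags); — safe to nest, because each level restores exactly the state it saved.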
You can use _disable_interrupt_(); and _enable_interrupt_(); from the HALCoGen-generated code (sys_core.h).

Page fault handling in Kernel for ARM

I am writing interrupt handlers for a basic kernel running on the ARM processor of a Raspberry Pi. I have handlers for system calls and IRQs, which are handled respectively. The code is taken from https://github.com/pykello/arunos. I want to add some exokernel features, one of which requires a user-level page fault handler.
I have experience in x86 kernel development, but being an ARM rookie, I'm stumped on how to implement a page fault handler: getting the trap frame and faulting address, then allocating pages for that address.
Any steps, sites or instructions would be useful.
interrupt_table_start:
    nop
    subs    pc, lr, #4
    ldr     pc, syscall_entry_address
    subs    pc, lr, #4
    subs    pc, lr, #4
    subs    pc, lr, #4
    ldr     pc, irq_entry_address

syscall_entry_address: .word syscall_entry
irq_entry_address:     .word irq_entry
interrupt_table_end:

syscall_entry:
    ldr     sp, =kernel_stack_start
    SAVE_CONTEXT
    stmfd   r13!, {r1-r12, r14}
    bl      handle_syscall
    ldmfd   r13!, {r1-r12, pc}^

irq_entry:
    sub     r14, r14, #4
    ldr     sp, =irq_stack_start
    SAVE_CONTEXT
    stmfd   r13!, {r0-r12, r14}
    bl      dispatch_interrupts
    ldmfd   r13!, {r0-r12, pc}^

Relocating the vector interrupt table to SRAM for Cortex-M3?

Hi everyone,
In my project, I am executing FreeRTOS from external DDR3 memory (ARM Cortex-M3).
The code executes up to vPortStartFirstTask(); after this function, the code stops running.
Below is the vPortStartFirstTask() used in our application:
void vPortStartFirstTask(void)
{
    __asm volatile(
        " ldr r0, =0xE000ED08 \n" /* Use the NVIC offset register to locate the stack. */
        " ldr r0, [r0]        \n"
        " ldr r0, [r0]        \n"
        " msr msp, r0         \n" /* Set the msp back to the start of the stack. */
        " cpsie i             \n"
        " svc 0               \n" /* Call SVC to start the first task. */
        " nop                 \n"
    );
}
If I run the same project with a different linker file, one which runs from on-chip memory, it works fine. The issue only appears when running from external memory.
Can anyone tell me how to relocate the vector table to SRAM or some other place?
Thanks in advance.
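On the relocation question itself: the Cortex-M3 has the VTOR register at 0xE000ED08 (the same address the asm above reads to find the initial stack), so the table can be copied to SRAM and VTOR pointed at the copy. A hedged sketch; the source-table symbol and entry count are assumptions that must match your linker script, and it only runs on the target:

```c
#include <stdint.h>

#define SCB_VTOR (*(volatile uint32_t *)0xE000ED08UL)

extern uint32_t __isr_vector[];   /* assumed linker symbol for the current table */

/* VTOR requires the table base to be aligned to the table size rounded up
   to a power of two; 512-byte alignment covers up to 128 word entries. */
static uint32_t ram_vectors[128] __attribute__((aligned(512)));

void relocate_vector_table(void)
{
    for (unsigned i = 0; i < 128; i++)
        ram_vectors[i] = __isr_vector[i];  /* copy the table into SRAM */
    SCB_VTOR = (uint32_t)ram_vectors;      /* point the core at the copy */
    __asm volatile ("dsb" ::: "memory");   /* ensure the write takes effect */
}
```

Call this early in startup (before enabling interrupts) so no exception is taken while the table is being switched.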
