Is there a need for DMB if we are using DSB - arm

Is DSB a superset of DMB?
If performance is not a consideration, can a DMB be replaced by a DSB?

DSB is a superset of DMB, so a DMB can be replaced by a DSB if performance is not a concern.
From Cortex-A Series Programmer's Guide:
Data Synchronization Barrier (DSB)
This instruction forces the core to
wait for all pending explicit data accesses to complete before any
additional instruction stages can be executed. There is no effect on
pre-fetching of instructions.
Data Memory Barrier (DMB)
This
instruction ensures that all memory accesses in program order before
the barrier are observed in the system before any explicit memory
accesses that appear in program order after the barrier. It does not
affect the ordering of any other instructions executing on the core,
or of instruction fetches.


Do I need to use smp_mb() after binding the CPU

Suppose my system is a multicore system. If I bind my program to a single CPU core, do I still need smp_mb() to guard against the CPU reordering instructions?
I ask because I know that smp_mb() is not necessary on single-core systems, but I'm not sure that reasoning carries over to a pinned program on a multicore machine.
You rarely need a full barrier anyway, usually acquire/release is enough. And usually you want to use C11 atomic_load_explicit(&var, memory_order_acquire), or in Linux kernel code, use one of its functions for an acquire-load, which can be done more efficiently on some ISAs than a plain load and an acquire barrier. (Notably AArch64 or 32-bit ARMv8 with ldar or ldapr)
But yeah, if all threads are sharing the same logical core, run-time memory reordering is impossible, only compile-time. So you just need a compiler memory barrier like asm("" ::: "memory") or C11 atomic_signal_fence(seq_cst), not a CPU run-time barrier like atomic_thread_fence(seq_cst) or the Linux kernel's SMP memory barrier (smp_mb() is x86 mfence or equivalent, or ARM dmb ish, for example).
See Why memory reordering is not a problem on single core/processor machines? for more details about the fact that all instructions on the same core observe memory effects to have happened in program order, regardless of interrupts. e.g. a later load must see the value from an earlier store, otherwise the CPU is not maintaining the illusion of instructions on that core running in program order.
And if you can convince your compiler to emit atomic RMW instructions without the x86 lock prefix, for example, they'll be atomic wrt. context switches (and interrupts in general). Or use gcc -Wa,-momit-lock-prefix=yes to have GAS remove lock prefixes for you, so you can use <stdatomic.h> functions efficiently. At least on x86; for RISC ISAs, there's no way to do a read-modify-write of a memory location in a single instruction.
Or if there is (ARMv8.1), it implies an atomic RMW that's SMP-safe, like x86 lock add [mem], eax. But on a CISC like x86, we have instructions like add [mem], eax or whatever which are just like separate load / ADD / store glued into a single instruction, which either executes fully or not at all before an interrupt. (Note that "executing" a store just means writing into the store buffer, not globally visible cache, but that's sufficient for later code on the same core to see it.)
See also Is x86 CMPXCHG atomic, if so why does it need LOCK? for more about non-locked use-cases.

aarch64; Load-Acquire Exclusive vs Load Exclusive

What is the difference between LDAXR & LDXR instructions out of AArch64 instruction set?
From the reference manual they look almost identical (except for the word 'acquire'):
LDAXR - Load-Acquire Exclusive Register: loads word from memory addressed by base to Wt. Records the physical address as an exclusive access.
LDXR - Load Exclusive Register: loads a word from memory addressed by base to Wt. Records the physical address as an exclusive access.
Thanks
In the simplest (most conservative) form, LDAXR == LDXR + DMB SY. Acquire semantics are actually only a one-way barrier, so this over-approximates, but it captures the idea.
This is the description which I find for LDAXR:
C6.2.104 LDAXR
Load-Acquire Exclusive Register derives an address from a base
register value, loads a 32-bit word or 64-bit doubleword from memory,
and writes it to a register. The memory access is atomic. The PE marks
the physical address being accessed as an exclusive access. This
exclusive access mark is checked by Store Exclusive instructions. See
Synchronization and semaphores on page B2-135. The instruction also
has memory ordering semantics as described in Load-Acquire,
Load-AcquirePC, and Store-Release on page B2-108. For information
about memory accesses see Load/Store addressing modes on page C1-157.
From section K11.3 of DDI0487 Da
The ARMv8 architecture adds the acquire and release semantics to
Load-Exclusive and Store-Exclusive instructions, which allows them to
gain ordering acquire and/or release semantics. The Load-Exclusive
instruction can be specified to have acquire semantics, and the
Store-Exclusive instruction can be specified to have release
semantics. These can be arbitrarily combined to allow the atomic
update created by a successful Load-Exclusive and Store-Exclusive pair
to have any of:
No Ordering semantics (using LDREX and STREX).
Acquire only semantics (using LDAEX and STREX).
Release only semantics (using LDREX and STLEX).
Sequentially consistent semantics (using LDAEX and STLEX).
Also (B2.3.5),
The basic principle of a Load-Acquire instruction is to introduce
order between the memory access generated by the Load-Acquire
instruction and the memory accesses appearing in program order after
the Load-Acquire instruction, such that the memory access generated by
the Load-Acquire instruction is Observed-by each PE, to the extent
that that PE is required to observe the access coherently, before any
of the memory accesses appearing in program order after the
Load-Acquire instruction are Observed-by that PE, to the extent that
the PE is required to observe the accesses coherently.

LDREX/STREX with Cortex M3 and M4

I was reading up on the LDREX and STREX to implement mutexes. From looking at the ARM reference manual:
http://infocenter.arm.com/help/index.jsp?topic=/com.arm.doc.100166_0001_00_en/ric1417175928887.html
It appears that the LDREX/STREX exclusive-access address granularity is the entire memory space, hence you can only have at most one outstanding LDREX/STREX reservation, on a single 32-bit location.
Is this correct, or am I missing something? If so it makes LDREX/STREX seem very limited. I mean, you could do a bit-mapped mutex word and maybe get 32 mutexes out of one location.
Does anyone use the LDREX/STREX on a M3 or M4 and if so how do they use it?
So I contacted ARM and got some more information. For example, consider this sequence:
LDREX address1
LDREX address2
STREX address1
The STREX to address1 would pass even though the last LDREX was not for address1. This is expected, because the LDREX/STREX address resolution is the entire memory space.
So I was worried about the following: task one gets interrupted after its LDREX; task two then runs and gets interrupted after its own LDREX to address2; task one gets the processor back and executes its STREX, which would wrongly succeed. However, it turns out that the core clears the exclusive monitor (as if by CLREX) on every exception/interrupt entry and exit. Therefore the STREX fails, because the task switch had to be driven by an interrupt: any interrupt occurring between LDREX and STREX makes the STREX fail. So you want to keep the code between LDREX and STREX as short as possible, to reduce the chance of an interrupt landing there. Additionally, if the STREX fails you most likely want to retry the LDREX/STREX sequence once or twice more before giving up.
Again this is for a single core M3/M4/M7.
Note: the only place I found the exclusive monitor documented as being cleared on exception was the ARMv7-M Architecture Reference Manual, section A3.4.4 "Context switch support". That document is much better than anything I found online at describing how LDREX/STREX actually work.

When is CLREX actually needed on ARM Cortex M7?

I found a couple of places online which state that CLREX "must" be called whenever an interrupt routine is entered, which I don't understand. The docs for CLREX state (added the numbering for easier reference):
(1) Clears the local record of the executing processor that an address has had a request for an exclusive access.
(2) Use the CLREX instruction to return a closely-coupled exclusive access monitor to its open-access state. This removes the requirement for a dummy store to memory.
(3) It is implementation-defined whether CLREX also clears the global record of the executing processor that an address has had a request for an exclusive access.
I don't understand pretty much anything here.
I had the impression that writing something along the lines of the example in the docs was enough to guarantee atomicity:
    MOV     r1, #0x1            ; load the 'lock taken' value
try:
    LDREX   r0, [LockAddr]      ; load the lock value
    CMP     r0, #0              ; is the lock free?
    STREXEQ r0, r1, [LockAddr]  ; try to claim the lock
    CMPEQ   r0, #0              ; did this succeed?
    BNE     try                 ; no - loop back to try
    ...                         ; yes - we have the lock
Why should the "local record" need to be cleared? I thought that LDREX/STREX are enough to guarantee atomic access to an address from several interrupts? I.e. GCC for ARM compiles all C11 atomic functions using LDREX/STREX and I don't see CLREX being called anywhere.
What "requirement for a dummy store" is the second paragraph referring to?
What is the difference between the global record and a local record? Is global record needed for multi-core scenarios?
Taking (and paraphrasing) your three questions separately:
1. Why clear the access record?
When strict nesting of code is enforced, such as when you're working with interrupts, then CLREX is not usually required. However, there are cases where it's important. Imagine you're writing a context switch for a preemptive operating system kernel, which can asynchronously suspend a running task and resume another. Now consider the following pathological situation, involving two tasks of equal priority (A and B) manipulating the same shared resource using LDREX and STREX:
Task A                            Task B
------                            ------
LDREX
---------------- context switch ---------------->
                                  LDREX
                                  STREX (succeeds)
                                  ...
                                  LDREX
<--------------- context switch -----------------
STREX (succeeds, and should not)
Therefore the context switch must issue a CLREX to avoid this.
2. What 'requirement for a dummy store' is avoided?
If there were no CLREX instruction, it would be necessary to use a dummy STREX to relinquish the exclusive-access flag, which involves a memory transaction and is therefore slower than it needs to be when all you want to do is clear the flag.
3. Is the 'global record' for multi-core scenarios?
Yes. If you're using a single-core machine there's effectively only one record, because there's only one CPU; the global record matters when several cores can each hold an exclusive reservation on shareable memory.
Actually, CLREX isn't needed for exceptions/interrupts on the M7; it appears to be included only for compatibility reasons. From the documentation (Version c):
CLREX enables compatibility with other ARM Cortex processors that have
to force the failure of the store exclusive if the exception occurs
between a load exclusive instruction and the matching store exclusive
instruction in a synchronization operation. In Cortex-M processors,
the local exclusive access monitor clears automatically on an
exception boundary, so exception handlers using CLREX are optional.
So, since Cortex-M processors clear the local exclusive access flag on exception/interrupt entry/exit, this negates most (all?) of the use cases for CLREX.
With regard to your third question, as others have mentioned you are correct in thinking that the global record is used in multi-core scenarios. There may still be use cases for CLREX on multi-core processors depending on the implementation defined effects on local/global flags.
I can see why there is confusion around this, as the initial version of the M7 documentation doesn't include these sentences (not to mention the various other versions of more generic documentation on the ARM website). Even now I cannot link directly to the latest revision: the page displays 'Version a' by default and you have to select the version manually from a drop-down box (hopefully this will change in the future).
Update
In response to comments, an additional documentation link for this. This is the part of the manual that describes the usage of these instructions outside of the specific instruction documentation (and also has been there since the first revision):
The processor removes its exclusive access tag if:
It executes a CLREX instruction.
It executes a STREX instruction, regardless of whether the write succeeds.
An exception occurs. This means the processor can resolve semaphore conflicts between different threads.
In a multiprocessor implementation:
Executing a CLREX instruction removes only the local exclusive access tag for the processor.
Executing a STREX instruction, or an exception, removes the local exclusive access tags for the processor.
Executing a STREX instruction to a Shareable memory region can also remove the global exclusive access tags for the processor in the system.

ARM64: LDXR/STXR vs LDAXR/STLXR

On iOS, there are two similar functions, OSAtomicAdd32 and OSAtomicAdd32Barrier. I'm wondering when you would need the Barrier variant.
Disassembled, they are:
_OSAtomicAdd32:
ldxr w8, [x1]
add w8, w8, w0
stxr w9, w8, [x1]
cbnz w9, _OSAtomicAdd32
mov x0, x8
ret lr
_OSAtomicAdd32Barrier:
ldaxr w8, [x1]
add w8, w8, w0
stlxr w9, w8, [x1]
cbnz w9, _OSAtomicAdd32Barrier
mov x0, x8
ret lr
In which scenarios would you need the Load-Acquire / Store-Release semantics of the latter? Can LDXR/STXR instructions be reordered? If they can, is it possible for an atomic update to be "lost" in the absence of a barrier? From what I've read, it doesn't seem like that can happen, and if true, then why would you need the Barrier variant? Perhaps only if you also happened to need a DMB for other purposes?
Thanks!
Oh, the mind-bending horror of weak memory ordering...
The first snippet is your basic atomic read-modify-write - if someone else touches whatever address x1 points to, the store-exclusive will fail and it will try again until it succeeds. So far so good. However, this only applies to the address (or more rightly region) covered by the exclusive monitor, so whilst it's good for atomicity, it's ineffective for synchronisation of anything other than that value.
Consider a case where CPU1 is waiting for CPU0 to write some data to a buffer. CPU1 sits there waiting on some kind of synchronisation object (let's say a semaphore), waiting for CPU0 to update it to signal that new data is ready.
CPU0 writes to the data address.
CPU0 increments the semaphore (atomically, as you do) which happens to be elsewhere in memory.
???
CPU1 sees the new semaphore value.
CPU1 reads some data, which may or may not be the old data, the new data, or some mix of the two.
Now, what happened at step 3? Maybe it all occurred in order. Quite possibly, the hardware decided that since there was no address dependency it would let the store to the semaphore go ahead of the store to the data address. Maybe the semaphore store hit in the cache whereas the data didn't. Maybe it just did so because of complicated reasons only those hardware guys understand. Either way it's perfectly possible for CPU1 to see the semaphore update before the new data has hit memory, thus read back invalid data.
To fix this, CPU0 must have a barrier between steps 1 and 2, to ensure the data has definitely been written before the semaphore is written. Having the atomic write be a barrier is a nice simple way to do this. However since barriers are pretty performance-degrading you want the lightweight no-barrier version as well for situations where you don't need this kind of full synchronisation.
Now, the even less intuitive part is that CPU1 could also reorder its loads. Again since there is no address dependency, it would be free to speculate the data load before the semaphore load irrespective of CPU0's barrier. Thus CPU1 also needs its own barrier between steps 4 and 5.
For the more authoritative, but pretty heavy going, version have a read of ARM's Barrier Litmus Tests and Cookbook. Be warned, this stuff can be confusing ;)
As an aside, in this case the architectural semantics of acquire/release complicate things further. Since they are only one-way barriers, whilst OSAtomicAdd32Barrier adds up to a full barrier relative to code before and after it, it doesn't actually guarantee any ordering relative to the atomic operation itself - see this discussion from Linux for more explanation. Of course, that's from the theoretical point of view of the architecture; in reality it's not inconceivable that the A7 hardware has taken the 'simple' option of wiring up LDAXR to just do DMB+LDXR, and so on, meaning they can get away with this since they're at liberty to code to their own implementation, rather than the specification.
OSAtomicAdd32Barrier() exists for people who are using OSAtomicAdd() for something beyond just an atomic increment. Specifically, they are implementing their own multi-processing synchronization primitives on top of OSAtomicAdd(), for example a custom mutex library. OSAtomicAdd32Barrier() uses heavyweight barrier instructions to enforce memory ordering on both sides of the atomic operation. This is not desirable in normal usage.
To summarize:
1) If you just want to increment an integer in a thread-safe way, use OSAtomicAdd32()
2) If you are stuck with a bunch of old code that foolishly assumes OSAtomicAdd32() can be used as an interprocessor memory ordering and speculation barrier, replace it with OSAtomicAdd32Barrier()
I would guess that this is simply a way of reproducing existing architecture-independent semantics for this operation.
With the ldaxr/stlxr pair, the above sequence will assure correct ordering if the AtomicAdd32 is used as a synchronization mechanism (mutex/semaphore) - regardless of whether the resulting higher-level operation is an acquire or release.
So - this is not about enforcing consistency of the atomic add, but about enforcing ordering between acquiring/releasing a mutex and any operations performed on the resource protected by that mutex.
It is less efficient than the ldaxr/stxr or ldxr/stlxr you would use in a normal native synchronization mechanism, but if you have existing platform-independent code expecting an atomic add with those semantics, this is probably the best way to implement it.
