What is the difference between __lwsync and __sync_synchronize in an AIX environment?

It seems that __lwsync synchronizes across multiple processors and
__sync_synchronize synchronizes all threads using memory barriers.
But I want to know more specifically what the differences are.

__lwsync:
The lwsync barrier is broadly similar to sync, including its cumulativity properties, except that it does not order store/load pairs and is cheaper to execute; it suffices to guarantee SC behaviour in MP+lwsyncs (MP with lwsync in each thread), WRC+lwsync+addr (WRC with lwsync on Thread 1 and an address dependency on Thread 2), and ISA2+lwsync+data+addr, while SB+lwsyncs and IRIW+lwsyncs are still allowed.
__sync_synchronize:
This function synchronizes data across all threads.
A full memory barrier is created when this function is invoked. It is an atomic builtin for a full memory barrier. No memory operand will be moved across the operation, either forward or backward. Further, instructions will be issued as necessary to prevent the processor from speculating loads across the operation and from queuing stores after the operation.
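For illustration only, a minimal sketch contrasting the two barriers on AIX/POWER; it assumes GCC's __sync_synchronize() builtin and the __lwsync() builtin provided by IBM XL C, and the variable names are made up:

int data;
volatile int ready;

void publish_with_lwsync(void)
{
    data = 42;
    __lwsync();              /* orders the store to data before the store to ready
                                (store/store), but does not order store/load pairs */
    ready = 1;
}

void publish_with_full_barrier(void)
{
    data = 42;
    __sync_synchronize();    /* full barrier (sync on POWER): also orders
                                store/load pairs, at a higher cost */
    ready = 1;
}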

Related

Are mutexes alone sufficient for thread-safe operations?

Suppose we have multiple threads incrementing a common variable X, and each thread synchronizes by using a mutex M:
function_thread_n() {
    ACQUIRE(M)
    X++;
    RELEASE(M)
}
The mutex ensures that only one thread is updating X at any time, but does a mutex ensure that, once updated, the value of X is visible to the other threads too? Say the initial value of X is 2; thread 1 increments it to 3. However, the cache of another processor might still hold the earlier value of 2, and another thread could still end up incrementing that 2 to 3, so thread 1's update would be lost. The third condition for cache coherence only requires that the order of writes made by different processors holds, right?
I guess this is what memory barriers are for, and if a memory barrier is used before releasing the mutex, then the issue can be avoided.
This is a great question.
TL;DR: The short answer is "yes".
Mutexes provide three primary services:
Mutual exclusion, to ensure that only one thread is executing instructions within the critical section between acquire and release of a given mutex.
Compiler optimization fences, which prevent the compiler's optimizer from moving load/store instructions out of that critical section during compilation.
Architectural memory barriers appropriate to the current architecture, which in general includes a memory acquire fence instruction during mutex acquire and a memory release fence instruction during mutex release. These fences prevent superscalar processors from effectively reordering memory load/stores across the fence at runtime in a way that would cause them to appear to be "performed" outside the critical section.
The combination of all three ensures that data accesses within the critical section delimited by the mutex acquire/release will never observably race with data accesses from another thread that also protects its accesses using the same mutex.
Regarding the part of your question involving caches, coherent cache memory systems separately ensure that at any particular moment, a given line of memory is only writeable by at most one core at a time. Furthermore, memory store operations do not complete until they have evicted any "newly stale" copies cached elsewhere in the caching system (e.g. the L1 of other cores). See this question for more details.
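For concreteness, the question's pseudocode could be written with POSIX mutexes roughly as follows (a minimal sketch; the function name is kept from the question and the rest is illustrative):

#include <pthread.h>

static pthread_mutex_t M = PTHREAD_MUTEX_INITIALIZER;
static int X = 2;

void *function_thread_n(void *arg)
{
    (void)arg;
    pthread_mutex_lock(&M);    /* mutual exclusion plus acquire fence */
    X++;                       /* protected read-modify-write */
    pthread_mutex_unlock(&M);  /* release fence: the new X is visible to the next locker */
    return NULL;
}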

Is volatile necessary for the resource used in a critical section?

I am curious about whether volatile is necessary for the resources used in a critical section. Consider that I have two threads executing on two CPUs and competing for a shared resource. I know I need a locking mechanism to make sure only one thread is performing operations on that shared resource. Below is the pseudo code that will be executed on those two threads.
take_lock();
// Read shared resource.
read_shared_resource();
// Write something to shared resource.
write_shared_resource();
release_lock();
I am wondering if I need to make that shared resource volatile to make sure that when one thread is reading the shared resource, it won't just get the value from a register but will actually read it from memory. Or maybe I should use accessor functions that make the access to that shared resource volatile, with some memory barrier operations, instead of making the shared resource itself volatile?
I am curious about whether volatile is necessary for the resources used in a critical section. Consider that I have two threads executing on two CPUs and competing for a shared resource. I know I need a locking mechanism to make sure only one thread is performing operations on that shared resource.
Making sure that only one thread accesses a shared resource at a time is only part of what a locking mechanism adequate for the purpose will do. Among other things, such a mechanism will also ensure that all writes to shared objects performed by thread Ti before it releases lock L are visible to all other threads Tj after they subsequently acquire lock L. And that holds in terms of the C semantics of the program, notwithstanding any questions of compiler optimization, register usage, CPU instruction reordering, or similar.
When such a locking mechanism is used, volatile does not provide any additional benefit for making threads' writes to shared objects be visible to each other. When such a locking mechanism is not used, volatile does not provide a complete substitute.
C's built-in (since C11) mutexes provide a suitable locking mechanism, at least when using C's built-in threads. So do pthreads mutexes, Sys V and POSIX semaphores, and various other, similar synchronization objects available in various environments, each with respect to corresponding multithreading systems. These semantics are pretty consistent across C-like multithreading implementations, extending at least as far as Java. The semantic requirements for C's built-in multithreading are described in section 5.1.2.4 of the current (C17) language spec.
volatile is for indicating that an object might be accessed outside the scope of the C semantics of the program. That may happen to produce properties that interact with multithreaded execution in a way that is taken to be desirable, but that is not the purpose or intended use of volatile. If it were, or if volatile were sufficient for such purposes, then we would not also need _Atomic objects and operations.
The previous remarks focus on language-level semantics, and that is sufficient to answer the question. However, inasmuch as the question asks specifically about accessing variables' values from registers, I observe that compilers don't actually have to do anything much multithreading-specific in that area as long as acquiring and releasing locks requires calling functions.
In particular, if an execution E of function f writes to an object o that is visible to other functions or other executions of f, then the C implementation must ensure that that write is actually performed on memory before E evaluates any subsequent function call (such as is needed to release a lock). This is necessary because the value written must be visible to the execution of the called function, regardless of any other threads.
Similarly, if E uses the value of o after return from a function call (such as is needed to acquire a lock) then it must load that value from memory to ensure that it sees the effect of any write that the function may have performed.
The only thing special to multithreading in this regard is that the implementation must ensure that interprocedural analysis optimizations or similar do not subvert the needed memory reads and writes around the lock and unlock functions. In practice, this rarely requires special attention.
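As a minimal sketch of the language-level mechanism described above, assuming an implementation that provides C11's optional <threads.h> (the names are illustrative):

#include <threads.h>

static mtx_t lock;          /* initialise once with mtx_init(&lock, mtx_plain) */
static int shared_value;    /* a plain int: no volatile required */

int worker(void *arg)
{
    (void)arg;
    mtx_lock(&lock);
    shared_value++;         /* visible to whichever thread locks the mutex next */
    mtx_unlock(&lock);
    return 0;
}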
The answer is no; volatile is not necessary (assuming the critical-section functions you are using were implemented correctly, and you are using them correctly, of course). Any proper critical-section API's implementation will include the memory barriers necessary to handle flushing registers, etc., and therefore avoid the need for the volatile keyword.
volatile is normally used to inform the compiler that the data might be changed by something else (an interrupt, DMA, another CPU, ...), to prevent unexpected compiler optimizations.
So in your case you may or may not need it:
If the thread has no loop polling the shared resource and waiting for its value to change, you don't really need volatile.
If you have a busy wait like while (shareVal == 0) in the source code, you need to tell the compiler explicitly with the volatile qualifier (a sketch of this case follows below).
For the two-CPU case there is also a possible cache issue, where a CPU reads the value only from its own cache. Please consider configuring the memory attributes for the shared resource properly.
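A sketch of case 2 (the flag name shareVal is taken from the example above; the rest is illustrative):

volatile int shareVal = 0;   /* set by another CPU, an ISR, or DMA */

void wait_for_flag(void)
{
    while (shareVal == 0) {
        /* without volatile the compiler may hoist the load out of the loop
           and spin on a stale register value forever */
    }
}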

two processes accessing shared memory without lock

There are two variables in a shared memory segment (shared by two processes): one is a general int variable (call it intVar) with an initial value of, say, 5, and the other is a variable of pid_t type (call it pidVar) with an initial value of 0.
Now the code to be developed is that both processes try to add a number (which is fixed and different for each process) to the intVar variable in the shared memory (there is never any lock on the memory), and if a process finds the pidVar variable in the shared memory to have the value 0, only then will it update it with its own pid.
So the point of confusion for me is:
1: Since both processes are trying to add a number, will the final value in the intVar variable always be the same after both processes are done adding their numbers?
2: The pidVar in the shared memory is going to be totally random? Right?
Not necessarily. Adding to intVar involves reading from memory, calculating a new value then writing back to memory. The two processes could read at the same time, which would mean two writes, so whichever came last would win.
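For example (illustrative numbers, with the two processes adding 2 and 3 to the initial value of 5):

Process A reads intVar        (sees 5)
Process B reads intVar        (also sees 5)
Process A writes 5 + 2        (intVar is now 7)
Process B writes 5 + 3        (intVar is now 8)

The final value is 8 rather than the 10 you might expect, because A's addition was lost. A different interleaving would give 7 or 10 instead.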
pidVar is defined to be initially zero. Either or both of the processes may find it that way, causing either or both to update it. So you will get one of them.
Your assignment says no locking, which is a loaded and imprecise term. Locking can refer to anything from physical-layer operations such as bus protocols right through to high-level operations such as inter-process communication. I prefer to treat synchronizing as locking; thus everything from a bus protocol to message passing is locking.
Some processors create the illusion of atomic memory, for example by having opcodes like "lock addl $2, (%esp)"; but in reality that is a form of synchronization implemented in the processor via a busy wait. Synchronization is locking; thus that would be in violation of your assignment.
Some processors slice the memory protocol down so you can control interference. Typical implementations use a load-linked / store-conditional pattern, where the store fails if anything has interfered with the cache line identified by the load-linked instruction. This permits composing atomics by retrying operations; but it is again synchronization, thus locking.
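As a sketch of that retry pattern, expressed with C11's compare-and-swap, which is the portable analogue of load-linked / store-conditional (whether this counts as locking for the assignment is exactly the point being made above):

#include <stdatomic.h>

void add_to_shared(_Atomic int *p, int n)
{
    int expected = atomic_load(p);
    /* the store is retried until nothing has interfered between the load and the store */
    while (!atomic_compare_exchange_weak(p, &expected, expected + n)) {
        /* on failure, expected has been reloaded with the current value */
    }
}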

Are writes on the PCIe bus atomic?

I am a newbie to PCIe, so this might be a dumb question. This seems like fairly basic information to ask about PCIe interfaces, but I am having trouble finding the answer so I am guessing that I am missing some information which makes the answer obvious.
I have a system in which I have an ARM processor (host) communicating to a Xilinx SoC via PCIe (device). The endpoint within the SoC is an ARM processor as well.
The external ARM processor (host) is going to be writing to the register space of the SoC's ARM processor (device) via PCIe. This will command the SoC to do various things. That register space will be read-only with respect to the SoC (device). The external ARM processor (host) will make a write to this register space, and then signal an interrupt to indicate to the SoC that new parameters have been written and it should process them.
My question is: are the writes made by the external ARM (host) guaranteed to be atomic with respect to the reads by the SoC (device)? In conventional shared memory situations, a write to a single byte is guaranteed to be an atomic operation (i.e. you can never be in a situation where the reader has read the first 2 bits of the byte, but before it reads the last 6 bits the writer replaces them with a new value, leading to garbage data). Is this the case in PCIe as well? And if so, what is the "unit" of atomicity? Are all bytes in a single transaction atomic with respect to the entire transaction, or is each byte atomic only in relation to itself?
Does this question make sense?
Basically I want to know to what extent memory protection is necessary in my situation. If at all possible, I would like to avoid locking memory regions as both processors are running RTOSes and avoiding memory locks would make design simpler.
On the question of atomicity, the PCIe 3.0 specification (the only one I have) mentions it a few times.
First there is Section 6.5, Locked Transactions. This is likely not what you need, but I want to document it anyway. Basically it is the worst-case scenario of what you were describing earlier.
Locked Transaction support is required to prevent deadlock in systems that use legacy software
which causes the accesses to I/O devices
But you need to check this properly when using it anyway, as the spec notes:
If any read associated with a locked sequence is completed unsuccessfully, the Requester must
assume that the atomicity of the lock is no longer assured, and that the path between the
Requester and Completer is no longer locked
With that said, Section 6.15, Atomic Operations (AtomicOps), is much more like what you are interested in. There are three types of operations you can perform with AtomicOps:
FetchAdd (Fetch and Add): Request contains a single operand, the “add” value
Swap (Unconditional Swap): Request contains a single operand, the “swap” value
CAS (Compare and Swap): Request contains two operands, a “compare” value and a “swap” value
Reading Section 6.15.1, we see that these operations are largely intended for cases where multiple producers/consumers exist on a single bus:
AtomicOps enable advanced synchronization mechanisms that are particularly useful when there are
multiple producers and/or multiple consumers that need to be synchronized in a non-blocking fashion. For example, multiple producers can safely enqueue to a common queue without any explicit locking.
Searching the rest of the specification, I find little mention of atomicity outside of the sections pertaining to these AtomicOps. That would imply to me that the spec only ensures such behaviour when these operations are used; however, the context around why they were introduced suggests that such concerns only arise when a multi-producer/consumer environment exists, which yours clearly does not.
The last place I would suggest looking to answer your question is Section 2.4, Transaction Ordering. Note that I am fairly sure the idea of transactions "passing" others only makes sense with switches in the middle, as those switches can make such decisions; once you put bits on the bus, in your case, there is no going back. So this likely only applies if you place a switch in there.
Your concern is whether a write can bypass a read, the write being posted and the read being non-posted.
A3, A4: A Posted Request must be able to pass Non-Posted Requests to avoid deadlocks.
So in general the write is allowed to bypass the read to avoid deadlocks.
With that concern raised, I do not believe it is possible for the write to bypass the read on your system, since there is no device on the bus to do this transaction reordering. Since you have RTOSes, I highly doubt they are enqueuing the PCIe transactions and reordering them before sending, although I have not looked into that personally.

mmap thread safety in a multi-core and multi-cpu environment

I am a little confused as to the real issues between multi-core and multi-cpu environments when it comes to shared memory, with particular reference to mmap in C.
I have an application that utilizes mmap to share multiple segments of memory between 2 processes. Each process has access to:
A Status and Control memory segment
Raw data (up to 8 separate raw data buffers)
The Status and Control segment is used essentially as an IPC mechanism. I.e., it may convey that buffer 1 is ready to receive data, or that buffer 3 is ready for processing, or that the Status and Control memory segment is locked whilst being updated by either parent or child, etc.
My understanding, and PLEASE correct me if I am wrong, is that in a multi-core CPU environment on a single-boarded PC type infrastructure, mmap is safe. That is, regardless of the number of cores in the CPU, RAM is only ever accessed by a single core (or process) at any one time.
Does this assumption of single-process RAM access also apply to multi-CPU systems? That is, a single PC-style board with multiple CPUs (and, I guess, multiple cores within each CPU).
If not, I will need to seriously rethink my logic to allow for multi-CPU single-boarded machines!
Any thoughts would be greatly appreciated!
PS - by single boarded I mean a single, standalone PC style system. This excludes mainframes and the like ... just to clarify :)
RAM is only ever accessed by a single core (or process) at any one time.
Take a step back and think about what your assumption means. Theoretically, yes, this statement is true, but I don't think it means what you think it means. There are no practical conclusions you can draw from it other than maybe "the memory will not catch fire if two CPUs write to the same address at the same time". Let me explain.
If one CPU/process writes to a memory location and then a different CPU/process writes to the same location, the memory writes will not happen at the same time; they will happen one at a time. You can't generally reason about which write will happen before the other, you can't reason about whether a read from one CPU will happen before the write from the other CPU, and on some older CPUs you can't even reason about whether multi-byte (multi-word, actually) values will be stored/accessed one byte at a time or multiple bytes at a time (which means that reads and writes to multi-byte values can get interleaved between CPUs or processes).
The only thing multiple CPUs change here is the order of memory reads and writes. On a single CPU reading memory you can be pretty sure that your reads from memory will see earlier writes to the same memory (if other hardware is reading/writing the memory, then all bets are off). On multiple CPUs the order of reads and writes to different memory locations will surprise you (CPU 1 writes to address 1 and then to address 2, but CPU 2 might see the new value at address 2 and still the old value at address 1).
So unless you have specific documentation from your operating system and/or CPU manufacturer, you can't make any assumptions (except that when two writes to the same memory location happen, one will happen before the other). This is why you should use libraries like pthreads, or stdatomic.h from C11, for proper locking and synchronization, or really dig deep into the most complex parts of the CPU documentation to actually understand what will happen. The locking primitives in pthreads not only provide locking, they also guarantee that memory is properly synchronized. stdatomic.h is another way to guarantee memory synchronization, but you should carefully read the C11 standard to see what it promises and what it doesn't.
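For example, a minimal stdatomic.h sketch of a shared counter (assuming the type is lock-free on your platform and the object lives in the shared mapping; the names are illustrative):

#include <stdatomic.h>

_Atomic int counter;         /* placed in the mmap'd shared segment */

void bump(void)
{
    /* an atomic read-modify-write; sequentially consistent by default */
    atomic_fetch_add(&counter, 1);
}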
One potential issue is that each core has its own cache (usually just level 1, as level 2 and level 3 caches are usually shared). Each CPU would also have its own cache. However, most systems ensure cache coherency, so this isn't the issue (except for the performance impact of constantly invalidating caches due to writes to the same memory shared in a cache line by each core or processor).
The real issue is that there is no guarantee against reordering of reads and writes due to optimizations by the compiler and/or the hardware. You need to use a Memory Barrier to flush out any pending memory operations to synchronize the state of the threads or shared memory of processes. The memory barrier will occur if you use one of the synchronization types such as an event, mutex, semaphore, ... . Not all of the shared memory reads and writes need to be atomic, but you need to use synchronization between threads and/or processes before accessing any shared memory possibly updated by another thread and/or process.
This does not sound right to me. Two processes on two different cores can both load and store data to RAM at the same time. In addition, caching strategies can result in all kinds of strangeness. So please make sure all access to shared memory is properly synchronized using (interprocess) synchronization objects.
My understanding, and PLEASE correct me if I am wrong, is that in a multi-core CPU environment on a single-boarded PC type infrastructure, mmap is safe. That is, regardless of the number of cores in the CPU, RAM is only ever accessed by a single core (or process) at any one time.
Even if this holds true for some particular architecture, such an assumption is entirely wrong in general. You should have proper synchronisation between the processes that modify the shared memory segment, unless atomic intrinsics are used and the algorithm itself is lock-free.
I would advise you to put a pthread_mutex_t in the shared memory segment (shared across all processes). You will have to initialise it with the PTHREAD_PROCESS_SHARED attribute:
/* mutex is a pthread_mutex_t * pointing into the mmap'd shared memory segment */
pthread_mutexattr_t mutex_attr;
pthread_mutexattr_init(&mutex_attr);
pthread_mutexattr_setpshared(&mutex_attr, PTHREAD_PROCESS_SHARED);
pthread_mutex_init(mutex, &mutex_attr);
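A minimal usage sketch, assuming the mutex initialised above lives at the start of the mmap'd segment; the struct layout and field names are illustrative:

#include <pthread.h>

struct shared_area {
    pthread_mutex_t mutex;   /* initialised with PTHREAD_PROCESS_SHARED as above */
    int status;              /* stands in for the Status and Control data */
};

void mark_buffer_ready(struct shared_area *shm)
{
    pthread_mutex_lock(&shm->mutex);
    shm->status = 1;         /* e.g. "buffer 1 ready to receive data" */
    pthread_mutex_unlock(&shm->mutex);
}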
