I am new to thread programming. I know that mutexes are used to protect access to shared data in a multi-threaded program.
Suppose I have one thread with variable a and a second one with the pointer variable p that holds the address of a. Is the code safe if, in the second thread, I lock a mutex before I modify the value of a using the pointer variable? From my understanding it is safe.
Can you confirm? And also can you provide the reason why it is true or why it is not true?
I am working with C and pthreads.
The general rule when doing multithreading is that shared variables which are both read and written by multiple threads need to be accessed serially, which means that you must use some sort of synchronization primitive. Mutexes are a popular choice, but no matter what you end up using, you just need to remember that before reading from or writing to a shared variable, you need to acquire a lock to ensure consistency.
So, as long as every thread in your code agrees to always use the same lock before accessing a given variable, you're all good.
Now, to answer your specific questions:
Is the code safe if, in the second thread, I lock a mutex before I modify the value of a using the pointer variable?
It depends. How do you read a on the first thread? The first thread needs to lock the mutex too before accessing a in any way. If both threads lock the same mutex before reading or writing the value of a, then it is safe.
It's safe because the region of code between the mutex lock and unlock is exclusive (as long as every thread respects the rule that before doing Y, they need to acquire lock X), since only one thread at a time can have the lock.
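For instance, here is a minimal sketch of the safe pattern (the thread functions and variable names are invented for illustration):

#include <pthread.h>

int a = 0;                  /* the shared variable */
int *p = &a;                /* pointer to the shared variable */
pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;

/* first thread: accesses a directly */
void *reader_thread(void *arg)
{
    pthread_mutex_lock(&m);    /* same mutex as the writer */
    int value = a;
    pthread_mutex_unlock(&m);
    (void)value;
    return NULL;
}

/* second thread: modifies a indirectly through p */
void *writer_thread(void *arg)
{
    pthread_mutex_lock(&m);    /* same mutex as the reader */
    *p = 42;
    pthread_mutex_unlock(&m);
    return NULL;
}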
As for this comment:
And if the mutex is locked before p is used, then both a and p are protected? The conclusion being that every memory reference present in a section where a mutex is locked is protected, even if the memory is indirectly referenced?
Mutexes don't protect memory regions or references, they protect a region of code. Whatever you do between locking and unlocking is exclusive; that's it. So, if every thread accessing or modifying either a or p locks the same mutex beforehand and unlocks afterwards, then as a side effect you have synchronized accesses.
TL;DR: Mutexes allow you to write code that never executes in parallel; you get to choose what that code does. A remarkably popular pattern is to access and modify shared variables.
Related
I have a question on the interplay between condition variables and associated mutex locks (it arose from a simplified example I was presenting in a lecture, confusing myself in the process). Two threads are exchanging data (let's say an int n indicating an array size, and a double *d which points to the array) via shared variables in the memory of their process. I use an additional int flag (initially flag = 0) to indicate when the data (n and d) is ready (flag = 1), a pthread_mutex_t mtx, and a condition variable pthread_cond_t cnd.
This part is from the receiver thread which waits until flag becomes 1 under the protection of the mutex lock, but afterward processes n and d without protection:
while (1) {
    pthread_mutex_lock(&mtx);
    while (!flag) {
        pthread_cond_wait(&cnd, &mtx);
    }
    pthread_mutex_unlock(&mtx);
    // use n and d
}
This part is from the sender thread which sets n and d beforehand without protection by the mutex lock, but sets flag while the mutex is locked:
n = 10;
d = malloc(n * sizeof(double));
pthread_mutex_lock(&mtx);
flag = 1;
pthread_cond_signal(&cnd);
pthread_mutex_unlock(&mtx);
It is clear that you need the mutex in the sender since otherwise you have a "lost wakeup call" problem (see https://stackoverflow.com/a/4544494/3852630).
My question is different: I'm not sure which variables have to be set (in the sender thread) or read out (in the receiver thread) inside the region protected by the mutex lock, and which variables don't need to be protected by the mutex lock. Is it sufficient to protect flag on both sides, or do n and d also need protection?
Memory visibility (see the rule quoted below) between sender and receiver should be guaranteed by the call to pthread_cond_signal(), so the necessary pairwise memory barriers should be there (in combination with pthread_cond_wait()).
I'm aware that this is an unusual case. Typically my applications modify a task list in the sender and pop tasks from the list in the receiver, and the associated mutex lock protects the list operations on both sides. However, I'm not sure what would be necessary in the case above. Could the danger be that the compiler (which is not aware of the concurrent access to the variables) optimizes away the write and/or read access to the variables? Are there other problems if n and d are not protected by the mutex?
Thanks for your help!
David R. Butenhof: Programming with POSIX Threads, p. 89: "Whatever memory values a thread can see when it signals ... a condition variable can also be seen by any thread that is awakened by that signal ...".
At the low level of memory ordering, the rules aren't specified in terms of "the region protected by a mutex", but in terms of synchronization. Whenever two different threads access the same non-atomic object, then unless they are both reads, there must be a synchronization operation in between, to ensure that one of the accesses definitely happens before the other.
The way to achieve synchronization is to have one thread perform a release operation (such as unlocking a mutex) after accessing the shared variables, and have the other thread perform an acquire operation (such as locking a mutex) before accessing them, in such a way that program logic guarantees the acquire must have happened after the release.
And here, you have that. The sender thread does perform a mutex unlock after accessing n and d (last line of the sender code). And the receiver does perform a mutex lock before accessing them (inside pthread_cond_wait). The setting and testing of flag ensures that, when we exit the while (!flag) loop, the most recent lock of the mutex by the receiver did happen after the unlock by the sender. So synchronization is achieved.
The compiler and CPU must not perform any optimization that would defeat this, so in particular they can't optimize away the accesses to n and d, or reorder them around the synchronizing operations. This is usually ensured by treating the release/acquire operations as barriers. Any accesses to potentially shared objects that occur in program order before a release barrier must actually be performed and flushed to coherent memory/cache prior to anything that comes after the barrier (in this case, before any other thread may see the mutex as unlocked). If special CPU instructions are needed to ensure global visibility, the compiler must emit them. And vice versa for acquire barriers: any access that occurs in program order after an acquire barrier must not be reordered before it.
To say it another way, the compiler treats the release barrier as an operation that may potentially read all of memory; so all variables must be written out before that point, so that the actual contents of memory at that point will match what an abstract machine would have. Likewise, an acquire barrier is treated as an operation that may potentially write all of memory, and all variables must be reloaded from memory afterwards. The only exception would be local variables for which the compiler can prove that no other thread could legally know their address; those can be safely kept in registers or otherwise reordered.
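To make the release/acquire pairing concrete, here is a sketch of the same flag idea expressed with C11 atomics rather than the POSIX calls (purely illustrative; the atomic flag here is a stand-in, not the question's code):

#include <stdatomic.h>
#include <stdlib.h>

int n;                      /* ordinary, non-atomic shared data */
double *d;
atomic_int ready;           /* hypothetical atomic flag for this sketch */

void sender(void)
{
    n = 10;
    d = malloc(n * sizeof(double));
    atomic_store_explicit(&ready, 1, memory_order_release);      /* release */
}

void receiver(void)
{
    while (!atomic_load_explicit(&ready, memory_order_acquire))  /* acquire */
        ;                   /* spin until the store becomes visible */
    /* reads of n and d here are guaranteed to see the sender's writes */
}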
It's true that, after the synchronizing lock operation, the receiver happened to unlock the mutex again, but that isn't relevant here. That unlock doesn't synchronize with anything in this particular program, and it has no impact on its execution. Likewise, for purposes of synchronizing access to n and d, it didn't matter whether the sender locked the mutex before or after accessing them. (Though it was important that the sender locked the mutex before writing to flag; that's how we ensure that any earlier reads of flag by the receiver really did happen before the write, instead of racing with it.)
The principle that "accesses to shared variables should be inside a critical region protected by a mutex" is just a higher-level abstraction that is one way to ensure that accesses by different threads always have a synchronizing unlock-lock pair in between them. And in cases where the variables could be accessed over and over again, you normally would want a lock before, and an unlock after, every such access, which is equivalent to the "critical section" principle. But this principle is not itself the fundamental rule.
That said, in real life you probably do want to follow this principle as much as possible, since it will make it easier to write correct code and avoid subtle bugs, and likewise make it easier for other programmers to verify that your code is correct. So even though it is not strictly necessary in this program for the accesses to n and d to be "protected" by the mutex, it would probably be wise to do so anyway, unless there is a significant and measurable benefit (performance or otherwise) to be gained by avoiding it.
The condition variable doesn't play a role in the race avoidance here, except insofar as the pthread_cond_wait locked the mutex. Functionally, it is equivalent to having the receiver simply do a tight spin loop of "lock mutex; test flag; unlock mutex", but without wasting all those CPU cycles.
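Spelled out, that functionally equivalent (but wasteful) loop would be:

for (;;) {
    pthread_mutex_lock(&mtx);
    int ready = flag;           /* read flag under the lock */
    pthread_mutex_unlock(&mtx);
    if (ready)
        break;                  /* synchronized; n and d are now visible */
}
/* use n and d */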
And I think that the quote from Butenhof is mistaken, or at best misleading. My understanding is that pthread_cond_signal by itself is not guaranteed to be a barrier of any kind, and in fact has no memory ordering effect whatsoever. POSIX doesn't directly address memory ordering, but this is the case for the standard C equivalent cnd_signal. There would be no point in having pthread_cond_signal ensure global visibility unless you could make use of it by assuming that all those accesses were visible by the time the corresponding pthread_cond_wait returns. But since pthread_cond_wait can wake up spuriously, you cannot ever safely make such an assumption.
In this program, the necessary release barrier in the sender comes not from pthread_cond_signal, but rather from the subsequent pthread_mutex_unlock.
Is it sufficient to protect flag on both sides, or do n and d also need protection?
In principle, if you use your mutex, CV, and flag in such a way that the writer does not modify n, d, or *d after setting flag under mutex protection, and the reader cannot access n, d, or *d until after it observes the modification of flag (under protection of the same mutex), then you can rely on the reader observing the writer's last-written values of n, d, and *d. This is more or less a hand-rolled semaphore.
In practice, you should use whichever system synchronization objects you have chosen (mutexes, semaphores, etc.) to protect all the shared data. Doing so is easier to reason about and less prone to spawn bugs. Often, it's simpler, too.
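For comparison, here is a sketch of the same hand-off done with an actual POSIX semaphore, which replaces flag, mtx, and cnd entirely (assuming sem_init(&data_ready, 0, 0) was called once before the threads start):

#include <semaphore.h>

sem_t data_ready;               /* counts pending "data is ready" events */

/* sender */
n = 10;
d = malloc(n * sizeof(double));
sem_post(&data_ready);          /* publishes n and d to the receiver */

/* receiver */
sem_wait(&data_ready);          /* blocks until sem_post; memory is synchronized */
/* use n and d */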
I am writing a program in which a memory array is modified by one thread using two possible operations (modifying the array contents, or deallocating the array and replacing it by allocating a new one). The memory array can be read by many threads, except while the array is being modified or deallocated and replaced.
I know how to use a mutex lock to ensure that the memory is modified by only one thread at a time. How can I use it (or another multithreading tool in C) to allow an arbitrary number of reader threads to access the memory, except when the writer thread modifies the memory?
The best solution to achieve this is to use read-write locks, i.e. pthread_rwlock_*, as already suggested in the comments above. Here is a little more detail about them.
Read-write locks are used for shared read access or exclusive write access. A thread that needs read access can't continue while any thread currently has write access. A thread that needs write access can't continue while any other thread has either write access or read access. When both readers and writers are waiting for access at the same time, a default policy determines which gets precedence; this rule can be changed.
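A minimal sketch of that pattern with pthreads (the array and function names are only illustrative):

#include <pthread.h>
#include <stdlib.h>

pthread_rwlock_t rwlock = PTHREAD_RWLOCK_INITIALIZER;
double *array;                  /* the shared array */
size_t array_len;

void *reader(void *arg)
{
    pthread_rwlock_rdlock(&rwlock);   /* any number of readers may hold this */
    /* ... read array[0 .. array_len - 1] ... */
    pthread_rwlock_unlock(&rwlock);
    return NULL;
}

void replace_array(size_t new_len)
{
    pthread_rwlock_wrlock(&rwlock);   /* exclusive: waits for all readers to leave */
    free(array);
    array = malloc(new_len * sizeof *array);
    array_len = new_len;
    pthread_rwlock_unlock(&rwlock);
}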
The read-write lock functions and their arguments are explained much more clearly here:
https://docs.oracle.com/cd/E19455-01/806-5257/6je9h032u/index.html
There is also a Stack Overflow post about the same topic:
concurrent readers and mutually excluding writers in C using pthreads
Read-write locks may cause the writer thread to starve if precedence is not properly defined and the implementation does not handle the case where readers keep arriving while a writer waits. So read about that too:
How to prevent writer starvation in a read write lock in pthreads
I've never had the chance to play with the pthreads library before, but I am reviewing some code involving pthread mutexes. I checked the documentation for pthread_mutex_lock and pthread_mutex_init, and my understanding from reading the man pages for both these functions is that I must call pthread_mutex_init before I call pthread_mutex_lock.
However, I asked a couple colleagues, and they think it is okay to call pthread_mutex_lock before calling pthread_mutex_init. The code I'm reviewing also calls pthread_mutex_lock without even calling pthread_mutex_init.
Basically, is it safe and smart to call pthread_mutex_lock before ever calling pthread_mutex_init (if pthread_mutex_init even gets called)?
EDIT: I also see some examples where pthread_mutex_lock is called when pthread_mutex_init is not used, such as this example
EDIT #2: Here is specifically the code I'm reviewing. Please note that the configure function acquires and attaches to some shared memory that does not get initialized. The Java code later on will call lock(), with no other native functions called in-between. Link to code
Mutexes are variables containing state (information) that functions need to do their job. If no information was needed, the routine wouldn't need a variable. Likewise, the routine can't possibly function properly if you feed random garbage to it.
Most platforms do accept a mutex object filled with zero bytes. This is usually what pthread_mutex_init and PTHREAD_MUTEX_INITIALIZER create. As it happens, the C language also guarantees that uninitialized global variables are zeroed out when the program starts. So, it may appear that you don't need to initialize pthread_mutex_t objects, but this is not the case. Things that live on the stack or the heap, in particular, often won't be zeroed.
Calling pthread_mutex_init after pthread_mutex_lock is certain to have undesired consequences. It will overwrite the variable. Potential results:
The mutex gets unlocked.
A race condition with another thread attempting to get the lock, leading to a crash.
Resources leaked in the library or kernel (but will be freed on process termination).
The POSIX standard says:
If mutex does not refer to an initialized mutex object, the behavior of pthread_mutex_lock(), pthread_mutex_trylock(), and pthread_mutex_unlock() is undefined.
So you do need to initialise the mutex. This can be done either by a call to pthread_mutex_init(), or, if the mutex has static storage duration, by using the static initializer PTHREAD_MUTEX_INITIALIZER. E.g.:
pthread_mutex_t mutex = PTHREAD_MUTEX_INITIALIZER;
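For a mutex that does not have static storage duration (e.g. one living on the heap), a sketch of the dynamic route might look like this:

#include <pthread.h>
#include <stdlib.h>

void example(void)
{
    pthread_mutex_t *m = malloc(sizeof *m);
    if (m == NULL || pthread_mutex_init(m, NULL) != 0) {  /* NULL = default attributes */
        /* handle the error */
        free(m);
        return;
    }
    pthread_mutex_lock(m);
    /* ... critical section ... */
    pthread_mutex_unlock(m);
    pthread_mutex_destroy(m);    /* destroy before freeing the memory */
    free(m);
}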
Here is the text from the link I posted in a comment:
Mutual exclusion locks (mutexes) prevent multiple threads from simultaneously executing critical sections of code that access shared data (that is, mutexes are used to serialize the execution of threads). All mutexes must be global. A successful call for a mutex lock by way of mutex_lock() will cause another thread that is also trying to lock the same mutex to block until the owner thread unlocks it by way of mutex_unlock(). Threads within the same process or within other processes can share mutexes.
Mutexes can synchronize threads within the same process or in other processes. Mutexes can be used to synchronize threads between processes if the mutexes are allocated in writable memory and shared among the cooperating processes (see mmap(2)), and have been initialized for this task.
Initialize: Mutexes are either intra-process or inter-process, depending upon the argument passed implicitly or explicitly to the initialization of that mutex. A statically allocated mutex does not need to be explicitly initialized; by default, a statically allocated mutex is initialized with all zeros and its scope is set to be within the calling process.
For inter-process synchronization, a mutex needs to be allocated in memory shared between these processes. Since the memory for such a mutex must be allocated dynamically, the mutex needs to be explicitly initialized using mutex_init().
Also, for inter-process synchronization, besides the requirement to be allocated in shared memory, the mutex must use the attribute PTHREAD_PROCESS_SHARED; otherwise, accessing the mutex from a process other than its creator results in undefined behaviour (see linux.die.net/man/3/pthread_mutexattr_setpshared):
The process-shared attribute is set to PTHREAD_PROCESS_SHARED to permit a mutex to be operated upon by any thread that has access to the memory where the mutex is allocated, even if the mutex is allocated in memory that is shared by multiple processes.
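Putting those requirements together, here is a sketch of creating a process-shared mutex in anonymous shared memory (MAP_ANONYMOUS is a widespread extension; error handling is omitted for brevity):

#include <pthread.h>
#include <sys/mman.h>

pthread_mutex_t *make_shared_mutex(void)
{
    /* place the mutex in memory that remains shared across fork() */
    pthread_mutex_t *m = mmap(NULL, sizeof *m, PROT_READ | PROT_WRITE,
                              MAP_SHARED | MAP_ANONYMOUS, -1, 0);
    pthread_mutexattr_t attr;
    pthread_mutexattr_init(&attr);
    pthread_mutexattr_setpshared(&attr, PTHREAD_PROCESS_SHARED);
    pthread_mutex_init(m, &attr);
    pthread_mutexattr_destroy(&attr);
    return m;    /* after fork(), parent and child may both lock/unlock *m */
}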
What if all threads read a global variable which was assigned a value by main() prior to the creation of the threads? Do we need any mutex for synchronization?
For reading the variable: no
For writing to and reading the variable: yes
No.
A data race happens when multiple threads access the same memory location (a non-atomic object), at least one of the accesses is a write, and the accesses are not ordered with respect to each other.
Since thread creation is a synchronization point, all the accesses after thread creation are ordered after the initial write access, and the later accesses are only reads. So there is no race.
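For example, this is race-free without any mutex (names are illustrative):

#include <pthread.h>

int config;                       /* written only before the threads exist */

void *worker(void *arg)
{
    int local = config;           /* read-only after creation: no race */
    (void)local;
    return NULL;
}

int main(void)
{
    config = 42;                  /* this write happens-before both creations */
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    return 0;
}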
If any of the threads wants to change the value of your global variable, then yes, you need a mutex. Otherwise no synchronization is needed.
If I have two threads and one global variable (one thread constantly loops to read the variable; the other constantly loops to write to it), would anything happen that shouldn't (e.g. exceptions or errors)? If it does, what is a way to prevent this? I was reading about mutex locks and that they allow exclusive access to a variable for one thread. Does this mean that only that thread can read and write to it and no other?
Would anything happen that shouldn't?
It depends in part on the type of the variable. If the variable is, say, a string (a long array of characters), then if the writer and the reader access it at the same time, it is completely undefined what the reader will see.
This is why mutexes and other coordinating mechanisms are provided by pthreads.
Does this mean that only that thread can read and write to it and no other?
Mutexes ensure that at most one thread that is using the mutex can have permission to proceed. All other threads using the same mutex will be held up until the first thread releases the mutex. Therefore, if the code is written properly, at any time, only one thread will be able to access the variable. If the code is not written properly, then:
one thread might access the variable without checking that it has permission to do so
one thread might acquire the mutex and never release it
one thread might destroy the mutex without notifying the other
None of these is desirable behaviour, but the mere existence of a mutex does not prevent any of these happening.
Nevertheless, your code could reasonably use a mutex carefully and then the access to the global variable would be properly controlled. While it has permission via the mutex, either thread could modify the variable, or just read the variable. Either will be safe from interference by the other thread.
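A sketch of what using the mutex carefully looks like for your two loops (names are illustrative):

#include <pthread.h>

int shared;
pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;

void *writer_loop(void *arg)
{
    for (;;) {
        pthread_mutex_lock(&m);
        shared++;                    /* exclusive while the lock is held */
        pthread_mutex_unlock(&m);
    }
}

void *reader_loop(void *arg)
{
    for (;;) {
        pthread_mutex_lock(&m);
        int snapshot = shared;       /* a consistent value, never a torn read */
        pthread_mutex_unlock(&m);
        (void)snapshot;
    }
}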
Does this mean that only that thread can read and write to it and no other?
It means that only one thread can read or write to the global variable at a time.
The two threads will not race to access the global variable, nor will they access it at the same time.
In short, access to the global variable is synchronized.
First: in C/C++, an unsynchronized read/write of a variable does not generate any exception or system error, BUT it can generate application-level errors, mostly because you are unlikely to fully understand how the memory is accessed, and whether the access is atomic, unless you look at the generated assembler. A multi-core CPU is likely to create hard-to-debug race conditions when you access shared memory without synchronization.
Hence
Second: you should always use synchronization, such as mutex locks, when dealing with shared memory. A mutex lock is cheap, so it will not really impact performance if done right. Rule of thumb: hold the lock for as short a time as possible, such as just for the duration of reading/incrementing/writing the shared memory.
However, from your description, it sounds like one of your threads does nothing BUT wait for the shared memory to change state before doing something. That is a bad multi-threaded design which costs unnecessary CPU cycles, so
Third: look at using semaphores (sem_init/sem_wait/sem_post) for synchronization between your threads if you are trying to send a "message" from one thread to the other.
As others have already said, when communicating between threads through "normal" objects you have to take care of race conditions. Besides mutexes and other lock structures, which are relatively heavyweight, the new C standard (C11) provides atomic types and operations that are guaranteed to be race-free. Most modern processors provide instructions for such types, and many modern compilers (in particular gcc on Linux) already provide the corresponding interfaces for such operations.
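A minimal sketch (requires a C11 compiler with <stdatomic.h> support):

#include <stdatomic.h>

atomic_int counter;                      /* race-free without any mutex */

void *increment_worker(void *arg)
{
    for (int i = 0; i < 1000; ++i)
        atomic_fetch_add(&counter, 1);   /* one indivisible read-modify-write */
    return NULL;
}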
If the threads truly are only one producer and only one consumer, then (barring compiler bugs)
1) marking the variable as volatile, and
2) making sure that it is correctly aligned, so as to avoid interleaved fetches and stores
will allow you to do this without locking.