Suppose there are two threads: the main thread and a thread B (created by main). If B acquires a mutex (say a pthread_mutex) and then calls pthread_exit without unlocking it, what happens to the mutex? Does it become free?
Nope. The mutex remains locked. What actually happens to such a lock depends on its type; you can read about that here or here.
If you created a robust mutex by setting up the right attributes before calling pthread_mutex_init, the mutex will enter a special state when the thread that holds the lock terminates, and the next thread to attempt to acquire the mutex will obtain an error of EOWNERDEAD. It is then responsible for cleaning up whatever state the mutex protects and calling pthread_mutex_consistent to make the mutex usable again, or calling pthread_mutex_unlock (which will make the mutex permanently unusable; further attempts to use it will return ENOTRECOVERABLE).
For non-robust mutexes, the mutex is permanently unusable if the thread that locked it terminates without unlocking it. Per the standard (see the resolution to issue 755 on the Austin Group tracker), the mutex remains locked and its formal ownership continues to belong to the thread that exited, and any thread that attempts to lock it will deadlock. If another thread attempts to unlock it, that's normally undefined behavior, unless the mutex was created with the PTHREAD_MUTEX_ERRORCHECK attribute, in which case an error will be returned.
On the other hand, many (most?) real-world implementations don't actually follow the requirements of the standard. An attempt to lock or unlock the mutex from another thread might spuriously succeed, since the thread id (used to track ownership) might have been reused and may now refer to a different thread (possibly the one making the new lock/unlock request). At least glibc's NPTL is known to exhibit this behavior.
So, assuming a single thread:
the thread acquires a lock via:
pthread_mutex_lock(&lock);
then before unlocking, it again reaches a line that is:
pthread_mutex_lock(&lock);
Will the pthread library block the advancement of the thread, or will it recognize that the thread already holds the lock, thus letting it pass?
The behaviour depends on the kind of the mutex. The POSIX standard says that recursive locking behaviour depends on the type of the lock:
If a thread attempts to relock a mutex that it has already locked, pthread_mutex_lock() shall behave as described in the Relock column of the following table.
With the Relock column saying that
a mutex of type PTHREAD_MUTEX_NORMAL can deadlock
a mutex of type PTHREAD_MUTEX_ERRORCHECK shall return an error
a mutex of type PTHREAD_MUTEX_RECURSIVE will work as a recursive lock, that you must then unlock as many times as you locked it
a mutex of type PTHREAD_MUTEX_DEFAULT has undefined behaviour. In practice this means that if, on a given platform, the default lock is one of the previous three types, it will behave as that type does per the table above; if it is some other type, the behaviour is genuinely undefined.
Thus there is no point in testing a PTHREAD_MUTEX_DEFAULT lock to find out what its behaviour is.
And the Linux manual page pthread_mutex_lock(3) rephrases this as follows:
If the mutex is already locked by the calling thread, the behavior of pthread_mutex_lock depends on the kind of the mutex. If the mutex is of the fast kind, the calling thread is suspended until the mutex is unlocked, thus effectively causing the calling thread to deadlock. If the mutex is of the error checking kind, pthread_mutex_lock returns immediately with the error code EDEADLK. If the mutex is of the recursive kind, pthread_mutex_lock succeeds and returns immediately, recording the number of times the calling thread has locked the mutex. An equal number of pthread_mutex_unlock operations must be performed before the mutex returns to the unlocked state.
In Linux, according to the documentation, the default type is fast, but you cannot rely on that being portable.
POSIX supports many different mutex types in several variants. All use the same pthread_mutex_t type, so it is not possible to tell, from the type alone, what pthread_mutex_lock does when re-locking a lock which has already been acquired by the current thread.
Common behaviors include:
A self-deadlock for a regular mutex (pthread_mutex_lock never returns).
An error for an error-checking mutex (pthread_mutex_lock returns with an error code).
The lock operation succeeds and one more unlock operation will be required before the lock becomes available to other threads for locking (a recursive mutex).
Mutex behavior can be selected when the mutex is created with pthread_mutex_init using attributes; see pthread_mutexattr_init. Some systems also offer non-standard static initializers, in addition to the standard PTHREAD_MUTEX_INITIALIZER, which create different variants.
Several processes access shared memory, locking it with the mutex and pthread_mutex_lock() for synchronization, and each process can be killed at any moment (in fact I described php-fpm with APC extension, but it doesn't matter).
Will the mutex be unlocked automatically, if the process locked the mutex and then was killed?
Or is there a way to unlock it automatically?
Edit: As it turns out, dying processes and threads behave similarly in this situation; what happens depends on the robust attribute of the mutex.
That depends on the type of mutex. A "robust" mutex will survive the death of the thread/process. See this question: POSIX thread exit/crash/exception-crash while holding mutex
The next thread that attempts to lock it will receive an EOWNERDEAD error code.
Note: Collected information from the comments.
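The process case can be sketched with a robust, process-shared mutex living in shared memory (Linux assumed; MAP_ANONYMOUS and fork are used for brevity, and demo_process_robust is an illustrative name):

```c
#define _GNU_SOURCE
#include <errno.h>
#include <pthread.h>
#include <sys/mman.h>
#include <sys/wait.h>
#include <unistd.h>

/* Returns 0 if the surviving process recovers the mutex via EOWNERDEAD. */
int demo_process_robust(void) {
    /* Place the mutex in memory shared across fork. */
    pthread_mutex_t *m = mmap(NULL, sizeof *m, PROT_READ | PROT_WRITE,
                              MAP_SHARED | MAP_ANONYMOUS, -1, 0);
    if (m == MAP_FAILED)
        return -1;

    pthread_mutexattr_t a;
    pthread_mutexattr_init(&a);
    pthread_mutexattr_setpshared(&a, PTHREAD_PROCESS_SHARED);
    pthread_mutexattr_setrobust(&a, PTHREAD_MUTEX_ROBUST);
    pthread_mutex_init(m, &a);
    pthread_mutexattr_destroy(&a);

    pid_t pid = fork();
    if (pid == 0) {                  /* child: die while holding the lock */
        pthread_mutex_lock(m);
        _exit(0);
    }
    waitpid(pid, NULL, 0);

    int rc = pthread_mutex_lock(m);  /* parent: gets EOWNERDEAD, not a hang */
    if (rc == EOWNERDEAD) {
        /* ...repair the shared state, then mark the mutex consistent: */
        pthread_mutex_consistent(m);
        pthread_mutex_unlock(m);
        rc = 0;
    }
    munmap(m, sizeof *m);
    return rc;
}
```

Without the robust attribute, the parent's pthread_mutex_lock in this sketch would simply block forever.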
Suppose a condition variable is used in a situation where the signaling thread modifies the state affecting the truth value of the predicate and calls pthread_cond_signal without holding the mutex associated with the condition variable? Is it true that this type of usage is always subject to race conditions where the signal may be missed?
To me, there seems to always be an obvious race:
Waiter evaluates the predicate as false, but before it can begin waiting...
Another thread changes state in a way that makes the predicate true.
That other thread calls pthread_cond_signal, which does nothing because there are no waiters yet.
The waiter thread enters pthread_cond_wait, unaware that the predicate is now true, and waits indefinitely.
But does this same kind of race condition always exist if the situation is changed so that either (A) the mutex is held while calling pthread_cond_signal, just not while changing the state, or (B) so that the mutex is held while changing the state, just not while calling pthread_cond_signal?
I'm asking from a standpoint of wanting to know if there are any valid uses of the above not-best-practices usages, i.e. whether a correct condition-variable implementation needs to account for such usages in avoiding race conditions itself, or whether it can ignore them because they're already inherently racy.
The fundamental race here looks like this:
THREAD A                      THREAD B
Mutex lock
Check state
                              Change state
                              Signal
cvar wait
(never awakens)
If we take a lock EITHER on the state change OR the signal, OR both, then we avoid this; it's not possible for both the state-change and the signal to occur while thread A is in its critical section and holding the lock.
If we consider the reverse case, where thread A interleaves into thread B, there's no problem:
THREAD A                      THREAD B
                              Change state
Mutex lock
Check state
( no need to wait )
Mutex unlock
                              Signal (nobody cares)
So there's no particular need for thread B to hold a mutex over the entire operation; it just needs to hold the mutex for some, possibly infinitesimally small, interval between the state change and the signal. Of course, if the state itself requires locking for safe manipulation, then the lock must be held over the state change as well.
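A minimal sketch of this pattern, with the state change made under the lock and the signal issued after unlocking (names are illustrative):

```c
#include <pthread.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  cond = PTHREAD_COND_INITIALIZER;
static int ready = 0;               /* the predicate */

static void *waiter(void *arg) {
    (void)arg;
    pthread_mutex_lock(&lock);
    while (!ready)                  /* re-check: also handles spurious wakeups */
        pthread_cond_wait(&cond, &lock);
    pthread_mutex_unlock(&lock);
    return NULL;
}

/* Returns 1 once the waiter has observed the state change. */
int demo_signal_after_unlock(void) {
    pthread_t t;
    pthread_create(&t, NULL, waiter, NULL);

    pthread_mutex_lock(&lock);      /* the state change happens under the lock... */
    ready = 1;
    pthread_mutex_unlock(&lock);
    pthread_cond_signal(&cond);     /* ...but the signal itself may come after */

    pthread_join(t, NULL);
    return ready;
}
```

Because the waiter re-checks the predicate under the mutex before sleeping, it cannot miss a state change that was made while holding the same mutex, even though the signal arrives after the unlock.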
Finally, note that dropping the mutex early is unlikely to be a performance improvement in most cases. Requiring the mutex to be held reduces contention over the internal locks in the condition variable, and in modern pthreads implementations, the system can 'move' the waiting thread from waiting on the cvar to waiting on the mutex without waking it up (thus avoiding it waking up only to immediately block on the mutex).
As pointed out in the comments, dropping the mutex may improve performance in some cases, by reducing the number of syscalls needed. Then again it could also lead to extra contention on the condition variable's internal mutex. Hard to say. It's probably not worth worrying about in any case.
Note that the applicable standards require that pthread_cond_signal be safely callable without holding the mutex:
The pthread_cond_signal() or pthread_cond_broadcast() functions may be called by a thread whether or not it currently owns the mutex that threads calling pthread_cond_wait() or pthread_cond_timedwait() have associated with the condition variable during their waits [...]
This usually means that condition variables have an internal lock over their internal data structures, or otherwise use some very careful lock-free algorithm.
The state must be modified inside a mutex, if for no other reason than the possibility of spurious wake-ups, which would lead to the reader reading the state while the writer is in the middle of writing it.
You can call pthread_cond_signal anytime after the state is changed. It doesn't have to be inside the mutex. POSIX guarantees that at least one waiter will awaken to check the new state. More to the point:
Calling pthread_cond_signal doesn't guarantee that a reader will acquire the mutex first. Another writer might get in before a reader gets a chance to check the new status. Condition variables don't guarantee that readers immediately follow writers (After all, what if there are no readers?)
Calling it after releasing the lock is actually better, since you don't risk having the just-awoken reader immediately going back to sleep trying to acquire the lock that the writer is still holding.
EDIT: @DietrichEpp makes a good point in the comments. The writer must change the state in such a way that the reader can never access an inconsistent state. It can do so either by acquiring the mutex used with the condition variable, as I indicate above, or by ensuring that all state changes are atomic.
The answer is, there is a race, and to eliminate that race, you must do this:
/* atomic op outside of mutex, and then: */
pthread_mutex_lock(&m);
pthread_mutex_unlock(&m);
pthread_cond_signal(&c);
The protection of the data doesn't matter, because you don't hold the mutex when calling pthread_cond_signal anyway.
See, by locking and unlocking the mutex, you have created a barrier. During that brief moment when the signaler has the mutex, there is a certainty: no other thread has the mutex. This means no other thread is executing any critical regions.
This means that all threads are either about to acquire the mutex and discover the change you have posted, or have already found that change and run off with it (releasing the mutex), or have not found what they are looking for and have atomically given up the mutex and gone to sleep (and are guaranteed to be waiting nicely on the condition).
Without the mutex lock/unlock, you have no synchronization. The signal will sometimes fire as threads which didn't see the changed atomic value are transitioning to their atomic sleep to wait for it.
So this is what the mutex does from the point of view of a thread which is signaling. You can get the atomicity of access from something else, but not the synchronization.
P.S. I have implemented this logic before. The situation was in the Linux kernel (using my own mutexes and condition variables).
In my situation, it was impossible for the signaler to hold the mutex for the atomic operation on shared data. Why? Because the signaler did the operation in user space, inside a buffer shared between the kernel and user, and then (in some situations) made a system call into the kernel to wake up a thread. User space simply made some modifications to the buffer, and then if some conditions were satisfied, it would perform an ioctl.
So in the ioctl call I did the mutex lock/unlock thing, and then hit the condition variable. This ensured that the thread would not miss the wake up related to that latest modification posted by user space.
At first I just had the condition variable signal, but it looked wrong without the involvement of the mutex, so I reasoned about the situation a little bit and realized that the mutex must simply be locked and unlocked to conform to the synchronization ritual which eliminates the lost wakeup.
Is there well-defined behavior for POSIX mutex ownership in the following cases:
Thread exits
Thread crashes
Thread crashes due to exception
Suppose thread-1 owns a mutex, and thread-2 is waiting to acquire the same mutex, and thread-1 goes through scenario 1, 2, or 3 above. What is the effect on thread-2?
PS: I believe the behavior for a spin-lock is NOT to unblock thread-2, on the reasoning that the section protected by the spin-lock is in a bad state anyway.
If you're worried about these issues, Robust Mutexes may be the tool you're looking for:
http://pubs.opengroup.org/onlinepubs/9699919799/functions/pthread_mutexattr_setrobust.html
After a thread that owns a robust mutex terminates without unlocking it, the next thread that attempts to lock it will get EOWNERDEAD and become the new owner. This signals that it's responsible for cleaning up the state the mutex protects, and marking it consistent again with the pthread_mutex_consistent function before unlocking it. Unlocking it without marking it consistent puts the mutex in a permanently unrecoverable state.
Note that with robust mutexes, all code that locks the mutex must be aware of the possibility that EOWNERDEAD could be returned.
It's really simple. If you don't explicitly unlock the mutex, it remains locked, regardless of what happened or why. This is C, not Ruby on Rails or Visual Basic.
C Programming:
What happens when a thread tries to acquire a mutex lock, and fails to get it?
Does it go to sleep?
Will the thread be woken up when pthread_mutex_unlock(&mutex); is called?
Then try to obtain the lock again?
From the man page:
The pthread_mutex_lock() function locks mutex. If the mutex is already locked, the calling thread will block until the mutex becomes available.
So yes - your thread is blocked until the lock is available and it can obtain it.
Yes, it is a blocking call and will block until it gets the lock.
The non-blocking version is pthread_mutex_trylock(pthread_mutex_t *mutex) and will return EBUSY if someone else has the lock, or 0 if it got the lock. (Or some other error, of course)
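The trylock behavior is easy to observe: a second thread attempting a trylock while the first holds the mutex gets EBUSY back immediately instead of blocking. A small sketch (demo_trylock is an illustrative name):

```c
#include <errno.h>
#include <pthread.h>

static pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;

static void *try_while_held(void *out) {
    /* main holds the mutex, so this returns EBUSY without blocking */
    *(int *)out = pthread_mutex_trylock(&m);
    if (*(int *)out == 0)
        pthread_mutex_unlock(&m);
    return NULL;
}

/* Returns the trylock result seen by the second thread. */
int demo_trylock(void) {
    int rc = -1;
    pthread_t t;
    pthread_mutex_lock(&m);
    pthread_create(&t, NULL, try_while_held, &rc);
    pthread_join(t, NULL);
    pthread_mutex_unlock(&m);
    return rc;
}
```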
Normally, pthread_mutex_lock cannot return until it acquires the lock, even if this means that it never returns (deadlock). There are a few notable exceptions though:
For recursive mutexes, it can return EAGAIN if the maximum reference count would be exceeded.
For error-checking mutexes, it can return EDEADLK if the thread tries to lock a mutex it already holds a lock on.
For robust mutexes, it can return EOWNERDEAD if another process died while holding the (shared) mutex. In this case, despite getting an error return, the caller holds the mutex lock and can mark the mutex-protected state valid again by calling pthread_mutex_consistent.
For robust mutexes whose owner died and for which the new owner called pthread_mutex_unlock without calling pthread_mutex_consistent first, it will return ENOTRECOVERABLE.
There may be a few cases I missed. Note that none of these apply to normal mutexes (PTHREAD_MUTEX_NORMAL type) without the robust attribute set, so if you only use normal mutexes, you can reasonably assume the call never returns without succeeding.
From the POSIX standard:
If the mutex is already locked, the calling thread shall block until the mutex becomes available.
(...)
If there are threads blocked on the mutex object referenced by mutex when pthread_mutex_unlock() is called, resulting in the mutex becoming available, the scheduling policy shall determine which thread shall acquire the mutex.
Where the "resulting in" clause is necessary because
(In the case of PTHREAD_MUTEX_RECURSIVE mutexes, the mutex shall become available when the count reaches zero and the calling thread no longer has any locks on this mutex.)