When will a thread woken via a condition variable run?

When I wake up a thread waiting on a condition variable while holding the corresponding mutex, can I assume that the woken thread will run after I release the mutex and before anyone else (myself included) can lock the mutex again? Or can I only be sure that it will run at some point in the future?
To be precise, assume that I have the following functions.
bool taken = false;
int waiting = 0;
pthread_mutex_t m; // properly initialised elsewhere
pthread_cond_t c;

void enter() {
    pthread_mutex_lock(&m);
    // Is `taken || (waiting == 0)` guaranteed here?
    while (taken) {
        ++waiting;
        pthread_cond_wait(&c, &m);
        --waiting;
    }
    taken = true;
    pthread_mutex_unlock(&m);
}
void leave() {
    pthread_mutex_lock(&m);
    taken = false;
    if (waiting > 0) {
        pthread_cond_signal(&c);
    }
    pthread_mutex_unlock(&m);
}
(This is a toy example, not meant to be useful.)
Also assume that all threads are using these functions as
enter()
// some work here
leave()
Can I then be sure that, directly after acquiring the mutex in enter() (see the comment in the code), waiting has to be zero if taken is false? [It seems to me that this should be the case, because the woken thread expects to find the state that the waking thread left behind (if the wake-up was not spurious), and this could not otherwise be guaranteed; but I did not find it clearly worded anywhere.]
I am mainly interested in the behaviour on (modern) Linux, but of course knowing whether this is defined by POSIX would also be of interest.
Note: This may have already been asked in another question, but I hope that mine is clearer.
After t0 does pthread_cond_signal, t1 is no longer waiting for the condition variable, but it is also not running, because t0 still holds the mutex; t1 is instead waiting for the mutex. The thread t2 may also be waiting for the mutex, at the beginning of enter(). Now t0 releases the mutex. Both t1 and t2 are waiting for it. Is t1 handled in a special way and guaranteed to get it, or could t2 get it instead?

No, see this answer for more background. Given your example, t2 could acquire the mutex before t1, and a race condition may occur, leading to unexpected outcomes.
To reiterate: t0 may initially hold the mutex and be blocked inside the while loop on the line pthread_cond_wait(&c, &m); the mutex is released atomically when the wait begins (reference). t1 could call leave(), acquiring the mutex, signalling the condition c, and then releasing the mutex. t0 then prepares to run --waiting by trying to acquire the mutex just freed by t1, but it can be context-switched out by the OS. Some other thread t2 that is waiting on the mutex can grab it and run enter(), causing undesired outcomes (see non-reentrant). t2 then releases the mutex, and t0 may be swapped back in only to find that the shared state has been mutated.
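To see the race concretely, here is a small demo built from the question's enter()/leave() (the thread names, the staging sleep(), and the printf()s are my additions; compile with -pthread). On some runs "t2 entered" prints before "t1 entered", because the woken t1 has no special claim on the mutex:

#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>
#include <unistd.h>

static bool taken = false;
static int waiting = 0;
static pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t c = PTHREAD_COND_INITIALIZER;

static void enter(const char *who) {
    pthread_mutex_lock(&m);
    while (taken) {            /* the while loop is what makes this safe */
        ++waiting;
        pthread_cond_wait(&c, &m);
        --waiting;
    }
    taken = true;
    printf("%s entered\n", who);
    pthread_mutex_unlock(&m);
}

static void leave(void) {
    pthread_mutex_lock(&m);
    taken = false;
    if (waiting > 0)
        pthread_cond_signal(&c);
    pthread_mutex_unlock(&m);
}

static void *contender(void *arg) {
    enter((const char *)arg);
    leave();
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    enter("t0");                          /* main thread plays t0 */
    pthread_create(&t1, NULL, contender, "t1");
    sleep(1);                             /* let t1 block in pthread_cond_wait */
    pthread_create(&t2, NULL, contender, "t2");
    leave();                              /* wakes t1; t1 and t2 now race for m */
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    return 0;
}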

Related

Implementation of a lock using test and set

Below is the code given in the OSTEP book for implementing a lock using the test-and-set instruction. My question is: in such an implementation, couldn't a thread that is not holding the lock call the unlock function and take away the lock?
typedef struct __lock_t {
    int flag;
} lock_t;

void init(lock_t *lock) {
    // 0 indicates that lock is available, 1 that it is held
    lock->flag = 0;
}

void lock(lock_t *lock) {
    while (TestAndSet(&lock->flag, 1) == 1)
        ; // spin-wait (do nothing)
}

void unlock(lock_t *lock) {
    lock->flag = 0;
}
It's assumed that it's a coding error, so that case should not happen.
There's nothing wrong with (e.g.) one thread acquiring a lock, notifying another thread that it can proceed (or maybe spawning a new thread), then the other thread releasing the lock.
Something is very wrong if a single thread acquires a lock and then releases it twice.
To detect bugs (while still supporting all legitimate scenarios), you'd want to detect "unlock() called when lock not held". You could do that by using an atomic TestAndClear() in the unlock().
If you want to enforce "only the thread that acquired the lock may release the lock", then you have to know which thread acquired the lock, so you have to store a thread ID in the lock when it's acquired (e.g. by using an atomic "compare, and exchange if it was equal").
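A minimal sketch of that idea in C11 atomics (the names owned_lock_t, olock and ounlock are mine, and it assumes pthread_t converts to an integer, which is true on Linux):

#include <assert.h>
#include <pthread.h>
#include <stdatomic.h>
#include <stdint.h>

typedef struct {
    atomic_uintptr_t owner;   /* 0 = free, otherwise the holder's thread id */
} owned_lock_t;

void olock(owned_lock_t *l) {
    uintptr_t me = (uintptr_t)pthread_self();
    uintptr_t expected = 0;
    /* acquire: atomically swap 0 -> me; spin while someone else holds it */
    while (!atomic_compare_exchange_weak(&l->owner, &expected, me))
        expected = 0;
}

void ounlock(owned_lock_t *l) {
    uintptr_t expected = (uintptr_t)pthread_self();
    /* release: swap me -> 0, but only if we are the recorded owner */
    if (!atomic_compare_exchange_strong(&l->owner, &expected, 0))
        assert(!"unlock() by a thread that does not hold the lock");
}

The simpler TestAndClear() check is just atomic_exchange(&lock->flag, 0): if that returns 0, unlock() was called on a lock that was not held.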

pthread_cond_wait() waking up two threads at the same time

I am trying to better understand how to use pthread_cond_wait() and how it works.
I am just looking for a bit of clarification to an answer I saw on this site.
The answer is the last reply on this page
understanding of pthread_cond_wait() and pthread_cond_signal()
I am wondering how this would look with three threads. Imagine Thread 1 wants to tell Thread 2 and Thread 3 to wake up
For example
pthread_mutex_t mutex;
pthread_cond_t condition;
Thread 1:
pthread_mutex_lock(&mutex);
/*Initialize things*/
pthread_mutex_unlock(&mutex);
pthread_cond_signal(&condition); //wake up thread 2 & 3
/*Do other things*/
Thread 2:
pthread_mutex_lock(&mutex); //mutex lock
while (!condition) {
    pthread_cond_wait(&condition, &mutex); // wait for the condition
}
pthread_mutex_unlock(&mutex);
/*Do work*/
Thread 3:
pthread_mutex_lock(&mutex); //mutex lock
while (!condition) {
    pthread_cond_wait(&condition, &mutex); // wait for the condition
}
pthread_mutex_unlock(&mutex);
/*Do work*/
I am wondering if such a setup is valid. Say that Threads 2 and 3 rely on some initialization options that Thread 1 needs to process.
First: If you wish thread #1 to wake up thread #2 and #3, it should use pthread_cond_broadcast.
Second: The setup is valid (with broadcast). Threads #2 and #3 are scheduled for wakeup, and they will try to reacquire the mutex as part of waking up. One of them will; the other will have to wait until the mutex is unlocked again. So threads #2 and #3 access the critical section sequentially (to re-evaluate the condition).
If I understand correctly, you want thr#2 and thr#3 ("workers") to block until thr#1 ("boss") has performed some initialization.
Your approach is almost workable, but you need to broadcast rather than signal, and are missing a predicate variable separate from your condition variable. (In the question you reference, the predicate and condition variables were very similarly named.) For example:
pthread_mutex_t mtx;
pthread_cond_t cv;
int initialized = 0; // our predicate of interest, signaled via cv
...

// boss thread
initialize_things();
pthread_mutex_lock(&mtx);
initialized = 1;
pthread_cond_broadcast(&cv);
pthread_mutex_unlock(&mtx);
...

// worker threads
pthread_mutex_lock(&mtx);
while (!initialized) {
    pthread_cond_wait(&cv, &mtx);
}
pthread_mutex_unlock(&mtx);
do_things();
That's common enough that you might want to combine the mutex/cv/flag into a single abstraction. (For inspiration, see Python's Event object.) POSIX barriers are another way of synchronizing threads: every thread waits until all threads have "arrived." pthread_once is yet another way, as it runs a function once and only once no matter how many threads call it.
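A minimal sketch of such an Event-style wrapper (the names event_t, event_set and event_wait are mine):

#include <pthread.h>

typedef struct {
    pthread_mutex_t mtx;
    pthread_cond_t cv;
    int set;                  /* the predicate, protected by mtx */
} event_t;

#define EVENT_INITIALIZER { PTHREAD_MUTEX_INITIALIZER, PTHREAD_COND_INITIALIZER, 0 }

void event_set(event_t *e) {
    pthread_mutex_lock(&e->mtx);
    e->set = 1;
    pthread_cond_broadcast(&e->cv);   /* wake every waiter, not just one */
    pthread_mutex_unlock(&e->mtx);
}

void event_wait(event_t *e) {
    pthread_mutex_lock(&e->mtx);
    while (!e->set)                   /* loop guards against spurious wakeups */
        pthread_cond_wait(&e->cv, &e->mtx);
    pthread_mutex_unlock(&e->mtx);
}

Once set, the event stays set, so workers that arrive late return from event_wait() immediately.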
pthread_cond_signal wakes up one (random) thread waiting on the cond variable. If you want to wake up all threads waiting on this cond variable use pthread_cond_broadcast.
Depending on what you are doing in the critical section, there might also be another solution apart from the ones in the previous answers.
Suppose thread1 executes first (i.e. it is the creator thread), and suppose thread2 and thread3 do not write to any shared resources in the critical section. In that case, with pthread_cond_wait you are forcing one thread to wait for the other when there is actually no need.
You can use a read-write lock of type pthread_rwlock_t. Basically, thread1 takes a write lock, so the other threads will block when trying to acquire a read lock.
The functions for this lock are quite self-explanatory:
//They return: 0 if OK, error number on failure
int pthread_rwlock_init(pthread_rwlock_t *restrict rwlock,
                        const pthread_rwlockattr_t *restrict attr);
int pthread_rwlock_destroy(pthread_rwlock_t *rwlock);
int pthread_rwlock_rdlock(pthread_rwlock_t *rwlock);
int pthread_rwlock_wrlock(pthread_rwlock_t *rwlock);
int pthread_rwlock_unlock(pthread_rwlock_t *rwlock);
int pthread_rwlock_tryrdlock(pthread_rwlock_t *rwlock);
int pthread_rwlock_trywrlock(pthread_rwlock_t *rwlock);
When thread1 has finished its initialization, it unlocks. The other threads then take read locks, and since multiple read locks can coexist, they can execute simultaneously. Again: this is valid only if thread2 and thread3 do not write to the shared resources.
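A sketch of that arrangement (start_workers(), initialize_things() and do_things() are hypothetical; note that thread1 must take the write lock before the workers can attempt their read locks):

pthread_rwlock_t rw = PTHREAD_RWLOCK_INITIALIZER;

/* thread1 (the creator) */
pthread_rwlock_wrlock(&rw);   /* taken before the workers are started */
start_workers();
initialize_things();
pthread_rwlock_unlock(&rw);

/* thread2 / thread3 */
pthread_rwlock_rdlock(&rw);   /* blocks until thread1 drops the write lock */
do_things();                  /* read-only work; both readers run concurrently */
pthread_rwlock_unlock(&rw);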

Once we have signaled a condition variable, will we keep on executing the original thread?

I have a problem below.
Process A:

int A = 0;
pthread_mutex_lock(&mutex);
while (condition == FALSE)
    pthread_cond_wait(&cond, &mutex);
A += 10;

Process B:

int B = 0;
pthread_mutex_lock(&mutex);
condition = TRUE;
pthread_cond_signal(&cond);
pthread_mutex_unlock((&mutex)
B += 10;
My problem is, if process B has other instructions, ex: int B += 10;, will B execute B += 10 immediately, or will A take control instead? That is, will process B continue executing, or will A be woken up and take control? For example, will B += 10 precede A += 10, or vice versa?
Every condition variable is associated with a mutex. The mutex must be held when pthread_cond_wait is called; pthread_cond_wait releases the mutex, waits for the condition to be signalled, and then reacquires the mutex before returning.
You may call pthread_cond_signal with the mutex held, or after releasing the mutex. If it is called with the mutex held, no pthread_cond_wait can continue until the mutex is released.
The example code in the question does not release the mutex before executing B += 10; [Note 1], so that will definitely execute before A += 10;. The mutex must at some point be released, of course.
Once the mutex is released, both threads execute in an unspecified order. If your computer has more than one core (quite common these days), they might both be executing at the same time.
Note 1: int B += 10; is invalid. You can't declare a variable and increment it in one statement (where would the variable be initialized?)
It is undefined behaviour. As you haven't synchronized the access, you can't be sure which one runs first. Though in real life it depends on the OS implementation of pthread_cond_signal, you shouldn't rely on it.
Your code is incomplete because process B never unlocks the mutex.
When process A executes pthread_cond_wait it atomically unlocks the mutex and waits for the signal from process B. When process B calls pthread_cond_signal, process A will attempt to lock the mutex again. Since B never unlocks the mutex, A will be stuck trying to acquire the mutex.
Hence, only the B += 10 statement will run. The A += 10 will run if and only if process B unlocks the mutex.
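For completeness, a corrected sketch of process B that does release the mutex (the ordering comment is the point of the answers above):

pthread_mutex_lock(&mutex);
condition = TRUE;
pthread_cond_signal(&cond);
pthread_mutex_unlock(&mutex);   /* now A's pthread_cond_wait() can reacquire and return */
B += 10;                        /* order relative to A += 10 is now unspecified */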

Why do we need a condition check before pthread_cond_wait

I am trying to learn basics of pthread_cond_wait. In all the usages, I see either
if (cond is false)
    pthread_cond_wait
or
while (cond is false)
    pthread_cond_wait
My question is: we want to cond_wait only because the condition is false, so why should I take the pain of explicitly putting an if/while around it? I can understand that without any if/while check before cond_wait, we would hit the wait directly and it might never return at all. Is the condition check solely for that purpose, or does it have any other significance? If it is just for avoiding an unnecessary wait, isn't putting a condition check and skipping the cond_wait similar to polling? I am using cond_wait like this:
void* proc_add(void *name) {
    struct vars *my_data = (struct vars*)name;
    printf("In thread Addition and my id = %d\n", pthread_self());
    while (1) {
        pthread_mutex_lock(&mutexattr);
        while (!my_data->ipt) {                            // If no input, get in
            pthread_cond_wait(&mutexaddr_add, &mutexattr); // Wait till signalled
            my_data->opt = my_data->a + my_data->b;
            my_data->ipt = 1;
            pthread_cond_signal(&mutexaddr_opt);
        }
        pthread_mutex_unlock(&mutexattr);
        if (my_data->end)
            pthread_exit((void *)0);
    }
}
The logic is: I am asking this thread to process the data whenever input is available, and to signal the output thread to print it.
You need a while loop because the thread that called pthread_cond_wait might wake up even when the condition you are waiting for hasn't been reached. This phenomenon is called a "spurious wakeup".
This is not a bug; it is simply the way condition variables are implemented.
This can also be found in the man pages:
Spurious wakeups from the pthread_cond_timedwait() or pthread_cond_wait() functions may occur. Since the return from pthread_cond_timedwait() or pthread_cond_wait() does not imply anything about the value of this predicate, the predicate should be re-evaluated upon such return.
Update regarding the actual code:
void* proc_add(void *name)
{
    struct vars *my_data = (struct vars*)name;
    printf("In thread Addition and my id = %d\n", pthread_self());
    while (1) {
        pthread_mutex_lock(&mutexattr);
        while (!my_data->ipt) {                            // If no input, wait
            pthread_cond_wait(&mutexaddr_add, &mutexattr); // Wait till signalled
        }
        my_data->opt = my_data->a + my_data->b;
        my_data->ipt = 1;
        pthread_cond_signal(&mutexaddr_opt);
        pthread_mutex_unlock(&mutexattr);
        if (my_data->end)
            pthread_exit((void *)0);
    }
}
You must test the condition under the mutex before waiting because signals of the condition variable are not queued (condition variables are not semaphores). That is, if a thread calls pthread_cond_signal() when no threads are blocked in pthread_cond_wait() on that condition variable, then the signal does nothing.
This means that if you had one thread set the condition:
pthread_mutex_lock(&m);
cond = true;
pthread_cond_signal(&c);
pthread_mutex_unlock(&m);
and then another thread unconditionally waited:
pthread_mutex_lock(&m);
pthread_cond_wait(&c, &m);
/* cond now true */
this second thread would block forever. This is avoided by having the second thread check for the condition:
pthread_mutex_lock(&m);
if (!cond)
    pthread_cond_wait(&c, &m);
/* cond now true */
Since cond is only modified with the mutex m held, this means that the second thread waits if and only if cond is false.
The reason a while () loop is used in robust code instead of an if () is that pthread_cond_wait() does not guarantee that it will not wake up spuriously. Using a while () also means that signalling the condition variable is always perfectly safe - "extra" signals don't affect the program's correctness, which means you can do things like move the signal outside of the locked section of code, as sketched below.
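For instance, a sketch of that last point, assuming cond is only ever written under the mutex:

pthread_mutex_lock(&m);
cond = true;
pthread_mutex_unlock(&m);
/* Signalling after unlocking is safe precisely because waiters re-check
 * cond in a while loop; at worst a waiter wakes, sees cond already true,
 * and proceeds without ever sleeping on this signal. */
pthread_cond_signal(&c);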

How to allow certain threads to have priority in locking a mutex using pthreads

Assume that the following code is being executed by 10 threads.
pthread_mutex_lock(&lock)
Some trivial code
pthread_mutex_unlock(&lock)
For the purpose of explanation, let's say the threads are T1, T2, T3, ..., T10.
My requirement is that as long as T1, T2 or T3 (i.e. any of T1, T2 or T3) is waiting to acquire the lock, the other threads, i.e. T4 through T10, should not be able to acquire it. In other words, T1, T2 and T3 should have precedence over the other threads in acquiring the lock.
I guess it could be done by increasing the priority of threads T1, T2 and T3
i.e. here is the pseudo-code:

if this thread is T1 or T2 or T3
    increase its priority
pthread_mutex_lock(&lock)
Some trivial code
pthread_mutex_unlock(&lock)
if this thread is T1 or T2 or T3
    decrease its priority to normal
Please note that I want a solution that works on the Linux platform and uses pthreads. I don't really care about any other platform.
Also note that I don't really want to make these 3 threads realtime; I want them to exhibit their default behaviour (scheduling and priority), except that in the small piece of code mentioned above they should always have precedence in acquiring the lock.
I have read some man pages about scheduling policies and scheduling priorities in Linux but can't really make sense of them :(
Will this work? Can you help me with the exact pthread API required to accomplish the above task?
Regards
lali
Here's my implementation. Low priority threads use prio_lock_low() and prio_unlock_low() to lock and unlock, high priority threads use prio_lock_high() and prio_unlock_high().
The design is quite simple. High priority threads are held at the critical section mutex cs_mutex; low priority threads are held at the condition variable. The condition variable mutex cv_mutex is only held around updates to the shared variable and signalling of the condition variable.
#include <pthread.h>

typedef struct prio_lock {
    pthread_cond_t cond;
    pthread_mutex_t cv_mutex; /* Condition variable mutex */
    pthread_mutex_t cs_mutex; /* Critical section mutex */
    unsigned long high_waiters;
} prio_lock_t;

#define PRIO_LOCK_INITIALIZER { PTHREAD_COND_INITIALIZER, PTHREAD_MUTEX_INITIALIZER, PTHREAD_MUTEX_INITIALIZER }

void prio_lock_low(prio_lock_t *prio_lock)
{
    pthread_mutex_lock(&prio_lock->cv_mutex);
    while (prio_lock->high_waiters || pthread_mutex_trylock(&prio_lock->cs_mutex))
    {
        pthread_cond_wait(&prio_lock->cond, &prio_lock->cv_mutex);
    }
    pthread_mutex_unlock(&prio_lock->cv_mutex);
}

void prio_unlock_low(prio_lock_t *prio_lock)
{
    pthread_mutex_unlock(&prio_lock->cs_mutex);
    pthread_mutex_lock(&prio_lock->cv_mutex);
    if (!prio_lock->high_waiters)
        pthread_cond_signal(&prio_lock->cond);
    pthread_mutex_unlock(&prio_lock->cv_mutex);
}

void prio_lock_high(prio_lock_t *prio_lock)
{
    pthread_mutex_lock(&prio_lock->cv_mutex);
    prio_lock->high_waiters++;
    pthread_mutex_unlock(&prio_lock->cv_mutex);

    pthread_mutex_lock(&prio_lock->cs_mutex);
}

void prio_unlock_high(prio_lock_t *prio_lock)
{
    pthread_mutex_unlock(&prio_lock->cs_mutex);

    pthread_mutex_lock(&prio_lock->cv_mutex);
    prio_lock->high_waiters--;
    if (!prio_lock->high_waiters)
        pthread_cond_signal(&prio_lock->cond);
    pthread_mutex_unlock(&prio_lock->cv_mutex);
}
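Usage would look something like this (a hypothetical caller; the worker function names are mine):

prio_lock_t pl = PRIO_LOCK_INITIALIZER;

void *low_prio_worker(void *arg) {
    prio_lock_low(&pl);
    /* critical section */
    prio_unlock_low(&pl);
    return NULL;
}

void *high_prio_worker(void *arg) {
    prio_lock_high(&pl);
    /* critical section */
    prio_unlock_high(&pl);
    return NULL;
}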
As I understand it, the only way you can truly guarantee this would be to write a lock that works like that yourself. However #xryl669's answer that suggests using thread priority and priority inheritance is certainly worthy of consideration if it works for your use case.
To implement it yourself, you will need condition variables and counts of the number of waiting low / high priority threads.
In terms of the concepts and APIs you'll need, it is relatively similar to implementing a read/write lock (but the semantics you need are completely different, obviously - still, if you understand how the r/w lock works, you'll understand how to implement what you want).
You can see an implementation of a read write lock here:
http://ptgmedia.pearsoncmg.com/images/0201633922/sourcecode/rwlock.c
In the lower priority threads, you'd need to wait for high priority threads to finish, in the same way readers wait for writers to finish.
(The book the above code is taken from is also a great POSIX threads book, by the way: http://www.informit.com/store/product.aspx?isbn=0201633922)
The native way is to enable priority inheritance for your mutex (via a pthread_mutexattr_t), and to use pthread's thread priorities to get what you need.
It only requires a very few lines of code, and you are not re-inventing the wheel.
On the plus side, it will also work with the RT or FIFO scheduler, while a homebrew version will not.
Then, whenever a thread with a high priority waits on a mutex held by a lower priority thread, the kernel "boosts" the low priority thread so it can be scheduled in place of the high priority thread, giving it a timeslice in which to release the lock. As soon as the lock is released, the high priority thread is scheduled. That's the lowest delay you can get, since it's done in the kernel.
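A minimal sketch of that setup (PTHREAD_PRIO_INHERIT and pthread_mutexattr_setprotocol() are POSIX; the priority value 10 is an arbitrary choice of mine, and SCHED_FIFO needs suitable privileges):

#include <pthread.h>
#include <sched.h>

pthread_mutex_t lock;

void init_pi_mutex(void) {
    pthread_mutexattr_t attr;
    pthread_mutexattr_init(&attr);
    /* a higher-priority waiter boosts whichever thread holds the mutex */
    pthread_mutexattr_setprotocol(&attr, PTHREAD_PRIO_INHERIT);
    pthread_mutex_init(&lock, &attr);
    pthread_mutexattr_destroy(&attr);
}

void raise_my_priority(void) {   /* call from T1..T3 */
    struct sched_param sp = { .sched_priority = 10 };
    pthread_setschedparam(pthread_self(), SCHED_FIFO, &sp);
}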
Alternatively, you may just introduce another lock for the higher priority threads. Consider the following pseudo-code (I am not familiar with the pthread semantics, but I believe it is not hard to map this code to the needed calls).
EDIT (thanks JosephH): introducing the exec semaphore, set to 3 (the number of high-prio threads).
Note that pend(exec, 3); means that this pend will sleep until all 3 slots are available and will consume them all.
//init
exec = semaphore(3, 3);
//========================
if this is NOT thread (t1,t2,t3)
    lock(low_prio);
    sem_pend(exec, 3);
else
    sem_pend(exec, 1);
lock(high_prio);
//...
unlock(high_prio);
if this is NOT thread (t1,t2,t3)
    sem_release(exec, 3);
    sleep(0); //yield(); //ensures that sem_pend(exec,1) is executed
    unlock(low_prio);
else
    sem_release(exec, 1);
(The first two attempts had bugs; please jump to EDIT2.)
Maybe this would work?
if NOT (this thread is T1 or T2 or T3)
    pthread_mutex_lock(&lock1) // see note below
    pthread_mutex_lock(&lock2)
    Some trivial code
    pthread_mutex_unlock(&lock2)
    pthread_mutex_unlock(&lock1)
else
    pthread_mutex_lock(&lock2)
    Some trivial code
    pthread_mutex_unlock(&lock2)
end if
Reasoning:
Some threads will compete for two locks and therefore have lower priority, while other threads will compete for only one lock and therefore have higher priority.
Still, the difference might be marginal; the remedy would then be to introduce some lag between acquiring the first lock and attempting the second one, during which the higher priority threads will be given a chance to grab lock2.
(disclaimer: I am a newbie when it comes to this)
EDIT:
Another attempt/approach
if NOT (this thread is T1 or T2 or T3)
    pthread_mutex_lock(&lock1)
    if pthread_mutex_trylock(&lock2) == 0 // low priority threads will not get queued
        Some trivial code
        pthread_mutex_unlock(&lock2)
    end if
    pthread_mutex_unlock(&lock1)
else
    if (this thread is T1 or T2 or T3)
        pthread_mutex_lock(&lock2)
        Some trivial code
        pthread_mutex_unlock(&lock2)
    end if
end if
EDIT2: Another attempt (trying to learn something here)
if NOT (this thread is T1 or T2 or T3)
    pthread_mutex_lock(&lock1)
    while !(pthread_mutex_trylock(&lock2) == 0)
        pthread_yield()
    Some trivial code
    pthread_mutex_unlock(&lock2)
    pthread_mutex_unlock(&lock1)
else
    if (this thread is T1 or T2 or T3)
        pthread_mutex_lock(&lock2)
        Some trivial code
        pthread_mutex_unlock(&lock2)
    end if
end if
To implement that with pthreads you would need N lists, one per thread priority. The lists would contain pointers to the thread's pthread_cond_t variables.
Schematic untested meta-code:
/* the main lock */
pthread_mutex_t TheLock = PTHREAD_MUTEX_INITIALIZER;

/* service structures: prio lists and the lock for them */
pthread_mutex_t prio_list_guard = PTHREAD_MUTEX_INITIALIZER;
pthread_cond_t *prio_lists[MY_MAX_PRIO][MY_MAX_THREAD]; /* 0 == highest prio */

/* lock */
void
prio_lock(int myprio)
{
    pthread_cond_t x;

    pthread_mutex_lock(&prio_list_guard);
    if (0 == pthread_mutex_trylock(&TheLock)) {
        pthread_mutex_unlock(&prio_list_guard);
        return;
    }
    pthread_cond_init(&x, 0);
    LIST_ADD(prio_lists[myprio], &x);
    while (1) /* handle spurious wake-ups */
    {
        pthread_cond_wait(&x, &prio_list_guard);
        if (0 == pthread_mutex_trylock(&TheLock))
        {
            LIST_REMOVE(prio_lists[myprio], &x);
            pthread_mutex_unlock(&prio_list_guard);
            pthread_cond_destroy(&x);
            return;
        }
    }
}

/* unlock */
void
prio_unlock()
{
    int i;
    pthread_cond_t *p;

    pthread_mutex_lock(&prio_list_guard);
    for (i = 0; i < MY_MAX_PRIO; i++)
    {
        if ((p = LIST_GETFIRST(prio_lists[i])))
        {
            pthread_cond_signal(p);
            break;
        }
    }
    pthread_mutex_unlock(&TheLock);
    pthread_mutex_unlock(&prio_list_guard);
}
The code also handles spurious wake-ups from pthread_cond_wait(), but frankly I have never seen that happening.
Edit1. Note that prio_lists above is a primitive form of a priority queue.
