Trying to understand pthread_cond_wait and pthread_cond_signal - c

So I'm trying to understand exactly how pthread_cond_wait works.
My current understanding is that it unlocks the mutex and puts whatever thread is going through it to sleep, sleep meaning that the thread is inactive and consuming no resources.
It then waits for a signal to go from asleep to blocked, meaning that the thread can no longer change any variables.
thread 1:
pthread_mutex_lock(&mutex);
while (!condition){
printf("Thread wating.\n");
pthread_cond_wait(&cond, &mutex);
printf("Thread awakened.\n");
fflush(stdout);
}
pthread_mutex_unlock(&mutex);
thread 2:
pthread_cond_signal(&cond);
pthread_mutex_unlock(&mutex);
So basically, in the sample above, the loop runs and runs, and on each iteration pthread_cond_wait checks whether the condition of the loop is true. If it is, then the cond_signal is sent and the thread is blocked so it can't manipulate any more data.
I'm really having trouble wrapping my head around this, so I'd appreciate some input and feedback on how this works and whether I am beginning to understand it based on what I have above.
I've gone over this post but am still having trouble

First, a summary:
pthread_mutex_lock(&mutex):
If mutex is free, then this thread grabs it immediately.
If mutex is grabbed, then this thread waits until the mutex becomes free, and then grabs it.
pthread_mutex_trylock(&mutex):
If mutex is free, then this thread grabs it.
If mutex is grabbed, then the call returns immediately with EBUSY.
pthread_mutex_unlock(&mutex):
Releases mutex.
pthread_cond_signal(&cond):
Wake up one thread waiting on the condition variable cond.
pthread_cond_broadcast(&cond):
Wake up all threads waiting on the condition variable cond.
pthread_cond_wait(&cond, &mutex):
This must be called with mutex grabbed.
The calling thread will temporarily release mutex and wait on cond.
When cond is broadcast on, or signaled on and this thread happens to be the one woken up, then the calling thread will first re-grab the mutex, and then return from the call.
It is important to note that at all times, the calling thread either has mutex grabbed, or is waiting on cond. There is no interval in between.
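Put together, these calls are nearly always used in the same basic shape. Here is a minimal sketch (mutex, cond, and predicate are placeholder names, not taken from OP's code):
static pthread_mutex_t mutex = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t cond = PTHREAD_COND_INITIALIZER;
static int predicate = 0;            /* the shared state both sides agree on */

/* Waiting side: always re-check the predicate in a loop. */
pthread_mutex_lock(&mutex);
while (!predicate)
    pthread_cond_wait(&cond, &mutex);
/* Here the predicate is true and we hold the mutex. */
pthread_mutex_unlock(&mutex);

/* Signaling side: change the predicate while holding the mutex. */
pthread_mutex_lock(&mutex);
predicate = 1;
pthread_cond_signal(&cond);          /* or pthread_cond_broadcast(&cond) */
pthread_mutex_unlock(&mutex);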
Let's look at a practical, working example. We'll create it along the lines of OP's code.
First, we'll use a structure to hold the parameters for each worker function. Since we'll want the mutex and the condition variable to be shared between threads, we'll use pointers.
#define _POSIX_C_SOURCE 200809L
#include <stdlib.h>
#include <pthread.h>
#include <limits.h>
#include <string.h>
#include <stdio.h>
#include <errno.h>
/* Worker function work. */
struct work {
pthread_t thread_id;
pthread_mutex_t *lock; /* Pointer to the mutex to use */
pthread_cond_t *wait; /* Pointer to the condition variable to use */
volatile int *done; /* Pointer to the flag to check */
FILE *out; /* Stream to output to */
long id; /* Identity of this thread */
unsigned long count; /* Number of times this thread iterated. */
};
The thread worker function receives a pointer to the above structure. Each thread iterates the loop once, then waits on the condition variable. When woken up, if the done flag is still zero, the thread iterates the loop. Otherwise, the thread exits.
/* Example worker function. */
void *worker(void *workptr)
{
struct work *const work = workptr;
pthread_mutex_lock(work->lock);
/* Loop as long as *done == 0: */
while (!*(work->done)) {
/* *(work->lock) is ours at this point. */
/* This is a new iteration. */
work->count++;
/* Do the work. */
fprintf(work->out, "Thread %ld iteration %lu\n", work->id, work->count);
fflush(work->out);
/* Wait for wakeup. */
pthread_cond_wait(work->wait, work->lock);
}
/* *(work->lock) is still ours, but we've been told that all work is done already. */
/* Release the mutex and be done. */
pthread_mutex_unlock(work->lock);
return NULL;
}
To run the above, we'll need a main() as well:
#ifndef THREADS
#define THREADS 4
#endif
int main(void)
{
pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
pthread_cond_t wait = PTHREAD_COND_INITIALIZER;
volatile int done = 0;
struct work w[THREADS];
char *line = NULL, *p;
size_t size = 0;
ssize_t len = 0;
unsigned long total;
pthread_attr_t attrs;
int i, err;
/* The worker functions require very little stack, but the default stack
size is huge. Limit that, to reduce the (virtual) memory use. */
pthread_attr_init(&attrs);
pthread_attr_setstacksize(&attrs, 2 * PTHREAD_STACK_MIN);
/* Grab the mutex so the threads will have to wait to grab it. */
pthread_mutex_lock(&lock);
/* Create THREADS worker threads. */
for (i = 0; i < THREADS; i++) {
/* All threads use the same mutex, condition variable, and done flag. */
w[i].lock = &lock;
w[i].wait = &wait;
w[i].done = &done;
/* All threads output to standard output. */
w[i].out = stdout;
/* The rest of the fields are thread-specific. */
w[i].id = i + 1;
w[i].count = 0;
err = pthread_create(&(w[i].thread_id), &attrs, worker, (void *)&(w[i]));
if (err) {
fprintf(stderr, "Cannot create thread %d of %d: %s.\n", i+1, THREADS, strerror(errno));
exit(EXIT_FAILURE); /* Exits the entire process, killing any other threads as well. */
}
}
fprintf(stderr, "The first character on each line controls the type of event:\n");
fprintf(stderr, " e, q exit\n");
fprintf(stderr, " s signal\n");
fprintf(stderr, " b broadcast\n");
fflush(stderr);
/* Let each thread grab the mutex now. */
pthread_mutex_unlock(&lock);
while (1) {
len = getline(&line, &size, stdin);
if (len < 1)
break;
/* Find the first character on the line, ignoring leading whitespace. */
p = line;
while ((p < line + len) && (*p == '\0' || *p == '\t' || *p == '\n' ||
*p == '\v' || *p == '\f' || *p == '\r' || *p == ' '))
p++;
/* Do the operation mentioned */
if (*p == 'e' || *p == 'E' || *p == 'q' || *p == 'Q')
break;
else
if (*p == 's' || *p == 'S')
pthread_cond_signal(&wait);
else
if (*p == 'b' || *p == 'B')
pthread_cond_broadcast(&wait);
}
/* It is time for the worker threads to be done. */
pthread_mutex_lock(&lock);
done = 1;
pthread_mutex_unlock(&lock);
/* To ensure all threads see the state of that flag,
we wake up all threads by broadcasting on the condition variable. */
pthread_cond_broadcast(&wait);
/* Reap all threads. */
for (i = 0; i < THREADS; i++)
pthread_join(w[i].thread_id, NULL);
/* Output the thread statistics. */
total = 0;
for (i = 0; i < THREADS; i++) {
total += w[i].count;
fprintf(stderr, "Thread %ld: %lu events.\n", w[i].id, w[i].count);
}
fprintf(stderr, "Total: %lu events.\n", total);
return EXIT_SUCCESS;
}
If you save the above as example.c, you can compile it to example using e.g. gcc -Wall -O2 example.c -lpthread -o example.
To get the correct intuitive grasp of the operations, run the example in a terminal, with the source code in a window next to it, and see how the execution progresses as you provide input.
You can even run commands like printf '%s\n' s s s b q | ./example to run a sequence of events in quick succession, or printf 's\ns\ns\nb\nq\n' | ./example with even less time between events.
After some experimentation, you'll hopefully find out that not all input events cause their respective action. This is because the exit event (q above) is not synchronous: it does not wait for all pending work to be done, but tells the threads to exit right then and there. That is why the number of events may vary even for the exact same input.
(Also, if you signal on the condition variable, and immediately broadcast on it, the threads tend to only get woken up once.)
You can mitigate that by delaying the exit, using e.g. (printf '%s\n' s s b s s s ; sleep 1 ; printf 'q\n' ) | ./example.
However, there are better ways. A condition variable is not suitable for countable events; it is really flag-like. A semaphore would work better, but then you should be careful not to overflow the semaphore; its value can only range from 0 to SEM_VALUE_MAX, inclusive. (So, you could use a semaphore to represent the number of pending jobs, but probably not the number of iterations done by each/all worker threads.) A queue for the work to do, in thread-pool fashion, is the most common approach.
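To illustrate just the semaphore idea (this is not part of the example program above; the queue and its helpers are hypothetical, and the semaphore is assumed to have been initialized to zero with sem_init(&jobs_pending, 0, 0)):
#include <semaphore.h>

static sem_t jobs_pending;            /* counts jobs queued but not yet taken */

static void producer_publish(void)
{
    /* ... append a job to the work queue, under the queue's own mutex ... */
    sem_post(&jobs_pending);          /* one more job is now pending */
}

static void consumer_take(void)
{
    sem_wait(&jobs_pending);          /* sleeps until at least one job is pending */
    /* ... remove one job from the queue, under its mutex, and process it ... */
}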

pthread_cond_wait() simply means that the current thread releases the mutex and then waits on a condition. The trick here is that both happen atomically, so it cannot happen that the thread has released the mutex but is not yet waiting on the condition, or is already waiting on the condition but has not yet released the mutex. Either both have happened or neither has.
pthread_cond_signal() simply wakes up a thread that is currently waiting on the signaled condition. The first thing the woken-up thread will do is re-acquire the mutex; if it cannot (e.g. because the signaling thread currently owns it), it will block until it can. If multiple threads are waiting on the condition, pthread_cond_signal() just wakes up one of them; which one is not defined. If you want to wake up all the waiting threads, you must use pthread_cond_broadcast() instead; but of course they won't run at the same time, as each of them first has to acquire the mutex, and that is only possible one after another.
pthread_cond_t has no state. If you signal a condition no thread is waiting for, then nothing happens. It is not as if a flag were set internally so that a later call to pthread_cond_wait() would return immediately because of a pending signal. pthread_cond_signal() only wakes up threads that are already waiting, which means those threads must have called pthread_cond_wait() before you call pthread_cond_signal().
Here's some simple sample code. First a reader thread:
// === Thread 1 ===
// We want to process an item from a list.
// To make sure the list is not altered by one
// thread while another thread is accessing it,
// it is protected by a mutex.
pthread_mutex_lock(&listLock);
// Now nobody but us is allowed to access the list.
// But what if the list is empty?
while (list->count == 0) {
// As long as we hold the mutex, no other thread
// thread can add anything to the list. So we
// must release it. But we want to know as soon
// as another thread has changed it.
pthread_cond_wait(&listCondition, &listLock);
// When we get here, somebody has signaled the
// condition and we have the mutex again and
// thus are allowed to access the list. The list
// may however still be empty, as another thread
// may have already consumed the new item in case
// there are multiple readers and all are woken
// up, thus the while-loop. If the list is still
// empty, we just go back to sleep and wait again.
}
// If we get here, the list is not empty.
processListItem(list);
// Finally we release the mutex again.
pthread_mutex_unlock(&listLock);
And then a writer thread:
// === Thread 2 ===
// We want to add a new item to the list.
// To make sure that nobody is accessing the
// list while we do, we need to obtain the mutex.
pthread_mutex_lock(&listLock);
// Now nobody but us is allowed to access the list.
// Check if the list is empty.
bool listWasEmpty = (list->count == 0);
// We add our item.
addListItem(list, newItem);
// If the list was empty, one or even multiple
// threads may be waiting for us adding an item.
// So we should wake them up here.
if (listWasEmpty) {
// If any thread is waiting for that condition,
// wake it up as now there is an item to process.
pthread_cond_signal(&listCondition);
}
// Finally we must release the mutex again.
pthread_mutex_unlock(&listLock);
The code is written so that there can be any number of reader and writer threads. Signaling only if the list was empty (listWasEmpty) is just a performance optimization; the code would also work correctly if you always signaled the condition after adding an item.
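For completeness, the shared objects used in the two snippets above might be declared like this (a sketch; the list type is hypothetical, only its count member is implied by the code above):
#include <pthread.h>
#include <stdbool.h>

/* Hypothetical list type; the snippets above only rely on 'count'. */
struct itemList {
    int count;
    /* ... the items themselves ... */
};

static struct itemList *list;
static pthread_mutex_t listLock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t listCondition = PTHREAD_COND_INITIALIZER;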

Related

can pthread_cond_signal make more than one thread to wake up?

I'm studying on condition variables of Pthread. When I'm reading the explanation of pthread_cond_signal, I see the following.
The pthread_cond_signal() function shall unblock at least one of the threads that are blocked on the specified condition variable cond (if any threads are blocked on cond).
Until now I thought pthread_cond_signal() would wake up only one thread at a time. But the quoted explanation says at least one. What does it mean? Can it wake up more than one thread? If yes, why is there pthread_cond_broadcast()?
Incidentally, the following code, taken from Robbins' UNIX Systems Programming book, is also related to my question. Is there any reason for the author's use of pthread_cond_broadcast() instead of pthread_cond_signal() in the waitbarrier function? As a minor point, why is the !berror check needed as part of the predicate? When I try changing either of them, I cannot see any difference.
/*
The program implements a thread-safe barrier by using condition variables. The limit
variable specifies how many threads must arrive at the barrier (execute the
waitbarrier) before the threads are released from the barrier.
The count variable specifies how many threads are currently waiting at the barrier.
Both variables are declared with the static attribute to force access through
initbarrier and waitbarrier. If successful, the initbarrier and waitbarrier
functions return 0. If unsuccessful, these functions return a nonzero error code.
*/
#include <errno.h>
#include <pthread.h>
#include <stdio.h>
static pthread_cond_t bcond = PTHREAD_COND_INITIALIZER;
static pthread_mutex_t bmutex = PTHREAD_MUTEX_INITIALIZER;
static int count = 0;
static int limit = 0;
int initbarrier(int n) { /* initialize the barrier to be size n */
int error;
if (error = pthread_mutex_lock(&bmutex)) /* couldn't lock, give up */
return error;
if (limit != 0) { /* barrier can only be initialized once */
pthread_mutex_unlock(&bmutex);
return EINVAL;
}
limit = n;
return pthread_mutex_unlock(&bmutex);
}
int waitbarrier(void) { /* wait at the barrier until all n threads arrive */
int berror = 0;
int error;
if (error = pthread_mutex_lock(&bmutex)) /* couldn't lock, give up */
return error;
if (limit <= 0) { /* make sure barrier initialized */
pthread_mutex_unlock(&bmutex);
return EINVAL;
}
count++;
while ((count < limit) && !berror)
berror = pthread_cond_wait(&bcond, &bmutex);
if (!berror) {
fprintf(stderr,"soner %d\n",
(int)pthread_self());
berror = pthread_cond_broadcast(&bcond); /* wake up everyone */
}
error = pthread_mutex_unlock(&bmutex);
if (berror)
return berror;
return error;
}
/* ARGSUSED */
static void *printthread(void *arg) {
fprintf(stderr,"This is the first print of thread %d\n",
(int)pthread_self());
waitbarrier();
fprintf(stderr,"This is the second print of thread %d\n",
(int)pthread_self());
return NULL;
}
int main(void) {
pthread_t t0,t1,t2;
if (initbarrier(3)) {
fprintf(stderr,"Error initilizing barrier\n");
return 1;
}
if (pthread_create(&t0,NULL,printthread,NULL))
fprintf(stderr,"Error creating thread 0.\n");
if (pthread_create(&t1,NULL,printthread,NULL))
fprintf(stderr,"Error creating thread 1.\n");
if (pthread_create(&t2,NULL,printthread,NULL))
fprintf(stderr,"Error creating thread 2.\n");
if (pthread_join(t0,NULL))
fprintf(stderr,"Error joining thread 0.\n");
if (pthread_join(t1,NULL))
fprintf(stderr,"Error joining thread 1.\n");
if (pthread_join(t2,NULL))
fprintf(stderr,"Error joining thread 2.\n");
fprintf(stderr,"All threads complete.\n");
return 0;
}
Due to spurious wake-ups, pthread_cond_signal() could wake up more than one thread.
Look for the word "spurious" in pthread_cond_wait.c from glibc.
In waitbarrier it must wake up all threads once they have all arrived at that point, hence it uses pthread_cond_broadcast().
Can [pthread_cond_signal()] make more than one thread wake up?
That's not guaranteed. On some operating systems, on some hardware platforms, under some circumstances it could wake more than one thread. It is allowed to wake more than one thread because that gives the implementer more freedom to make it work in the most efficient way possible for any given hardware and OS.
It must wake at least one waiting thread, because otherwise, what would be the point of calling it?
But, if your application needs a signal that is guaranteed to wake all of the waiting threads, then that is what pthread_cond_broadcast() is for.
Making efficient use of a multi-processor system is hard. https://www.e-reading.club/bookreader.php/134637/Herlihy,Shavit-_The_art_of_multiprocessor_programming.pdf
Most programming language and library standards allow similar freedoms in the behavior of multi-threaded programs, for the same reason: To allow programs to achieve high performance on a variety of different platforms.

How to make a thread wait for another one in linux?

For example I want to create 5 threads and print them. How do I make the fourth one execute before the second one? I tried locking it with a mutex, but I don't know how to make only the second one locked, so it gives me segmentation fault.
Normally, you define the order of operations, not which threads perform them. It may sound like a trivial distinction, but when you start implementing it, you'll see it makes a major difference. It is also a more efficient approach, because you don't think in terms of the number of threads you need, but in terms of the number of operations or tasks to be done, how many of them can be done in parallel, and how they might need to be ordered or sequenced.
For learning purposes, however, it might make sense to look at ordering threads instead.
The OP passes a pointer to a string for each worker thread function. That works, but is slightly odd; typically you pass an integer identifier instead:
#include <stdlib.h>
#include <inttypes.h>
#include <pthread.h>
#define ID_TO_POINTER(id) ((void *)((intptr_t)(id)))
#define POINTER_TO_ID(ptr) ((intptr_t)(ptr))
The conversion of the ID type -- which I assume to be a signed integer above, typically either an int or a long -- to a pointer is done via two casts. The first cast is to intptr_t type defined in <stdint.h> (which gets automatically included when you include <inttypes.h>), which is a signed integer type that can hold the value of any void pointer; the second cast is to a void pointer. The intermediate cast avoids a warning in case your ID is of an integer type that cannot be converted to/from a void pointer without potential loss of information (usually described in the warning as "of different size").
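As a usage sketch (NUM_WORKERS, tids, and the trivial worker body are assumptions, not OP's code), the identifier makes the round trip through pthread_create() like this, using the two macros defined above:
#include <stdio.h>

#define NUM_WORKERS 4

/* Trivial worker: recover the ID that was smuggled through the void pointer. */
static void *worker(void *dataptr)
{
    const int id = POINTER_TO_ID(dataptr);
    printf("Worker %d started.\n", id);
    return NULL;
}

/* Creation side: the loop index itself is the ID; no malloc is needed. */
static void start_workers(pthread_t tids[NUM_WORKERS])
{
    for (int i = 0; i < NUM_WORKERS; i++)
        pthread_create(&tids[i], NULL, worker, ID_TO_POINTER(i));
}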
The simplest method of ordering POSIX threads, that is not that dissimilar to ordering operations or tasks or jobs, is to use a single mutex as a lock to protect the ID of the thread that should run next, and a related condition variable for threads to wait on, until their ID appears.
The one problem left is how to define the order. Typically, you'd simply increment or decrement the ID value -- decrementing means the threads would run in descending order of ID value, but the ID value of -1 (assuming you number your threads from 0 onwards) would always mean "all done", regardless of the number of threads used:
static pthread_mutex_t worker_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t worker_wait = PTHREAD_COND_INITIALIZER;
static int worker_id = /* number of threads - 1 */;
void *worker(void *dataptr)
{
const int id = POINTER_TO_ID(dataptr);
pthread_mutex_lock(&worker_lock);
while (worker_id >= 0) {
if (worker_id == id) {
/* Do the work! */
printf("Worker %d running.\n", id);
fflush(stdout);
/* Choose next worker */
worker_id--;
pthread_cond_broadcast(&worker_wait);
}
/* Wait for someone else to broadcast on the condition. */
pthread_cond_wait(&worker_wait, &worker_lock);
}
/* All done; worker_id became negative.
We still hold the mutex; release it. */
pthread_mutex_unlock(&worker_lock);
return NULL;
}
Note that I didn't let the worker exit immediately after its task is done; this is because I wanted to expand the example a bit: let's say you want to define the order of operations in an array:
static pthread_mutex_t worker_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t worker_wait = PTHREAD_COND_INITIALIZER;
static int worker_order[] = { 0, 1, 2, 3, 4, 2, 3, 1, 4, -1 };
static int *worker_idptr = worker_order;
void *worker(void *dataptr)
{
const int id = POINTER_TO_ID(dataptr);
pthread_mutex_lock(&worker_lock);
while (*worker_idptr >= 0) {
if (*worker_idptr == id) {
/* Do the work! */
printf("Worker %d running.\n", id);
fflush(stdout);
/* Choose next worker */
worker_idptr++;
pthread_cond_broadcast(&worker_wait);
}
/* Wait for someone else to broadcast on the condition. */
pthread_cond_wait(&worker_wait, &worker_lock);
}
/* All done; worker_id became negative.
We still hold the mutex; release it. */
pthread_mutex_unlock(&worker_lock);
return NULL;
}
See how little changed?
Let's consider a third case: a separate thread, say the main thread, decides which thread will run next. In this case, we need two condition variables: one for the workers to wait on, and the other for the main thread to wait on.
static pthread_mutex_t worker_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t worker_wait = PTHREAD_COND_INITIALIZER;
static pthread_cond_t worker_done = PTHREAD_COND_INITIALIZER;
static int worker_id = 0;
void *worker(void *dataptr)
{
const int id = POINTER_TO_ID(dataptr);
pthread_mutex_lock(&worker_lock);
while (worker_id >= 0) {
if (worker_id == id) {
/* Do the work! */
printf("Worker %d running.\n", id);
fflush(stdout);
/* Notify we are done. Since there is only
one thread waiting on the _done condition,
we can use _signal instead of _broadcast. */
pthread_cond_signal(&worker_done);
}
/* Wait for a change in the worker_id. */
pthread_cond_wait(&worker_wait, &worker_lock);
}
/* All done; worker_id became negative.
We still hold the mutex; release it. */
pthread_mutex_unlock(&worker_lock);
return NULL;
}
The thread that decides which worker should run first should hold the worker_lock mutex when the worker threads are created, then wait on the worker_done condition variable. When the first worker completes its task, it will signal on the worker_done condition variable, and wait on the worker_wait condition variable. The decider thread should then change worker_id to the next ID that should run, and broadcast on the worker_wait condition variable. This continues until the decider thread sets worker_id to a negative value. For example:
int threads; /* number of threads to create */
pthread_t *ptids; /* already allocated for that many */
pthread_attr_t attrs;
int i, result;
/* Simple POSIX threads will work with 65536 bytes of stack
on all architectures -- actually, even half that. */
pthread_attr_init(&attrs);
pthread_attr_setstacksize(&attrs, 65536);
/* Hold the worker_lock. */
pthread_mutex_lock(&worker_lock);
/* Create 'threads' threads. */
for (i = 0; i < threads; i++) {
result = pthread_create(&(ptids[i]), &attrs, worker, ID_TO_POINTER(i));
if (result) {
fprintf(stderr, "Cannot create worker threads: %s.\n", strerror(result));
exit(EXIT_FAILURE);
}
}
/* Thread attributes are no longer needed. */
pthread_attr_destroy(&attrs);
while (1) {
/*
TODO: Set worker_id to a new value, or
break when done.
*/
/* Wake that worker */
pthread_cond_broadcast(&worker_wait);
/* Wait for that worker to complete */
pthread_cond_wait(&worker_done, &worker_lock);
}
/* Tell workers to exit */
worker_id = -1;
pthread_cond_broadcast(&worker_wait);
/* and reap the workers */
for (i = 0; i < threads; i++)
pthread_join(ptids[i], NULL);
There is a very important detail in all of the above examples that may be hard to understand without a lot of practice: the way mutexes and condition variables interact (when paired via pthread_cond_wait()).
When a thread calls pthread_cond_wait(), it atomically releases the specified mutex and waits for new signals/broadcasts on the condition variable. "Atomic" means that there is no time in between the two; nothing can occur in between. The call returns when a signal or broadcast is received (the difference is that a signal goes to only one, arbitrarily chosen waiter, whereas a broadcast reaches all threads waiting on the condition variable) and the thread has re-acquired the mutex. You can think of it as if the signal/broadcast first wakes up the thread, but pthread_cond_wait() only returns once it has re-acquired the mutex.
This behaviour is implicitly used in all of the examples above. In particular, you'll notice that the pthread_cond_signal()/pthread_cond_broadcast() is always done while holding the worker_lock mutex; this ensures that the other thread or threads wake up and get to act only after the worker_lock mutex is unlocked -- either explicitly, or by the holding thread waiting on a condition variable.
I thought I might draw a directed graph (using Graphviz) about the order of events and actions, but this "answer" is already too long. I do suggest you do it yourself -- perhaps on paper? -- as that kind of visualization has been very useful for myself when I was learning about all this stuff.
I do feel quite uncomfortable about the above scheme, I must admit. At any one time only one thread is running, and that is basically wrong: any job whose tasks must be done in a specific order should only require one thread.
However, I showed the above examples in order for you (not just OP, but any C programmer interested in POSIX threads) to get more comfortable about how to use mutexes and condition variables.

Force unlock a mutex that was locked by a different thread

Consider the following test program:
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <strings.h>
#include <unistd.h>
#include <signal.h>
#include <pthread.h>
pthread_mutex_t mutex;
pthread_mutexattr_t mattr;
pthread_t thread1;
pthread_t thread2;
pthread_t thread3;
void mutex_force_unlock(pthread_mutex_t *mutex, pthread_mutexattr_t *mattr)
{
int e;
e = pthread_mutex_destroy(mutex);
printf("mfu: %s\n", strerror(e));
e = pthread_mutex_init(mutex, mattr);
printf("mfu: %s\n", strerror(e));
}
void *thread(void *d)
{
int e;
e = pthread_mutex_trylock(&mutex);
if (e != 0)
{
printf("thr: %s\n", strerror(e));
mutex_force_unlock(&mutex, &mattr);
e = pthread_mutex_unlock(&mutex);
printf("thr: %s\n", strerror(e));
if (e != 0) pthread_exit(NULL);
e = pthread_mutex_lock(&mutex);
printf("thr: %s\n", strerror(e));
}
pthread_exit(NULL);
}
void * thread_deadtest(void *d)
{
int e;
e = pthread_mutex_lock(&mutex);
printf("thr2: %s\n", strerror(e));
e = pthread_mutex_lock(&mutex);
printf("thr2: %s\n", strerror(e));
pthread_exit(NULL);
}
int main(void)
{
/* Setup */
pthread_mutexattr_init(&mattr);
pthread_mutexattr_settype(&mattr, PTHREAD_MUTEX_ERRORCHECK);
//pthread_mutexattr_settype(&mattr, PTHREAD_MUTEX_NORMAL);
pthread_mutex_init(&mutex, &mattr);
/* Test */
pthread_create(&thread1, NULL, &thread, NULL);
pthread_join(thread1, NULL);
if (pthread_kill(thread1, 0) != 0) printf("Thread 1 has died.\n");
pthread_create(&thread2, NULL, &thread, NULL);
pthread_join(thread2, NULL);
pthread_create(&thread3, NULL, &thread_deadtest, NULL);
pthread_join(thread3, NULL);
return(0);
}
Now when this program runs, I get the following output:
Thread 1 has died.
thr: Device busy
mfu: Device busy
mfu: No error: 0
thr: Operation not permitted
thr2: No error: 0
thr2: Resource deadlock avoided
Now I know this has been asked a number of times before, but is there any way to forcefully unlock a mutex? It seems the implementation will only allow the mutex to be unlocked by the thread that locked it as it seems to actively check, even with a normal mutex type.
Why am I doing this? It has to do with coding a bullet-proof network server that has the ability to recover from most errors, including ones where the thread terminates unexpectedly. At this point, I can see no way of unlocking a mutex from a thread that is different than the one that locked it. So the way that I see it is that I have a few options:
Abandon the mutex and create a new one. This is the undesirable option as it creates a memory leak.
Close all network ports and restart the server.
Go into the kernel internals and release the mutex there bypassing the error checking.
I have asked this before, but the powers that be absolutely want this functionality and they will not take no for an answer (I've already tried), so I'm kinda stuck with this. I didn't design it this way, and I would really like to shoot the person who did, but that's not an option either.
And before someone says anything, my usage of pthread_kill is legal under POSIX...I checked.
I forgot to mention, this is FreeBSD 9.3 that we are working with.
Use a robust mutex, and if the locking thread dies, fix the mutex with pthread_mutex_consistent().
If mutex is a robust mutex in an inconsistent state, the pthread_mutex_consistent() function can be used to mark the state protected by the mutex referenced by mutex as consistent again.
If an owner of a robust mutex terminates while holding the mutex, the mutex becomes inconsistent and the next thread that acquires the mutex lock shall be notified of the state by the return value [EOWNERDEAD]. In this case, the mutex does not become normally usable again until the state is marked consistent.
If the thread which acquired the mutex lock with the return value [EOWNERDEAD] terminates before calling either pthread_mutex_consistent() or pthread_mutex_unlock(), the next thread that acquires the mutex lock shall be notified about the state of the mutex by the return value [EOWNERDEAD].
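On platforms that implement robust mutexes (POSIX.1-2008; a later answer notes that FreeBSD 9.3 does not), the pattern looks roughly like this sketch, with error handling trimmed and the actual data repair only indicated by a comment:
#include <pthread.h>
#include <errno.h>

static pthread_mutex_t mutex;

static void init_robust_mutex(void)
{
    pthread_mutexattr_t mattr;

    pthread_mutexattr_init(&mattr);
    pthread_mutexattr_setrobust(&mattr, PTHREAD_MUTEX_ROBUST);
    pthread_mutex_init(&mutex, &mattr);
    pthread_mutexattr_destroy(&mattr);
}

static int lock_shared_state(void)
{
    int e = pthread_mutex_lock(&mutex);

    if (e == EOWNERDEAD) {
        /* The previous owner died while holding the lock. We now hold it,
           but the protected data may be half-updated: repair it here,
           then mark the mutex consistent so it remains usable. */
        pthread_mutex_consistent(&mutex);
        return 0;
    }
    return e;   /* 0 on success; ENOTRECOVERABLE means the mutex is unusable */
}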
Well, you cannot do what you ask with a normal pthread mutex, since, as you say, you can only unlock a mutex from the thread that locked it.
What you can do is wrap locking/unlocking of a mutex such that you have a pthread cancel handler that unlocks the mutex if the thread terminates. To give you an idea:
void cancel_unlock_handler(void *p)
{
pthread_mutex_unlock(p);
}
int my_pthread_mutex_lock(pthread_mutex_t *m)
{
int rc;
pthread_cleanup_push(cancel_unlock_handler, m);
rc = pthread_mutex_lock(m);
if (rc != 0) {
pthread_cleanup_pop(0);
}
return rc;
}
int my_pthread_mutex_unlock(pthread_mutex_t *m)
{
pthread_cleanup_pop(0);
return pthread_mutex_unlock(m);
}
Now you'll need to use the my_pthread_mutex_lock/my_pthread_mutex_unlock instead of the pthread lock/unlock functions.
Now, threads don't really terminate "unexpectedly": either a thread calls pthread_exit, or its start routine returns, or you pthread_kill it, in which case the above will suffice (also note that threads exit only at certain cancellation points, so there are no race conditions, e.g. between pushing the cleanup handler and locking the mutex). But a logic error or undefined behavior might leave erroneous state affecting the whole process, and then you're better off restarting the whole process.
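One caveat worth adding (my note, not part of the answer above): POSIX allows pthread_cleanup_push() and pthread_cleanup_pop() to be implemented as macros that open and close a block, so each push must be paired with a pop in the same function and lexical scope. Splitting them across my_pthread_mutex_lock() and my_pthread_mutex_unlock() as above may therefore not compile on some implementations; the usual shape keeps the pair around the critical section itself, roughly like this sketch (cancel_unlock_handler is the handler defined above):
void *worker(void *arg)
{
    pthread_mutex_t *m = arg;          /* the mutex protecting the shared data */

    pthread_mutex_lock(m);
    pthread_cleanup_push(cancel_unlock_handler, m);
    /* ... critical section; may contain cancellation points ... */
    pthread_cleanup_pop(1);            /* 1 = run the handler now, unlocking m */
    return NULL;
}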
I have come up with a workable method to deal with this situation. As I mentioned before, FreeBSD does not support robust mutexes, so that option is out. Also, once a thread has locked a mutex, there is no way for another thread to unlock it.
So what I have done to solve the problem is to abandon the mutex and place its pointer onto a list. Since the lock wrapper code uses pthread_mutex_trylock and then relinquishes the CPU if it fails, no thread can get stuck waiting for a permanently locked mutex. In the case of a robust mutex, the thread locking the mutex will be able to recover it if it gets EOWNERDEAD as the return code.
Here's some things that are defined:
/* Checks to see if we have access to robust mutexes. */
#ifndef PTHREAD_MUTEX_ROBUST
#define TSRA__ALTERNATE
#define TSRA_MAX_MUTEXABANDON TSRA_MAX_MUTEX * 4
#endif
/* Mutex: Mutex Data Table Datatype */
typedef struct mutex_lock_table_tag__ mutexlock_t;
struct mutex_lock_table_tag__
{
pthread_mutex_t *mutex; /* PThread Mutex */
tsra_daclbk audcallbk; /* Audit Callback Function Pointer */
tsra_daclbk reicallbk; /* Reinit Callback Function Pointer */
int acbkstat; /* Audit Callback Status */
int rcbkstat; /* Reinit Callback Status */
pthread_t owner; /* Owner TID */
#ifdef TSRA__OVERRIDE
tsra_clnup_t *cleanup; /* PThread Cleanup */
#endif
};
/* ******** ******** Global Variables */
pthread_rwlock_t tab_lock; /* RW lock for mutex table */
pthread_mutexattr_t mtx_attrib; /* Mutex attributes */
mutexlock_t *mutex_table; /* Mutex Table */
int tabsizeentry; /* Table Size (Entries) */
int tabsizebyte; /* Table Size (Bytes) */
int initialized = 0; /* Modules Initialized 0=no, 1=yes */
#ifdef TSRA__ALTERNATE
pthread_mutex_t *mutex_abandon[TSRA_MAX_MUTEXABANDON];
pthread_mutex_t mtx_abandon; /* Abandoned Mutex Lock */
int mtx_abandon_count; /* Abandoned Mutex Count */
int mtx_abandon_init = 0; /* Initialization Flag */
#endif
pthread_mutex_t mtx_recover; /* Mutex Recovery Lock */
And here's some code for the lock recovery:
/* Attempts to recover a broken mutex. */
int tsra_mutex_recover(int lockid, pthread_t tid)
{
int result;
/* Check Prerequisites */
if (initialized == 0) return(EDOOFUS);
if (lockid < 0 || lockid >= tabsizeentry) return(EINVAL);
/* Check Mutex Owner */
result = pthread_equal(tid, mutex_table[lockid].owner);
if (result != 0) return(0);
/* Lock Recovery Mutex */
result = pthread_mutex_lock(&mtx_recover);
if (result != 0) return(result);
/* Check Mutex Owner, Again */
result = pthread_equal(tid, mutex_table[lockid].owner);
if (result != 0)
{
pthread_mutex_unlock(&mtx_recover);
return(0);
}
/* Unless the system supports robust mutexes, there is
really no way to recover a mutex that is being held
by a thread that has terminated. At least in FreeBSD,
trying to destroy a mutex that is held will result
in EBUSY. Trying to overwrite a held mutex results
in a memory fault and core dump. The only way to
recover is to abandon the mutex and create a new one. */
#ifdef TSRA__ALTERNATE /* Abandon Mutex */
pthread_mutex_t *ptr;
/* Too many abandoned mutexes? */
if (mtx_abandon_count >= TSRA_MAX_MUTEXABANDON)
{
result = TSRA_PROGRAM_ABORT;
goto error_1;
}
/* Get a read lock on the mutex table. */
result = pthread_rwlock_rdlock(&tab_lock);
if (result != 0) goto error_1;
/* Perform associated data audit. */
if (mutex_table[lockid].acbkstat != 0)
{
result = mutex_table[lockid].audcallbk();
if (result != 0)
{
result = TSRA_PROGRAM_ABORT;
goto error_2;
}
}
/* Allocate New Mutex */
ptr = malloc(sizeof(pthread_mutex_t));
if (ptr == NULL)
{
result = errno;
goto error_2;
}
/* Init new mutex and abandon the old one. */
result = pthread_mutex_init(ptr, &mtx_attrib);
if (result != 0) goto error_3;
mutex_abandon[mtx_abandon_count] = mutex_table[lockid].mutex;
mtx_abandon_count++;
mutex_table[lockid].mutex = ptr;
#else /* Recover Mutex */
/* Try locking the mutex and see what we get. */
result = pthread_mutex_trylock(mutex_table[lockid].mutex);
switch (result)
{
case 0: /* No error, unlock and return */
pthread_mutex_unlock(mutex_table[lockid].mutex);
return(0);
break;
case EBUSY: /* No error, return */
return(0);
break;
case EOWNERDEAD: /* Error, try to recover mutex. */
if (mutex_table[lockid].acbkstat != 0)
{
result = mutex_table[lockid].audcallbk();
if (result != 0)
{
if (mutex_table[lockid].rcbkstat != 0)
{
result = mutex_table[lockid].reicallbk();
if (result != 0)
{
result = TSRA_PROGRAM_ABORT;
goto error_2;
}
}
else
{
result = TSRA_PROGRAM_ABORT;
goto error_2;
}
}
}
else
{
result = TSRA_PROGRAM_ABORT;
goto error_2;
}
break;
case EDEADLK: /* Error, deadlock avoided, abort */
case ENOTRECOVERABLE: /* Error, recovery failed, abort */
/* NOTE: We shouldn't get this, but if we do... */
abort();
break;
default:
/* Ambiguous situation, best to abort. */
abort();
break;
}
pthread_mutex_consistent(mutex_table[lockid].mutex);
pthread_mutex_unlock(mutex_table[lockid].mutex);
#endif
/* Housekeeping */
mutex_table[lockid].owner = pthread_self();
pthread_mutex_unlock(&mtx_recover);
/* Return */
return(0);
/* We only get here on errors. */
#ifdef TSRA__ALTERNATE
error_3:
free(ptr);
error_2:
pthread_rwlock_unlock(&tab_lock);
#else
error_2:
pthread_mutex_unlock(mutex_table[lockid].mutex);
#endif
error_1:
pthread_mutex_unlock(&mtx_recover);
return(result);
}
Because FreeBSD is an evolving operating system, like Linux, I have made provisions to allow for the use of robust mutexes in the future. Without robust mutexes, there really is no way to do the enhanced error checking that becomes available when they are supported.
For a robust mutex, enhanced error checking is performed to verify the need to recover the mutex. For systems that do not support robust mutexes, we have to trust the caller to verify that the mutex in question needs to be recovered. Besides, there is some checking to make sure that only one thread performs the recovery; any other thread attempting recovery blocks on the recovery mutex in the meantime. I have given some thought to how to signal other threads that a recovery is in progress, so that aspect of the routine still needs work. In a recovery situation, I'm thinking about comparing pointer values to see if the mutex was replaced.
In both cases, an audit routine can be set as a callback function. The purpose of the audit routine is to verify and correct any discrepancies in the protected data. If the audit fails to correct the data, then another callback routine, the data reinitialization routine, is invoked. Its purpose is to reinitialize the data protected by the mutex. If that fails, then abort() is called to terminate program execution and drop a core file for debugging purposes.
For the abandoned mutex case, the pointer is not thrown away, but is placed on a list. If too many mutexes are abandoned, then the program is aborted. As mentioned above, in the mutex lock routine, pthread_mutex_trylock is used instead of pthread_mutex_lock. This way, no thread can be permanently blocked on a dead mutex. So once the pointer is switched in the mutex table to point to the new mutex, all threads waiting on the mutex will immediately switch to the new mutex.
I am sure there are bugs/errors in this code, but this is a work in progress. Although not quite finished and debugged, I feel that there is enough here to warrant an answer to this question.
Well, as you are probably aware, a thread which locks a mutex has sole ownership of that resource, so it alone has the right to unlock it. There is no way, at least for now, to force a thread to give up its resource without the kind of roundabout approach you took in your code.
However, this would be my approach.
Have a single thread that owns the mutex, called the Resource thread. Make sure that this thread receives events from, and responds to, the other worker threads.
When a worker thread wants to enter the critical section, it registers with the Resource thread to lock the mutex on its behalf. When that is done, the worker thread assumes that it has exclusive access to the critical section. The assumption is valid because any other worker thread that needs access to the critical section has to go through the same step.
Now assume that another thread wants to force the former worker thread to unlock. It can then make a special request, perhaps carrying a flag or a higher priority, to be granted access. The Resource thread, on comparing the flag/priority of the requesting thread, unlocks the mutex and locks it again on behalf of the requesting thread.
I don't know your use case fully, but those are my 2 cents.
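To make the idea concrete, here is a rough sketch (the names and the exact protocol are my own, and the request/registration step is left out; the point is only that ownership is plain data here, so a supervising thread can reassign it, unlike a pthread mutex, which only its owner may unlock):
#include <pthread.h>

static pthread_mutex_t ctl_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  ctl_wait = PTHREAD_COND_INITIALIZER;
static int current_owner = -1;               /* -1 means the resource is free */

/* Worker side: block until the resource thread has granted us ownership. */
static void acquire_resource(int my_id)
{
    pthread_mutex_lock(&ctl_lock);
    while (current_owner != my_id)
        pthread_cond_wait(&ctl_wait, &ctl_lock);
    pthread_mutex_unlock(&ctl_lock);
    /* From here on, this worker treats the resource as its own. */
}

static void release_resource(void)
{
    pthread_mutex_lock(&ctl_lock);
    current_owner = -1;                      /* give the resource back */
    pthread_cond_broadcast(&ctl_wait);
    pthread_mutex_unlock(&ctl_lock);
}

/* Resource-thread side: grant ownership, or forcibly reassign it,
   for example when the current owner is known to have died. */
static void grant_resource(int new_owner)
{
    pthread_mutex_lock(&ctl_lock);
    current_owner = new_owner;
    pthread_cond_broadcast(&ctl_wait);
    pthread_mutex_unlock(&ctl_lock);
}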
You could restart just the process with the crashed thread, using a function from the exec family to replace the process image. I assume that reloading the process will be faster than rebooting the server.

Multi-threading, consumers and producers

I have a problem with multithreading, since I'm new to this topic. The code below is code I've been given at my University. It came in a few versions, and I understood most of them. But I don't really understand the nready.nready variable and all this condition-variable business. Can anyone describe how those two work here? And why can't I just synchronise the work of the threads with a mutex alone?
#include "unpipc.h"
#define MAXNITEMS 1000000
#define MAXNTHREADS 100
/* globals shared by threads */
int nitems; /* read-only by producer and consumer */
int buff[MAXNITEMS];
struct {
pthread_mutex_t mutex;
pthread_cond_t cond;
int nput;
int nval;
int nready;
} nready = { PTHREAD_MUTEX_INITIALIZER, PTHREAD_COND_INITIALIZER };
void *produce(void *), *consume(void *);
/* include main */
int
main(int argc, char **argv)
{
int i, nthreads, count[MAXNTHREADS];
pthread_t tid_produce[MAXNTHREADS], tid_consume;
if (argc != 3)
err_quit("usage: prodcons5 <#items> <#threads>");
nitems = min(atoi(argv[1]), MAXNITEMS);
nthreads = min(atoi(argv[2]), MAXNTHREADS);
Set_concurrency(nthreads + 1);
/* create all producers and one consumer */
for (i = 0; i < nthreads; i++) {
count[i] = 0;
Pthread_create(&tid_produce[i], NULL, produce, &count[i]);
}
Pthread_create(&tid_consume, NULL, consume, NULL);
/* wait for all producers and the consumer */
for (i = 0; i < nthreads; i++) {
Pthread_join(tid_produce[i], NULL);
printf("count[%d] = %d\n", i, count[i]);
}
Pthread_join(tid_consume, NULL);
exit(0);
}
/* end main */
void *
produce(void *arg)
{
for ( ; ; ) {
Pthread_mutex_lock(&nready.mutex);
if (nready.nput >= nitems) {
Pthread_mutex_unlock(&nready.mutex);
return(NULL); /* array is full, we're done */
}
buff[nready.nput] = nready.nval;
nready.nput++;
nready.nval++;
nready.nready++;
Pthread_cond_signal(&nready.cond);
Pthread_mutex_unlock(&nready.mutex);
*((int *) arg) += 1;
}
}
/* include consume */
void *
consume(void *arg)
{
int i;
for (i = 0; i < nitems; i++) {
Pthread_mutex_lock(&nready.mutex);
while (nready.nready == 0)
Pthread_cond_wait(&nready.cond, &nready.mutex);
nready.nready--;
Pthread_mutex_unlock(&nready.mutex);
if (buff[i] != i)
printf("buff[%d] = %d\n", i, buff[i]);
}
return(NULL);
}
/* end consume */
pthread_mutex_lock(&nready.mutex);
while (nready.nready == 0)
pthread_cond_wait(&nready.cond, &nready.mutex);
nready.nready--;
pthread_mutex_unlock(&nready.mutex);
The whole point of this construct is to guarantee that the condition you waited for (here, nready.nready != 0) still holds when you execute the corresponding action (nready.nready--), or, if the condition is not satisfied, to wait until it is without using CPU time.
You could use a mutex only, to check that the condition is correct and to perform the corresponding action atomically. But if the condition is not satisfied, you wouldn't know what to do. Wait? Until when? Check it again? Release the mutex and re-check immediately after? That would be wasting CPU time...
pthread_cond_signal() and pthread_cond_wait() are here to solve this problem. You should check their man pages.
Briefly, what pthread_cond_wait does is put the calling thread to sleep and release the mutex, in an atomic way, until it is signaled. So this is a blocking function. The thread can then be re-scheduled by a signal or broadcast issued from a different thread. When the thread is signaled, it grabs the mutex again and exits the wait() call.
At this point you know that
your condition is true and
you hold the mutex.
So you can do whatever you need to do with your data.
Be careful though: you shouldn't call wait if you're not sure that another thread will signal. This is a very common source of deadlocks.
When a thread receives a signal, it is put on the list of threads that are ready to be scheduled. By the time the thread actually runs, the condition it was waiting for (i.e. nready.nready != 0) may be false again, because another thread got there first. Hence the while loop, to re-check the condition when the thread wakes up.
"But I don't really understand the nready.nready variable"
this results from the struct instance being named 'nready' and there
is a field within the struct named 'nready'
IMO: a very poor design to have two different objects being given the same name
the nready field of the nready struct seems to be keeping track of the number of
items that have been 'produced'
1) The nready field of struct nready is used to track how many tasks are ready to consume, i.e., the remaining items in the array buff. The nready.nready++; statement is only executed when a producer puts a new item into buff, and nready.nready--; is only executed when the consumer takes an item out of buff. With this variable, the programmer can always track how many tasks are left to process.
2)
pthread_mutex_lock(&nready.mutex);
while (nready.nready == 0)
pthread_cond_wait(&nready.cond, &nready.mutex);
nready.nready--;
pthread_mutex_unlock(&nready.mutex);
The statements above are common condition variable usage. You can check
POSIX Threads Programming and Condition Variables for more about condition variables.
Why can't you use a mutex only? You could poll a mutex lock again and again. Obviously, that consumes CPU time and may hugely affect system performance. Instead, you want the consumer to sleep when there are no more items in buff, and to be woken up when a producer puts a new item into buff. The condition variable plays that role here. When there are no items (nready.nready == 0), pthread_cond_wait() puts the current thread to sleep, saving precious CPU time. When a new item arrives, Pthread_cond_signal() wakes the consumer.
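To make the contrast concrete, here is the consumer's waiting step written both ways (a sketch reusing the nready structure and the book's Pthread_ wrappers from the code above):
/* Mutex-only polling: correct, but burns CPU re-checking the flag. */
for (;;) {
    Pthread_mutex_lock(&nready.mutex);
    if (nready.nready > 0) {
        nready.nready--;
        Pthread_mutex_unlock(&nready.mutex);
        break;
    }
    Pthread_mutex_unlock(&nready.mutex);   /* nothing ready: release and retry at once */
}

/* Condition variable: the consumer sleeps until a producer signals. */
Pthread_mutex_lock(&nready.mutex);
while (nready.nready == 0)
    Pthread_cond_wait(&nready.cond, &nready.mutex);
nready.nready--;
Pthread_mutex_unlock(&nready.mutex);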

While loop synchronization

I am working on a project with a user-defined number of threads; I am using 7 at the moment. I have a while loop that runs in each thread, but I need all of the threads to wait for each other at the end of the while loop. The tricky thing is that the threads do not all finish after the same number of passes through the loop.
void *entryFunc(void *param)
{
int *i = (int *)param;
int nextPrime;
int p = latestPrime;
while(latestPrime < testLim)
{
sem_wait(&sem);
nextPrime = findNextPrime(latestPrime);
if(nextPrime != -1)
{
latestPrime = nextPrime;
p = latestPrime;
}
else
{
sem_post(&sem);
break;
}
sem_post(&sem);
if(p < 46341)
{
incrementNotPrimes(p);
}
/*
sem_wait(&sem2);
doneCount++;
sem_post(&sem2);
while(go != 1);
sem_wait(&sem2);
doneCount--;
//sem_post(&sem3);
sem_post(&sem2);
*/
}
return NULL;
}
The commented-out chunk of code is part of my last attempt at solving this problem. That is where the threads all need to wait for each other. I have a feeling I am missing something simple.
If your problem is that the while loop has a different number of iterations in each thread, and some threads never reach the synchronization point after exiting the loop, you could use a barrier. Check here for an example.
However, you then need to decrease the number of threads expected at the barrier whenever a thread exits. Waiting at the barrier ends once count threads have reached that point.
So you need to update the barrier object each time a thread finishes, and make sure you do this atomically.
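If your platform provides pthread_barrier_t (an optional POSIX feature, but available on Linux and most modern systems), the basic shape is roughly this sketch; note that the count is fixed at pthread_barrier_init() time, so all NTHREADS threads must eventually reach the wait (NTHREADS is an assumed constant here):
#include <pthread.h>

#define NTHREADS 7

static pthread_barrier_t barrier;   /* pthread_barrier_init(&barrier, NULL, NTHREADS) once, before creating the threads */

static void *entryFunc(void *param)
{
    (void)param;                    /* unused in this sketch */

    /* ... the per-thread work, however many loop iterations it takes ... */

    /* Each thread blocks here until all NTHREADS threads have arrived. */
    pthread_barrier_wait(&barrier);

    /* Code after this point runs only once every thread has caught up. */
    return NULL;
}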
As I mentioned in the comments, you should use a barrier instead of a semaphore for this kind of situation, as it should be simpler to implement (barriers were designed exactly to solve this problem). However, you may still use a semaphore with a little bit of arithmetic.
Arithmetic: your goal is to have all threads execute the same code path, but somehow the last thread to finish its task should wake all the other threads up. One way to achieve that is to have, at the end of the function, an atomic counter which each thread decrements; the thread that brings the counter to 0 simply calls sem_post as many times as necessary to release all the waiting threads, instead of issuing a sem_wait like the others.
A second method, this time using only a semaphore, is also possible. Since we cannot differentiate the last thread from the others, all the threads must perform the same operations on the semaphore, i.e. try to release everyone, but also wait for the last one. So the idea is to start the semaphore at (1-n)*(n+1), so that each of the n-1 first threads would fail at waking up their friends with n+1 calls to sem_post, but would still work toward getting the semaphore to exactly 0. Then the last thread does the same, pushing the semaphore value to n+1, thus releasing the locked threads and leaving room for its own sem_wait to be satisfied immediately. (Note that sem_init() only accepts a non-negative initial value, so a literally negative starting count has to be emulated rather than set directly.)
void *entryFunc(void *param)
{
int *i = (int *)param;
int nextPrime;
int p = latestPrime, j;
while(latestPrime < testLim){
nextPrime = findNextPrime(latestPrime);
if(nextPrime != -1)
{
latestPrime = nextPrime;
p = latestPrime;
}
if(p < 46341)
{
incrementNotPrimes(p);
}
}
for (j=0;j<=THREAD_COUNT;j++)
sem_post(&sem);
sem_wait(&sem);
return NULL;
}
The problem with this approach is that it doesn't deal with how the semaphore should be reset in between uses (if your program needs to repeat this mechanism, it will need to reset the semaphore value, since it will end up being 1 after this code has been executed successfully).
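For comparison, the first method mentioned above (an atomic counter, with the last thread doing the posts) could look like this sketch; THREAD_COUNT and a semaphore initialized to zero are assumptions:
#include <semaphore.h>
#include <stdatomic.h>

#define THREAD_COUNT 7

static sem_t sem;                             /* sem_init(&sem, 0, 0) once at startup */
static atomic_int remaining = THREAD_COUNT;   /* threads that have not reached the sync point yet */

static void wait_for_all(void)
{
    if (atomic_fetch_sub(&remaining, 1) == 1) {
        /* We were the last to arrive: release the THREAD_COUNT - 1 waiters. */
        for (int j = 0; j < THREAD_COUNT - 1; j++)
            sem_post(&sem);
    } else {
        /* Not the last: sleep until the last thread posts. */
        sem_wait(&sem);
    }
}

Here the semaphore ends back at zero, but remaining would still need to be reset before the mechanism could be reused.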
