C Timer with cancellation and callback

I've created a Timer pseudo-class in C that has callback capability and can be cancelled. I come from the .NET/C# world, where this is all done by the framework, and I'm not an expert with pthreads.
In .NET there are cancellation tokens which you can wait on which means I don't need to worry so much about the nuts and bolts.
However, using pthreads is a bit more low-level than I am used to, so my question is:
Are there any issues with the way I have implemented this?
Thanks in anticipation for any comments you may have.
Timer struct:
typedef struct _timer
{
    pthread_cond_t Condition;
    pthread_mutex_t ConditionMutex;
    bool IsRunning;
    pthread_mutex_t StateMutex;
    pthread_t Thread;
    int TimeoutMicroseconds;
    void * Context;
    void (*Callback)(bool isCancelled, void * context);
} TimerObject, *Timer;
C Module:
static void *
TimerTask(Timer timer)
{
    struct timespec timespec;
    struct timeval now;
    int returnValue = 0;

    clock_gettime(CLOCK_REALTIME, &timespec);
    timespec.tv_sec += timer->TimeoutMicroseconds / 1000000;
    timespec.tv_nsec += (timer->TimeoutMicroseconds % 1000000) * 1000000;

    pthread_mutex_lock(&timer->StateMutex);
    timer->IsRunning = true;
    pthread_mutex_unlock(&timer->StateMutex);

    pthread_mutex_lock(&timer->ConditionMutex);
    returnValue = pthread_cond_timedwait(&timer->Condition, &timer->ConditionMutex, &timespec);
    pthread_mutex_unlock(&timer->ConditionMutex);

    if (timer->Callback != NULL)
    {
        (*timer->Callback)(returnValue != ETIMEDOUT, timer->Context);
    }

    pthread_mutex_lock(&timer->StateMutex);
    timer->IsRunning = false;
    pthread_mutex_unlock(&timer->StateMutex);
    return 0;
}
void
Timer_Initialize(Timer timer, void (*callback)(bool isCancelled, void * context))
{
    pthread_mutex_init(&timer->ConditionMutex, NULL);
    timer->IsRunning = false;
    timer->Callback = callback;
    pthread_mutex_init(&timer->StateMutex, NULL);
    pthread_cond_init(&timer->Condition, NULL);
}

bool
Timer_IsRunning(Timer timer)
{
    pthread_mutex_lock(&timer->StateMutex);
    bool isRunning = timer->IsRunning;
    pthread_mutex_unlock(&timer->StateMutex);
    return isRunning;
}

void
Timer_Start(Timer timer, int timeoutMicroseconds, void * context)
{
    timer->Context = context;
    timer->TimeoutMicroseconds = timeoutMicroseconds;
    pthread_create(&timer->Thread, NULL, TimerTask, (void *)timer);
}

void
Timer_Stop(Timer timer)
{
    void * returnValue;

    pthread_mutex_lock(&timer->StateMutex);
    if (!timer->IsRunning)
    {
        pthread_mutex_unlock(&timer->StateMutex);
        return;
    }
    pthread_mutex_unlock(&timer->StateMutex);

    pthread_cond_broadcast(&timer->Condition);
    pthread_join(timer->Thread, &returnValue);
}

void
Timer_WaitFor(Timer timer)
{
    void * returnValue;
    pthread_join(timer->Thread, &returnValue);
}
Example use:
void
TimerExpiredCallback(bool cancelled, void * context)
{
    fprintf(stderr, "TimerExpiredCallback %s with context %s\n",
            cancelled ? "Cancelled" : "Timed Out",
            (char *)context);
}

void
ThreadedTimerExpireTest()
{
    TimerObject timerObject;

    Timer_Initialize(&timerObject, TimerExpiredCallback);
    Timer_Start(&timerObject, 5 * 1000000, "Threaded Timer Expire Test");
    Timer_WaitFor(&timerObject);
}

void
ThreadedTimerCancelTest()
{
    TimerObject timerObject;

    Timer_Initialize(&timerObject, TimerExpiredCallback);
    Timer_Start(&timerObject, 5 * 1000000, "Threaded Timer Cancel Test");
    Timer_Stop(&timerObject);
}

Overall, it seems pretty solid work for someone who ordinarily works in different languages and who has little pthreads experience. The idea seems to revolve around pthread_cond_timedwait() to achieve a programmable delay with a convenient cancellation mechanism. That's not unreasonable, but there are, indeed, a few problems.
For one, your condition variable usage is non-idiomatic. The conventional and idiomatic use of a condition variable associates with each wait a condition for whether the thread is clear to proceed. This is tested, under protection of the mutex, before waiting. If the condition is satisfied then no wait is performed. It is tested again after each wakeup, because there is a variety of scenarios in which a thread may return from waiting even though it is not actually clear to proceed. In these cases, it loops back and waits again.
I see at least two such possibilities with your timer:

1. The timer is cancelled very quickly, before its thread starts to wait. Condition variables do not queue signals, so in this case the cancellation would be ineffective. This is a form of race condition.
2. Spurious wakeup. This is always a possibility that must be considered. Spurious wakeups are rare under most circumstances, but they really do happen.
It seems natural to me to address that by generalizing your IsRunning to cover more states, perhaps something more like

enum { NEW, RUNNING, STOPPING, FINISHED, ERROR } State;

instead.
Of course, you still have to test that under protection of the appropriate mutex, which brings me to my next point: one mutex should suffice. That one can and should serve both to protect shared state and as the mutex associated with the CV wait. This, too, is idiomatic. It would lead to code in TimerTask() more like this:
    // ...
    pthread_mutex_lock(&timer->StateMutex);
    // Responsibility for setting the state to RUNNING transferred to Timer_Start()
    while (timer->State == RUNNING) {
        returnValue = pthread_cond_timedwait(&timer->Condition, &timer->StateMutex, &timespec);
        switch (returnValue) {
            case 0:
                if (timer->State == STOPPING) {
                    timer->State = FINISHED;
                }
                break;
            case ETIMEDOUT:
                timer->State = FINISHED;
                break;
            default:
                timer->State = ERROR;
                break;
        }
    }
    pthread_mutex_unlock(&timer->StateMutex);
    // ...
The accompanying Timer_Start() and Timer_Stop() would be something like this:
void Timer_Start(Timer timer, int timeoutMicroseconds, void * context) {
    timer->Context = context;
    timer->TimeoutMicroseconds = timeoutMicroseconds;
    pthread_mutex_lock(&timer->StateMutex);
    timer->State = RUNNING;
    // start the thread before releasing the mutex so that no one can see state
    // RUNNING before the thread is actually running
    pthread_create(&timer->Thread, NULL, TimerTask, (void *)timer);
    pthread_mutex_unlock(&timer->StateMutex);
}

void Timer_Stop(Timer timer) {
    _Bool should_join = 0;

    pthread_mutex_lock(&timer->StateMutex);
    switch (timer->State) {
        case NEW:
            timer->State = FINISHED;
            break;
        case RUNNING:
            timer->State = STOPPING;
            should_join = 1;
            break;
        case STOPPING:
            should_join = 1;
            break;
        default:
            // FINISHED or ERROR: no action
            break;
    }
    pthread_mutex_unlock(&timer->StateMutex);

    // Harmless if the timer has already stopped:
    pthread_cond_broadcast(&timer->Condition);
    if (should_join) {
        pthread_join(timer->Thread, NULL);
    }
}
A few other, smaller adjustments would be needed elsewhere.
Additionally, although the example code above omits it for clarity, you really should test the return values of all functions that report their status that way, unless you don't care whether they succeeded. That includes almost all standard library and Pthreads functions. What you should do in the event that one fails is highly contextual, but pretending (or assuming) that it succeeded is rarely a good choice.
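For instance, a minimal sketch of that kind of checking inside TimerTask() (the response to failure shown here is just one option among many):

int rc = pthread_mutex_lock(&timer->StateMutex);
if (rc != 0) {
    // Could not lock; rc holds the error code (e.g. EINVAL, EDEADLK).
    // What to do is contextual -- here we simply abandon the timer thread.
    return NULL;
}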
An alternative
Another approach to a cancellable delay would revolve around select() or pselect() with a timeout. To arrange for cancellation, you set up a pipe and have select() listen on the read end. Writing anything to the write end will then wake select().
This is in several ways easier to code, because you don't need any mutexes or condition variables. Also, data written to a pipe persists until it is read (or the pipe is closed), which smooths out some of the timing-related issues that the CV-based approach has to code around.
With select, however, you need to be prepared to deal with signals (at minimum by blocking them), and the timeout is a duration, not an absolute time.
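For illustration, here is a minimal sketch of that idea, assuming cancel_read_fd is the read end of a pipe created elsewhere with pipe(), and with error handling abbreviated:

#include <stdbool.h>
#include <sys/select.h>
#include <unistd.h>

/* Wait up to timeout_us microseconds, or until the pipe becomes readable.
   Returns true if cancelled, false if the timeout expired. */
bool cancellable_sleep(int cancel_read_fd, int timeout_us)
{
    fd_set readfds;
    struct timeval timeout;

    FD_ZERO(&readfds);
    FD_SET(cancel_read_fd, &readfds);
    timeout.tv_sec = timeout_us / 1000000;
    timeout.tv_usec = timeout_us % 1000000;

    /* select() returns 0 on timeout, > 0 when the pipe is readable,
       and -1 on error (including EINTR, which real code must handle). */
    int ready = select(cancel_read_fd + 1, &readfds, NULL, NULL, &timeout);
    return ready > 0;
}

To cancel from another thread, write a byte to the pipe's write end: write(cancel_write_fd, "x", 1);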

pthread_mutex_lock(&timer->StateMutex);
timer->IsRunning = true;
pthread_mutex_unlock(&timer->StateMutex);
pthread_mutex_lock(&timer->ConditionMutex);
returnValue = pthread_cond_timedwait(&timer->Condition, &timer->ConditionMutex, &timespec);
pthread_mutex_unlock(&timer->ConditionMutex);
if (timer->Callback != NULL)
{
    (*timer->Callback)(returnValue != ETIMEDOUT, timer->Context);
}
You have two bugs here.
1. A cancellation can slip in after IsRunning is set to true and before pthread_cond_timedwait is called. In this case, you'll wait out the entire timer. This bug exists because ConditionMutex doesn't protect any shared state. To use a condition variable properly, the mutex associated with the condition variable must protect the shared state. You can't trade the right mutex for the wrong mutex and then call pthread_cond_timedwait, because that creates a race condition. The entire point of a condition variable is to provide an atomic "unlock and wait" operation to prevent this race condition, and your code goes out of its way to break that logic.
2. You don't check the return value of pthread_cond_timedwait. If neither the timeout has expired nor cancellation has been requested, you call the callback anyway. Condition variables are stateless; it is your responsibility to track and check state, because the condition variable will not do this for you. You need to call pthread_cond_timedwait in a loop until either the state is set to STOPPING or the timeout is reached. Note that the mutex associated with the condition variable, as in 1 above, must protect the shared state -- in this case, the state itself.

I think you have a fundamental misunderstanding about how condition variables work and what they're for. They are used when you have a mutex that protects shared state and you want to wait for that shared state to change. The mutex associated with the condition variable must protect the shared state to avoid the classic race condition where the state changes after you released the lock but before you managed to start waiting.
UPDATE:
To provide some more useful information, let me briefly explain what a condition variable is for. Say you have some shared state protected by a mutex. And say some thread can't make forward progress until that shared state changes.
You have a problem. You have to hold the mutex that protects the shared state to see what the state is. When you see that it's in the wrong state, you need to wait. But you also need to release the mutex or no other thread can change the shared state.
But if you unlock the mutex and then wait (which is what your code does above!) you have a race condition. After you unlock the mutex but before you wait, another thread can acquire the mutex and change the shared state such that you no longer want to wait. So you need an atomic "unlock the mutex and wait" operation.
That is the purpose, and the only purpose, of condition variables: so that you can atomically release the mutex that protects some shared state and wait for a signal, with no chance for the signal to be lost in between when you released the mutex and when you waited.
Another important point -- condition variables are stateless. They have no idea what you are waiting for. You must never call pthread_cond_wait or pthread_cond_timedwait and make assumptions about the state. You must check it yourself. Your code releases the mutex after pthread_cond_timedwait returns. You only want to do that if the call times out.
If pthread_cond_timedwait doesn't timeout (or, in any case, when pthread_cond_wait returns), you don't know what happened until you check the state. That's why these functions re-acquire the mutex -- so you can check the state and decide what to do. This is why these functions are almost always called in a loop -- if the thing you're waiting for still hasn't happened (which you determine by checking the shared state that you are responsible for), you need to keep waiting.
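Putting those pieces together, the canonical shape of a condition-variable wait looks something like this (a sketch; the names state_mutex, cond, deadline, and STOPPING are illustrative, not taken from the original code):

pthread_mutex_lock(&state_mutex);
while (state != STOPPING)               /* re-check the state after every wakeup */
{
    int rc = pthread_cond_timedwait(&cond, &state_mutex, &deadline);
    if (rc == ETIMEDOUT)
        break;                          /* deadline reached and still not STOPPING */
    /* rc == 0: woken up (possibly spuriously) -- the loop re-checks the state */
}
pthread_mutex_unlock(&state_mutex);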

Related

Reader/Writer implementation in C

I'm currently learning about concurrency at my University. In this context I have to implement the reader/writer problem in C, and I think I'm on the right track.
My thought on the problem is that we need two locks, rd_lock and wr_lock. When a writer thread wants to change our global variable, it tries to grab both locks, writes to the global, and unlocks. When a reader wants to read the global, it checks whether wr_lock is currently locked and then reads the value; however, one of the reader threads should grab the rd_lock, but the other readers should not care whether rd_lock is locked.
I am not allowed to use the implementation already in the pthread library.
typedef struct counter_st {
    int value;
} counter_t;

counter_t * counter;
pthread_t * threads;
int readers_tnum;
int writers_tnum;
pthread_mutex_t rd_lock;
pthread_mutex_t wr_lock;

void * reader_thread() {
    while(true) {
        pthread_mutex_lock(&rd_lock);
        pthread_mutex_trylock(&wr_lock);
        int value = counter->value;
        printf("%d\n", value);
        pthread_mutex_unlock(&rd_lock);
    }
}

void * writer_thread() {
    while(true) {
        pthread_mutex_lock(&wr_lock);
        pthread_mutex_lock(&rd_lock);
        // TODO: increment value of counter->value here.
        counter->value += 1;
        pthread_mutex_unlock(&rd_lock);
        pthread_mutex_unlock(&wr_lock);
    }
}

int main(int argc, char **args) {
    readers_tnum = atoi(args[1]);
    writers_tnum = atoi(args[2]);

    pthread_mutex_init(&rd_lock, 0);
    pthread_mutex_init(&wr_lock, 0);

    // Initialize our global variable
    counter = malloc(sizeof(counter_t));
    counter->value = 0;

    pthread_t * threads = malloc((readers_tnum + writers_tnum) * sizeof(pthread_t));
    int started_threads = 0;

    // Spawn reader threads
    for(int i = 0; i < readers_tnum; i++) {
        int code = pthread_create(&threads[started_threads], NULL, reader_thread, NULL);
        if (code != 0) {
            printf("Could not spawn a thread.");
            exit(-1);
        } else {
            started_threads++;
        }
    }

    // Spawn writer threads
    for(int i = 0; i < writers_tnum; i++) {
        int code = pthread_create(&threads[started_threads], NULL, writer_thread, NULL);
        if (code != 0) {
            printf("Could not spawn a thread.");
            exit(-1);
        } else {
            started_threads++;
        }
    }
}
Currently it just prints a lot of zeroes when run with 1 reader and 1 writer, which means that it never actually executes the code in the writer thread. I know that this is not going to work as intended with multiple readers; however, I don't understand what is wrong when running it with one of each.
Don't think of the locks as "reader lock" and "writer lock".
Because you need to allow multiple concurrent readers, readers cannot hold a mutex. (If they do, they are serialized; only one can hold a mutex at the same time.) They can take one for a short duration (before they begin the access, and after they end the access), to update state, but that's it.
Split the timeline for having a rwlock into three parts: "grab rwlock", "do work", "release rwlock".
For example, you could use one mutex, one condition variable, and a counter. The counter holds the number of active readers. The condition variable is signaled on by the last reader, and by writers just before they release the mutex, to wake up a waiting writer. The mutex protects both, and is held by writers for the whole duration of their write operation.
So, in pseudocode, you might have
Function rwlock_rdlock:
    Take mutex
    Increment counter
    Release mutex
End Function

Function rwlock_rdunlock:
    Take mutex
    Decrement counter
    If counter == 0, Then:
        Signal_on cond
    End If
    Release mutex
End Function

Function rwlock_wrlock:
    Take mutex
    While counter > 0:
        Wait_on cond
    End While
End Function

Function rwlock_wrunlock:
    Signal_on cond
    Release mutex
End Function
Remember that whenever you wait on a condition variable, the mutex is atomically released for the duration of the wait, and automatically grabbed when the thread wakes up. So, for waiting on a condition variable, a thread will have the mutex both before and after the wait, but not during the wait itself.
Now, the above approach is not the only one you might implement.
In particular, you might note that in the above scheme, there is a different "unlock" operation you must use, depending on whether you took a read or a write lock on the rwlock. In the POSIX pthread_rwlock_t implementation, there is just one pthread_rwlock_unlock().
Whatever scheme you design, it is important to examine whether it works right in all situations: a lone read-locker, a lone write-locker, several read-lockers, several write-lockers, a lone write-locker and one read-locker, a lone write-locker and several read-lockers, several write-lockers and a lone read-locker, and several read- and write-lockers.
For example, let's consider the case when there are several active readers, and a writer wants to write-lock the rwlock.
The writer grabs the mutex. It then notices that the counter is nonzero, so it starts waiting on the condition variable. When the last reader -- note how the order of the readers exiting does not matter, since a simple counter is used! -- unlocks its readlock on the rwlock, it signals on the condition variable, which wakes up the writer. The writer then grabs the mutex, sees the counter is zero, and proceeds to do its work. During that time, the mutex is held by the writer, so all new readers will block, until the writer releases the mutex. Because the writer will also signal on the condition variable when it releases the mutex, it is a race between other waiting writers and waiting readers, who gets to go next.
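In C, a minimal sketch of the scheme described above might look like this (the names are illustrative, and return-value checking is omitted for brevity):

#include <pthread.h>

typedef struct {
    pthread_mutex_t mutex;
    pthread_cond_t  cond;
    int             readers;   /* number of active readers */
} my_rwlock_t;

void my_rwlock_rdlock(my_rwlock_t *rw) {
    pthread_mutex_lock(&rw->mutex);
    rw->readers++;
    pthread_mutex_unlock(&rw->mutex);
}

void my_rwlock_rdunlock(my_rwlock_t *rw) {
    pthread_mutex_lock(&rw->mutex);
    if (--rw->readers == 0)
        pthread_cond_signal(&rw->cond);   /* last reader wakes a waiting writer */
    pthread_mutex_unlock(&rw->mutex);
}

void my_rwlock_wrlock(my_rwlock_t *rw) {
    pthread_mutex_lock(&rw->mutex);
    while (rw->readers > 0)
        pthread_cond_wait(&rw->cond, &rw->mutex);
    /* The mutex stays held for the entire write. */
}

void my_rwlock_wrunlock(my_rwlock_t *rw) {
    pthread_cond_signal(&rw->cond);       /* wake another waiting writer, if any */
    pthread_mutex_unlock(&rw->mutex);
}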

What's a good way to stop a pool of threads from running?

I've been working on a project lately, and I need to manage a pair of thread pools.
What the worker threads in the pools do is basically execute some kind of pop operation on their respective queues, eventually waiting on a condition variable (pthread_cond_t) if there is no available value in the queue, and once they get an item, parsing it and executing operations accordingly.
What I'm concerned about is the fact that I want to have no memory leaks, and to achieve that I noticed that calling pthread_cancel on each thread when the main process is exiting is definitely a bad idea, as it leaves a lot of garbage around.
The point is, my first thought was to use an exit flag which I can set when the threads need to exit, so that they can easily free memory and call pthread_exit...
I guess I should set this flag, then send a broadcast signal to the threads waiting on the condition variable, and check the flag right after the pop operation...
Is this really the correct way to implement a good thread pool termination? I don't feel that confident about this...
I'm writing some pseudo-code here to explain what I'm talking about.
Each pool thread will run some code structured like this:
/* Worker thread (which will run on each pool thread) */
{ /* Thread initialization */ ... }
loop {
    value = pop();
    { /* Mutex lock because of the shared flag */ ... }
    if (flag) { { /* Free memory and unlock mutex */ ... } pthread_exit(); }
    { /* Unlock the mutex */ ... }
    { /* Elaborate value */ ... }
}
return NULL;
And there will be some kind of pool_stopRunning() function which will look like:
/* pool_stopRunning() function code */
{ /* Acquire flag mutex */ ... }
setFlag(flag);
{ /* Unlock flag mutex */ ... }
{ /* Acquire queue mutex */ ... }
pthread_cond_broadcast(...);
{ /* Unlock queue mutex */ ... }
Thanks in advance; I just need to be sure that there isn't a fancier way to stop a thread pool... (or get to know a better way, by any chance)
As always, I'm sorry if there is any typo; I'm not an English speaker and it's kind of late right now >:
What you are describing will work, but I would suggest a different approach...
You already have a mechanism for assigning tasks to threads, complete with all appropriate synchronization. So instead of complicating the design with some new parallel mechanism, just define a new type of task called "STOP". If there are N threads servicing a queue and you want to terminate them, push N STOP tasks onto the queue. Then just wait for all of the threads to terminate. (This last can be done via "join", so it should not require any new mechanism, either.)
No muss, no fuss.
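As a sketch of the idea (the queue_t type and queue_pop() function here are assumptions standing in for whatever queue mechanism the pool already has):

typedef struct {
    enum { TASK_WORK, TASK_STOP } kind;   /* TASK_STOP is the "poison pill" */
    void (*run)(void *arg);
    void *arg;
} task_t;

void *worker(void *queue_arg)
{
    queue_t *queue = queue_arg;           /* queue_t/queue_pop assumed to exist */
    for (;;) {
        task_t task = queue_pop(queue);   /* blocks until a task is available */
        if (task.kind == TASK_STOP)
            break;                        /* clean, cooperative exit */
        task.run(task.arg);
    }
    return NULL;
}

/* Shutdown: push one TASK_STOP per worker thread, then pthread_join each. */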
With respect to symmetry with setting the flag and reducing serialization, this code:
{ /* Mutex lock because of the shared flag */ ... }
if (flag) {{ /* Free memory and unlock mutex */ ... } pthread_exit(); }
{ /* Unlock the mutex */ ... }
should look like this:
{ /* Mutex lock because of the shared flag */ ... }
flagcopy = readFlag();
{ /* Unlock the mutex */ ... }
if (flagcopy) { { /* Free memory */ ... } pthread_exit(); }
Having said that, you can (should?) factor the mutex code into the setFlag and readFlag methods.
There is one more thing. If the flag is only a boolean and it is only changed once before the whole thing shuts down (i.e., it's never unset after being set), then I would argue that protecting the read with a mutex is not required.
I say this because if the above assumptions are true and if the loop's duration is very short and the loop iteration frequency is high, then you would be imposing undue serialization upon the business task and potentially increasing the response time unacceptably.
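If you want that guarantee without a mutex, a hedged alternative (my addition, not from the answer) is a C11 atomic flag, which makes the unsynchronized read well-defined:

#include <stdatomic.h>

atomic_bool stop_flag = false;                /* set once, never cleared */

/* In pool_stopRunning():  atomic_store(&stop_flag, true);              */
/* In the worker loop:     if (atomic_load(&stop_flag)) pthread_exit(NULL); */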

Rerunning cancelled pthread

My problem is that I cannot reuse a cancelled pthread. Sample code:
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

pthread_t alg;
pthread_t stop_alg;
int thread_available;

void *stopAlgorithm() {
    while (1) {
        sleep(6);
        if (thread_available == 1) {
            pthread_cancel(alg);
            printf("Now it's dead!\n");
            thread_available = 0;
        }
    }
}

void *algorithm() {
    while (1) {
        printf("I'm here\n");
    }
}

int main() {
    thread_available = 0;
    pthread_create(&stop_alg, NULL, stopAlgorithm, 0);
    while (1) {
        sleep(1);
        if (thread_available == 0) {
            sleep(2);
            printf("Starting algorithm\n");
            pthread_create(&alg, NULL, algorithm, 0);
            thread_available = 1;
        }
    }
}
This sample should create two threads: one will be created at the program's beginning and will try to cancel the second as soon as it starts; the second should be rerun as soon as it is cancelled and say "I'm here". But once the algorithm thread has been cancelled, it doesn't start again: the program says "Starting algorithm" and does nothing, with no more "I'm here" messages. Could you please tell me the way to start a cancelled (immediately stopped) thread once again?
UPD: So, thanks to your help, I understood what the problem is. When I rerun the algorithm thread, it throws error 11: "The system lacked the necessary resources to create another thread, or the system-imposed limit on the total number of threads in a process PTHREAD_THREADS_MAX would be exceeded." Actually I have 5 threads, but only one is cancelled; the others stop via pthread_exit. So after the algorithm stopped and the program went to standby mode, I checked the status of all threads with pthread_join -- all threads show 0 (the cancelled one shows PTHREAD_CANCELED); as far as I can understand, this means that all threads stopped successfully. But one more try to run the algorithm throws error 11 again. So I checked memory usage: in standby mode before the algorithm, 10428; during the algorithm, when all threads are used, 2026m; in standby mode after the algorithm stopped, 2019m. So even though the threads stopped, they still use memory; pthread_detach didn't help with this. Are there any other ways to clean up after threads?
Also, sometimes on pthread_cancel my program crashes with "libgcc_s.so.1 must be installed for pthread_cancel to work"
Several points:
First, this is not safe:
int thread_available;

void *stopAlgorithm() {
    while (1) {
        sleep(6);
        if (thread_available == 1) {
            pthread_cancel(alg);
            printf("Now it's dead!\n");
            thread_available = 0;
        }
    }
}
It's not safe for at least two reasons. Firstly, you've not marked thread_available as volatile. This means that the compiler can optimise stopAlgorithm to read the variable once and never reread it. Secondly, you haven't ensured that access to it is atomic, or protected it with a mutex. Either declare it:
volatile sig_atomic_t thread_available;
(or similar), or better, protect it by a mutex.
But for the general case of triggering one thread from another, you are better off using a condition variable (and a mutex), using pthread_cond_wait or pthread_cond_timedwait in the listening thread, and pthread_cond_broadcast in the triggering thread.
Next, what's the point of the stopAlgorithm thread? All it does is cancel the algorithm thread after an unpredictable amount of time between 0 and 6 seconds. Why not just send the pthread_cancel from the main thread?
Next, do you care where your algorithm is when it is cancelled? If not, just pthread_cancel it. If so (and anyway, I think it's far nicer), regularly check a flag (either atomic and volatile as above, or protected by a mutex) and pthread_exit if it's set. If your algorithm does big chunks every second or so, then check it then. If it does lots of tiny things, check it (say) every 1,000 operations so taking the mutex doesn't introduce a performance penalty.
Lastly, if you cancel a thread (or if it pthread_exits), the way you start it again is simply to call pthread_create again. It's then a new thread running the same code.
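A sketch of the cooperative flag-checking approach described above (the names stop_mutex, stop_requested, and do_one_chunk_of_work are illustrative):

#include <pthread.h>
#include <stdbool.h>

extern void do_one_chunk_of_work(void);  /* hypothetical unit of the algorithm */

pthread_mutex_t stop_mutex = PTHREAD_MUTEX_INITIALIZER;
bool stop_requested = false;             /* set under stop_mutex to request exit */

void *algorithm(void *unused)
{
    (void)unused;
    for (;;) {
        do_one_chunk_of_work();

        pthread_mutex_lock(&stop_mutex);
        bool stop = stop_requested;
        pthread_mutex_unlock(&stop_mutex);
        if (stop)
            pthread_exit(NULL);          /* exit cleanly at a known point */
    }
}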

possible causes of infinite wait in pthread_cond_wait

I am currently trying to analyse an issue in third-party source code where a thread (code snippet corresponding to THREAD-T1) is in an infinite wait state. The suspicion is that the thread is stuck in pthread_cond_wait. The following are the details.
Code description
T1 does an asynchronous call to an API exposed by T2.
Hence T1 moves to a blocking wait on a conditional variable (say cond_t).
The conditional variable cond_t is signalled in the callback event generated by T2.
The above cycle is repeated n times until the API returns success.
To consolidate: the above is a series of steps which makes the asynchronous call behave like a synchronous one through the use of condition variables.
Sample code
#define MAX_RETRY (3)

bool g_b_ret_val;
pthread_mutex_t g_cond_mutex;
pthread_mutex_t g_ret_val_mutex; /* Assume initialized in the main thread */
pthread_cond_t g_cond_t;         /* Assume initialized in the main thread */

retry_async_call_routine() /* Thread-T1 */
{
    while ((false == g_b_ret_val) && (retry < MAX_RETRY))
    {
        (void)invoke_async_api();
        pthread_mutex_init(&g_cond_mutex, NULL);
        pthread_mutex_lock(&g_cond_mutex);
        pthread_cond_wait(g_cond_t, &g_cond_mutex);
        pthread_mutex_unlock(&g_cond_mutex);
        pthread_mutex_destroy(&g_cond_mutex);
        retry++;
    }
}

callback_routine() /* Thread-T2 */
{
    pthread_mutex_lock(&g_ret_val_mutex);
    g_b_ret_val = true; /* May be false also on failure */
    pthread_mutex_unlock(&g_ret_val_mutex);
    pthread_cond_signal(&g_cond_t);
}
Known issues that I see in the code
Missing retest of condition in a while loop on pthread_cond_wait
Missing mutex lock while signalling
Questions
Please point out any more loopholes (or) possibilities of infinite wait (if any).
g_cond_t is not reset using pthread_cond_destroy between successive waits; what is the behaviour of that? (Any references regarding this?)
This code seems absurd. You are not supposed to create and destroy a mutex just so you can wait on the condition variable. A mutex needs to be created before thread-shared data are used, then the mutex must be used to protect the shared data. In this case, that's g_ret_val_mutex which protects g_b_ret_val.
The condition variable itself is just used for waiting (with regular or timed wait) and signaling (signal or broadcast). It generally does not need its own lock, and in fact, having a separate one (as in the above loop) gets in the way of calling pthread_cond_wait, which takes only one mutex to unlock, not two. There's no need to destroy and re-create condition variables unless you need new/different attributes.
The key to "not getting stuck"—avoiding infinite wait—is to guarantee that, whenever a thread calls pthread_cond_wait, there is definitely some other thread that will, in the future, call pthread_cond_signal (or pthread_cond_broadcast). That is, the waiter tests "why to wait" first, with the "why" part locked, then waits only if the "why" part says "you should wait". The wake-up thread may use the same lock to determine that a wake-up is necessary, or—if the wake-up thread is "lazy", as in the above example—simply issues a "wake up call" every time.
The minimal change for correctness would thus seem to be to change the loop to read:
pthread_mutex_lock(&g_ret_val_mutex);
for (retry = 0; retry < MAX_RETRY && !g_b_ret_val; retry++) {
    (void)invoke_async_api();
    pthread_cond_wait(&g_cond_t, &g_ret_val_mutex);
}
success = g_b_ret_val; /* if false, we failed */
/* or: success = retry < MAX_RETRY; -- same result */
pthread_mutex_unlock(&g_ret_val_mutex);
(Aside: g_cond_t is a terrible name for a variable; the _t suffix is meant for types.)
It's sometimes wise to separate "some thread needs a wake-up" from "final result of that thread is success". If needed, I'd probably add that using a second boolean. Let's call it g_waiting, which we set true when callback_routine() is (supposedly) guaranteed to be called and it should do a wake-up event, and false when it's not guaranteed to be called or the wakeup is not required. This kind of coding allows you to switch to pthread_cond_timedwait, in case the asynchronous event might never occur for some reason.
Given that g_ret_val_mutex protects g_b_ret_val, it's appropriate to use that for the "waiting" flag as well—adding another mutex just offers more opportunities for problems, here. So now we get:
pthread_mutex_lock(&g_ret_val_mutex);
for (retry = 0; retry < MAX_RETRY && !g_b_ret_val; retry++) {
    (void)invoke_async_api();
    compute_wakeup_time(&abstime);
    g_waiting = true;
    pthread_cond_timedwait(&g_cond_t, &g_ret_val_mutex, &abstime);
    if (g_waiting) {
        /* timeout occurred, we never got our callback */
        /* may want something special for this case */
    } else {
        /* wakeup occurred, result is in g_b_ret_val */
    }
}
success = g_b_ret_val;
/* or: success = retry < MAX_RETRY; */
g_waiting = false;
pthread_mutex_unlock(&g_ret_val_mutex);
Meanwhile:
callback_routine() /* Thread-T2 */
{
    pthread_mutex_lock(&g_ret_val_mutex);
    g_b_ret_val = compute_success_or_failure();
    if (g_waiting) {
        g_waiting = false;
        pthread_cond_signal(&g_cond_t);
    }
    pthread_mutex_unlock(&g_ret_val_mutex);
}
I've moved the signal to "inside" the mutex, although it's OK either way, so that I can do it only if g_waiting is set, and clear g_waiting. Since we hold the mutex, it's OK to clear g_waiting either before or after calling pthread_cond_signal (as long as no other code will interrupt the sequence).
Note: if we do start using timedwait, we need to find out whether it is OK to call invoke_async_api when another earlier invoke was used but no result was returned before the timeout.

Implementation of condition variables

To understand the code of pthread condition variables, I have written my own version. Does it look correct? I am using it in a program, and it's working, but surprisingly much faster: originally the program takes around 2.5 seconds, and with my version of condition variables it takes only 0.8 seconds, and the output of the program is correct too. However, I'm not sure if my implementation is correct.
struct cond_node_t
{
    sem_t s;
    cond_node_t * next;
};

struct cond_t
{
    cond_node_t * q;     // Linked List
    pthread_mutex_t qm;  // Lock for the Linked List
};

int my_pthread_cond_init( cond_t * cond )
{
    cond->q = NULL;
    pthread_mutex_init( &(cond->qm), NULL );
}

int my_pthread_cond_wait( cond_t* cond, pthread_mutex_t* mutex )
{
    cond_node_t * self;

    pthread_mutex_lock(&(cond->qm));
    self = (cond_node_t*)calloc( 1, sizeof(cond_node_t) );
    self->next = cond->q;
    cond->q = self;
    sem_init( &self->s, 0, 0 );
    pthread_mutex_unlock(&(cond->qm));

    pthread_mutex_unlock(mutex);
    sem_wait( &self->s );
    free( self ); // Free the node
    pthread_mutex_lock(mutex);
}

int my_pthread_cond_signal( cond_t * cond )
{
    pthread_mutex_lock(&(cond->qm));
    if (cond->q != NULL)
    {
        sem_post(&(cond->q->s));
        cond->q = cond->q->next;
    }
    pthread_mutex_unlock(&(cond->qm));
}

int my_pthread_cond_broadcast( cond_t * cond )
{
    pthread_mutex_lock(&(cond->qm));
    while ( cond->q != NULL)
    {
        sem_post( &(cond->q->s) );
        cond->q = cond->q->next;
    }
    pthread_mutex_unlock(&(cond->qm));
}
Apart from the missing return value checks, there are some more issues that should be fixable:
sem_destroy is not called.
Signal/broadcast touch the cond_node_t after waking the target thread, potentially resulting in a use-after-free.
Further comments:
The omitted destroy operation may require changes to the other operations so it is safe to destroy the condition variable when POSIX says it shall be safe. Not supporting destroy or imposing stronger restrictions on when it may be called will simplify things.
A production implementation would handle thread cancellation.
Backing out of a wait (such as required for thread cancellation and pthread_cond_timedwait timeouts) may lead to complications.
Your implementation queues threads in userland, which is done in some production implementations for performance reasons; I do not understand exactly why.
Your implementation always queues threads in LIFO order. This is often faster (such as because of cache effects) but may lead to starvation. Production implementation may use FIFO order sometimes to avoid starvation.
Basically your strategy looks ok, but you have one major danger, some undefined behavior, and a nit pick:
you are not inspecting the return values of your POSIX functions. In particular, sem_wait is interruptible, so under heavy load or bad luck your thread will be woken up spuriously. You'd have to carefully catch all of that.
none of your functions returns a value. If some user of the functions decides to use the return values some day, this is undefined behavior. Carefully analyse the error codes that the condition functions are allowed to return, and return just those.
don't cast the return of malloc or calloc
Edit: Actually, you don't need malloc/free at all. A local variable would do as well.
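A sketch of that suggestion (mine, not the answerer's): the waiter's node can live on its own stack, because the waiter outlives the node's time on the queue. For this to be safe, the signalling side must detach the node from the list before calling sem_post, so it never touches the node after the waiter may have returned:

int my_pthread_cond_wait( cond_t* cond, pthread_mutex_t* mutex )
{
    cond_node_t self;                  // stack allocation; no malloc/free
    sem_init( &self.s, 0, 0 );

    pthread_mutex_lock(&(cond->qm));
    self.next = cond->q;
    cond->q = &self;
    pthread_mutex_unlock(&(cond->qm));

    pthread_mutex_unlock(mutex);
    sem_wait( &self.s );               // signaller must not touch self after post
    sem_destroy( &self.s );
    pthread_mutex_lock(mutex);
    return 0;
}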
You don't seem to respect this requirement:

    These functions atomically release mutex and cause the calling thread to block on the condition variable cond; atomically here means "atomically with respect to access by another thread to the mutex and then the condition variable". That is, if another thread is able to acquire the mutex after the about-to-block thread has released it, then a subsequent call to pthread_cond_broadcast() or pthread_cond_signal() in that thread shall behave as if it were issued after the about-to-block thread has blocked.
You unlock and then wait. Another thread can do lots of things between these operations.
P.S. I'm not sure myself if I'm interpreting this paragraph correctly, feel free to point out my error.
