I'm having trouble synchronizing a master thread to a recently started child thread.
What I want to do is:
master thread creates a new child thread and blocks
child thread starts and initializes (might take some time)
once the child thread is initialized, the main thread continues (and the two threads run in parallel)
My first attempt was something like:
typedef struct threaddata_ {
int running;
} threaddata_t;
void*child_thread(void*arg) {
threaddata_t*x=(threaddata_t*)arg;
/* ... INITIALIZE ... */
x->running=1; /* signal that we are running */
/* CHILD THREAD BODY */
return 0;
}
void start_thread(void) {
threaddata_t*x=(threaddata_t*)malloc(sizeof(threaddata_t));
x->running=0;
pthread_t threadid;
int result=pthread_create(&threadid, 0, child_thread, x);
while(!x->running) usleep(100); /* wait till child is initialized */
/* MAIN THREAD BODY */
}
Now I didn't like this at all, because it forces the main thread to sleep for probably a longer period than necessary.
So I made a second attempt, using mutexes & condition variables:
typedef struct threaddata_ {
pthread_mutex_t x_mutex;
pthread_cond_t x_cond;
} threaddata_t;
void*child_thread(void*arg) {
threaddata_t*x=(threaddata_t*)arg;
/* ... INITIALIZE ... */
pthread_cond_signal(&x->x_cond); /* signal that we are running */
/* CHILD THREAD BODY */
return 0;
}
void start_thread(void) {
threaddata_t*x=(threaddata_t*)malloc(sizeof(threaddata_t));
pthread_mutex_init(&x->x_mutex, 0);
pthread_cond_init (&x->x_cond , 0);
pthread_mutex_lock(&x->x_mutex);
pthread_t threadid;
int result=pthread_create(&threadid, 0, child_thread, x);
if(!result)pthread_cond_wait(&x->x_cond, &x->x_mutex);
pthread_mutex_unlock(&x->x_mutex);
/* MAIN THREAD BODY */
}
This seemed more sane than the first attempt (using proper signaling rather than rolling my own wait loop), until I discovered that it contains a race condition:
If the child thread has finished the initialization fast enough (before the main thread waits for the condition), it will deadlock the main thread.
I guess that my case is not so uncommon, so there must be a really easy solution, but I cannot see it right now.
The proper way to use a condvar/mutex pair:
bool initialised = false;
mutex mt;
condvar cv;
void *thread_proc(void *)
{
...
mt.lock();
initialised = true;
cv.signal();
mt.unlock();
}
int main()
{
...
mt.lock();
while(!initialised) cv.wait(mt);
mt.unlock();
}
This algorithm avoids any possible races. You can use any more complex condition, as long as it is only modified while the mutex is locked (instead of the simple !initialised).
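Translated to pthreads and applied to the threaddata_t from the question, that pattern could look roughly like the following sketch (error checking omitted; the running flag plays the role of initialised):
#include <pthread.h>
#include <stdlib.h>

typedef struct threaddata_ {
    int running;
    pthread_mutex_t x_mutex;
    pthread_cond_t x_cond;
} threaddata_t;

void *child_thread(void *arg) {
    threaddata_t *x = (threaddata_t *)arg;
    /* ... INITIALIZE ... */
    pthread_mutex_lock(&x->x_mutex);
    x->running = 1;                      /* modify the predicate under the lock */
    pthread_cond_signal(&x->x_cond);
    pthread_mutex_unlock(&x->x_mutex);
    /* CHILD THREAD BODY */
    return 0;
}

void start_thread(void) {
    pthread_t threadid;
    threaddata_t *x = malloc(sizeof(threaddata_t));
    x->running = 0;
    pthread_mutex_init(&x->x_mutex, 0);
    pthread_cond_init(&x->x_cond, 0);

    pthread_mutex_lock(&x->x_mutex);
    if (pthread_create(&threadid, 0, child_thread, x) == 0)
        while (!x->running)              /* the loop also guards against spurious wakeups */
            pthread_cond_wait(&x->x_cond, &x->x_mutex);
    pthread_mutex_unlock(&x->x_mutex);
    /* MAIN THREAD BODY */
}
Because the predicate is checked under the mutex, it does not matter whether the child finishes initializing before or after the main thread starts waiting.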
The correct tool for this is a semaphore (sem_t). The main thread would initialize it to 0 and wait until it receives a token from the newly launched thread.
BTW your mutex/cond solution has a race condition because the child thread is not locking the mutex.
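A minimal sketch of the semaphore approach (the struct and field names here are only illustrative; error checking omitted):
#include <errno.h>
#include <pthread.h>
#include <semaphore.h>
#include <stdlib.h>

typedef struct threaddata_ {
    sem_t ready;
} threaddata_t;

void *child_thread(void *arg) {
    threaddata_t *x = (threaddata_t *)arg;
    /* ... INITIALIZE ... */
    sem_post(&x->ready);                 /* hand a token to the main thread */
    /* CHILD THREAD BODY */
    return 0;
}

void start_thread(void) {
    pthread_t threadid;
    threaddata_t *x = malloc(sizeof(threaddata_t));
    sem_init(&x->ready, 0, 0);           /* no tokens yet */
    if (pthread_create(&threadid, 0, child_thread, x) == 0)
        while (sem_wait(&x->ready) == -1 && errno == EINTR)
            ;                            /* retry if a signal interrupts the wait */
    /* MAIN THREAD BODY */
}
Unlike a condition signal, the token is not lost if the child posts before the main thread starts waiting, so the race described above cannot occur.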
A barrier should do the trick nicely. Since you mention the need for support on Win32 in comments, barriers are supported on the latest Win32 pthreads, so you shouldn't have to write your own wrapper to get some portability between Win32 and *nix.
Something like:
typedef struct threaddata_ {
pthread_barrier_t* pbarrier;
} threaddata_t;
void* child_thread(void*arg) {
threaddata_t*x=(threaddata_t*)arg;
/* ... INITIALIZE ... */
int result = pthread_barrier_wait(x->pbarrier);
/* CHILD THREAD BODY */
return 0;
}
void start_thread(void) {
pthread_t threadid;
pthread_barrier_t barrier;
int result = pthread_barrier_init(&barrier, NULL, 2);
threaddata_t*x=(threaddata_t*)malloc(sizeof(threaddata_t));
x->pbarrier = &barrier;
result = pthread_create(&threadid, 0, child_thread, x);
result = pthread_barrier_wait(&barrier);
/* child has reached the barrier */
pthread_barrier_destroy(&barrier); /* note: the child thread should not use */
/* the barrier pointer after it returns from*/
/* pthread_barrier_wait() */
/* MAIN THREAD BODY */
}
The drawback to this solution is that it may needlessly block the child thread momentarily. If that's a problem, the condition variable solution mentioned by Dmitry Poroh is the way to go.
Related
For example, I want to create 5 threads and print them. How do I make the fourth one execute before the second one? I tried locking it with a mutex, but I don't know how to lock only the second one, so it gives me a segmentation fault.
Normally, you define the order of operations, not the threads that do those operations. It may sound like a trivial distinction, but when you start implementing it, you'll see it makes a major difference. It is also a more efficient approach, because you think not in terms of the number of threads you need, but in terms of the number of operations or tasks to be done, how many of them can be done in parallel, and how they might need to be ordered or sequenced.
For learning purposes, however, it might make sense to look at ordering threads instead.
The OP passes a pointer to a string for each worker thread function. That works, but is slightly odd; typically you pass an integer identifier instead:
#include <stdio.h>
#include <stdlib.h>
#include <inttypes.h>
#include <pthread.h>
#define ID_TO_POINTER(id) ((void *)((intptr_t)(id)))
#define POINTER_TO_ID(ptr) ((intptr_t)(ptr))
The conversion of the ID type -- which I assume to be a signed integer above, typically either an int or a long -- to a pointer is done via two casts. The first cast is to intptr_t type defined in <stdint.h> (which gets automatically included when you include <inttypes.h>), which is a signed integer type that can hold the value of any void pointer; the second cast is to a void pointer. The intermediate cast avoids a warning in case your ID is of an integer type that cannot be converted to/from a void pointer without potential loss of information (usually described in the warning as "of different size").
The simplest method of ordering POSIX threads, that is not that dissimilar to ordering operations or tasks or jobs, is to use a single mutex as a lock to protect the ID of the thread that should run next, and a related condition variable for threads to wait on, until their ID appears.
The one problem left is how to define the order. Typically, you'd simply increment or decrement the ID value -- decrementing means the threads run in descending order of ID value, and an ID value of -1 (assuming you number your threads from 0 onwards) always means "all done", regardless of the number of threads used:
static pthread_mutex_t worker_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t worker_wait = PTHREAD_COND_INITIALIZER;
static int worker_id = /* number of threads - 1 */;
void *worker(void *dataptr)
{
const int id = POINTER_TO_ID(dataptr);
pthread_mutex_lock(&worker_lock);
while (worker_id >= 0) {
if (worker_id == id) {
/* Do the work! */
printf("Worker %d running.\n", id);
fflush(stdout);
/* Choose next worker */
worker_id--;
pthread_cond_broadcast(&worker_wait);
} else {
/* Not our turn; wait for someone else to broadcast on the condition. */
pthread_cond_wait(&worker_wait, &worker_lock);
}
}
/* All done; worker_id became negative.
We still hold the mutex; release it. */
pthread_mutex_unlock(&worker_lock);
return NULL;
}
Note that I didn't let the worker exit immediately after its task is done; this is because I wanted to expand the example a bit: let's say you want to define the order of operations in an array:
static pthread_mutex_t worker_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t worker_wait = PTHREAD_COND_INITIALIZER;
static int worker_order[] = { 0, 1, 2, 3, 4, 2, 3, 1, 4, -1 };
static int *worker_idptr = worker_order;
void *worker(void *dataptr)
{
const int id = POINTER_TO_ID(dataptr);
pthread_mutex_lock(&worker_lock);
while (*worker_idptr >= 0) {
if (*worker_idptr == id) {
/* Do the work! */
printf("Worker %d running.\n", id);
fflush(stdout);
/* Choose next worker */
worker_idptr++;
pthread_cond_broadcast(&worker_wait);
} else {
/* Not our turn; wait for someone else to broadcast on the condition. */
pthread_cond_wait(&worker_wait, &worker_lock);
}
}
/* All done; the current order entry became negative.
We still hold the mutex; release it. */
pthread_mutex_unlock(&worker_lock);
return NULL;
}
See how little changed?
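For completeness, here is one way the launching side could look for either of those two variants -- just a sketch, assuming the includes and the ID_TO_POINTER macro from earlier, THREADS defined to the thread count, and (for the first variant) worker_id initialized to THREADS - 1:
#include <stdio.h>
#include <string.h>

#define THREADS 5   /* example thread count for this sketch */

int main(void)
{
    pthread_t tids[THREADS];
    int i, result;

    for (i = 0; i < THREADS; i++) {
        result = pthread_create(&tids[i], NULL, worker, ID_TO_POINTER(i));
        if (result) {
            fprintf(stderr, "Cannot create worker %d: %s.\n", i, strerror(result));
            exit(EXIT_FAILURE);
        }
    }

    /* The workers order themselves; main only needs to reap them. */
    for (i = 0; i < THREADS; i++)
        pthread_join(tids[i], NULL);

    return EXIT_SUCCESS;
}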
Let's consider a third case: a separate thread, say the main thread, decides which thread will run next. In this case, we need two condition variables: one for the workers to wait on, and the other for the main thread to wait on.
static pthread_mutex_t worker_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t worker_wait = PTHREAD_COND_INITIALIZER;
static pthread_cond_t worker_done = PTHREAD_COND_INITIALIZER;
static int worker_id = 0;
void *worker(void *dataptr)
{
const int id = POINTER_TO_ID(dataptr);
pthread_mutex_lock(&worker_lock);
while (worker_id >= 0) {
if (worker_id == id) {
/* Do the work! */
printf("Worker %d running.\n", id);
fflush(stdout);
/* Notify we are done. Since there is only
one thread waiting on the _done condition,
we can use _signal instead of _broadcast. */
pthread_cond_signal(&worker_done);
}
/* Wait for a change in the worker_id. */
pthread_cond_wait(&worker_wait, &worker_lock);
}
/* All done; worker_id became negative.
We still hold the mutex; release it. */
pthread_mutex_unlock(&worker_lock);
return NULL;
}
The thread that decides which worker should run first should hold the worker_lock mutex while the worker threads are created, then wait on the worker_done condition variable. When the first worker completes its task, it will signal on the worker_done condition variable and wait on the worker_wait condition variable. The decider thread should then change worker_id to the next ID that should run, and broadcast on the worker_wait condition variable. This continues until the decider thread sets worker_id to a negative value. For example:
int threads; /* number of threads to create */
pthread_t *ptids; /* already allocated for that many */
pthread_attr_t attrs;
int i, result;
/* Simple POSIX threads will work with 65536 bytes of stack
on all architectures -- actually, even half that. */
pthread_attr_init(&attrs);
pthread_attr_setstacksize(&attrs, 65536);
/* Hold the worker_lock. */
pthread_mutex_lock(&worker_lock);
/* Create 'threads' threads. */
for (i = 0; i < threads; i++) {
result = pthread_create(&(ptids[i]), &attrs, worker, ID_TO_POINTER(i));
if (result) {
fprintf(stderr, "Cannot create worker threads: %s.\n", strerror(result));
exit(EXIT_FAILURE);
}
}
/* Thread attributes are no longer needed. */
pthread_attr_destroy(&attrs);
while (1) {
/*
TODO: Set worker_id to a new value, or
break when done.
*/
/* Wake that worker */
pthread_cond_broadcast(&worker_wait);
/* Wait for that worker to complete */
pthread_cond_wait(&worker_done, &worker_lock);
}
/* Tell workers to exit */
worker_id = -1;
pthread_cond_broadcast(&worker_wait);
/* and reap the workers */
for (i = 0; i < threads; i++)
pthread_join(ptids[i], NULL);
There is a very important detail in all of the above examples that may be hard to grasp without a lot of practice: the way mutexes and condition variables interact (when paired via pthread_cond_wait()).
When a thread calls pthread_cond_wait(), it atomically releases the specified mutex and starts waiting for signals/broadcasts on the condition variable. "Atomic" means there is no window between the two; nothing can happen in between. The call returns once a signal or broadcast has been received -- the difference is that a signal wakes only one waiter (which one is unspecified), whereas a broadcast reaches every thread waiting on the condition variable -- and the thread has re-acquired the mutex. You can think of the signal/broadcast as first waking up the thread, with pthread_cond_wait() only returning once it has re-acquired the mutex.
This behaviour is implicitly used in all of the examples above. In particular, you'll notice that the pthread_cond_signal()/pthread_cond_broadcast() is always done while holding the worker_lock mutex; this ensures that the other thread or threads wake up and get to act only after the worker_lock mutex is unlocked -- either explicitly, or by the holding thread waiting on a condition variable.
I thought I might draw a directed graph (using Graphviz) about the order of events and actions, but this "answer" is already too long. I do suggest you do it yourself -- perhaps on paper? -- as that kind of visualization has been very useful for myself when I was learning about all this stuff.
I must admit I feel quite uncomfortable about the above scheme. At any one time only one thread is running, and that is basically wrong: any job whose tasks must be done in a specific order really only needs one thread.
However, I showed the above examples in order for you (not just OP, but any C programmer interested in POSIX threads) to get more comfortable about how to use mutexes and condition variables.
I have created two arrays of threads using POSIX threads. There are two thread functions, student and teacher (not shown here). My sample program is given below. I want to set a time limit (say 10 seconds) after which the main thread will automatically exit, whether or not the corresponding threads have completed. How will I do that?
Sample code fragment:
int main(void)
{
pthread_t thread1[25];
pthread_t thread2[6];
int i;
int id1[25]; //for students
int id2[6]; //for teachers
for(i=0;i<25;i++)
{
id1[i]=i;
pthread_create(&thread1[i],NULL,student,(void*)&id1[i] );
if(i<6)
{
id2[i]=i;
pthread_create(&thread2[i],NULL,teacher,(void*)&id2[i]);
}
}
for (i=0;i<25;i++)
{
pthread_join(thread1[i],NULL);
if(i<6)
{
pthread_join(thread2[i],NULL);
}
}
return 0;
}
What additional things will I have to add to the above code to terminate the main thread after a certain time? (say: 10 seconds)
What you need is a pthread timed join. See the snippet below:
struct timespec
{
time_t tv_sec; /* sec */
long tv_nsec; /* nsec */
};
struct timespec ts;
if (clock_gettime(CLOCK_REALTIME, &ts) == -1)
{
printf("ERROR\n");
}
ts.tv_sec += 10; //10 seconds
int st = pthread_timedjoin_np(thread, NULL, &ts); //only wait for 10 seconds (GNU extension; needs _GNU_SOURCE)
if (st != 0)
{
printf("ERROR\n");
}
For additional info, refer to the man page: http://man7.org/linux/man-pages/man3/pthread_tryjoin_np.3.html
If you just want the whole process to terminate after 10 seconds of waiting, you only have to replace the whole for-loop containing your pthread_join calls with a suitable sleep function. You could use nanosleep, clock_nanosleep, thrd_sleep or just
sleep(10);
After that your main function would go out of scope and terminate the process.
Beware: all these functions are sensitive to signals that arrive in the middle of the wait.
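A small sketch of what the end of main() could look like with that in mind (sleep() needs <unistd.h> and returns the unslept remainder when interrupted):
unsigned int left = 10;        /* total wait, in seconds */

while (left > 0)
    left = sleep(left);        /* restart with whatever time was left */

return 0;                      /* returning from main() terminates the whole process,
                                  including any remaining student/teacher threads */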
One way to do this is to create another thread which will sleep for 10 seconds, then call exit() (which will terminate the entire process):
void *watchdog(void *arg)
{
sigset_t all_sigs;
/* Block all signals in this thread, so that we do not have to
* worry about the sleep() being interrupted. */
sigfillset(&all_sigs);
pthread_sigmask(SIG_BLOCK, &all_sigs, NULL); /* per-thread mask; sigprocmask is unspecified in threads */
sleep(10);
exit(0);
return NULL; /* not reached */
}
Create this thread from the main thread after creating the other threads, and detach it:
pthread_create(&watchdog_thread, NULL, watchdog, NULL);
pthread_detach(watchdog_thread);
Now your process will end either when the main thread finishes after joining the other threads, or when the watchdog thread calls exit(), whichever happens first.
I am trying to write a code that does not block main() when pthread_join() is called:
i.e. basically trying to implement my previous question mentioned below:
https://stackoverflow.com/questions/24509500/pthread-join-and-main-blocking-multithreading
And the corresponding explanation at:
pthreads - Join on group of threads, wait for one to exit
As per suggested answer:
You'd need to create your own version of it - e.g. an array of flags (one flag per thread) protected by a mutex and a condition variable; where just before "pthread_exit()" each thread acquires the mutex, sets its flag, then does "pthread_cond_signal()". The main thread waits for the signal, then checks the array of flags to determine which thread/s to join (there may be more than one thread to join by then).
I have tried as below:
My status array which keeps a track of which threads have finished:
typedef struct {
int Finish_Status[THREAD_NUM];
int signalled;
pthread_mutex_t mutex;
pthread_cond_t FINISHED;
}THREAD_FINISH_STATE;
The thread routine, it sets the corresponding array element when the thread finishes and also signals the condition variable:
void* THREAD_ROUTINE(void* arg)
{
THREAD_ARGUMENT* temp=(THREAD_ARGUMENT*) arg;
printf("Thread created with id %d\n",temp->id);
waitFor(5);
pthread_mutex_lock(&(ThreadFinishStatus.mutex));
ThreadFinishStatus.Finish_Status[temp->id]=TRUE;
ThreadFinishStatus.signalled=TRUE;
if(ThreadFinishStatus.signalled==TRUE)
{
pthread_cond_signal(&(ThreadFinishStatus.FINISHED));
printf("Signal that thread %d finished\n",temp->id);
}
pthread_mutex_unlock(&(ThreadFinishStatus.mutex));
pthread_exit((void*)(temp->id));
}
I am not able to write the corresponding pthread_join() and pthread_cond_wait() parts. There are a few things which I have not been able to implement.
1) How do I write the corresponding pthread_cond_wait() part in my main()?
2) I am trying to write it as:
pthread_mutex_lock(&(ThreadFinishStatus.mutex));
while(ThreadFinishStatus.signalled != TRUE){
pthread_cond_wait(&(ThreadFinishStatus.FINISHED), &(ThreadFinishStatus.mutex));
printf("Main Thread signalled\n");
ThreadFinishStatus.signalled==FALSE; //Reset signalled
//check which thread to join
}
pthread_mutex_unlock(&(ThreadFinishStatus.mutex));
But it does not enter the while loop.
3) How do I use pthread_join() so that I can get the return value stored in my arg[i].returnStatus?
i.e. where do I put the statement below in my main:
`pthread_join(T[i],&(arg[i].returnStatus));`
COMPLETE CODE
#include <stdio.h>
#include <pthread.h>
#include <time.h>
#define THREAD_NUM 5
#define FALSE 0
#define TRUE 1
void waitFor (unsigned int secs) {
time_t retTime;
retTime = time(0) + secs; // Get finishing time.
while (time(0) < retTime); // Loop until it arrives.
}
typedef struct {
int Finish_Status[THREAD_NUM];
int signalled;
pthread_mutex_t mutex;
pthread_cond_t FINISHED;
}THREAD_FINISH_STATE;
typedef struct {
int id;
void* returnStatus;
}THREAD_ARGUMENT;
THREAD_FINISH_STATE ThreadFinishStatus;
void initializeState(THREAD_FINISH_STATE* state)
{
int i=0;
state->signalled=FALSE;
for(i=0;i<THREAD_NUM;i++)
{
state->Finish_Status[i]=FALSE;
}
pthread_mutex_init(&(state->mutex),NULL);
pthread_cond_init(&(state->FINISHED),NULL);
}
void destroyState(THREAD_FINISH_STATE* state)
{
int i=0;
for(i=0;i<THREAD_NUM;i++)
{
state->Finish_Status[i]=FALSE;
}
pthread_mutex_destroy(&(state->mutex));
pthread_cond_destroy(&(state->FINISHED));
}
void* THREAD_ROUTINE(void* arg)
{
THREAD_ARGUMENT* temp=(THREAD_ARGUMENT*) arg;
printf("Thread created with id %d\n",temp->id);
waitFor(5);
pthread_mutex_lock(&(ThreadFinishStatus.mutex));
ThreadFinishStatus.Finish_Status[temp->id]=TRUE;
ThreadFinishStatus.signalled=TRUE;
if(ThreadFinishStatus.signalled==TRUE)
{
pthread_cond_signal(&(ThreadFinishStatus.FINISHED));
printf("Signal that thread %d finished\n",temp->id);
}
pthread_mutex_unlock(&(ThreadFinishStatus.mutex));
pthread_exit((void*)(temp->id));
}
int main()
{
THREAD_ARGUMENT arg[THREAD_NUM];
pthread_t T[THREAD_NUM];
int i=0;
initializeState(&ThreadFinishStatus);
for(i=0;i<THREAD_NUM;i++)
{
arg[i].id=i;
}
for(i=0;i<THREAD_NUM;i++)
{
pthread_create(&T[i],NULL,THREAD_ROUTINE,(void*)&arg[i]);
}
/*
Join only if signal received
*/
pthread_mutex_lock(&(ThreadFinishStatus.mutex));
//Wait
while(ThreadFinishStatus.signalled != TRUE){
pthread_cond_wait(&(ThreadFinishStatus.FINISHED), &(ThreadFinishStatus.mutex));
printf("Main Thread signalled\n");
ThreadFinishStatus.signalled==FALSE; //Reset signalled
//check which thread to join
}
pthread_mutex_unlock(&(ThreadFinishStatus.mutex));
destroyState(&ThreadFinishStatus);
return 0;
}
Here is an example of a program that uses a counting semaphore to watch as threads finish, find out which thread it was, and review some result data from that thread. This program is efficient with locks - waiters are not spuriously woken up (notice how the threads only post to the semaphore after they've released the mutex protecting shared state).
This design allows the main program to process the result from some thread's computation immediately after that thread completes, and does not require the main thread to wait for all threads to complete. This would be especially helpful if the running time of each thread varied by a significant amount.
Most importantly, this program does not deadlock nor race.
#include <pthread.h>
#include <semaphore.h>
#include <stdlib.h>
#include <stdio.h>
#include <queue>
void* ThreadEntry(void* args );
typedef struct {
int threadId;
pthread_t thread;
int threadResult;
} ThreadState;
sem_t completionSema;
pthread_mutex_t resultMutex;
std::queue<int> threadCompletions;
ThreadState* threadInfos;
int main() {
int numThreads = 10;
int* threadResults;
void* threadResult;
int doneThreadId;
sem_init( &completionSema, 0, 0 );
pthread_mutex_init( &resultMutex, 0 );
threadInfos = new ThreadState[numThreads];
for ( int i = 0; i < numThreads; i++ ) {
threadInfos[i].threadId = i;
pthread_create( &threadInfos[i].thread, NULL, &ThreadEntry, &threadInfos[i].threadId );
}
for ( int i = 0; i < numThreads; i++ ) {
// Wait for any one thread to complete; ie, wait for someone
// to queue to the threadCompletions queue.
sem_wait( &completionSema );
// Find out what was queued; queue is accessed from multiple threads,
// so protect with a vanilla mutex.
pthread_mutex_lock(&resultMutex);
doneThreadId = threadCompletions.front();
threadCompletions.pop();
pthread_mutex_unlock(&resultMutex);
// Announce which thread ID we saw finish
printf(
"Main saw TID %d finish\n\tThe thread's result was %d\n",
doneThreadId,
threadInfos[doneThreadId].threadResult
);
// pthread_join to clean up the thread.
pthread_join( threadInfos[doneThreadId].thread, &threadResult );
}
delete[] threadInfos;
pthread_mutex_destroy( &resultMutex );
sem_destroy( &completionSema );
}
void* ThreadEntry(void* args ) {
int threadId = *((int*)args);
printf("hello from thread %d\n", threadId );
// This can safely be accessed since each thread has its own space
// and array derefs are thread safe.
threadInfos[threadId].threadResult = rand() % 1000;
pthread_mutex_lock( &resultMutex );
threadCompletions.push( threadId );
pthread_mutex_unlock( &resultMutex );
sem_post( &completionSema );
return 0;
}
Pthread condition variables don't have "memory": pthread_cond_wait will not return because of a pthread_cond_signal that happened before the wait started, which is why it's important to check the predicate before calling pthread_cond_wait, and not call it if the predicate is already true. But that means the action, in this case "check which thread to join", should depend only on the predicate, not on whether pthread_cond_wait was called.
Also, you might want to make the while loop actually wait for all the threads to terminate, which you aren't doing now.
(Also, I think the other answer's claim that "signalled==FALSE" is harmless is wrong; it's not harmless, because there's a pthread_cond_wait, and when that returns, signalled would have changed to true.)
So if I wanted to write a program that waited for all threads to terminate this way, it would look more like
pthread_mutex_lock(&(ThreadFinishStatus.mutex));
// AllThreadsFinished would check that all of Finish_Status[] is true
// or something, or simpler, count the number of joins completed
while (!AllThreadsFinished()) {
// Wait, keeping in mind that the condition might already have been
// signalled, in which case it's too late to call pthread_cond_wait,
// but also keeping in mind that pthread_cond_wait can return spuriously,
// thus using a while loop
while (!ThreadFinishStatus.signalled) {
pthread_cond_wait(&(ThreadFinishStatus.FINISHED), &(ThreadFinishStatus.mutex));
}
printf("Main Thread signalled\n");
ThreadFinishStatus.signalled=FALSE; //Reset signalled
//check which thread to join
}
pthread_mutex_unlock(&(ThreadFinishStatus.mutex));
Your code is racy.
Suppose you start a thread and it finishes before you grab the mutex in main(). Your while loop will never run because signalled was already set to TRUE by the exiting thread.
I will echo #antiduh's suggestion to use a semaphore that counts the number of dead-but-not-joined threads. You then loop up to the number of threads spawned waiting on the semaphore. I'd point out that the POSIX sem_t is not like a pthread_mutex in that sem_wait can return EINTR.
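In practice that just means retrying the wait when it is interrupted; a small sketch of such a wrapper:
#include <errno.h>
#include <semaphore.h>

/* Wait on a semaphore, retrying if a signal interrupts the wait. */
static int sem_wait_retry(sem_t *sem)
{
    int rc;
    do
        rc = sem_wait(sem);
    while (rc == -1 && errno == EINTR);
    return rc;   /* 0 on success, -1 (with errno set) on a real error */
}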
Your code appears fine. You have one minor buglet:
ThreadFinishStatus.signalled==FALSE; //Reset signalled
This does nothing. It tests whether signalled is FALSE and throws away the result. That's harmless though since there's nothing you need to do. (You never want to set signalled to FALSE because that loses the fact that it was signalled. There is never any reason to unsignal it -- if a thread finished, then it's finished forever.)
Not entering the while loop means signalled is TRUE. That means the thread already set it, in which case there is no need to enter the loop because there's nothing to wait for. So that's fine.
Also:
ThreadFinishStatus.signalled=TRUE;
if(ThreadFinishStatus.signalled==TRUE)
There's no need to test the thing you just set. It's not like the set can fail.
FWIW, I would suggest re-architecting. If the existing functions like pthread_join don't do exactly what you want, just don't use them. If you're going to have structures that track what work is done, then separate that completely from thread termination. Since you will already know what work is done, what difference does it make when and how threads terminate? Don't think of this as "I need a special way to know when a thread terminates"; instead think "I need to know what work is done so I can do other things".
I have a C program in which I use pthread.
I would like newly created threads to run as soon as they are created.
The reason behind this is that my threads have initialisation code to set up signal handlers, and I must be sure the handlers are ready before my main thread sends any signals.
I've tried doing pthread_yield just after my pthread_create, but without success.
I doubt it makes a difference, but I am running Linux 3.6 on x86_64.
Thanks
If your goal is to have the main thread wait for all threads to reach the same point before continuing onward, I would suggest using pthread_barrier_wait:
#include <stdio.h>
#include <pthread.h>

#define TCOUNT 4   /* number of child threads; pick whatever count you need */

void *worker(void *);
int main(int argc, char **argv)
{
pthread_barrier_t b;
pthread_t children[TCOUNT];
int child;
/* +1 for our main thread */
pthread_barrier_init(&b, NULL, TCOUNT+1);
for (child = 0; child < TCOUNT; ++child)
{
pthread_create(&children[child], NULL, worker, &b);
}
printf("main: children created\n");
/* everybody who calls barrier_wait will wait
* until TCOUNT+1 have called it
*/
pthread_barrier_wait(&b);
printf("main: children finished\n");
/* wait for children to finish */
for (child = 0; child < TCOUNT; ++child)
{
pthread_join(children[child], NULL);
}
/* clean-up */
pthread_barrier_destroy(&b);
return 0;
}
void *worker(void *_b)
{
pthread_barrier_t *b = (pthread_barrier_t*)_b;
printf("child: before\n");
pthread_barrier_wait(b);
printf("child: after\n");
return NULL;
}
Or you might use a barrier, i.e. call pthread_barrier_wait (early in the routine of each thread, or at initialization in the main thread), to ensure that every relevant thread has reached the barrier (after which some of your threads could do your naughty signal tricks). See this question.
I'm having a little trouble with pthreads. Basically, I want to catch a SIGINT and have all threads cleanup and exit. What I have (skeleton code):
main.c:
sig_atomic_t running;
void handler(int signal_number)
{
running = 0;
}
int main(void)
{
queue job_queue = new_job_queue();
running = 1;
struct sigaction sa;
memset(&sa, 0, sizeof(sa));
sa.sa_handler = &handler;
sigaction(SIGINT, &sa, NULL);
/* create a bunch of threads */
init_threads(&job_queue);
while(running) {
/* do stuff */
}
cleanup();
return (0);
}
threads.c
extern sig_atomic_t running;
pthread_mutex_t queue_mutex = PTHREAD_MUTEX_INITIALIZER;
sem_t queue_count;
void init_threads(queue *q)
{
int numthreads = 12; /* say */
sem_init (&queue_count, 0, 0);
pthread_t worker_threads[numthreads];
int i;
for(i=0;i<numthreads;i++)
pthread_create(&worker_threads[i], NULL, &thread_function, q);
}
void * thread_function(void *args)
{
pthread_detach(pthread_self());
queue *q = (queue *)args;
while(running) {
job *j = NULL;
sem_wait(&queue_count);
pthread_mutex_lock(&queue_mutex);
j = first_job_in_queue(q);
pthread_mutex_unlock(&queue_mutex);
if(j) {
/*do something*/
}
}
return (NULL);
}
I am having little luck with this. Since you're not guaranteed which thread gets the signal, I thought this was a good way to go. But I am having a problem where sem_wait() in threads.c hangs, which is expected but not desired. The while(running) loop in threads.c seems redundant. Should I maybe do a pthread_kill() on all the threads from main? Any obvious problems with the above skeleton code? Is there a better/easier way to go about doing this?
Thanks.
What you can do is call sem_post() from the handler until all threads are unblocked. In the thread function, immediately after sem_wait(), you should check the value of the running variable, and if it's zero, break out of the while loop.
The code in the handler could be something like the following:
int sval;
sem_getvalue(&queue_count, &sval);
while (sval < 0) {
sem_post(&queue_count);
sem_getvalue(&queue_count, &sval);
}
Of course, return values should be checked for errors.
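Note that POSIX allows sem_getvalue() to report either zero or a negative number while threads are blocked on the semaphore (on Linux it reports zero), so a more portable variant -- just a sketch, assuming the worker count is visible to the handler as a global -- is to post once per worker:
/* NUM_WORKER_THREADS is a placeholder for however the thread count is
 * exposed to the handler (e.g. a global set in init_threads()).
 * sem_post() is async-signal-safe, so it may be called from a handler. */
void handler(int signal_number)
{
    int i;
    running = 0;
    for (i = 0; i < NUM_WORKER_THREADS; i++)
        sem_post(&queue_count);   /* wake every worker blocked in sem_wait() */
}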
You can catch SIGINT in one thread and use pthread_sigmask() to block SIGINT in all the other threads. If a SIGINT is generated in any way, the signal will be delivered to that designated thread, and that thread can then call pthread_cancel() to cancel all the other threads.
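A sketch of that layout (the helper names here are made up; error checking omitted). SIGINT is blocked before the workers are created so they inherit the mask, and one dedicated thread waits for it -- here with sigwait() instead of an asynchronous handler:
#include <pthread.h>
#include <signal.h>

extern sig_atomic_t running;        /* from main.c */

static sigset_t sigint_set;

/* Dedicated thread: waits for SIGINT, then asks the workers to stop. */
static void *sigint_waiter(void *arg)
{
    int sig;
    (void)arg;
    sigwait(&sigint_set, &sig);
    running = 0;
    /* also wake workers blocked in sem_wait(), e.g. one sem_post() each */
    return NULL;
}

/* Call this in main() before init_threads(); threads created afterwards
 * inherit the blocked mask, so only sigint_waiter ever sees SIGINT. */
static void install_sigint_thread(void)
{
    pthread_t tid;
    sigemptyset(&sigint_set);
    sigaddset(&sigint_set, SIGINT);
    pthread_sigmask(SIG_BLOCK, &sigint_set, NULL);
    pthread_create(&tid, NULL, sigint_waiter, NULL);
}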
You may want to consider calling pthread_join after each call to pthread_create. This will allow for your main thread to wait until all threads are done executing.
But maybe I'm misunderstanding slightly... Do you want to wait for all threads to finish, or simply wait for one to finish, and then stop all others immediately?
You shouldn't do a pthread_kill() if you don't have to. I'm not too familiar with pthread_detach(), but if you want your main() function to wait for the threads to finish, it would probably be better if your cleanup() function did a pthread_join() on each thread id returned from pthread_create(), to wait for each thread to exit normally.
Also, as far as I can tell, sem_wait() is hanging because your semaphore value is initialized to 0. If you want, say, at most 5 threads to access the shared resource at a time, initialize the semaphore to 5, i.e. sem_init(&queue_count, 0, 5).