Close all threads except the main thread - C

Is there a way to close all created threads if I don't have a list of their identifiers?
It is assumed that I only need the main thread, and the rest can be closed.

It's usually a good idea to have threads in charge of their own lifetime, periodically checking for some event indicating they should shut down. This usually makes the architecture of your code much easier to understand.
What I'm talking about is along the lines of (pseudo-code):
def main():
    # Start up all threads.
    synchronised runFlag = true
    for count = 1 to 10:
        start thread threadFn, receiving id[count]
    sleep for a bit

    # Tell them all to exit, then wait.
    synchronised runFlag = false
    for count = 1 to 10:
        wait for thread id[count] to exit
    exit program

def threadFn():
    initialise
    # Thread will do its stuff until told to stop.
    while synchronised runFlag:
        do something relatively quick
    exit thread
The periodic checking is a balance between efficiency of the thread loop and the amount of time you may have to wait for the thread to exit.
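For concreteness, here's a minimal C sketch of that pseudo-code; it uses a C11 atomic flag in place of the "synchronised" variable, and the names (run_flag, thread_fn) are just illustrative:

#include <pthread.h>
#include <stdatomic.h>
#include <stdbool.h>
#include <unistd.h>

#define NUM_THREADS 10

static atomic_bool run_flag = true;        /* the "synchronised runFlag" */

static void *thread_fn(void *arg)
{
    (void)arg;                             /* any per-thread initialisation goes here */
    while (atomic_load(&run_flag)) {
        /* do something relatively quick */
        usleep(10 * 1000);
    }
    return NULL;                           /* exit thread */
}

int main(void)
{
    pthread_t id[NUM_THREADS];

    /* Start up all threads. */
    for (int i = 0; i < NUM_THREADS; i++)
        pthread_create(&id[i], NULL, thread_fn, NULL);

    sleep(1);                              /* "sleep for a bit" */

    /* Tell them all to exit, then wait. */
    atomic_store(&run_flag, false);
    for (int i = 0; i < NUM_THREADS; i++)
        pthread_join(id[i], NULL);

    return 0;
}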
And, yes, I'm aware that the pseudo-code uses identifiers (which you specifically stated you don't have), but that's just one example of how to effect shutdown. You could equally, for example:
maintain a (synchronised) thread count incremented as a thread starts and decremented when it stops, then wait for it to reach zero;
have threads continue to run while a synchronised counter hasn't changed from the value it was when the thread started (you could just increment the counter in main then freely create a new batch of threads, knowing that the old ones would eventually disappear since the counter is different).
do one of a half dozen other things, depending on your needs :-)
This "lifetime handled by thread" approach is often the simplest way to achieve things since the thread is fully in control of when things happen to it. The one thing you don't want is a thread being violently killed from outside while it holds a resource lock of some sort.
Some threading implementations have ways to handle that with, for example, cancellation points, so you can cancel a thread from outside and it will die at such time as it allows itself to. But, in my experience, that just complicates things.
In any case, pthread_cancel requires a thread ID so is unsuitable based on your requirements.

Is there a way to close all created threads if I don't have a list of their identifiers?
No, with POSIX threads there is not.
It is assumed that I only need the main thread, and the rest can be closed.
What you could do is have main() call fork() and let the calling main() (the parent) return, which will end the parent process along with all its threads.
The fork()ed off child process would live on as a copy of the original parent process' main() but without any other threads.
If you go this route, be aware that the threads of the process going down might very well run into undefined behaviour, so strange things can happen, including messy left-overs.
All in all a bad approach.

Is there a way to close all created threads if I don't have a list of their identifiers? It is assumed that I only need the main thread, and the rest can be closed.
Technically, you can fork your process and terminate the parent. Only the thread calling fork exists in the new child process. However, the mutexes locked by other threads remain locked and this is why forking a multi-threaded process without immediately calling exec may be unwise.
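A rough sketch of that fork-and-drop-the-parent trick (illustrative only; the function name is made up, and the caveats above about locked mutexes and other shared state still apply):

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

/* Call from the main thread of the multi-threaded process.
 * The parent exits, taking every other thread with it; only
 * the calling thread survives, in the single-threaded child. */
static void keep_only_this_thread(void)
{
    pid_t pid = fork();
    if (pid < 0) {
        perror("fork");
        exit(EXIT_FAILURE);
    }
    if (pid > 0)
        _exit(EXIT_SUCCESS);   /* parent process ends here */
    /* child continues as a copy of the caller, with no other threads */
}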

Related

Is there a difference between spawning multiple threads that run to completion vs having a single thread wait for work?

I'm writing a piece of software that does a single very long task. To allow interruption, we have added a check-pointing function that periodically (on the order of minutes) dumps an image of the program state to disk. This takes some time, however, so I would like to switch to a model where the checkpoints are written on a separate thread rather than blocking the primary worker. (Yes, I know I need to keep it thread-safe.)
As I see it, there are two primary methods of accomplishing this task:
For each checkpoint, I pthread_create() a thread which will execute the checkpointing function once and then terminate.
For each checkpoint, I pthread_cond_signal() a single waiting thread that executes the checkpointing function and then returns to waiting.
Both methods require making an atomic copy of my working state and passing it to the checkpoint thread, as well as ensuring that the checkpoint completes successfully before I try another.
My question is if there is a compelling reason to use one method over the other.
I would argue that pthreads are a bad fit for your requirements: regardless of whether you spawn a new thread for each backup or use a thread pool, you need to make a deep copy of your working set, which is expensive. Also, you may need extensive synchronization if you go with the thread pool.
Instead, there's a much easier way to do it: fork(). The child process inherits the entire memory space of the parent, but on modern OSs the copy is lazy (copy-on-write). Also, you don't need to worry about cleaning up the thread you started, because the fork()ed child releases its resources when it terminates. If your original program is already multithreaded, you may wish to make sure to only use async-signal-safe functions in the child, but thankfully write() is async-signal-safe (as are open() and unlink()).
To avoid your child turning into a zombie, you need to call waitid(P_ALL, 0, &info, WEXITED | WNOHANG) in a loop until it returns nonzero or the siginfo_t indicates that no child has exited yet. This avoids stalling the parent in case the child is not done with the backup before the next backup point is reached.
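A sketch of that fork()-based checkpointing, under the assumption that write_checkpoint() stands in for whatever actually dumps your state to disk:

#include <string.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

extern void write_checkpoint(void);   /* placeholder: dumps program state with write() etc. */

/* Reap any previously forked checkpoint children without blocking. */
static void reap_finished_checkpoints(void)
{
    for (;;) {
        siginfo_t info;
        memset(&info, 0, sizeof info);
        if (waitid(P_ALL, 0, &info, WEXITED | WNOHANG) != 0)
            break;              /* error, or no children at all */
        if (info.si_pid == 0)
            break;              /* children exist but none has exited yet */
        /* a checkpoint child (pid info.si_pid) has just been reaped */
    }
}

static void checkpoint(void)
{
    reap_finished_checkpoints();
    pid_t pid = fork();
    if (pid == 0) {             /* child: copy-on-write snapshot of the parent */
        write_checkpoint();     /* only async-signal-safe calls are strictly safe here */
        _exit(0);
    }
    /* parent (pid > 0) carries on immediately; pid < 0 means the fork failed */
}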
Don't go with continually creating/terminating/destroying/joining threads if you can possibly avoid it. It's expensive in terms of latency and cycles, has the risk of unwanted multiple threads doing overlapping work and is difficult to debug.
Just create one thread once, at app startup, and don't terminate it. Loop it round some synchro object and signal it when you need to, or run a timer or sleep loop to perform your image dumps.
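A sketch of that one-persistent-thread arrangement, assuming a mutex/condvar pair and a "dump requested" flag (every name here is illustrative):

#include <pthread.h>
#include <stdbool.h>

static pthread_mutex_t dump_mtx  = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  dump_cond = PTHREAD_COND_INITIALIZER;
static bool dump_requested = false;
static bool shutting_down  = false;

extern void write_image_dump(void);   /* placeholder for the actual checkpoint code */

static void *dump_thread(void *arg)
{
    (void)arg;
    pthread_mutex_lock(&dump_mtx);
    for (;;) {
        while (!dump_requested && !shutting_down)
            pthread_cond_wait(&dump_cond, &dump_mtx);
        if (shutting_down)
            break;
        dump_requested = false;
        pthread_mutex_unlock(&dump_mtx);
        write_image_dump();            /* do the slow work without holding the lock */
        pthread_mutex_lock(&dump_mtx);
    }
    pthread_mutex_unlock(&dump_mtx);
    return NULL;
}

/* Called by the worker whenever a checkpoint is due. */
static void request_dump(void)
{
    pthread_mutex_lock(&dump_mtx);
    dump_requested = true;
    pthread_cond_signal(&dump_cond);
    pthread_mutex_unlock(&dump_mtx);
}

Shutdown is the same pattern: set shutting_down under the mutex, signal, then pthread_join the dump thread.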

Managing a variable number of worker threads with graceful exit

I have a boss thread that spawns up to M worker threads. Over the lifetime of the program, workers may be added and removed. When the program-wide shutdown flag is signalled, I want to await the completion of these workers.
Currently, any of the threads can add/remove threads, but it's not strictly a requirement as long as any thread can initiate a spawn/removal.
What's stopping me from using a counting semaphore or pthread_barrier_wait() is that they expect a fixed number of threads.
I can't loop pthread_join() over all workers either because I'd risk leaking zombie threads that have exited and possibly since then been replaced.
The boss thread itself has no other purpose than spawning the threads initially and making sure that the process exits gracefully.
I've spent days on and off on this problem and cannot come up with something robust and simple; are there any fairly well-established ways to accomplish this with POSIX threads?
1) "Currently, any of the threads can add/remove threads"
and
2) "are there any fairly well-established ways to accomplish this with POSIX threads"
Yes. Don't do (1). Have the boss thread do it.
Or, you can protect the code which spawns threads with a critical section or mutex (I assume you are already doing this). They should check a flag to see if shutdown is in progress, and if it is, don't spawn any more threads.
You can also have a counter of "ideal number of threads" and "actual number of threads" and have threads suicide if they find "actual > ideal". (I.e. they should decrement actual, exit the critical section/mutex, then quit).
When you need to initiate shutdown, use the SAME mutex/section to set the flag. Once done, you know the number of threads cannot increase, so you can use the most recent value.
Indeed, to exit you can just have the boss thread set "ideal" to zero, exit the mutex, and repeatedly sleep 10ms and repeat until all threads have exited. Worst case is you wait an extra 10ms to quit. If that's too much cut it to 1ms.
These are just ideas. The central concept is that all thread creation/removal, and messages about thread creation/removal should be protected by a mutex to ensure that only one thread is adding/removing/querying status at a time. Once you have that in place, there is more than one way to do it...
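One possible shape for the "ideal vs. actual" counter idea (a sketch only; the names and the 10ms poll are illustrative, and the workers are assumed to be detached since nobody joins them):

#include <pthread.h>
#include <unistd.h>

static pthread_mutex_t pool_mtx = PTHREAD_MUTEX_INITIALIZER;
static int ideal_threads  = 0;   /* how many workers we want */
static int actual_threads = 0;   /* how many are currently running */

/* Worker checks this at a convenient point in its loop. */
static int worker_should_exit(void)
{
    int quit = 0;
    pthread_mutex_lock(&pool_mtx);
    if (actual_threads > ideal_threads) {
        actual_threads--;        /* this worker volunteers to go */
        quit = 1;
    }
    pthread_mutex_unlock(&pool_mtx);
    return quit;
}

/* Boss: initiate shutdown and wait for the pool to drain. */
static void shutdown_pool(void)
{
    pthread_mutex_lock(&pool_mtx);
    ideal_threads = 0;
    pthread_mutex_unlock(&pool_mtx);

    for (;;) {
        pthread_mutex_lock(&pool_mtx);
        int remaining = actual_threads;
        pthread_mutex_unlock(&pool_mtx);
        if (remaining == 0)
            break;
        usleep(10000);           /* worst case: an extra 10ms before exit */
    }
}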
Threads that want to initiate spawns/removals should ask the boss thread to actually do it for them. Then the boss thread doesn't have to worry about threads it doesn't know about, and you can use one of the simple methods you described in your question.
I'll take the opposite tack to some of the other answers since I have to do this now and again.
(1) Give every spawned thread access to a single pipe file descriptor, either through the data passed to pthread_create or globally. Only the boss thread reads the pipe. Each thread announces its creation and termination to the boss via the pipe by passing its tid, and the boss adds or removes it from its list and pthread_joins it as appropriate. The boss can block on the pipe without having to do anything special.
(2) Do more or less the above with some other mechanism. Global ctr and list with accompanying condition variable to wake up boss; a message queue, etc.
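A sketch of option (1), assuming the boss created the pipe before spawning anything; the message struct and every name here are made up for illustration:

#include <pthread.h>
#include <unistd.h>

enum { THREAD_STARTED, THREAD_EXITING };

struct thread_msg {
    int       event;                /* THREAD_STARTED or THREAD_EXITING */
    pthread_t tid;
};

static int announce_fd;             /* write end of the pipe; the boss holds the read end */

static void announce(int event)
{
    struct thread_msg msg = { event, pthread_self() };
    /* writes smaller than PIPE_BUF are atomic, so messages never interleave */
    write(announce_fd, &msg, sizeof msg);
}

static void *worker(void *arg)
{
    (void)arg;
    announce(THREAD_STARTED);
    /* ... the actual work ... */
    announce(THREAD_EXITING);
    return NULL;
}

/* Boss: block on the pipe, keep the list of live tids up to date,
 * and join each worker as soon as it announces its exit. */
static void boss_loop(int read_fd)
{
    int live = 0;
    struct thread_msg msg;
    while (read(read_fd, &msg, sizeof msg) == (ssize_t)sizeof msg) {
        if (msg.event == THREAD_STARTED) {
            live++;                 /* add msg.tid to the boss's list */
        } else {
            pthread_join(msg.tid, NULL);
            live--;                 /* remove msg.tid from the list */
        }
        if (live == 0 /* && shutdown has been requested */)
            break;
    }
}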

How to reuse threads - pthreads C

I am programming using pthreads in C.
I have a parent thread which needs to create 4 child threads with id 0, 1, 2, 3.
When the parent thread gets data, it will split the data and assign it to 4 separate context variables - one for each sub-thread.
The sub-threads have to process this data and in the mean time the parent thread should wait on these threads.
Once these sub-threads are done executing, they will set the output in their corresponding context variables and wait (for reuse).
Once the parent thread knows that all these sub-threads have completed this round, it computes the global output and prints it out.
Now it waits for new data (the sub-threads are not killed yet, they are just waiting).
If the parent thread gets more data the above process is repeated - albeit with the already created 4 threads.
If the parent thread receives a kill command (assume a specific kind of data), it indicates to all the sub-threads and they terminate themselves. Now the parent thread can terminate.
I am a Masters research student and I am encountering the need for the above scenario. I know that this can be done using pthread_cond_wait and pthread_cond_signal. I have written the code but it just runs indefinitely and I cannot figure out why.
My guess is that, the way I have coded it, I have over-complicated the scenario. It will be very helpful to know how this can be implemented. If there is a need, I can post a simplified version of my code to show what I am trying to do(even though I think that my approach is flawed!)...
Can you please give me any insights into how this scenario can be implemented using pthreads?
As far as can be seen from your description, there seems to be nothing wrong with the principle.
What you are trying to implement is a worker pool, I guess; there should be a lot of implementations out there. If the work that your threads are doing is a substantial computation (say at least a CPU second or so), such a scheme is complete overkill. Modern implementations of POSIX threads are efficient enough that they support the creation of a lot of threads, really a lot, and the overhead is not prohibitive.
The only thing that would be important if you have your workers communicate through shared variables, mutexes etc (and not via the return value of the thread) is that you start your threads detached, by using the attribute parameter to pthread_create.
Once you have such an implementation for your task, measure. Only then, if your profiler tells you that you spend a substantial amount of time in the pthread routines, start thinking of implementing (or using) a worker pool to recycle your threads.
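For reference, starting a thread detached via the attribute parameter mentioned above looks like this (a sketch; worker stands in for whatever your thread function is):

#include <pthread.h>

extern void *worker(void *arg);    /* your thread function */

static int spawn_detached(void *arg)
{
    pthread_t tid;
    pthread_attr_t attr;
    pthread_attr_init(&attr);
    pthread_attr_setdetachstate(&attr, PTHREAD_CREATE_DETACHED);
    int rc = pthread_create(&tid, &attr, worker, arg);
    pthread_attr_destroy(&attr);
    return rc;                     /* 0 on success; no pthread_join needed or allowed */
}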
One producer-consumer queue with 4 threads hanging off it. The thread that wants to queue the four tasks assembles the four context structs containing, as well as all the other data stuff, a function pointer to an 'OnComplete' func. Then it submits all four contexts to the queue, atomically incrementing a taskCount up to 4 as it does so, and waits on an event/condvar/semaphore.
The four threads get a context from the P-C queue and work away.
When done, the threads call the 'OnComplete' function pointer.
In OnComplete, the threads atomically count down taskCount. If a thread decrements it to zero, it signals the event/condvar/semaphore and the originating thread runs on, knowing that all the tasks are done.
It's not that difficult to arrange it so that the assembly of the contexts and the synchro waiting is done in a task as well, so allowing the pool to process multiple 'ForkAndWait' operations at once for multiple requesting threads.
I have to add that operations like this are a huge pile easier in an OO language. The latest Java, for example, has a 'ForkAndWait' threadpool class that should do exactly this kind of stuff, but C++ (or even C#, if you're into serfdom) is better than plain C.
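A sketch of that fork-and-wait round trimmed down to the question's 4 fixed sub-threads; the generation counter (round_no) is one way to stop a worker from re-processing the same batch, and every name here is illustrative:

#include <pthread.h>
#include <stdbool.h>

#define NWORKERS 4

static pthread_mutex_t mtx      = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  work_cv  = PTHREAD_COND_INITIALIZER;   /* parent -> workers */
static pthread_cond_t  done_cv  = PTHREAD_COND_INITIALIZER;   /* workers -> parent */
static unsigned        round_no = 0;       /* bumped by the parent for each batch */
static int             pending  = 0;       /* tasks not yet finished this round */
static bool            quitting = false;

struct ctx { int id; /* plus per-thread input and output */ };
static struct ctx contexts[NWORKERS];      /* pass &contexts[i] to pthread_create */

extern void process(struct ctx *c);        /* placeholder for the real work */

static void *pool_worker(void *arg)
{
    struct ctx *c = arg;
    unsigned seen = 0;
    pthread_mutex_lock(&mtx);
    for (;;) {
        while (round_no == seen && !quitting)
            pthread_cond_wait(&work_cv, &mtx);
        if (quitting)
            break;
        seen = round_no;
        pthread_mutex_unlock(&mtx);
        process(c);                        /* work outside the lock */
        pthread_mutex_lock(&mtx);
        if (--pending == 0)
            pthread_cond_signal(&done_cv); /* last one in wakes the parent */
    }
    pthread_mutex_unlock(&mtx);
    return NULL;
}

/* Parent: hand out a batch and wait for all four to finish. */
static void run_round(void)
{
    pthread_mutex_lock(&mtx);
    for (int i = 0; i < NWORKERS; i++)
        contexts[i].id = i;                /* plus this round's input data */
    pending = NWORKERS;
    round_no++;
    pthread_cond_broadcast(&work_cv);
    while (pending > 0)
        pthread_cond_wait(&done_cv, &mtx);
    /* combine the outputs from contexts[0..3] here */
    pthread_mutex_unlock(&mtx);
}

Shutdown is the same dance: set quitting under the mutex, broadcast work_cv, and pthread_join the four workers.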

For pthread, How to kill child thread from the main thread

I use pthread_create to create several child threads. At some point, the main thread wants to kill all child threads, or there will be a segmentation fault. Which function should I use to do that? I searched Google for an answer and got functions like pthread_kill, but I did not know which signal I should send to the child threads to kill them. My running environment is RHEL 5.4 and the programming language is C.
In general, you don't really want to violently kill a child thread, but instead you want to ask it to terminate. That way you can be sure that the child is quitting at a safe spot and all its resources are cleaned up.
I generally do this with a small piece of shared state between parent and child to allow the parent to communicate a "quit request" to each child. This can just be a boolean value for each child, protected by a mutex. The child checks this value periodically (every loop iteration, or whatever convenient checkpoints you have in your child thread). Upon seeing "quit_request" being true, the child thread cleans up and calls pthread_exit.
On the parent side, the "kill_child" routine looks something like this:
acquire shared mutex
set quit_request to true
release shared mutex
pthread_join the child
The pthread_join may take some time, depending on how frequently the child checks its quit request. Make sure your design can handle whatever the delay may be.
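Something like this, assuming a per-child flag protected by a mutex (a sketch, not anyone's actual code):

#include <pthread.h>
#include <stdbool.h>

struct child {
    pthread_t       tid;
    pthread_mutex_t lock;
    bool            quit_request;
};

static bool should_quit(struct child *c)
{
    pthread_mutex_lock(&c->lock);
    bool quit = c->quit_request;
    pthread_mutex_unlock(&c->lock);
    return quit;
}

static void *child_main(void *arg)
{
    struct child *c = arg;
    while (!should_quit(c)) {
        /* one iteration of the child's real work */
    }
    /* clean up resources, then exit */
    return NULL;
}

/* Parent side: ask the child to stop and wait for it. */
static void kill_child(struct child *c)
{
    pthread_mutex_lock(&c->lock);
    c->quit_request = true;
    pthread_mutex_unlock(&c->lock);
    pthread_join(c->tid, NULL);
}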
It is possible to "cancel" a thread using pthread_cancel. However, this isn't typically best practice, though under extreme circumstances like a SEGFAULT it may be considered a reasonable approach.
You should send SIGTERM to each of your threads, using
int pthread_kill(pthread_t thread, int sig);
A quick way to get rid of all threads (besides the main) is to fork() and keep going with the child.
Not hyper clean...
if (fork()) exit(0); // deals also with -1...
You can use a global variable for the entire program.
int _fCloseThreads;
Set it to 1 when you want the threads to quit execution. Have the threads check that variable in their "loop" and nicely quit when it is set to 1. No need to protect it with a mutex.
You need to wait for the threads to quit. You can use join. Another way is to increment a counter when a thread enters its thread proc and then decrement the counter when it exits. The counter would need to be a global of sorts. Use gcc atomic ops on the counter. The main thread, after setting _fCloseThreads, can wait for the counter to go to zero by looping, sleeping, and checking the count.
Finally, you might check out pthread_cleanup_push and pop. They are a model for allowing a thread to be cancelled anywhere in its code (it uses a longjmp) and then call a final cleanup function before exiting the thread proc. You basically put cleanup_push at the top of your thread proc and cleanup_pop at the bottom, create an unwind function, and then at certain cancellation points a thread cancelled by a call to pthread_cancel() will longjmp back to the thread proc and call the unwind function.
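A minimal sketch of that cleanup-handler pattern; the unwind function here just frees what the thread was holding:

#include <pthread.h>
#include <stdlib.h>
#include <unistd.h>

static void unwind(void *arg)
{
    free(arg);                      /* release whatever this thread was holding */
}

static void *thread_proc(void *ignored)
{
    (void)ignored;
    char *buf = malloc(4096);
    pthread_cleanup_push(unwind, buf);
    for (;;) {
        /* do some work ... */
        pthread_testcancel();       /* explicit cancellation point */
        sleep(1);                   /* sleep() is also a cancellation point */
    }
    pthread_cleanup_pop(1);         /* never reached, but push/pop must pair up */
    return NULL;
}

The main thread can then call pthread_cancel(tid) followed by pthread_join(tid, NULL); the handler runs when the cancellation is acted on at one of the cancellation points.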

Semaphore queues

I'm extending the functionality of a semaphore. I ran into a roadblock when I realized I don't know how an actual semaphore is implemented, and to make sure my code runs correctly, I needed to know this.
I know a semaphore works by blocking threads that are waiting on it when they call sem_wait() while another thread currently has it locked. The thread is then blocked and put into a wait list for that semaphore.
My question relates to what happens on a sem_post(). Is the next thread pulled off the waiting list, set as the locking thread, and allowed to be unblocked? Or is the scheme for posting completely different?
Thanks!
The next thread to unblock on its sem_wait() will be whatever thread the OS decides is the next one to context switch into. Nobody makes any guarantee of ordering; it depends on your OS's scheduling strategy. It might be the thread that has been off the CPU for the longest, or the one that has been assigned the highest "priority", or the one that has historically had certain resource-usage statistics, or whatever.
Most likely, your current thread (the one that called sem_post()) will continue running for a while, until it either starts waiting for user input, blocks on another semaphore, or runs out of its OS-allotted time slice. Then, the OS will switch in some totally unrelated process to run for a fraction of a second (probably Firefox or something), then go off and handle some network traffic, get itself a cup of tea, and, finally, when it gets around to it, pick whichever of your other threads it feels like, based on something like whether past history suggests that the particular thread is more CPU- or I/O-bound.
In many OSes, priority is given to I/O-bound processes that haven't been around for very long. The theory is that new processes might be short-lived (if it's been around for five hours already, odds are it won't be finishing up in the next 1ms) so we might as well get them over with. I/O-bound processes are likely to continue to be I/O-bound, which means that chances are they are going to switch off the CPU shortly while waiting for other resources. Basically, the OS wants to find the process that it's going to be able to be done with ASAP, so it can get back to sipping its tea and running your malware.
Semaphores have two operations:
P() To acquire the semaphore (you seem to call this sem_wait)
V() To release the semaphore (you seem to call this sem_post)
Semaphores also have an integer associated to them, which is the number of concurrent threads allowed to pass P() without blocking. Other calls to P() will block until V() is called to free up spots.
That is the classic definition of a semaphore.
Edit: Semaphores do not make any guarantee of order. They don't have to actually use a queue or other FIFO structure. When only one thread is allowed at a time, when it calls V(), another (possibly random) thread will then return from its P() call and continue.
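In POSIX terms, P() and V() are sem_wait() and sem_post() on a sem_t; a minimal single-threaded illustration of the counting behaviour:

#include <semaphore.h>

int main(void)
{
    sem_t sem;
    sem_init(&sem, 0, 2);   /* at most 2 threads may hold the semaphore at once */

    sem_wait(&sem);         /* P(): count 2 -> 1, does not block */
    sem_wait(&sem);         /* P(): count 1 -> 0, does not block */
    /* a third sem_wait() here would block until someone posts */

    sem_post(&sem);         /* V(): count 0 -> 1, wakes one waiter if any */
    sem_post(&sem);

    sem_destroy(&sem);
    return 0;
}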
According to the IEEE standard, this is the behavior of POSIX semaphores:
If the semaphore value resulting from this operation is positive, then no threads were blocked waiting for the semaphore to become unlocked; the semaphore value is simply incremented.
If the value of the semaphore resulting from this operation is zero, then one of the threads blocked waiting for the semaphore shall be allowed to return successfully from its call to sem_wait(). If the Process Scheduling option is supported, the thread to be unblocked shall be chosen in a manner appropriate to the scheduling policies and parameters in effect for the blocked threads. In the case of the schedulers SCHED_FIFO and SCHED_RR, the highest priority waiting thread shall be unblocked, and if there is more than one highest priority thread blocked waiting for the semaphore, then the highest priority thread that has been waiting the longest shall be unblocked. If the Process Scheduling option is not defined, the choice of a thread to unblock is unspecified.
If the Process Sporadic Server option is supported, and the scheduling policy is SCHED_SPORADIC, the semantics are as per SCHED_FIFO above.
