My daemon initializes itself in four different threads before it starts doing its work. Right now I use a counter which is incremented when a thread is started and decremented when it is finished. When the counter hits 0, I call the initialization-finished callback.
Is this the preferred way to do it, or are there better ways? I'm using POSIX threads (pthreads), and I currently just run a while loop waiting for the counter to hit 0.
Edit: the pthread_barrier_* functions are not available on my platform, although they do seem to be the best choice.
Edit 2: Not all of the threads exit. Some initialize and then listen for events. Basically each thread needs a way to say, "I'm done initializing."
A barrier is what you need. They were created for exactly this situation: when threads need to "meet up" at certain points before continuing. See pthread_barrier_*.
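For platforms that do provide them, the whole rendezvous is only a few lines. Here is a sketch; the thread count and all function names are my own, not from the question:

```c
#define _POSIX_C_SOURCE 200809L
#include <pthread.h>
#include <stddef.h>

#define N_THREADS 4

pthread_barrier_t init_barrier;

static void *worker(void *arg)
{
    /* ... per-thread initialisation ... */
    pthread_barrier_wait(&init_barrier);  /* everyone meets up here */
    /* ... the thread's normal work would follow ... */
    return NULL;
}

/* Starts the workers and returns 0 once all of them have initialised. */
int run_barrier_demo(void)
{
    pthread_t tid[N_THREADS];

    /* N_THREADS workers plus this thread all wait on the barrier. */
    if (pthread_barrier_init(&init_barrier, NULL, N_THREADS + 1) != 0)
        return -1;
    for (int i = 0; i < N_THREADS; i++)
        pthread_create(&tid[i], NULL, worker, NULL);

    pthread_barrier_wait(&init_barrier);  /* blocks until all have arrived */

    for (int i = 0; i < N_THREADS; i++)
        pthread_join(tid[i], NULL);
    pthread_barrier_destroy(&init_barrier);
    return 0;
}
```

Note the barrier is initialised for N_THREADS + 1 participants so the main thread can take part in the rendezvous too.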
Rather than spinning, use the pthread mutex/condvar primitives. I'd suggest a single mutex to protect both the count of threads outstanding, and the condvar.
The main loop looks like this:
pthread_mutex_lock(&mutex);
count = N_THREADS;
/* start your N threads */
while (count != 0)
    pthread_cond_wait(&condvar, &mutex);
pthread_mutex_unlock(&mutex);
And when each thread is ready it would do something like this:
pthread_mutex_lock(&mutex);
count--;
pthread_cond_signal(&condvar);
pthread_mutex_unlock(&mutex);
(EDIT: I have assumed that the threads are to keep going once they have done their initialisation stuff. If they are to finish, use pthread_join as others have said.)
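Putting the two fragments together, a self-contained version of this scheme might look like the following (identifiers such as `wait_for_init` are my own):

```c
#define _POSIX_C_SOURCE 200809L
#include <pthread.h>
#include <stddef.h>

#define N_THREADS 4

pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
pthread_cond_t init_done = PTHREAD_COND_INITIALIZER;
int count;

static void *worker(void *arg)
{
    /* ... per-thread initialisation ... */
    pthread_mutex_lock(&lock);
    count--;                          /* "I'm done initializing" */
    pthread_cond_signal(&init_done);
    pthread_mutex_unlock(&lock);
    /* ... here the thread would go on to its event loop ... */
    return NULL;
}

/* Returns 0 once all N_THREADS workers have reported in. */
int wait_for_init(void)
{
    pthread_t tid[N_THREADS];

    pthread_mutex_lock(&lock);
    count = N_THREADS;
    for (int i = 0; i < N_THREADS; i++)
        pthread_create(&tid[i], NULL, worker, NULL);
    while (count != 0)                /* loop guards against spurious wake-ups */
        pthread_cond_wait(&init_done, &lock);
    pthread_mutex_unlock(&lock);

    for (int i = 0; i < N_THREADS; i++)
        pthread_join(tid[i], NULL);
    return count;
}
```

Holding the mutex while the threads are created is safe: each worker simply blocks on the mutex until pthread_cond_wait releases it.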
pthread_join is the preferred way to wait for pthreads.
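If your threads do exit once their initialisation is done (the asker later clarified that his do not), a minimal sketch could look like this; the names here are mine:

```c
#include <pthread.h>
#include <stddef.h>

#define N_THREADS 4

static void *init_thread(void *arg)
{
    /* ... initialisation work ... */
    return NULL;
}

/* Starts N_THREADS workers and blocks until every one has finished. */
int start_and_wait(void)
{
    pthread_t tid[N_THREADS];

    for (int i = 0; i < N_THREADS; i++)
        if (pthread_create(&tid[i], NULL, init_thread, NULL) != 0)
            return -1;
    for (int i = 0; i < N_THREADS; i++)
        pthread_join(tid[i], NULL);   /* blocks until tid[i] returns */
    return 0;
}
```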
That sounds ... weird. Shouldn't you just be using pthread_join() to wait for the threads to complete? Maybe I don't understand the question.
As Klas Lindbäck pointed out in his answer, joining the threads is the preferred way to go.
In case your threads do not exit (i.e. they are part of a reusable pool, etc.), the logic sounds good. The only thing is that using the counter without any synchronisation is dangerous. You have to use either a mutex with a condition variable or an atomic integer. I'd recommend mutex + condition if you don't want to spin on an atomic counter in the thread that waits for initialisation to finish.
So, what happens if one thread finishes initialization before any of the others begin?
So, one way to do it:
initialize an atomic counter to 0
when each thread is done with its init, increment the counter and retrieve the value atomically. With GCC you can use __sync_add_and_fetch()
if the retrieved counter value is < N_threads, block on a pthread condition variable (re-checking the counter under the condvar's mutex, so a late broadcast cannot be missed)
if the retrieved counter value == N_threads, the init phase is done: broadcast the condition and continue
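A sketch of that scheme (function and variable names are my own; note the condvar wait still needs its mutex, and the broadcast is issued under the same mutex so no wake-up can be lost):

```c
#define _POSIX_C_SOURCE 200809L
#include <pthread.h>
#include <stddef.h>

#define N_THREADS 4

pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
pthread_cond_t all_ready = PTHREAD_COND_INITIALIZER;
int ready_count;   /* only touched via __sync builtins */

/* Each thread calls this when its init is done; returns once all have. */
static void rendezvous(void)
{
    int n = __sync_add_and_fetch(&ready_count, 1);

    pthread_mutex_lock(&lock);
    if (n == N_THREADS)
        pthread_cond_broadcast(&all_ready);   /* last one in wakes the rest */
    else
        while (__sync_add_and_fetch(&ready_count, 0) < N_THREADS)
            pthread_cond_wait(&all_ready, &lock);
    pthread_mutex_unlock(&lock);
}

static void *worker(void *arg)
{
    /* ... initialisation ... */
    rendezvous();
    return NULL;
}

/* Runs the demo and returns how many threads checked in. */
int run_rendezvous(void)
{
    pthread_t tid[N_THREADS];

    for (int i = 0; i < N_THREADS; i++)
        pthread_create(&tid[i], NULL, worker, NULL);
    for (int i = 0; i < N_THREADS; i++)
        pthread_join(tid[i], NULL);
    return ready_count;
}
```

`__sync_add_and_fetch(&ready_count, 0)` is just an atomic read expressed with the same GCC builtin family.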
Related
I'm trying to do the Dining Philosophers problem, and in my code, after a thread drops its sticks it also sends a broadcast to all the threads waiting in the while loop so they move forward, but apparently this is not happening and I don't know why.
https://github.com/lucizzz/Philosophers/blob/main/dinning.c
Your code has a lot of bugs, but the most fundamental one is that you access shared state without holding the mutex that protects that state. For example, the while loop in routine_1 tests the stick array without holding the mutex. It even calls pthread_cond_wait without holding the mutex.
This is wrong for many reasons, but the most obvious is this: what if the while loop decides to call pthread_cond_wait, but before you actually call pthread_cond_wait, the thread holding the resource releases it? Now you are calling pthread_cond_wait to wait for something that has already happened, and you will be waiting forever.
You must hold the mutex both when you decide whether to call pthread_cond_wait and when you actually do call pthread_cond_wait or your code will wait forever if a thread releases the resource before you were able to wait for it.
Fundamentally, the whole point of condition variables is to provide an atomic "unlock and wait" operation to avoid this race condition. But your code doesn't use the mutexes correctly.
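The fix is the standard predicate-under-mutex pattern. Here is a sketch; the names (`stick`, `take_sticks`, `drop_sticks`, `N_PHILO`) are mine, not taken from your repository:

```c
#define _POSIX_C_SOURCE 200809L
#include <pthread.h>

#define N_PHILO 5

pthread_mutex_t table_lock = PTHREAD_MUTEX_INITIALIZER;
pthread_cond_t sticks_freed = PTHREAD_COND_INITIALIZER;
int stick[N_PHILO];   /* 1 = stick is in use */

void take_sticks(int i)
{
    int left = i, right = (i + 1) % N_PHILO;

    pthread_mutex_lock(&table_lock);
    while (stick[left] || stick[right])                 /* predicate checked under the mutex */
        pthread_cond_wait(&sticks_freed, &table_lock);  /* atomically unlocks and waits */
    stick[left] = stick[right] = 1;
    pthread_mutex_unlock(&table_lock);
}

void drop_sticks(int i)
{
    int left = i, right = (i + 1) % N_PHILO;

    pthread_mutex_lock(&table_lock);
    stick[left] = stick[right] = 0;
    pthread_cond_broadcast(&sticks_freed);   /* wake every waiting philosopher */
    pthread_mutex_unlock(&table_lock);
}
```

Because the broadcast happens while the mutex is held, no waiter can be stuck between "checked the sticks" and "entered pthread_cond_wait" when it fires.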
I would like to know what happens with the while loop after a waiting thread is woken.
To avoid "spurious wake-ups", the pthreads documentation points out that you need to call pthread_cond_wait inside a while statement.
So, when pthread_cond_wait is called, the calling thread is blocked; after being signalled, the thread resumes inside the while.
On my code, I'm handling a pthread_cond_wait() this way:
pthread_mutex_lock(&waiting_call_lock);
while(1) {
pthread_cond_wait(&queue_cond[thread_ctx], &waiting_call_lock);
break;
}
pthread_mutex_unlock(&waiting_call_lock);
The question is: will it try to enter the while again, or does it somehow break out of the while and go on? In other words, is the break after the pthread_cond_wait() necessary in that case?
To get this right, I think it is best to start with asking yourself: "what is the thread waiting for?"
The answer should not be "it should wait until another thread signals it", because the way condition variables work assumes you have something else, some kind of mutex-protected state, that the thread is actually waiting for.
To illustrate this I will invent an example here where the thread is supposed to wait until a variable called counter is larger than 7. The variable counter is accessible by multiple threads and is protected by a mutex that I will call theMutex. Then the code involving the pthread_cond_wait call could look like this:
pthread_mutex_lock(&theMutex);
while(counter <= 7) {
pthread_cond_wait(&cond, &theMutex);
}
pthread_mutex_unlock(&theMutex);
Now, if a "spurious wake-up" were to happen, the program will check the condition (counter <= 7) again and find it is still not satisfied, so it will stay in the loop and call pthread_cond_wait again. So this ensures that the thread will not proceed past the while loop until the condition is satisfied.
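For completeness, the thread that makes progress on counter would change it and signal under the same mutex, along these lines (same names as above; the helper name is my own):

```c
#define _POSIX_C_SOURCE 200809L
#include <pthread.h>

pthread_mutex_t theMutex = PTHREAD_MUTEX_INITIALIZER;
pthread_cond_t cond = PTHREAD_COND_INITIALIZER;
int counter;

/* Called by whichever thread advances the state 'counter' tracks. */
void bump_counter(void)
{
    pthread_mutex_lock(&theMutex);
    counter++;                    /* change the protected state first */
    pthread_cond_signal(&cond);   /* then tell a waiter to re-check it */
    pthread_mutex_unlock(&theMutex);
}
```

With several waiters you would use pthread_cond_broadcast instead of pthread_cond_signal.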
Since spurious wake-ups rarely happen in practice, it may be of interest to trigger that to check that your implementation works properly; here is a discussion about that: How to trigger spurious wake-up within a Linux application?
I have this code:
int _break=0;
while(_break==0) {
if(someCondition) {
//...
if(someOtherCondition)_break=1;//exit the loop
//...
}
}
The problem is that if someCondition is false, the loop hammers the CPU. Is there a way to sleep for some milliseconds inside the loop so that the CPU does not have such a huge load?
Update
What I'm trying to do is a server-client application, without using sockets, just using shared memory, semaphores and system calls. I'm doing this on linux.
someOtherCondition becomes true when the application receives the "kill" signal, while someCondition is true if the message received is valid. If the message is not valid, the application keeps waiting for a valid one and the while loop becomes a heavy infinite loop (it works, but it loads the CPU far too much). I would like to make it lightweight.
I'm working on Linux (Debian 7).
If you have a single-threaded application, then it won't make any difference whether you suspend the execution or not.
If you have multiple threads running, then you should use a binary semaphore instead of polling a global variable.
This thread should acquire the semaphore at the beginning of each iteration, and one of the other threads should release the semaphore whenever you wish this thread to run.
This method is also known as "consumer-producer".
When a thread attempts to acquire a binary semaphore:
If the semaphore is released, then the calling thread acquires it and continues the execution.
If the semaphore is already acquired, then the calling thread "asks" the OS to block itself, and the OS will unblock it as soon as some other thread releases the semaphore.
The entire procedure is "atomic": no context switch between threads can take place while the semaphore code itself is executing. (Inside a kernel this is typically achieved by disabling interrupts.) Everything is implemented within the semaphore code, so you need not worry about it.
Since you did not specify what OS you're using, I cannot provide any technical details (i.e., code)...
UPDATE:
If you are trying to protect a critical section inside the loop (i.e., if you are accessing some other global variable, which is also being accessed by other threads, and at least one of those threads is changing that global variable), then you should use a Mutex instead of a binary semaphore.
There are two advantages for using a Mutex in this case:
It can be released only by the thread which has acquired it (thus ensuring mutual exclusion).
It can mitigate a specific scheduling hazard that occurs when a high-priority thread is waiting for a low-priority thread to complete, while a medium-priority thread is preventing the low-priority thread from completing (a.k.a. priority inversion), via priority-inheritance mutexes.
Of course, a Mutex is required only if you really need to ensure mutual exclusion for accessing the data.
UPDATE #2:
Now that you've added some specific details on your system, here is the general scheme:
Step #1 - Before starting your threads:
// Declare a global semaphore 'sem'
sem_t sem;
// Initialize 'sem' with count = 0 (i.e., as already acquired)
sem_init(&sem, 0, 0);
Step #2 - In this thread:
// Declare the global 'sem' as 'extern' if it lives in another file
extern sem_t sem;
while (1)
{
    sem_wait(&sem);   // blocks until some other thread posts 'sem'
    //...
}
Step #3 - In the Rx ISR:
// Declare the global 'sem' as 'extern' if it lives in another file
extern sem_t sem;
sem_post(&sem);       // unblocks the waiting thread
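A self-contained sketch of the whole scheme using POSIX semaphores (names like `run_demo` are mine; the "Rx" side is simulated by the calling thread posting three times):

```c
#define _POSIX_C_SOURCE 200809L
#include <pthread.h>
#include <semaphore.h>
#include <stddef.h>

sem_t sem;
int handled;

static void *consumer(void *arg)
{
    for (int i = 0; i < 3; i++) {
        sem_wait(&sem);   /* sleeps here instead of spinning */
        handled++;        /* "process the message" */
    }
    return NULL;
}

/* Runs the demo and returns how many messages the consumer handled. */
int run_demo(void)
{
    pthread_t tid;

    sem_init(&sem, 0, 0);          /* count = 0: consumer must wait */
    pthread_create(&tid, NULL, consumer, NULL);
    for (int i = 0; i < 3; i++)
        sem_post(&sem);            /* "a message arrived" */
    pthread_join(tid, NULL);
    sem_destroy(&sem);
    return handled;
}
```

sem_post is async-signal-safe, so it may also be called from a signal handler, which matters for the "kill" signal mentioned in the question.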
Spinning in a loop without any delay will use a fair amount of CPU; you're right that a small time delay will reduce that.
Using Sleep() is the easiest way; on Windows it is declared in the windows.h header.
Having said that, the most elegant solution is to structure your code so that it only runs when your condition is true; that way it will truly sleep until you wake it up.
I suggest you look into pthreads and mutexes. They will let that loop of yours sleep entirely until the condition becomes true.
Hope that helps in some way :)
I'm curious whether anyone has used something like:
pthread_mutex_lock(&ctx->processing_pipeline.feeder_safe_point_mutex);
while(!ctx->processing_pipeline.feeder_safe_point)
pthread_cond_wait(&ctx->processing_pipeline.feeder_safe_point_cv, &ctx->processing_pipeline.feeder_safe_point_mutex);
pthread_mutex_unlock(&ctx->processing_pipeline.feeder_safe_point_mutex);
... when waiting on a condvar.
The idea is that the feeder_safe_point int variable will be set to 1 when the event is completed and then the waiting thread will be woken up.
Also, what is the recommended way to use condvars to serialise the execution of multiple threads?
Yes, that is exactly how you should use a pthreads condition variable. ctx->processing_pipeline.feeder_safe_point should also only be modified with ctx->processing_pipeline.feeder_safe_point_mutex locked.
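The completing thread then sets the flag and signals while holding the same mutex. A sketch (the struct here is a minimal stand-in I invented so the fragment compiles; only the field names match yours):

```c
#define _POSIX_C_SOURCE 200809L
#include <pthread.h>

/* Minimal stand-in for the real context; only the fields used here. */
struct pipeline {
    pthread_mutex_t feeder_safe_point_mutex;
    pthread_cond_t feeder_safe_point_cv;
    int feeder_safe_point;
};
struct context { struct pipeline processing_pipeline; };

void mark_safe_point(struct context *ctx)
{
    pthread_mutex_lock(&ctx->processing_pipeline.feeder_safe_point_mutex);
    ctx->processing_pipeline.feeder_safe_point = 1;   /* flip the flag under the mutex */
    pthread_cond_signal(&ctx->processing_pipeline.feeder_safe_point_cv);
    pthread_mutex_unlock(&ctx->processing_pipeline.feeder_safe_point_mutex);
}
```

Because both the flag update and the signal happen under the mutex, the waiter can never check the flag, miss the update, and then sleep through the wake-up.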
I have a thread where I want to wait (at a particular line of code) for three callback events from another thread. Only after these three events are received then I want to proceed forward.
I am trying to use semaphores. I am aware that a semaphore can be acquired at a point, and the caller then waits until it is released by some other thread.
Now, the thing is that I want to wait for three callbacks, not just one, before I release the semaphore.
I thought of having a counter, but I am not sure whether a separate counter on its own would be thread-safe.
So is there a way to have a semaphore with a thread-safe counter?
This is for both Linux and Windows.
Thanks.
If the threads can have assignable numbers, you could have just a boolean flag per controlling thread and then check whether all are set before the suspended thread is released. Writing a byte is probably atomic on most platforms, though the language does not guarantee it.
Normal semaphores have atomic counters, however.
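A semaphore already is a thread-safe counter: have each callback post once, and have the waiting thread wait three times. A POSIX sketch (on Windows you would use CreateSemaphore/WaitForSingleObject instead; all names here are mine):

```c
#define _POSIX_C_SOURCE 200809L
#include <pthread.h>
#include <semaphore.h>
#include <stddef.h>

#define N_EVENTS 3

sem_t event_sem;
int events_seen;

/* Called once from the other thread for each callback event. */
void on_callback(void)
{
    sem_post(&event_sem);
}

static void *event_source(void *arg)
{
    for (int i = 0; i < N_EVENTS; i++)
        on_callback();
    return NULL;
}

/* Blocks until all N_EVENTS callbacks have fired; returns the count. */
int wait_for_three(void)
{
    pthread_t tid;

    sem_init(&event_sem, 0, 0);
    pthread_create(&tid, NULL, event_source, NULL);
    for (int i = 0; i < N_EVENTS; i++) {  /* one wait per expected event */
        sem_wait(&event_sem);
        events_seen++;
    }
    pthread_join(tid, NULL);
    sem_destroy(&event_sem);
    return events_seen;
}
```

The semaphore's internal count absorbs callbacks that arrive before the waiter gets there, so no ordering between the posts and the waits is required.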