I'm trying to implement a blocking queue using POSIX threads. The important code segments are shown below. When I run this program, the thread trying to remove an element from the queue goes to sleep when there are no elements in the queue, but then wakes up again without any signal from the thread that adds an element into the queue (I conclude this because I did not start any thread that adds elements into the queue). The woken thread goes back to sleep and the process repeats. What am I doing wrong? Can someone tell me what I am missing here?
struct rqueue
{
int qsize;
int capacity;
pthread_mutex_t lock;
pthread_cond_t not_empty;
pthread_cond_t not_full;
};
remove_element_method:
pthread_mutex_lock(&rq->lock);
while(rq->qsize == 0){
perror("Q size is zero going to sleep");
pthread_cond_wait(&rq->not_empty);
perror("woke up");
}
// some code
pthread_cond_signal(&rq->not_full);
pthread_mutex_unlock(&rq->lock);
add_element_method:
pthread_mutex_lock(&rq->lock);
if(rq->capacity != -1 ){
while(rq->qsize == rq->capacity){
pthread_cond_wait(&rq->not_full);
}
}
//some code
pthread_cond_signal(&rq->not_empty);
pthread_mutex_unlock(&rq->lock);
pthread_cond_wait() takes two arguments -- the second is the mutex you're holding. You're only passing it one argument.
Also, did you initialize the mutex and condition variables using pthread_mutex_init() and pthread_cond_init()?
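A minimal sketch of what that looks like, assuming the queue storage itself is elided here the same way it is in your snippet:
void rqueue_init(struct rqueue *rq, int capacity)
{
    rq->qsize = 0;
    rq->capacity = capacity;
    pthread_mutex_init(&rq->lock, NULL);
    pthread_cond_init(&rq->not_empty, NULL);
    pthread_cond_init(&rq->not_full, NULL);
}
void remove_element(struct rqueue *rq)
{
    pthread_mutex_lock(&rq->lock);
    while (rq->qsize == 0) {
        /* atomically releases rq->lock while waiting, re-acquires it on wakeup */
        pthread_cond_wait(&rq->not_empty, &rq->lock);
    }
    /* ... dequeue the element, rq->qsize-- ... */
    pthread_cond_signal(&rq->not_full);
    pthread_mutex_unlock(&rq->lock);
}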
I'm writing a program which receives data from a websocket and works on this data in a thread pool.
I have a problem with pthread_cond_wait when the processor has 2 or more cores. After pthread_cond_signal, the signal is received by all threads that are running on different cores. For example, if I have 2 cores, the signal comes to 2 threads at once, one on each core. If I have a single-core processor, all is good.
What do I have to do to get the program to work correctly on multi-core processors, so that only one thread receives the signal to start work?
I wrote an example of my code, generating random text data instead of the websocket data.
#include<stdio.h>
#include<stdlib.h>
#include<string.h>
#include<pthread.h>
#include<unistd.h>
#include<time.h>
pthread_attr_t attrd;
pthread_mutex_t mutexQueue;
pthread_cond_t condQueue;
char textArr[128][24]; //array with random text to work
int tc; //tasks count
int gi; //global array index
void *workThread(void *args){
int ai;//internal index for working array element
while(1){
pthread_mutex_lock(&mutexQueue);
while(tc==0){
pthread_cond_wait(&condQueue,&mutexQueue); //wait for signal if tasks count = 0.
}
ai=gi;
if(gi==127)gi=0;else gi++;
tc--;
pthread_mutex_unlock(&mutexQueue);
printf("%s\r\n",textArr[ai]);
// then work with websocket data
}
}
void *generalThread(void *args){
const char chrs[]="abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789"; //chars for random text generation
int ai=0;
srand(time(NULL));
while(1){
for(int i=0;i<23;i++)textArr[ai][i]=chrs[rand()%61];//generating data instead of websocket data
textArr[ai][23]='\0';
tc++;
pthread_cond_signal(&condQueue); //Send signal for thread to begin work with data
if(ai==127)ai=0;else ai++;
}
}
int main(int argc,char *argv[]){
pthread_attr_init(&attrd);
pthread_attr_setdetachstate(&attrd,PTHREAD_CREATE_DETACHED);
pthread_t gt,wt[32];
for(int i=0;i<32;i++)pthread_create(&wt[i],&attrd,&workThread,NULL);
pthread_create(&gt,NULL,&generalThread,NULL);
pthread_join(gt,NULL);
return 0;
}
First some info:
man pthread_cond_wait
Rationale
Some implementations, particularly on a multi-processor, may sometimes cause multiple threads to wake up when the condition variable is signaled simultaneously on different processors.
man pthread_cond_signal
Rationale
Multiple Awakenings by Condition Signal
On a multi-processor, it may be impossible for an implementation of pthread_cond_signal() to avoid the unblocking of more than one thread blocked on a condition variable.
...
The effect is that more than one thread can return from its call to pthread_cond_wait() or pthread_cond_timedwait() as a result of one call to pthread_cond_signal(). This effect is called "spurious wakeup". Note that the situation is self-correcting in that the number of threads that are so awakened is finite; for example, the next thread to call pthread_cond_wait() after the sequence of events above blocks.
So far, so good: the code in your workThread is properly synchronized (though you should put the printf in the synchronized section as well), but the code in your generalThread has no synchronization at all. Encapsulate the code in the while loop with a lock / unlock.
In that case, the first awakened thread has to acquire a lock on the specified mutex, which will be owned by either another thread or the generalThread. Until the mutex is unlocked, the thread blocks (no matter the reason for its wakeup). After the acquisition, it owns the mutex and all other threads will be blocked, the generalThread included.
Note: pthread_cond_wait implicitly unlocks the specified mutex upon entering the wait state, and on a wakeup it tries to acquire a lock on the specified mutex.
Adding a mutex lock around tc++ fully corrects my program:
void *generalThread(void *args) {
const char chrs[]="abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789";
int ai=0;
srand(time(NULL));
while(1){
for(int i=0;i<23;i++)textArr[ai][i]=chrs[rand()%61];
textArr[ai][23]='\0';
pthread_mutex_lock(&mutexQueue); //this has been added
tc++;
pthread_mutex_unlock(&mutexQueue); //this has been added
pthread_cond_signal(&condQueue);
if(ai==127)ai=0;else ai++;
}
}
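Unrelated to the multi-core symptom, but worth checking: the posted main() never initializes mutexQueue and condQueue. The static initializers are the simplest fix (an assumption on my part, since the rest of your setup code isn't shown):
pthread_mutex_t mutexQueue = PTHREAD_MUTEX_INITIALIZER;
pthread_cond_t condQueue = PTHREAD_COND_INITIALIZER;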
I'm currently learning concurrent programming. I have two threads and I want them to act like this:
the first thread runs N times while the second one waits
when the first one is done, the second does its job and the first waits
when the second thread is done, repeat.
I'm trying to do it in C using pthread library. Here's some pseudocode (hopefully understandable)
int cnt = 0;
void* thread1(){
while(1){
// thread 1 code
cnt ++;
if(cnt == N){
let_thread2_work();
wait_thread2();
}
}
}
void* thread2(){
while(1){
wait_thread1();
// thread2 code
cnt = 0;
let_thread1_work();
}
}
Can anyone please help me?
Like David Schwartz commented, this doesn't make a ton of sense with just thread1 and thread2 waiting on each other, not actually doing any work in parallel. But maybe eventually you want multiple "thread1"s, all processing jobs at the same time until they finish a batch of N jobs, then they all stop and wait for "thread2" to do some kind of post-processing before the pool of worker threads start back up.
In this situation I would consider using a couple condition variables, one for your worker threads to communicate to your post-processing thread that they're waiting, and one for your post-processing thread to tell the workers to start working again. You can declare a condition variable and a helper mutex in the global scope right next to your "cnt" variable, which I'm calling "jobs_done" for clarity:
#include<pthread.h>
#define NUM_WORKERS 1 // although it doesn't make sense to just have 1
#define BATCH_SIZE 50 // this is "N"
// we're going to keep track of how many jobs we have done in
// this variable
int jobs_done = 0;
// when a worker thread checks jobs_done and it's N or greater,
// that means we have to wait for the post-processing thread to set
// jobs_done back to 0. so the worker threads "wait" on the
// condition variable, and the post-processing thread "broadcasts"
// to the condition variable to wake them all up again once it's
// done its work
pthread_cond_t jobs_ready_cv;
// we're going to use this helper mutex. whenever any thread
// reads or writes to the jobs_done variable, we have to lock this
// mutex. that includes the worker threads when they check to see if
// they're ready to wake up again.
pthread_mutex_t jobs_mx;
// here's how the worker threads will communicate to the post-process
// thread that a batch is done. to make sure that all N jobs are fully
// complete before postprocessing happens, we'll use this variable to
// keep track of how many threads are waiting for postprocessing to finish.
int workers_done = 0;
// we'll also use a separate condition variable and separate mutex to
// communicate to the postprocess thread.
pthread_cond_t workers_done_cv;
pthread_mutex_t workers_done_mx;
Then in your setup code, initialize the condition variables and helper mutexes:
int main() { // or something
pthread_cond_init(&jobs_ready_cv, NULL);
pthread_mutex_init(&jobs_mx, NULL);
pthread_cond_init(&workers_done_cv, NULL);
pthread_mutex_init(&workers_done_mx, NULL);
...
}
So, your worker threads (or "thread1"), before taking a job, will check to see how many jobs have been taken. If N (here, BATCH_SIZE) have been taken, then it updates a variable to indicate that it has no work left to do. If it finds that all of the worker threads are done, then it signals the postprocess thread ("thread2") through workers_done_cv. Then, the thread waits for a signal from the postprocess thread through jobs_ready_cv.
void* worker_thread(){
while(1){
/* first, we check if the batch is complete. we do this first
* so we don't accidentally take an extra job.
*/
pthread_mutex_lock(&jobs_mx);
if (jobs_done == BATCH_SIZE) {
/* if BATCH_SIZE jobs have been done, first let's increment workers_done,
* and if all workers are done, let's notify the postprocess thread.
* after that, we release the workers_done mutex so the postprocess
* thread can wake up from the workers_done condition variable.
*/
pthread_mutex_lock(&workers_done_mx);
++workers_done;
if (workers_done == NUM_WORKERS) {
pthread_cond_broadcast(&workers_done_cv);
}
pthread_mutex_unlock(&workers_done_mx);
/* now we wait for the postprocess thread to do its work. this
* unlocks the mutex. when we get the signal to start doing jobs
* again, the mutex will relock automatically when we wake up.
*
* note that we use a while loop here to check the jobs_done
* variable after we wake up. That's because sometimes threads
* can wake up on accident even if no signal or broadcast happened,
* so we need to make sure that the postprocess thread actually
* reset the variable. google "spurious wakeups"
*/
while (jobs_done == BATCH_SIZE) {
pthread_cond_wait(&jobs_ready_cv, &jobs_mx);
}
}
/* okay, now we're ready to take a job.
*/
++jobs_done;
pthread_mutex_unlock(&jobs_mx);
// thread 1 code
}
}
Meanwhile, your postprocess thread waits on the workers_done_cv immediately, and doesn't wake up until the last worker thread is done and calls pthread_cond_broadcast(&workers_done_cv). Then, it does whatever it needs to, resets the counts, and broadcasts to the worker threads to wake them back up.
void* postprocess_thread(){
while(1){
/* first, we wait for our worker threads to be done
*/
pthread_mutex_lock(&workers_done_mx);
while (workers_done != NUM_WORKERS) {
pthread_cond_wait(&workers_done_cv, &workers_done_mx);
}
// thread2 code
/* reset count of stalled worker threads, release mutex */
workers_done = 0;
pthread_mutex_unlock(&workers_done_mx);
/* reset number of jobs done and wake up worker threads */
pthread_mutex_lock(&jobs_mx);
jobs_done = 0;
pthread_mutex_unlock(&jobs_mx);
pthread_cond_broadcast(&jobs_ready_cv);
}
}
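For completeness, the rest of main() might create the threads like this; the exact wiring is my assumption, since the original pseudocode doesn't show it (note the thread functions need the usual void * parameter to match what pthread_create expects):
pthread_t workers[NUM_WORKERS];
pthread_t postprocess;
for (int i = 0; i < NUM_WORKERS; i++) {
    pthread_create(&workers[i], NULL, worker_thread, NULL);
}
pthread_create(&postprocess, NULL, postprocess_thread, NULL);
/* the loops above run forever, so these joins only matter if you add a shutdown path */
for (int i = 0; i < NUM_WORKERS; i++) {
    pthread_join(workers[i], NULL);
}
pthread_join(postprocess, NULL);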
Also take heed of David Schwartz's advice that you probably don't actually need the postprocessing thread to wait on the worker threads. If you don't need this, then you can get rid of the condition variable that makes the worker threads wait for the postprocess thread, and this implementation becomes a lot simpler.
edit: mutex protected the assignment to jobs_done in postprocess_thread(), added a forgotten ampersand
One solution is to use a mutex:
https://www.geeksforgeeks.org/mutex-lock-for-linux-thread-synchronization/
Another solution is to use a semaphore:
https://www.geeksforgeeks.org/difference-between-binary-semaphore-and-mutex/?ref=rp
Or you can create your own lightweight thread scheduler, for example protothreads:
http://www.dunkels.com/adam/pt/
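For the two-thread case in the question, a minimal sketch of the semaphore approach could look like this (the semaphore names and the printf placeholders are mine, not from the question):
#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

#define N 5

sem_t t1_turn;      /* starts at 1 so thread1 goes first */
sem_t t2_turn;      /* starts at 0 so thread2 waits */
int cnt = 0;

void *thread1(void *arg){
    while(1){
        sem_wait(&t1_turn);                         /* wait for our turn */
        while(cnt < N){
            printf("thread1, iteration %d\n", cnt); /* thread 1 code */
            cnt++;
        }
        sem_post(&t2_turn);                         /* hand over to thread2 */
    }
}

void *thread2(void *arg){
    while(1){
        sem_wait(&t2_turn);                         /* wait until thread1 has run N times */
        printf("thread2 runs once\n");              /* thread 2 code */
        cnt = 0;
        sem_post(&t1_turn);                         /* let thread1 work again */
    }
}

int main(void){
    pthread_t t1, t2;
    sem_init(&t1_turn, 0, 1);
    sem_init(&t2_turn, 0, 0);
    pthread_create(&t1, NULL, thread1, NULL);
    pthread_create(&t2, NULL, thread2, NULL);
    pthread_join(t1, NULL);                         /* never returns, since the loops run forever */
    pthread_join(t2, NULL);
    return 0;
}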
I had an exercise about threads, locks, and condition variables in C. I needed to write a program that gets data, turns it into a linked list, starts 3 threads that each calculate the result for each node in the list, and has the main thread print the results after everyone has finished.
This is the main function:
int thread_finished_count;
// Lock and Conditional variable
pthread_mutex_t list_lock;
pthread_mutex_t thread_lock;
pthread_cond_t thread_cv;
int main(int argc, char const *argv[])
{
node *list;
int pairs_count, status;
thread_finished_count = 0;
/* get the data and start the threads */
node *head = create_numbers(argc, argv, &pairs_count);
list = head; // backup head for results
pthread_t *threads = start_threads(&list);
/* wait for threads and destroy lock */
status = pthread_cond_wait(&thread_cv, &list_lock);
chcek_status(status);
status = pthread_mutex_destroy(&list_lock);
chcek_status(status);
status = pthread_mutex_destroy(&thread_lock);
chcek_status(status);
/* print result in original list */
print_results(head);
/* cleanup */
wait_for_threads(threads, NUM_THREADS);
free_list(head);
free(threads);
return EXIT_SUCCESS;
}
Please note that the create_numbers function is working properly, and the list is working as intended.
Here is the start_threads and thread_function code:
pthread_t *start_threads(node **list)
{
int status;
pthread_t *threads = (pthread_t *)malloc(sizeof(pthread_t) * NUM_THREADS);
check_malloc(threads);
for (int i = 0; i < NUM_THREADS; i++)
{
status = pthread_create(&threads[i], NULL, thread_function, list);
chcek_status(status);
}
return threads;
}
void *thread_function(node **list)
{
int status, self_id = pthread_self();
printf("im in %u\n", self_id);
node *currentNode;
while (1)
{
if (!(*list))
break;
status = pthread_mutex_lock(&list_lock);
chcek_status(status);
printf("list location %p thread %u\n", *list, self_id);
if (!(*list))
{
status = pthread_mutex_unlock(&list_lock);
chcek_status(status);
break;
}
currentNode = (*list);
(*list) = (*list)->next;
status = pthread_mutex_unlock(&list_lock);
chcek_status(status);
currentNode->gcd = gcd(currentNode->num1, currentNode->num2);
status = usleep(10);
chcek_status(status);
}
status = pthread_mutex_lock(&thread_lock);
chcek_status(status);
thread_finished_count++;
status = pthread_mutex_unlock(&thread_lock);
chcek_status(status);
if (thread_finished_count != 3)
return NULL;
status = pthread_cond_signal(&thread_cv);
chcek_status(status);
return NULL;
}
void chcek_status(int status)
{
if (status != 0)
{
fputs("pthread_function() error\n", stderr);
exit(EXIT_FAILURE);
}
}
Note that self_id is used for debugging purposes.
My problem
My main problem is about splitting the work. Each thread should get an element from the global linked list, calculate the gcd, then go on and take the next element. I get this effect only if I add usleep(10) after I unlock the mutex in the while loop. If I don't add the usleep, the FIRST thread goes in and does all the work while the other threads just wait and come in after all the work has been done.
Please note: I considered the possibility that the first thread is created and finishes all the jobs before the second thread is even created. This is why I added the "im in #threadID" print with usleep(10) when every thread is created. They all come in, but only the first one does all the jobs.
Here is the output example if I do usleep after the mutex unlock (notice the different thread IDs):
with usleep
./v2 nums.txt
im in 1333593856
list location 0x7fffc4fb56a0 thread 1333593856
im in 1316685568
im in 1325139712
list location 0x7fffc4fb56c0 thread 1333593856
list location 0x7fffc4fb56e0 thread 1316685568
list location 0x7fffc4fb5700 thread 1325139712
list location 0x7fffc4fb5720 thread 1333593856
list location 0x7fffc4fb5740 thread 1316685568
list location 0x7fffc4fb5760 thread 1325139712
list location 0x7fffc4fb5780 thread 1333593856
list location 0x7fffc4fb57a0 thread 1316685568
list location 0x7fffc4fb57c0 thread 1325139712
list location 0x7fffc4fb57e0 thread 1333593856
list location 0x7fffc4fb5800 thread 1316685568
list location (nil) thread 1325139712
list location (nil) thread 1333593856
...
normal result output
...
And that's the output if I comment out the usleep after the mutex unlock (notice the same thread ID):
without usleep
./v2 nums.txt
im in 2631730944
list location 0x7fffe5b946a0 thread 2631730944
list location 0x7fffe5b946c0 thread 2631730944
list location 0x7fffe5b946e0 thread 2631730944
list location 0x7fffe5b94700 thread 2631730944
list location 0x7fffe5b94720 thread 2631730944
list location 0x7fffe5b94740 thread 2631730944
list location 0x7fffe5b94760 thread 2631730944
list location 0x7fffe5b94780 thread 2631730944
list location 0x7fffe5b947a0 thread 2631730944
list location 0x7fffe5b947c0 thread 2631730944
list location 0x7fffe5b947e0 thread 2631730944
list location 0x7fffe5b94800 thread 2631730944
im in 2623276800
im in 2614822656
...
normal result output
...
My second question is about the order in which the threads work. My exercise asks me not to use join to synchronize the threads (only at the end, to "free resources") but to use that condition variable instead.
My goal is that each thread takes an element, does the calculation, and in the meanwhile another thread goes in and takes another element, so that a different thread takes each element (or at least close to that).
Thanks for reading and I appreciate your help.
First, you are doing the gcd() work while holding the lock... so (a) only one thread will do any work at any one time, though (b) that does not entirely explain why only one thread appears to do (nearly) all the work -- as KamilCuk says, it may be that there is so little work to do, that it's (nearly) all done before the second thread wakes up properly. [More exotic, there may be some latency between thread 'a' unlocking the mutex and another thread starting to run, such that thread 'a' can acquire the mutex before another thread gets there.]
POSIX says that when a mutex is unlocked, if there are waiters then "the scheduling policy shall determine which thread shall acquire the mutex". The default "scheduling policy" is (to the best of my knowledge) implementation defined.
You could try a couple of things: (1) use a pthread_barrier_t to hold all the threads at the start of thread_function() until they are all running; (2) call sched_yield() after pthread_mutex_unlock() to prompt the system into running the newly runnable thread.
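For instance, option (1) might look like the sketch below (the barrier name is mine; it assumes NUM_THREADS is the same count passed to start_threads()):
/* global, next to the other locks */
pthread_barrier_t start_barrier;

/* in main(), before calling start_threads() */
pthread_barrier_init(&start_barrier, NULL, NUM_THREADS);

/* at the top of thread_function(), before the while (1) loop */
pthread_barrier_wait(&start_barrier);  /* nobody proceeds until all NUM_THREADS threads have started */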
Second, you should not under any circumstance treat a 'condition variable' as a signal. For main() to know that all threads have finished you need a count -- which could be a pthread_barrier_t; or it could be simple integer, protected by a mutex, with a 'condition variable' to hold the main thread on while it waits; or it could be a count (in main()) and a semaphore (posted once by each thread as it exits).
Third, you show pthread_cond_wait(&cv, &lock); in main(). At that point main() must own lock... and it matters when that happened. But: as it stands, the first thread to find the list empty will kick cv, and main() will proceed, even though other threads are still running. Though once main() does re-acquire lock, any threads which are then still running will either be exiting or will be stuck on the lock. (It's a mess.)
In general, the template for using a 'condition variable' is:
pthread_mutex_lock(&...lock) ;
while (!(... thing we need ...))
pthread_cond_wait(&...cond_var, &...lock) ;
... do stuff now we have what we need ....
pthread_mutex_unlock(&...lock) ;
NB: a 'condition variable' does not have a value... despite the name, it is not a flag to signal that some condition is true. A 'condition variable' is, essentially, a queue of threads waiting to be re-started. When a 'condition variable' is signaled, at least one waiting thread will be re-started -- but if there are no threads waiting, nothing happens, in particular the (so called) 'condition variable' retains no memory of the signal.
In the new code, following the above template, main() should:
/* wait for threads .... */
status = pthread_mutex_lock(&thread_lock);
chcek_status(status);
while (thread_finished_count != 3)
{
status = pthread_cond_wait(&thread_cv, &thread_lock) ;
chcek_status(status);
} ;
status = pthread_mutex_unlock(&thread_lock) ;
chcek_status(status);
So what is going on here ?
main() is waiting for thread_finished_count == 3
thread_finished_count is a shared variable "protected" by the thread_lock mutex.
...so it is incremented in thread_function() under the mutex.
...and main() must also read it under the mutex.
if main() finds thread_finished_count != 3 it must wait.
to do that it does: pthread_cond_wait(&thread_cv, &thread_lock), which:
unlocks thread_lock
places the thread on the thread_cv queue of waiting threads.
and it does those atomically.
when thread_function() does the pthread_cond_signal(&thread_cv) it wakes up the waiting thread.
when the main() thread wakes up, it will first re-acquire the thread_lock...
...so it can then proceed to re-read thread_finished_count, to see if it is now 3.
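For completeness, a sketch of the matching end of thread_function(), with the increment and the signal both done under thread_lock (signalling on every increment is harmless here, because main() re-checks the count in its while loop):
status = pthread_mutex_lock(&thread_lock);
chcek_status(status);
thread_finished_count++;
status = pthread_cond_signal(&thread_cv);
chcek_status(status);
status = pthread_mutex_unlock(&thread_lock);
chcek_status(status);
return NULL;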
FWIW: I recommend not destroying the mutexes etc until after all the threads have been joined.
I have looked deeper into how glibc (v2.30 on Linux & x86_64, at least) implements pthread_mutex_lock() and _unlock().
It turns out that _lock() works something like this:
if (atomic_cmp_xchg(mutex->lock, 0, 1))
return <OK> ; // mutex->lock was 0, is now 1
while (1)
{
if (atomic_xchg(mutex->lock, 2) == 0)
return <OK> ; // mutex->lock was 0, is now 2
...do FUTEX_WAIT(2)... // suspend thread iff mutex->lock == 2...
} ;
And _unlock() works something like this:
if (atomic_xchg(mutex->lock, 0) == 2) // set mutex->lock == 0
...do FUTEX_WAKE(1)... // if there may be waiter(s), wake 1
Now:
mutex->lock: 0 => unlocked, 1 => locked-but-no-waiters, 2 => locked-with-waiter(s)
'locked-but-no-waiters' optimizes for the case where there is no lock contention and there is no need to do FUTEX_WAKE in _unlock().
the _lock()/_unlock() functions are in the library -- they are not in the kernel.
...in particular, the ownership of the mutex is a matter for the library, not the kernel.
FUTEX_WAIT(2) is a call to the kernel, which will place the thread on a pending queue associated with the mutex, unless mutex->lock != 2.
The kernel checks for mutex->lock == 2 and adds the thread to the queue atomically. This deals with the case of _unlock() being called after the atomic_xchg(mutex->lock, 2).
FUTEX_WAKE(1) is also a call to the kernel, and the futex man page tells us:
FUTEX_WAKE (since Linux 2.6.0)
This operation wakes at most 'val' of the waiters that are waiting ... No guarantee is provided about which waiters are awoken (e.g., a waiter with a higher scheduling priority is not guaranteed to be awoken in preference to a waiter with a lower priority).
where 'val' in this case is 1.
Although the documentation says "no guarantee about which waiters are awoken", the queue appears to be at least FIFO.
Note especially that:
_unlock() does not pass the mutex to the thread started by the FUTEX_WAKE.
once woken up, the thread will again try to obtain the lock...
...but may be beaten to it by any other running thread -- including the thread which just did the _unlock().
I believe this is why you have not seen the work being shared across the threads. There is so little work for each one to do, that a thread can unlock the mutex, do the work and be back to lock the mutex again before a thread woken up by the unlock can get going and succeed in locking the mutex.
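Tying this back to suggestion (2) above: a quick experiment is to yield right after dropping the lock, so a thread just woken by the unlock has a chance to grab the mutex before the current thread loops around. sched_yield() is declared in <sched.h>, and this is only a scheduling hint, not a guarantee:
status = pthread_mutex_unlock(&list_lock);
chcek_status(status);
sched_yield();   /* give another runnable thread a chance at list_lock */
currentNode->gcd = gcd(currentNode->num1, currentNode->num2);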
I am creating a program in which I have 3 linked lists, and I am trying to update or remove nodes from these linked lists in three threads. But a deadlock is occurring.
The insertion and deletion work fine. The three variables var1InUse, var2InUse and var3InUse indicate whether the 3 linked lists are in use or not (not all three are used in all threads). I am putting the threads on wait based on var1InUse, var2InUse and var3InUse, as you can see in the code. Sometimes this works fine but sometimes a deadlock happens. I have searched for a solution on the internet but could not find one.
Am I using the wait and signal methods correctly?
pthread variable declarations
pthread_mutex_t myMutex = PTHREAD_MUTEX_INITIALIZER;
pthread_cond_t t1cond = PTHREAD_COND_INITIALIZER;
pthread_cond_t t2cond = PTHREAD_COND_INITIALIZER;
pthread_cond_t t3cond = PTHREAD_COND_INITIALIZER;
int var1InUse=0,var2InUse=0,var3InUse=0;
THREAD 1
void* thread1(void* args){
while(var1InUse || var2InUse ) {
pthread_cond_wait(&t1cond,&myMutex);}
var1InUse=1,var2InUse=1;
while(1){
pthread_mutex_lock(&myMutex);
/*
some other code about adding and removing from
linkedlist
*/
var1InUse=0,var2InUse=0;
pthread_cond_signal(&t2cond);
pthread_cond_signal(&t3cond);
pthread_mutex_unlock(&myMutex);}
}
THREAD 2
void* thread2(void* args){
while(var1InUse || var2InUse || var3InUse) {
pthread_cond_wait(&t2cond,&myMutex);}
var1InUse=1,var2InUse=1,var3InUse=1;
while(1){
pthread_mutex_lock(&myMutex);
/*
some other code adding and removing from linkedlist
*/
var1InUse=0,var2InUse=0,var3InUse=0;
pthread_cond_signal(&t1cond);
pthread_cond_signal(&t3cond);
pthread_mutex_unlock(&myMutex);}
}
THREAD 3
void* thread3(void* args){
while(var1InUse || var3InUse ) {
pthread_cond_wait(&t3cond,&myMutex);}
var1InUse=1,var3InUse=1;
while(1){
pthread_mutex_lock(&myMutex);
/*
some other code adding and removing from linkedlist
*/
var1InUse=0,var3InUse=0;
pthread_cond_signal(&t1cond);
pthread_cond_signal(&t2cond);
pthread_mutex_unlock(&myMutex);}
}
MAIN METHOD
int main(){
pthread_t t1,t2,t3,t4;
pthread_mutex_init(&myMutex,0);
pthread_create(&t1,NULL,thread1,NULL);
pthread_create(&t2,NULL,thread2,NULL);
pthread_create(&t3,NULL,thread3,NULL);
pthread_join(t1,NULL);
pthread_join(t2,NULL);
pthread_join(t3,NULL);
pthread_mutex_destroy(&myMutex);
return 0;
}
I want the deadlock to be removed.
The mutex used by pthread_cond_wait() needs to locked before the function is called. Here is an extract from the man page:
The pthread_cond_timedwait() and pthread_cond_wait() functions shall block on a condition variable. The application shall ensure that these functions are called with mutex locked by the calling thread; otherwise, an error (for PTHREAD_MUTEX_ERRORCHECK and robust mutexes) or undefined behavior (for other mutexes) results.
Although pthread_cond_wait() unlocks the mutex internally, it is locked again before the function returns successfully:
Upon successful return, the mutex shall have been locked and shall be owned by the calling thread.
Additionally, you should access the shared variables var1InUse, var2InUse and var3InUse with the mutex locked.
Here is a modified version of your thread1() that follows these rules. The modifications to the other thread start routines should be similar:
void* thread1(void* args){
pthread_mutex_lock(&myMutex);
while(var1InUse || var2InUse ) {
pthread_cond_wait(&t1cond,&myMutex);
}
var1InUse=1,var2InUse=1;
pthread_mutex_unlock(&myMutex);
while(1){
pthread_mutex_lock(&myMutex);
/*
some other code about adding and removing from linkedlist
*/
var1InUse=0,var2InUse=0;
pthread_cond_signal(&t2cond);
pthread_cond_signal(&t3cond);
pthread_mutex_unlock(&myMutex);
}
return NULL;
}
(I'm not entirely sure that the above is correct, because it's not entirely clear from the original code what the while(1) loop is supposed to do.)
I am working on a pthread problem in C.
The background of the problem: there are 5 threads using the same function, and when the core data in the shared memory hits the upper bound, all five threads should terminate. I use a semaphore to make sure only one of them works on the core data at a time, which means only one of the five will get the end signal, and then it has to tell the rest to terminate. The code I wrote to achieve this is:
#define THREAD_NUM 5
int thread[THREAD_NUM];
sem_t s; /*semaphore to synchronize*/
sem_t empty; /*keep check of the number of empty buffers*/
sem_t full; /*keep check of the number of full buffers*/
void *com_thread(void *prt){ // I pass the id of the thread using prt
while(1){
sem_wait(&full);
sem_wait(&s);
...do something
sem_post(&s);
sem_post(&empty);
}
}
The signal will come while the while loop is running, and I tried to accept the signal at the following position and then terminate all the threads.
To be honest, what I need to do is end all of the threads gracefully, and I need them to return to the main thread for pthread_join() and freeing memory rather than simply exiting the program. That's why I did not use exit() here.
The main idea below is to terminate the other 4 threads when one of them gets the signal, and after that to terminate itself.
However, it does not work as I expected.
#define THREAD_NUM 5
int thread[THREAD_NUM];
sem_t s; /*semaphore to synchronize*/
sem_t empty; /*keep check of the number of empty buffers*/
sem_t full; /*keep check of the number of full buffers*/
void *com_thread(void *prt){ // I pass the id of the thread using prt
while(1){
sem_wait(&full);
sem_wait(&s);
if(signal){
int i;
int id = *((int*) prt);
for (i=0;i<THREAD_NUM;i++){
if(i != id)
pthread_exit(&thread[i]);
}
pthread_exit(&thread[id]);
}
...do something
sem_post(&s);
sem_post(&empty);
}
}
Can anyone help me with that? Or is there a better way to achieve this? Thanks in advance :)
You can use pthread_kill from the thread that wants to terminate all the other threads. The man page is at http://linux.die.net/man/3/pthread_kill. You should choose the signal carefully if you want graceful termination; http://linux.die.net/man/7/signal has more details.
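If you go this route, a rough sketch (my names, not from the question) is to install a handler for the chosen signal and let the interrupted sem_wait() calls give each thread a chance to exit itself:
#include <signal.h>
#include <errno.h>

static volatile sig_atomic_t stop_requested = 0;   /* set by the handler, checked by the threads */

static void stop_handler(int sig){ (void)sig; stop_requested = 1; }

/* in main(), before creating the threads: no SA_RESTART, so a blocked sem_wait() returns EINTR */
struct sigaction sa = {0};
sa.sa_handler = stop_handler;
sigaction(SIGUSR1, &sa, NULL);

/* in the thread that hits the upper bound (tid[] is assumed to hold the pthread_t values) */
for (int i = 0; i < THREAD_NUM; i++)
    if (i != id)
        pthread_kill(tid[i], SIGUSR1);

/* in com_thread(), a wait interrupted by the signal is the cue to leave */
while (sem_wait(&full) == -1 && errno == EINTR)
    if (stop_requested)
        pthread_exit(NULL);     /* returns to main for pthread_join() */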
The easiest solution is probably to have a single global boolean variable, initialized to "false". All the threads check whether this variable is "false" or "true", and if it's "true" they terminate.
When a single thread notices that all threads should be terminated it simply sets this flag to "true", and the other threads will notice that sooner or later.
You can check for this exit-condition in multiple places in the thread function, especially if some thread is waiting for a lock from the currently active thread (the one that sets the exit condition).
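Applied to com_thread() from the question, a rough sketch could look like this; the flag name, the hit_upper_bound() check, and the extra sem_post(&full) that passes the wakeup along to the next waiting thread are all assumptions of mine:
#include <stdbool.h>

bool stop_all = false;   /* only read/written while holding s, so s provides the ordering */

void *com_thread(void *prt){
    while(1){
        sem_wait(&full);
        sem_wait(&s);
        if(stop_all){
            sem_post(&s);
            sem_post(&full);        /* wake the next waiter so it can see the flag too */
            break;                  /* back to main for pthread_join() */
        }
        if(hit_upper_bound()){      /* hypothetical check on the core data */
            stop_all = true;
            sem_post(&s);
            sem_post(&full);
            break;
        }
        /* ...do something... */
        sem_post(&s);
        sem_post(&empty);
    }
    return NULL;
}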