Registering a level-triggered eventfd with epoll_ctl only fires once if the eventfd counter is not decremented. To summarize the problem: I have observed that the epoll flags (EPOLLET, EPOLLONESHOT, or none for level-triggered behaviour) all behave the same. In other words, they have no effect.
Could you confirm this bug?
I have an application with multiple threads. Each thread waits for new events with epoll_wait on the same epoll fd. To terminate the application gracefully, all threads have to be woken up. My idea was to use the eventfd counter (EFD_SEMAPHORE|EFD_NONBLOCK) for this, with level-triggered epoll behaviour, to wake them all up together. (Ignoring the thundering-herd problem, which is negligible for a small number of file descriptors.)
E.g. for 4 threads you write 4 to the eventfd. I was expecting epoll_wait to return immediately, again and again, until the counter had been decremented (read) 4 times. Instead, epoll_wait returns only once per write.
Yep, I read all related manuals carefully ;)
#include <sys/epoll.h>
#include <sys/eventfd.h>
#include <sys/types.h>
#include <stdint.h>
#include <unistd.h>
#include <pthread.h>

static int event_fd = -1;
static int epoll_fd = -1;

void *thread(void *arg)
{
    (void) arg;
    for (;;) {
        struct epoll_event event;
        epoll_wait(epoll_fd, &event, 1, -1);
        /* handle events */
        if (event.data.fd == event_fd && event.events & EPOLLIN) {
            uint64_t val = 0;
            eventfd_read(event_fd, &val);
            break;
        }
    }
    return NULL;
}

int main(void)
{
    epoll_fd = epoll_create1(0);
    event_fd = eventfd(0, EFD_SEMAPHORE | EFD_NONBLOCK);

    struct epoll_event event;
    event.events = EPOLLIN;
    event.data.fd = event_fd;
    epoll_ctl(epoll_fd, EPOLL_CTL_ADD, event_fd, &event);

    enum { THREADS = 4 };
    pthread_t thrd[THREADS];
    for (int i = 0; i < THREADS; i++)
        pthread_create(&thrd[i], NULL, &thread, NULL);

    /* let threads park in epoll_wait (the kernel does a readiness check before sleeping) */
    usleep(100000);

    eventfd_write(event_fd, THREADS);

    for (int i = 0; i < THREADS; i++)
        pthread_join(thrd[i], NULL);
}
When you write to an eventfd, the kernel function eventfd_signal is called. It contains the following line, which does the wake-up:
wake_up_locked_poll(&ctx->wqh, EPOLLIN);
With wake_up_locked_poll being a macro:
#define wake_up_locked_poll(x, m) \
__wake_up_locked_key((x), TASK_NORMAL, poll_to_key(m))
With __wake_up_locked_key being defined as:
void __wake_up_locked_key(struct wait_queue_head *wq_head, unsigned int mode, void *key)
{
__wake_up_common(wq_head, mode, 1, 0, key, NULL);
}
And finally, __wake_up_common is declared as:
/*
* The core wakeup function. Non-exclusive wakeups (nr_exclusive == 0) just
* wake everything up. If it's an exclusive wakeup (nr_exclusive == small +ve
* number) then we wake all the non-exclusive tasks and one exclusive task.
*
* There are circumstances in which we can try to wake a task which has already
* started to run but is not in state TASK_RUNNING. try_to_wake_up() returns
* zero in this (rare) case, and we handle it by continuing to scan the queue.
*/
static int __wake_up_common(struct wait_queue_head *wq_head, unsigned int mode,
int nr_exclusive, int wake_flags, void *key,
wait_queue_entry_t *bookmark)
Note the nr_exclusive argument and you will see that writing to an eventfd wakes only one exclusive waiter.
What does exclusive mean? Reading the epoll_ctl man page gives us some insight:
EPOLLEXCLUSIVE (since Linux 4.5):
Sets an exclusive wakeup mode for the epoll file descriptor that is being attached to the target file descriptor, fd. When a wakeup event occurs and multiple epoll file descriptors are attached to the same target file using EPOLLEXCLUSIVE, one or more of the epoll file descriptors will receive an event with epoll_wait(2).
You do not use EPOLLEXCLUSIVE when adding your event, but in order to wait with epoll_wait, every thread has to put itself on a wait queue. The function do_epoll_wait performs the wait by calling ep_poll. Following the code, you can see that it adds the current thread to a wait queue at line #1903:
__add_wait_queue_exclusive(&ep->wq, &wait);
This explains what is going on: epoll waiters are exclusive, so only a single thread is woken up. This behavior was introduced in v2.6.22-rc1, and the relevant change was discussed here.
To me this looks like a bug in the eventfd_signal function: in semaphore mode it should perform a wake-up with nr_exclusive equal to the value written.
So your options are:
Create a separate epoll descriptor for each thread (might not work with your design - scaling problems)
Put a mutex around it (scaling problems)
Use poll, probably on both eventfd and epoll
Wake each thread separately by writing 1 with eventfd_write 4 times (probably the best you can do; a sketch follows below).
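For that last option, a minimal sketch, reusing event_fd and THREADS from the question's program: write 1 once per thread, so each write wakes exactly one exclusive waiter.
for (int i = 0; i < THREADS; i++)
    eventfd_write(event_fd, 1);   /* each write of 1 wakes one exclusive epoll waiter */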
I'm trying to add a signal handler for proper cleanup to my event-driven application.
My signal handler for SIGINT only changes the value of a global flag variable, which is then checked in the main loop. To avoid races, the signal is blocked at all times, except during the pselect() call. This should cause pending signals to be delivered only during the pselect() call, which should be interrupted and fail with EINTR.
This usually works fine, except if there are already events pending on the monitored file descriptors (e.g. under heavy load, when there's always activity on the file descriptors).
This sample program reproduces the problem:
#include <assert.h>
#include <errno.h>
#include <stdbool.h>
#include <stdio.h>
#include <string.h>
#include <sys/select.h>
#include <fcntl.h>
#include <signal.h>
#include <unistd.h>

volatile sig_atomic_t stop_requested = 0;

void handle_signal(int sig)
{
    // Use write() and strlen() instead of printf(), which is not async-signal-safe
    const char * out = "Caught stop signal. Exiting.\n";
    size_t len = strlen(out);
    ssize_t writelen = write(STDOUT_FILENO, out, len);
    assert(writelen == (ssize_t) len);
    stop_requested = 1;
}

int main(void)
{
    int ret;

    // Install signal handler
    {
        struct sigaction sa;
        memset(&sa, 0, sizeof(sa));
        sa.sa_handler = handle_signal;
        ret = sigaction(SIGINT, &sa, NULL);
        assert(ret == 0);
    }

    // Block SIGINT
    sigset_t old_sigmask;
    {
        sigset_t blocked;
        sigemptyset(&blocked);
        sigaddset(&blocked, SIGINT);
        ret = sigprocmask(SIG_BLOCK, &blocked, &old_sigmask);
        assert(ret == 0);
    }

    ret = raise(SIGINT);
    assert(ret == 0);

    // Create pipe and write data to it
    int pipefd[2];
    ret = pipe(pipefd);
    assert(ret == 0);
    ssize_t writelen = write(pipefd[1], "foo", 3);
    assert(writelen == 3);

    while (stop_requested == 0)
    {
        printf("Calling pselect().\n");
        fd_set fds;
        FD_ZERO(&fds);
        FD_SET(pipefd[0], &fds);
        struct timespec * timeout = NULL;
        int ret = pselect(pipefd[0] + 1, &fds, NULL, NULL, timeout, &old_sigmask);
        assert(ret >= 0 || errno == EINTR);
        printf("pselect() returned %d.\n", ret);
        if (FD_ISSET(pipefd[0], &fds))
            printf("pipe is readable.\n");
        sleep(1);
    }
    printf("Event loop terminated.\n");
}
This program installs a handler for SIGINT, then blocks SIGINT, sends SIGINT to itself (which will not be delivered yet because SIGINT is blocked), creates a pipe and writes some data into the pipe, and then monitors the read end of the pipe for readability.
This readability monitoring is done using pselect(), which is supposed to unblock SIGINT, which should then interrupt the pselect() and call the signal handler.
However, on Linux (I tested on 5.6 and 4.19), the pselect() call returns 1 instead and indicates readability of the pipe, without calling the signal handler. Since this test program does not read the data that was written to the pipe, the file descriptor will never cease to be readable, and the signal handler is never called. In real programs, a similar situation might arise under heavy load, where a lot of data might be available for reading on different file descriptors (e.g. sockets).
On the other hand, on FreeBSD (I tested on 12.1), the signal handler is called, and then pselect() returns -1 and sets errno to EINTR. This is what I expected to happen on Linux as well.
Am I misunderstanding something, or am I using these interfaces incorrectly? Or should I just fall back to the old self-pipe trick, which (I believe) would handle this case better?
This is a type of resource starvation caused by always checking for active resources in the same order. When resources are always checked in the same order, if the resources checked first are busy enough the resources checked later may never get any attention.
See What is starvation?.
The Linux implementation of pselect() apparently checks file descriptors before checking for signals. The BSD implementation does the opposite.
For what it's worth, the POSIX documentation for pselect() states:
If none of the selected descriptors are ready for the requested operation, the pselect() or select() function shall block until at least one of the requested operations becomes ready, until the timeout occurs, or until interrupted by a signal.
A strict reading of that description requires checking the descriptors first. If any descriptor is active, pselect() will return that instead of failing with errno set to EINTR.
In that case, if the descriptors are so busy that one is always active, the signal processing gets starved.
The BSD implementation likely starves active descriptors if signals come in too fast.
One common solution is to always process all active resources every time a select() call or similar returns. But you can't do that with your current design, which mixes signals with descriptors, because pselect() never even gets to check for a pending signal if there are active descriptors. As @Shawn mentioned in the comments, you can map signals to file descriptors using signalfd(), then add the descriptor from signalfd() to the file descriptor set passed to pselect() (a sketch follows below).
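A minimal sketch of that approach, reusing the pipe setup from the question (with signalfd() the signal stays blocked and becomes just another readable descriptor, so plain select() is enough; error checking is omitted):
#include <sys/signalfd.h>
#include <sys/select.h>
#include <signal.h>
#include <unistd.h>
#include <stdio.h>

int main(void)
{
    sigset_t mask;
    sigemptyset(&mask);
    sigaddset(&mask, SIGINT);
    sigprocmask(SIG_BLOCK, &mask, NULL);      /* keep SIGINT blocked; it is delivered via the fd */
    int sfd = signalfd(-1, &mask, 0);

    int pipefd[2];
    pipe(pipefd);
    write(pipefd[1], "foo", 3);               /* pipe is permanently readable, as in the question */

    for (;;) {
        fd_set fds;
        FD_ZERO(&fds);
        FD_SET(pipefd[0], &fds);
        FD_SET(sfd, &fds);
        int nfds = (sfd > pipefd[0] ? sfd : pipefd[0]) + 1;
        if (select(nfds, &fds, NULL, NULL, NULL) <= 0)
            continue;
        if (FD_ISSET(sfd, &fds)) {            /* check the signal fd first, or round-robin */
            struct signalfd_siginfo si;
            read(sfd, &si, sizeof si);        /* consume the pending signal */
            printf("Caught stop signal. Exiting.\n");
            break;
        }
        if (FD_ISSET(pipefd[0], &fds))
            printf("pipe is readable.\n");    /* a real program would drain the pipe here */
        sleep(1);
    }
    return 0;
}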
So, according to the manual, pselect can take a timeout parameter and will wait if no file descriptors change state. It also has an option to be interrupted by a signal:
sigemptyset(&emptyset); /* Signal mask to use during pselect() */
res = pselect(0, NULL, NULL, NULL, NULL, &emptyset);
if (res == -1 && errno == EINTR) printf("Interrupted by signal\n");
It is, however, not obvious from the manual which signals are able to interrupt pselect.
If I have threads (producers and consumers), and each (consumer) thread is using pselect, is there a way to interrupt only one (consumer) thread from another (producer) thread?
I think the issue is analyzed in https://lwn.net/Articles/176911/:
For this reason, the POSIX.1g committee devised an enhanced version of
select(), called pselect(). The major difference between select() and
pselect() is that the latter call has a signal mask (sigset_t) as an
additional argument:
int pselect(int n, fd_set *readfds, fd_set *writefds, fd_set *exceptfds,
const struct timespec *timeout, const sigset_t *sigmask);
pselect uses the sigmask argument to configure which signals can interrupt it (see the sketch after the links below).
The collection of signals that are currently blocked is called the
signal mask. Each process has its own signal mask. When you create a
new process (see Creating a Process), it inherits its parent’s mask.
You can block or unblock signals with total flexibility by modifying
the signal mask.
source : https://www.gnu.org/software/libc/manual/html_node/Process-Signal-Mask.html
https://linux.die.net/man/2/pselect
https://www.linuxprogrammingblog.com/code-examples/using-pselect-to-avoid-a-signal-race
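To make that concrete, here is a small sketch. SIGUSR1 is only an assumed example signal, and nfds/readfds are whatever your loop already uses: the signal is blocked during normal execution, and pselect() is handed the previous mask, in which it is still unblocked, so only that signal can interrupt the call.
#include <sys/select.h>
#include <signal.h>
#include <errno.h>

/* Wait on readfds, allowing only SIGUSR1 (assumed here) to interrupt the call. */
int wait_interruptible(int nfds, fd_set *readfds)
{
    sigset_t blocked, during_pselect;
    sigemptyset(&blocked);
    sigaddset(&blocked, SIGUSR1);
    /* Block SIGUSR1 for normal execution and remember the previous mask,
     * which still has SIGUSR1 unblocked, for use inside pselect(). */
    pthread_sigmask(SIG_BLOCK, &blocked, &during_pselect);

    int res = pselect(nfds, readfds, NULL, NULL, NULL, &during_pselect);
    if (res == -1 && errno == EINTR)
        return -1;   /* interrupted by a signal that is unblocked in during_pselect */
    return res;
}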
Regarding your second question, there are multiple algorithms for process synchronization; see e.g. https://www.geeksforgeeks.org/introduction-of-process-synchronization/ and the links at the bottom of that page, or https://en.wikipedia.org/wiki/Sleeping_barber_problem and associated pages. So basically, signals are only one path for IPC on Linux; cf. IPC using Signals on Linux.
(Ignoring the signal part of the question, and only answering
"If I have threads (producers and consumers), and each (consumer) thread is using pselect, is there a way to interrupt only one (consumer) thread from another (producer) thread?"
since the title does not imply the use of signals.)
The easiest way I know of is for the thread to expose a file descriptor that is always included in the set of descriptors monitored by p/select, so the thread always monitors at least one descriptor. If another thread writes to it, the p/select call will return:
struct thread {
    pthread_t tid;
    int wake;
    ...
};

void *thread_cb(void *t) {
    struct thread *me = t;
    me->wake = eventfd(0, 0);
    ...
    fd_set readfds;
    // Populate readfds;
    FD_SET(me->wake, &readfds);
    select(...);
}

void interrupt_thread(struct thread *t) {
    eventfd_write(t->wake, 1);
}
If no eventfd is available, you can replace it with a classic (and more verbose) pipe, or other similar communication mechanism.
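A pipe-based sketch of the same idea (the field names wake_rd/wake_wr are mine, not from the code above; the pair comes from a single pipe() call):
#include <pthread.h>
#include <unistd.h>

struct thread {
    pthread_t tid;
    int wake_rd;    /* read end: added to the thread's fd set next to its normal descriptors */
    int wake_wr;    /* write end: other threads write here to interrupt the select() */
};

/* in the consumer thread: FD_SET(t->wake_rd, &readfds) before every select()/pselect() */

void interrupt_thread(struct thread *t)
{
    char c = 1;
    write(t->wake_wr, &c, 1);   /* makes wake_rd readable, so the blocked select() returns */
}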
I'm studying Pthread condition variables. While reading the explanation of pthread_cond_signal, I see the following:
The pthread_cond_signal() function shall unblock at least one of the threads that are blocked on the specified condition variable cond (if any threads are blocked on cond).
Until now I thought pthread_cond_signal() would wake up only one thread at a time. But the quoted explanation says at least one. What does that mean? Can it wake up more than one thread? If yes, why is there pthread_cond_broadcast()?
Incidentally, the following code, taken from Robbins' UNIX Systems Programming book, is also related to my question. Is there any reason the author uses pthread_cond_broadcast() instead of pthread_cond_signal() in the waitbarrier function? As a minor point, why is the !berror check needed as part of the predicate? When I try swapping them, I cannot see any difference.
/*
The program implements a thread-safe barrier by using condition variables. The limit
variable specifies how many threads must arrive at the barrier (execute the
waitbarrier) before the threads are released from the barrier.
The count variable specifies how many threads are currently waiting at the barrier.
Both variables are declared with the static attribute to force access through
initbarrier and waitbarrier. If successful, the initbarrier and waitbarrier
functions return 0. If unsuccessful, these functions return a nonzero error code.
*/
#include <errno.h>
#include <pthread.h>
#include <stdio.h>

static pthread_cond_t bcond = PTHREAD_COND_INITIALIZER;
static pthread_mutex_t bmutex = PTHREAD_MUTEX_INITIALIZER;
static int count = 0;
static int limit = 0;

int initbarrier(int n) {                      /* initialize the barrier to be size n */
    int error;
    if (error = pthread_mutex_lock(&bmutex))  /* couldn't lock, give up */
        return error;
    if (limit != 0) {                         /* barrier can only be initialized once */
        pthread_mutex_unlock(&bmutex);
        return EINVAL;
    }
    limit = n;
    return pthread_mutex_unlock(&bmutex);
}

int waitbarrier(void) {                       /* wait at the barrier until all n threads arrive */
    int berror = 0;
    int error;
    if (error = pthread_mutex_lock(&bmutex))  /* couldn't lock, give up */
        return error;
    if (limit <= 0) {                         /* make sure barrier initialized */
        pthread_mutex_unlock(&bmutex);
        return EINVAL;
    }
    count++;
    while ((count < limit) && !berror)
        berror = pthread_cond_wait(&bcond, &bmutex);
    if (!berror) {
        fprintf(stderr, "soner %d\n", (int)pthread_self());
        berror = pthread_cond_broadcast(&bcond);   /* wake up everyone */
    }
    error = pthread_mutex_unlock(&bmutex);
    if (berror)
        return berror;
    return error;
}

/* ARGSUSED */
static void *printthread(void *arg) {
    fprintf(stderr, "This is the first print of thread %d\n", (int)pthread_self());
    waitbarrier();
    fprintf(stderr, "This is the second print of thread %d\n", (int)pthread_self());
    return NULL;
}

int main(void) {
    pthread_t t0, t1, t2;
    if (initbarrier(3)) {
        fprintf(stderr, "Error initializing barrier\n");
        return 1;
    }
    if (pthread_create(&t0, NULL, printthread, NULL))
        fprintf(stderr, "Error creating thread 0.\n");
    if (pthread_create(&t1, NULL, printthread, NULL))
        fprintf(stderr, "Error creating thread 1.\n");
    if (pthread_create(&t2, NULL, printthread, NULL))
        fprintf(stderr, "Error creating thread 2.\n");
    if (pthread_join(t0, NULL))
        fprintf(stderr, "Error joining thread 0.\n");
    if (pthread_join(t1, NULL))
        fprintf(stderr, "Error joining thread 1.\n");
    if (pthread_join(t2, NULL))
        fprintf(stderr, "Error joining thread 2.\n");
    fprintf(stderr, "All threads complete.\n");
    return 0;
}
Due to spurious wake-ups, pthread_cond_signal can effectively wake up more than one thread.
Look for the word "spurious" in pthread_cond_wait.c in glibc.
In waitbarrier, all threads must be woken up once they have all arrived at that point, hence the pthread_cond_broadcast.
Can [pthread_cond_signal()] make more than one thread wake up?
That's not guaranteed. On some operating systems, on some hardware platforms, under some circumstances, it could wake more than one thread. It is allowed to wake more than one thread because that gives the implementer more freedom to make it work in the most efficient way possible for any given hardware and OS.
It must wake at least one waiting thread, because otherwise, what would be the point of calling it?
But, if your application needs a signal that is guaranteed to wake all of the waiting threads, then that is what pthread_cond_broadcast() is for.
Making efficient use of a multi-processor system is hard. https://www.e-reading.club/bookreader.php/134637/Herlihy,Shavit-_The_art_of_multiprocessor_programming.pdf
Most programming language and library standards allow similar freedoms in the behavior of multi-threaded programs, for the same reason: To allow programs to achieve high performance on a variety of different platforms.
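This is also why waiting code should always re-check its predicate in a loop: if more than one thread happens to wake up (or a wakeup is spurious), the extra threads simply go back to sleep. A minimal sketch of the usual pattern (cond, mutex, and predicate_holds stand for your own variables):
pthread_mutex_lock(&mutex);
while (!predicate_holds)                 /* re-check after every wakeup, spurious or not */
    pthread_cond_wait(&cond, &mutex);
/* the predicate is guaranteed to hold here, however many threads were woken */
pthread_mutex_unlock(&mutex);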
The main function is based on libevent, but there is a long-running task in the function, so I start N threads to run the tasks. Is this idea OK? And how do I use libevent and pthreads together in C?
Bumping an old question that may have already been solved, but posting the answer just in case someone else needs it.
Yes, it is okay to do threading in this case. I recently used libevent with pthreads, and it seems to be working just fine. Here's the code:
#include <stdint.h>
#include <pthread.h>
#include <event.h>
#include <event2/thread.h>          // for evthread_use_pthreads(); link with -levent_pthreads

void *thread_func(void *);

// The callback bodies are omitted here.
void callback_func(evutil_socket_t fd, short what, void *arg);
void thread_callback(evutil_socket_t fd, short what, void *arg);

int main(void)
{
    pthread_t tid;
    int32_t ret = -1;
    struct event_base *evbase;
    struct event *timer;
    struct timeval tv;

    // 1. initialize libevent for pthreads
    evthread_use_pthreads();

    ret = pthread_create(&tid, NULL, thread_func, NULL);
    // check ret for error

    // 2. allocate event base
    evbase = event_base_new();

    // 3. allocate event object
    timer = event_new(evbase, -1, EV_PERSIST, callback_func, NULL);

    // 4. add event
    tv.tv_sec = 0;
    tv.tv_usec = 1000;
    evtimer_add(timer, &tv);

    // 5. start the event loop
    event_base_dispatch(evbase);    // event loop

    // join pthread...

    // 6. free resources
    event_free(timer);
    event_base_free(evbase);
    return 0;
}

void *thread_func(void *arg)
{
    struct event *ev;
    struct event_base *base;

    base = event_base_new();
    ev = event_new(base, -1, EV_PERSIST, thread_callback, NULL);
    event_add(ev, NULL);            // wait forever
    event_base_dispatch(base);      // start event loop
    event_free(ev);
    event_base_free(base);
    pthread_exit(0);
}
As you can see, in my case the event for the main thread is a timer. The basic logic is as follows:
call evthread_use_pthreads() to initialize libevent for pthreads on Linux (my case). For Windows, use evthread_use_windows_threads(). Check out the documentation given in event.h itself.
Allocate an event_base structure on the heap, as instructed in the documentation. Make sure to check the return value for errors.
Same as above, but allocate the event structure itself. In my case, I am not waiting on any file descriptor, so -1 is passed as the fd argument. Also, I want my event to persist, hence EV_PERSIST. The code for the callback functions is omitted (a hypothetical sketch follows this list).
Schedule the event for execution
Start the event loop
free the resources when done.
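For completeness, a hypothetical version of the omitted main-thread callback; only the name callback_func comes from the code above, the body is an illustration:
#include <stdio.h>
#include <event2/event.h>

// Fired by the persistent timer on the main thread's event base; fd is -1 because
// the event is not bound to a descriptor.
void callback_func(evutil_socket_t fd, short what, void *arg)
{
    (void)fd; (void)what; (void)arg;
    printf("timer fired on the main event base\n");
}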
The libevent version used in my case is libevent2 5.1.9, and you will also need the libevent_pthreads.so library for linking.
cheers.
That would work.
In the I/O callback function, delegate the time-consuming job to another thread from a thread pool. The exact mechanics depend on the interface of the worker thread or the thread pool.
To communicate the result back from the worker thread to the I/O thread, use a pipe: the worker thread writes a pointer to the result object into the pipe, and the I/O thread wakes up and reads the pointer from the pipe.
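A sketch of that hand-off (struct result, publish_result and result_ready_cb are placeholder names; a single pointer-sized write is far below PIPE_BUF, so it is atomic even with several workers sharing the write end):
#include <unistd.h>
#include <event2/event.h>

struct result;                       /* whatever the worker produces */

/* Worker thread: push a pointer to the finished result through the pipe. */
void publish_result(int pipe_wr, struct result *r)
{
    write(pipe_wr, &r, sizeof r);    /* the pointer value itself travels through the pipe */
}

/* I/O thread: libevent read callback registered on the pipe's read end. */
void result_ready_cb(evutil_socket_t fd, short what, void *arg)
{
    (void)what; (void)arg;
    struct result *r;
    if (read(fd, &r, sizeof r) == (ssize_t)sizeof r) {
        /* consume r here, on the I/O thread */
    }
}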
There is a multithreaded libevent example in this blog post:
http://www.roncemer.com/multi-threaded-libevent-server-example
His solution is, to quote:
The solution is to create one libevent event queue (AKA event_base) per active connection, each with its own event pump thread. This project does exactly that, giving you everything you need to write high-performance, multi-threaded, libevent-based socket servers.
NOTE: This is for libev, not libevent, but the idea may apply.
Here I present an example for the community. Please comment and let me know if there are any noticeable bugs. This example could include a signal handler for thread termination and graceful exit in the future.
//This program is a demo for using pthreads with libev.
//Try using Timeout values as large as 1.0 and as small as 0.000001
//and notice the difference in the output
//(c) 2009 debuguo
//(c) 2013 enthusiasticgeek for stack overflow
//Free to distribute and improve the code. Leave credits intact
//compile using: gcc -g test.c -o test -lpthread -lev
#include <ev.h>
#include <stdio.h>    // for puts
#include <stdlib.h>
#include <pthread.h>

pthread_mutex_t lock;
double timeout = 0.00001;
ev_timer timeout_watcher;
int timeout_count = 0;

ev_async async_watcher;
int async_count = 0;

struct ev_loop* loop2;

void* loop2thread(void* args)
{
    // now wait for events to arrive on the inner loop
    ev_loop(loop2, 0);
    return NULL;
}

static void async_cb (EV_P_ ev_async *w, int revents)
{
    //puts ("async ready");
    pthread_mutex_lock(&lock);      //Don't forget locking
    ++async_count;
    printf("async = %d, timeout = %d \n", async_count, timeout_count);
    pthread_mutex_unlock(&lock);    //Don't forget unlocking
}

static void timeout_cb (EV_P_ ev_timer *w, int revents) // Timer callback function
{
    //puts ("timeout");
    if (!ev_async_pending(&async_watcher)) {  // the event has not yet been processed (or even noted) by the event loop, so signal it
        ev_async_send(loop2, &async_watcher); // Sends/signals/activates the given ev_async watcher, that is, feeds an EV_ASYNC event on the watcher into the event loop.
    }
    pthread_mutex_lock(&lock);      //Don't forget locking
    ++timeout_count;
    pthread_mutex_unlock(&lock);    //Don't forget unlocking
    w->repeat = timeout;
    ev_timer_again(loop, &timeout_watcher); //Start the timer again.
}

int main (int argc, char** argv)
{
    if (argc < 2) {
        puts("Timeout value missing.\n./demo <timeout>");
        return -1;
    }
    timeout = atof(argv[1]);

    struct ev_loop *loop = EV_DEFAULT;  //or ev_default_loop (0);

    //Initialize pthread
    pthread_mutex_init(&lock, NULL);
    pthread_t thread;

    // This loop sits in the pthread
    loop2 = ev_loop_new(0);

    //This block is specifically useful for pre-empting a thread (i.e. temporary interruption and suspension of a task,
    //without asking for its cooperation, with the intention to resume it later). It takes thread safety into account.
    ev_async_init(&async_watcher, async_cb);
    ev_async_start(loop2, &async_watcher);
    pthread_create(&thread, NULL, loop2thread, NULL);

    ev_timer_init (&timeout_watcher, timeout_cb, timeout, 0.); // Non-repeating timer. The timer starts repeating in the timeout callback function
    ev_timer_start (loop, &timeout_watcher);

    // now wait for events to arrive on the main loop
    ev_loop(loop, 0);

    //Wait on threads for execution
    pthread_join(thread, NULL);

    pthread_mutex_destroy(&lock);
    return 0;
}
I am migrating an application from Windows to Linux. I am facing a problem with the WaitForSingleObject and WaitForMultipleObjects interfaces.
In my application I spawn multiple threads, where all the threads wait for events from the parent process or run periodically every t seconds.
I have looked at pthread_cond_timedwait, but it requires an absolute time.
How can I implement this on Unix?
Stick to pthread_cond_timedwait and use clock_gettime. For example:
struct timespec ts;
clock_gettime(CLOCK_REALTIME, &ts);
ts.tv_sec += 10; // ten seconds from now

int ret = 0;
while (!some_condition && ret == 0)
    ret = pthread_cond_timedwait(&cond, &mutex, &ts);
Wrap it in a function if you wish.
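For example, a small wrapper that takes a relative timeout (cond, mutex, and the surrounding predicate loop are the caller's):
#include <pthread.h>
#include <time.h>

/* Wait on cond for at most rel_sec seconds.
 * Returns 0 on wakeup, ETIMEDOUT on timeout, or another errno value on error.
 * The caller must hold mutex and should re-check its predicate afterwards. */
int cond_wait_relative(pthread_cond_t *cond, pthread_mutex_t *mutex, time_t rel_sec)
{
    struct timespec ts;
    clock_gettime(CLOCK_REALTIME, &ts);   /* pthread_cond_timedwait measures against CLOCK_REALTIME by default */
    ts.tv_sec += rel_sec;
    return pthread_cond_timedwait(cond, mutex, &ts);
}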
UPDATE: complementing the answer based on our comments.
POSIX doesn't have a single API to wait for "all types" of events/objects as Windows does; each one has its own functions. The simplest way to notify a thread of termination is to use atomic variables/operations. For example:
Main thread:
// Declare it globally (argh!) or pass by argument when the thread is created
// (C11 atomics from <stdatomic.h>)
atomic_int must_terminate = 0;

// "Signal" termination by changing the initial value
atomic_fetch_add(&must_terminate, 1);

Secondary thread:

// While it holds the default value
while (atomic_load(&must_terminate) == 0) {
    // Keep it running...
}
// Do proper cleanup, if needed
// Call pthread_exit() providing the exit status
Another alternative is to send a cancellation request using pthread_cancel. The thread being cancelled must have called pthread_cleanup_push to register any necessary cleanup handler. These handlers are invoked in the reverse order they were registered. Never call pthread_exit from a cleanup handler, because it's undefined behaviour. The exit status of a cancelled thread is PTHREAD_CANCELED. If you opt for this alternative, I recommend you to read mainly about cancellation points and types.
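A minimal sketch of that alternative (the worker and its heap buffer are only an illustration):
#include <pthread.h>
#include <stdlib.h>
#include <unistd.h>

static void cleanup(void *p)
{
    free(p);                       /* runs if the thread is cancelled inside the push/pop block */
}

static void *worker(void *arg)
{
    (void)arg;
    void *buf = malloc(4096);
    pthread_cleanup_push(cleanup, buf);
    for (;;)
        sleep(1);                  /* sleep() is a cancellation point */
    pthread_cleanup_pop(1);        /* never reached here, but push/pop must pair lexically */
    return NULL;
}

/* elsewhere: pthread_cancel(tid); then pthread_join(tid, &status); status == PTHREAD_CANCELED */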
And last but not least, calling pthread_join will make the current thread block until the thread passed as an argument terminates. As a bonus, you'll get the thread's exit status.
For what it's worth, we (NeoSmart Technologies) have just released an open source (MIT licensed) library called pevents which implements WIN32 manual and auto-reset events on POSIX, and includes both WaitForSingleObject and WaitForMultipleObjects clones.
Although I'd personally advise you to use POSIX multithreading and signaling paradigms when coding on POSIX machines, pevents gives you another choice if you need it.
I realise this is an old question now, but for anyone else who stumbles across it, this source suggests that pthread_join() does effectively the same thing as WaitForSingleObject():
http://www.ibm.com/developerworks/linux/library/l-ipc2lin1/index.html
Good luck!
For WaitForMultipleObjects with WaitAll set to false, try this:
#include <unistd.h>
#include <pthread.h>
#include <stdio.h>

pthread_cond_t condition;      // signaled when a task has finished
pthread_cond_t handled;        // signaled when the main thread has consumed the event
pthread_mutex_t signalMutex;   // protects finishedTask
int finishedTask = -1;

void* task(void *data)
{
    int num = *(int*)data;
    // Do some work
    sleep(9 - num);
    // Task finished: publish the event
    pthread_mutex_lock(&signalMutex);
    while (finishedTask >= 0)                     // wait until the previous event has been handled
        pthread_cond_wait(&handled, &signalMutex);
    finishedTask = num;                           // memorize task number
    pthread_cond_signal(&condition);
    pthread_mutex_unlock(&signalMutex);
    return NULL;
}

int main(int argc, char *argv[])
{
    pthread_t thread[10];
    pthread_cond_init(&condition, NULL);
    pthread_cond_init(&handled, NULL);
    pthread_mutex_init(&signalMutex, NULL);

    int numbers[10];
    for (int i = 0; i < 10; i++) {
        numbers[i] = i;
        printf("created %d\n", i);                // Creating 10 asynchronous tasks
        pthread_create(&thread[i], NULL, task, &numbers[i]);
    }

    pthread_mutex_lock(&signalMutex);
    for (int i = 0; i < 10;) {
        if (finishedTask >= 0) {
            printf("Task %d finished\n", finishedTask);  // handle event
            finishedTask = -1;                    // reset event variable
            i++;
            pthread_cond_signal(&handled);        // let the next finished task publish its result
        } else {
            pthread_cond_wait(&condition, &signalMutex); // waiting for an event
        }
    }
    pthread_mutex_unlock(&signalMutex);

    for (int i = 0; i < 10; i++)
        pthread_join(thread[i], NULL);
    return 0;
}
}