I am facing some trouble dealing with zombie processes. I wrote a simple server which creates tic tac toe matches between players. I am using select() to multiplex between multiple connected clients. Whenever there are two clients, the server will fork another process which execs a match arbiter program.
The problem is that select() blocks: if a match arbiter child process exits while there is no incoming connection activity, the parent never gets a chance to wait() for it, because it is stuck inside select().
I have my code here, apologies since it is quite messy.
while (1) {
    if (terminate)
        terminate_program();

    FD_ZERO(&rset);
    FD_SET(tcp_listenfd, &rset);
    FD_SET(udpfd, &rset);
    maxfd = max(tcp_listenfd, udpfd);

    /* add child connections to set */
    for (i = 0; i < MAXCLIENTS; i++) {
        sd = tcp_confd_lst[i];
        if (sd > 0)
            FD_SET(sd, &rset);
        if (sd > maxfd)
            maxfd = sd;
    }

    /* Here select blocks */
    if ((nready = select(maxfd + 1, &rset, NULL, NULL, NULL)) < 0) {
        if (errno == EINTR)
            continue;
        else
            perror("select error");
    }

    /* Handles incoming TCP connections */
    if (FD_ISSET(tcp_listenfd, &rset)) {
        len = sizeof(cliaddr);
        if ((new_confd = accept(tcp_listenfd, (struct sockaddr *) &cliaddr, &len)) < 0) {
            perror("accept");
            exit(1);
        }
        /* Send connection message asking for handle */
        writen(new_confd, handle_msg, strlen(handle_msg));
        /* adds new_confd to array of connected fd's */
        for (i = 0; i < MAXCLIENTS; i++) {
            if (tcp_confd_lst[i] == 0) {
                tcp_confd_lst[i] = new_confd;
                break;
            }
        }
    }

    /* Handles incoming UDP connections */
    if (FD_ISSET(udpfd, &rset)) {
    }

    /* Handles receiving client handles */
    /* If client disconnects without entering their handle, their values in the
       arrays will be set to 0 and can be reused. */
    for (i = 0; i < MAXCLIENTS; i++) {
        sd = tcp_confd_lst[i];
        if (FD_ISSET(sd, &rset)) {
            if ((valread = read(sd, confd_handle, MAXHANDLESZ)) == 0) {
                printf("Someone disconnected: %s\n", usr_handles[i]);
                close(sd);
                tcp_confd_lst[i] = 0;
                usr_in_game[i] = 0;
            } else {
                confd_handle[valread] = '\0';
                printf("%s\n", confd_handle); /* For testing */
                fflush(stdout);
                strncpy(usr_handles[i], confd_handle, sizeof(usr_handles[i]));
                for (j = i - 1; j >= 0; j--) {
                    if (tcp_confd_lst[j] != 0 && usr_in_game[j] == 0) {
                        usr_in_game[i] = 1; usr_in_game[j] = 1;
                        if ((child_pid = fork()) == 0) {
                            close(tcp_listenfd);
                            snprintf(fd_args[0], sizeof(fd_args[0]), "%d", tcp_confd_lst[i]);
                            snprintf(fd_args[1], sizeof(fd_args[1]), "%d", tcp_confd_lst[j]);
                            execl("nim_match_server", "nim_match_server", usr_handles[i], fd_args[0], usr_handles[j], fd_args[1], (char *) 0);
                        }
                        close(tcp_confd_lst[i]); close(tcp_confd_lst[j]);
                        tcp_confd_lst[i] = 0; tcp_confd_lst[j] = 0;
                        usr_in_game[i] = 0; usr_in_game[j] = 0;
                    }
                }
            }
        }
    }
}
Is there a way to let wait() run even while select() is blocking? Preferably without signal handlers, since they run asynchronously.
EDIT: Actually, I found out that select() takes a struct timeval argument in which a timeout can be specified. Would using that be a good idea?
I think your options are:
Save all your child PIDs in a global array and call wait() from a signal handler. If you don't need the exit status of your children in your main loop, I think this is the easiest.
Instead of select, use pselect -- it will return upon receiving a specified (set of) signal(s), in your case SIGCHLD. Then call waitpid() with WNOHANG on all child PIDs (see the sketch just after this list). You will need to block/unblock SIGCHLD at the right moments before/after pselect(), see here: http://pubs.opengroup.org/onlinepubs/9699919799/functions/pselect.html
Wait on/cleanup child PIDs from a secondary thread. I think this is the most complicated solution (re. synchronization between threads), but since you asked, it's technically possible.
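For the pselect() option, here is a minimal, self-contained sketch of the pattern; this is my own demo, not the asker's server: stdin stands in for the listening/connection sockets, and a short-lived fork() stands in for the match arbiter.

#include <errno.h>
#include <signal.h>
#include <stdio.h>
#include <sys/select.h>
#include <sys/wait.h>
#include <unistd.h>

static void chld_handler(int sig) { (void)sig; /* exists only so SIGCHLD interrupts pselect */ }

int main(void)
{
    sigset_t block_chld, orig_mask;
    struct sigaction sa = {0};

    sa.sa_handler = chld_handler;
    sigemptyset(&sa.sa_mask);
    sigaction(SIGCHLD, &sa, NULL);

    sigemptyset(&block_chld);
    sigaddset(&block_chld, SIGCHLD);
    sigprocmask(SIG_BLOCK, &block_chld, &orig_mask);  /* SIGCHLD held outside pselect */

    if (fork() == 0) {          /* child: stand-in for a match arbiter */
        sleep(2);
        _exit(0);
    }

    for (;;) {
        fd_set rset;
        FD_ZERO(&rset);
        FD_SET(STDIN_FILENO, &rset);

        /* SIGCHLD is atomically unblocked only while pselect() sleeps */
        int nready = pselect(STDIN_FILENO + 1, &rset, NULL, NULL, NULL, &orig_mask);
        if (nready < 0 && errno == EINTR) {
            while (waitpid(-1, NULL, WNOHANG) > 0)
                printf("reaped a child\n");
            continue;
        }
        if (nready > 0 && FD_ISSET(STDIN_FILENO, &rset)) {
            char buf[128];
            if (read(STDIN_FILENO, buf, sizeof(buf)) <= 0)
                break;          /* EOF on stdin: end the demo */
        }
    }
    return 0;
}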
If you just want to prevent zombie processes, you could set up a SIGCHLD signal handler. If you want to actually wait for the return status, you could write bytes into a pipe (non-blocking, just in case) from the signal handler and then read those bytes in the select loop.
For how to handle SIGCHLD, see http://www.microhowto.info/howto/reap_zombie_processes_using_a_sigchld_handler.html -- you want to do something like while (waitpid((pid_t)(-1), 0, WNOHANG) > 0) {}
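A minimal sketch of such a reaping handler, along the lines of that page (SA_RESTART so the handler does not disturb other blocking calls; errno is saved because waitpid() may clobber it):

#include <errno.h>
#include <signal.h>
#include <sys/wait.h>

static void sigchld_reaper(int sig)
{
    (void)sig;
    int saved_errno = errno;                  /* waitpid() may overwrite errno */
    while (waitpid((pid_t)(-1), 0, WNOHANG) > 0)
        ;                                     /* reap every child that has exited */
    errno = saved_errno;
}

/* installation, e.g. early in main(): */
void install_sigchld_reaper(void)
{
    struct sigaction sa = {0};
    sa.sa_handler = sigchld_reaper;
    sigemptyset(&sa.sa_mask);
    sa.sa_flags = SA_RESTART | SA_NOCLDSTOP;
    sigaction(SIGCHLD, &sa, NULL);
}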
Perhaps the best approach is sending a single byte from the SIGCHLD signal handler to the main select loop (non-blocking, just in case) and doing the waitpid loop in the select loop when bytes can be read from the pipe.
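A minimal sketch of that self-pipe idea (the names, such as sigchld_pipe, are my own; the handler writes to the write end, and the read end is just another descriptor in the select() set):

#include <fcntl.h>
#include <signal.h>
#include <sys/wait.h>
#include <unistd.h>

static int sigchld_pipe[2];        /* [0] = read end, [1] = write end */

static void on_sigchld(int sig)
{
    (void)sig;
    char c = 0;
    write(sigchld_pipe[1], &c, 1); /* write() is async-signal-safe; result ignored on purpose */
}

/* one-time setup before the main loop */
void setup_sigchld_pipe(void)
{
    struct sigaction sa = {0};

    pipe(sigchld_pipe);
    fcntl(sigchld_pipe[0], F_SETFL, O_NONBLOCK);   /* never block while draining */
    fcntl(sigchld_pipe[1], F_SETFL, O_NONBLOCK);   /* never block inside the handler */

    sa.sa_handler = on_sigchld;
    sigemptyset(&sa.sa_mask);
    sa.sa_flags = SA_RESTART;
    sigaction(SIGCHLD, &sa, NULL);
}

/* in the select() loop: FD_SET(sigchld_pipe[0], &rset) like any other fd,
 * and when it is readable call this */
void reap_children(void)
{
    char buf[64];
    while (read(sigchld_pipe[0], buf, sizeof(buf)) > 0)
        ;                                          /* discard the wake-up bytes */
    while (waitpid(-1, NULL, WNOHANG) > 0)
        ;                                          /* reap everything that exited */
}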
You could also use a signalfd file descriptor to read the SIGCHLD signal, although that works only on Linux.
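A sketch of the Linux-only signalfd() variant (the descriptor is selected on like any other; SIGCHLD must be blocked so it is not delivered the normal way):

#include <signal.h>
#include <sys/signalfd.h>
#include <sys/wait.h>
#include <unistd.h>

int make_sigchld_fd(void)
{
    sigset_t mask;
    sigemptyset(&mask);
    sigaddset(&mask, SIGCHLD);
    sigprocmask(SIG_BLOCK, &mask, NULL);        /* prevent normal delivery */
    return signalfd(-1, &mask, SFD_NONBLOCK);   /* becomes readable on SIGCHLD */
}

/* in the select() loop, when FD_ISSET(sfd, &rset): */
void on_sigchld_fd(int sfd)
{
    struct signalfd_siginfo si;
    while (read(sfd, &si, sizeof(si)) == sizeof(si))
        ;                                       /* drain pending notifications */
    while (waitpid(-1, NULL, WNOHANG) > 0)
        ;                                       /* reap all exited children */
}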
Related
I have a small problem: in practice I have to let two clients (which perform different functions) communicate with my concurrent server.
I discovered that I can solve this using select(), but when I try to implement it in the code I get a segmentation fault. Could someone kindly help me?
I should say that before, with a single client, everything worked like a charm; now, unfortunately, in adding select() I have broken things a bit, and I need to fix it.
Is it possible to make a concurrent server with select()? Can you tell me where I'm going wrong in this code?
int main(int argc, char *argv[])
{
    int list_fd, conn_fd;
    int i, j;
    struct sockaddr_in serv_add, client;
    char buffer[1024];
    socklen_t len;
    time_t timeval;
    char fd_open[FD_SETSIZE];
    pid_t pid;
    int logging = 1;
    char swi;
    fd_set fset;
    int max_fd = 0;
    int waiting = 0;
    int compat = 0;
    sqlite3 *db;

    sqlite3_open("Prova.db", &db);
    start2();
    start3();
    printf("ServerREP Avviato \n");

    if ((list_fd = socket(AF_INET, SOCK_STREAM, 0)) < 0) {
        perror("socket");
        exit(1);
    }

    if (setsockopt(list_fd, SOL_SOCKET, SO_REUSEADDR, &(int){ 1 }, sizeof(int)) < 0)
        perror("setsockopt(SO_REUSEADDR) failed");

    memset((void *)&serv_add, 0, sizeof(serv_add)); /* clear server address */
    serv_add.sin_family = AF_INET;
    serv_add.sin_port = htons(SERVERS_PORT2);
    serv_add.sin_addr.s_addr = inet_addr(SERVERS_IP2);

    if (bind(list_fd, (struct sockaddr *) &serv_add, sizeof(serv_add)) < 0) {
        perror("bind");
        exit(1);
    }

    if (listen(list_fd, 1024) < 0) {
        perror("listen");
        exit(1);
    }

    /* initialize all needed variables */
    memset(fd_open, 0, FD_SETSIZE); /* clear array of open files */
    max_fd = list_fd;               /* maximum now is listening socket */
    fd_open[max_fd] = 1;
    //max_fd = max(conn_fd, sockMED);

    while (1) {
        FD_ZERO(&fset);
        FD_SET(conn_fd, &fset);
        FD_SET(sockMED, &fset);
        len = sizeof(client);

        if (select(max_fd + 1, &fset, NULL, NULL, NULL) < 0) { exit(1); }

        if (FD_ISSET(conn_fd, &fset))
        {
            if ((conn_fd = accept(list_fd, (struct sockaddr *)&client, &len)) < 0)
                perror("accept error");
                exit(-1);
        }

        /* fork to handle connection */
        if ((pid = fork()) < 0) {
            perror("fork error");
            exit(-1);
        }

        if (pid == 0) {  /* child */
            close(list_fd);
            close(sockMED);
            Menu_2(db, conn_fd);
            close(conn_fd);
            exit(0);
        } else {         /* parent */
            close(conn_fd);
        }

        if (FD_ISSET(sockMED, &fset))
            MenuMED(db, sockMED);

        FD_CLR(conn_fd, &fset);
        FD_CLR(sockMED, &fset);
    }

    sqlite3_close(db);
    exit(0);
}
I cannot understand how you are trying to use select() here, or why you want to both fork a child to handle each accepted connection socket and use select().
Common designs are:
multi processing server:
The parent process sets up the listening socket and loops waiting for actual connections with accept(). It then forks a child to process the newly accepted connection and simply waits for the next one.
multi threaded server:
A variant of previous one. The master thread starts a new thread to process the newly accepted connection instead of forking a new process.
asynchronous server:
The server sets up an fd_set to know which sockets require processing. Initially, only the listening socket is set. Then the main loop is (in pseudocode):
loop on select
    if the listening socket is present in the read-ready set, accept the pending connection and add it to the `fd_set`, then return to the loop
    if another socket is present in the read-ready set
        read from it
        if it was a zero read (closed by peer), close the socket and remove it from the `fd_set`
        else process the request and return to the loop
The hard part here is that if processing takes a long time, the whole process is blocked, and if processing involves sending a lot of data, you will have to use select() for the sending part too. A bare-bones sketch of such a loop follows.
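A minimal, self-contained sketch of the asynchronous design above (single process, no fork; port 5000 and the echo-style "processing" are placeholders of mine, and error handling is trimmed):

#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/select.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int list_fd = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_port = htons(5000);               /* placeholder port */
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    bind(list_fd, (struct sockaddr *)&addr, sizeof(addr));
    listen(list_fd, 16);

    fd_set master, rset;
    FD_ZERO(&master);
    FD_SET(list_fd, &master);
    int max_fd = list_fd;

    for (;;) {
        rset = master;                         /* select() modifies its argument */
        if (select(max_fd + 1, &rset, NULL, NULL, NULL) < 0)
            continue;

        for (int fd = 0; fd <= max_fd; fd++) {
            if (!FD_ISSET(fd, &rset))
                continue;
            if (fd == list_fd) {               /* new connection */
                int conn = accept(list_fd, NULL, NULL);
                if (conn >= 0) {
                    FD_SET(conn, &master);
                    if (conn > max_fd) max_fd = conn;
                }
            } else {                           /* data or EOF on a client */
                char buf[1024];
                ssize_t n = read(fd, buf, sizeof(buf));
                if (n <= 0) {                  /* closed by peer (or error) */
                    close(fd);
                    FD_CLR(fd, &master);
                } else {
                    write(fd, buf, (size_t)n); /* "process" the request: echo it back */
                }
            }
        }
    }
}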
I'm initializing a daemon in C on Debian:
/**
 * Initializes the daemon so that mcu.serial would listen in the background
 */
void init_daemon()
{
    pid_t process_id = 0;
    pid_t sid = 0;

    // Create child process
    process_id = fork();

    // Indication of fork() failure
    if (process_id < 0) {
        printf("Fork failed!\n");
        logger("Fork failed", LOG_LEVEL_ERROR);
        exit(1);
    }

    // PARENT PROCESS. Need to kill it.
    if (process_id > 0) {
        printf("process_id of child process %i\n", process_id);
        exit(0);
    }

    // unmask the file mode
    umask(0);

    // set new session
    sid = setsid();
    if (sid < 0) {
        printf("could not set new session");
        logger("could not set new session", LOG_LEVEL_ERROR);
        exit(1);
    }

    // Close stdin, stdout and stderr
    close(STDIN_FILENO);
    close(STDOUT_FILENO);
    close(STDERR_FILENO);
}
The main daemon runs in the background and monitors a serial port to communicate with a microcontroller - it reads peripherals (such as button presses) and passes information to it. The main functional loop is
int main(int argc, char *argv[])
{
    // We need the port to listen to commands writing
    if (argc < 2) {
        fprintf(stderr, "ERROR, no port provided\n");
        logger("ERROR, no port provided", LOG_LEVEL_ERROR);
        exit(1);
    }
    int portno = atoi(argv[1]);

    // Initialize serial port
    init_serial();

    // Initialize server for listening to socket
    init_server(portno);

    // Initialize daemon and run the process in the background
    init_daemon();

    // Timeout for reading socket
    fd_set setSerial, setSocket;
    struct timeval timeout;
    timeout.tv_sec = 0;
    timeout.tv_usec = 10000;

    char bufferWrite[BUFFER_WRITE_SIZE];
    char bufferRead[BUFFER_READ_SIZE];
    int n;
    int sleep;
    int newsockfd;

    while (1)
    {
        // Reset parameters
        bzero(bufferWrite, BUFFER_WRITE_SIZE);
        bzero(bufferRead, BUFFER_WRITE_SIZE);
        FD_ZERO(&setSerial);
        FD_SET(fserial, &setSerial);
        FD_ZERO(&setSocket);
        FD_SET(sockfd, &setSocket);

        // Start listening to socket for commands
        listen(sockfd, 5);
        clilen = sizeof(cli_addr);

        // Wait for command but timeout
        n = select(sockfd + 1, &setSocket, NULL, NULL, &timeout);
        if (n == -1) {
            // Error. Handled below
        }
        // This is for READING button
        else if (n == 0) {
            // This timeout is okay
            // This allows us to read the button press as well

            // Now read the response, but timeout if nothing returned
            n = select(fserial + 1, &setSerial, NULL, NULL, &timeout);
            if (n == -1) {
                // Error. Handled below
            } else if (n == 0) {
                // timeout
                // This is an okay timeout; i.e. nothing has happened
            } else {
                n = read(fserial, bufferRead, sizeof bufferRead);
                if (n > 0) {
                    logger(bufferRead, LOG_LEVEL_INFO);
                    if (strcmp(stripNewLine(bufferRead), "ev b2") == 0) {
                        //logger("Shutting down now", LOG_LEVEL_INFO);
                        system("shutdown -h now");
                    }
                } else {
                    logger("Could not read button press", LOG_LEVEL_WARN);
                }
            }
        }
        // This is for WRITING COMMANDS
        else {
            // Now read the command
            newsockfd = accept(sockfd, (struct sockaddr *) &cli_addr, &clilen);
            if (newsockfd < 0 || n < 0) logger("Could not accept socket port", LOG_LEVEL_ERROR);

            // Now read the command
            n = read(newsockfd, bufferWrite, BUFFER_WRITE_SIZE);
            if (n < 0) {
                logger("Could not read command from socket port", LOG_LEVEL_ERROR);
            } else {
                //logger(bufferWrite, LOG_LEVEL_INFO);
            }

            // Write the command to the serial
            write(fserial, bufferWrite, strlen(bufferWrite));
            sleep = 200 * strlen(bufferWrite) - timeout.tv_usec; // Sleep 200uS/byte
            if (sleep > 0) usleep(sleep);

            // Now read the response, but timeout if nothing returned
            n = select(fserial + 1, &setSerial, NULL, NULL, &timeout);
            if (n == -1) {
                // Error. Handled below
            } else if (n == 0) {
                // timeout
                sprintf(bufferRead, "err\r\n");
                logger("Did not receive response from MCU", LOG_LEVEL_WARN);
            } else {
                n = read(fserial, bufferRead, sizeof bufferRead);
            }

            // Error reading from the socket
            if (n < 0) {
                logger("Could not read response from serial port", LOG_LEVEL_ERROR);
            } else {
                //logger(bufferRead, LOG_LEVEL_INFO);
            }

            // Send MCU response to client
            n = write(newsockfd, bufferRead, strlen(bufferRead));
            if (n < 0) logger("Could not write confirmation to socket port", LOG_LEVEL_ERROR);
        }
        close(newsockfd);
    }
    close(sockfd);
    return 0;
}
But the CPU usage is always at 100%. Why is that? What can I do?
EDIT
I commented out the entire while loop and made the main function as simple as:
int main(int argc, char *argv[])
{
    init_daemon();
    while (1) {
        // All commented out
    }
    return 0;
}
And I'm still getting 100% CPU usage.
You need to set timeout to the desired value on every iteration: the struct gets modified on Linux, so your loop is probably not pausing except for the first time, i.e. select() only blocks on the very first call.
Try printing tv_sec and tv_usec after select() and you will see: they are modified to reflect how much time was left when select() returned.
Move this part
timeout.tv_sec = 0;
timeout.tv_usec = 10000;
inside the loop, before the select() call, and it should work as you expect. You can move many declarations inside the loop too; that would make your code easier to maintain. You could, for example, move the loop content to a function in the future, and that might help.
This is from the Linux manual page select(2):
On Linux, select() modifies timeout to reflect the amount of time not slept; most other implementations do not do this. (POSIX.1-2001 permits either behavior.) This causes problems both when Linux code which reads timeout is ported to other operating systems, and when code is ported to Linux that reuses a struct timeval for multiple select()s in a loop without reinitializing it. Consider timeout to be undefined after select() returns.
I think the bold part of the quote is the important one.
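Concretely, a minimal sketch of the fix, reusing the question's sockfd and nothing else (everything besides the timeout handling is omitted):

while (1) {
    struct timeval timeout;
    timeout.tv_sec = 0;          /* reinitialize on every iteration: */
    timeout.tv_usec = 10000;     /* Linux overwrites the struct when select() returns */

    fd_set setSocket;
    FD_ZERO(&setSocket);
    FD_SET(sockfd, &setSocket);

    int n = select(sockfd + 1, &setSocket, NULL, NULL, &timeout);
    if (n == 0) {
        /* a genuine 10 ms timeout: go poll the serial port / buttons */
    } else if (n > 0) {
        /* sockfd is readable: accept() and handle the command */
    }
}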
I just wondered how instant messengers and online games can accept and deliver messages so fast. (Network programming with sockets.)
I read that this is done with nonblocking sockets.
I tried blocking sockets with pthreads (each client gets its own thread) and nonblocking sockets with kqueue. Then I profiled both servers with a program which makes 99 connections (each connection in its own thread) and then writes some garbage to them (with a sleep of 1 second). Once all threads are set up, I measured in the main thread how long it took to get a connection from the server (wall-clock time) while the "99 users" were writing to it.
threads (avg): 0.000350 // only small difference to kqueue
kqueue (avg): 0.000300 // and this is not even stable (client side)
The problem is that while testing with kqueue I got SIGPIPE errors multiple times (client-side); with a little usleep(50) delay this error went away. I think this is really bad, because a server should be capable of handling thousands of connections. (Or is it my fault on the client side?) The crazy thing about this is that the infamous pthread approach did just fine, with and without the delay.
So my question is: how can you build a stable socket server in C which can handle thousands of clients "asynchronously"? Only the thread approach looks good to me, but this is considered bad practice.
Greetings
EDIT:
My test code:
double get_wall_time() {
    struct timeval time;
    if (gettimeofday(&time, NULL)) {
        // Handle error
        return 0;
    }
    return (double)time.tv_sec + (double)time.tv_usec * .000001;
}
#define NTHREADS 100

volatile unsigned n_threads = 0;
volatile unsigned n_writes = 0;

pthread_mutex_t main_ready;
pthread_mutex_t stop_mtx;
volatile bool running = true;

void stop(void)
{
    pthread_mutex_lock(&stop_mtx);
    running = false;
    pthread_mutex_unlock(&stop_mtx);
}

bool shouldRun(void)
{
    bool copy;
    pthread_mutex_lock(&stop_mtx);
    copy = running;
    pthread_mutex_unlock(&stop_mtx);
    return copy;
}

#define TARGET_HOST "localhost"
#define TARGET_PORT "1336"

void *thread(void *args)
{
    char tmp = 0x01;

    if (__sync_add_and_fetch(&n_threads, 1) == NTHREADS) {
        pthread_mutex_unlock(&main_ready);
        fprintf(stderr, "All %u Threads are ready...\n", (unsigned)n_threads);
    }

    int fd = socket(res->ai_family, SOCK_STREAM, res->ai_protocol);
    if (connect(fd, res->ai_addr, res->ai_addrlen) != 0) {
        socket_close(fd);
        fd = -1;
    }
    if (fd <= 0) {
        fprintf(stderr, "socket_create failed\n");
    }

    if (write(fd, &tmp, 1) <= 0) {
        fprintf(stderr, "pre-write failed\n");
    }

    do {
        /* Write some garbage */
        if (write(fd, &tmp, 1) <= 0) {
            fprintf(stderr, "in-write failed\n");
            break;
        }
        __sync_add_and_fetch(&n_writes, 1);
        /* Wait some time */
        usleep(500);
    } while (shouldRun());

    socket_close(fd);
    return NULL;
}

int main(int argc, const char * argv[])
{
    pthread_t threads[NTHREADS];

    pthread_mutex_init(&main_ready, NULL);
    pthread_mutex_lock(&main_ready);
    pthread_mutex_init(&stop_mtx, NULL);

    bzero((char *)&hint, sizeof(hint));
    hint.ai_socktype = SOCK_STREAM;
    hint.ai_family = AF_INET;

    if (getaddrinfo(TARGET_HOST, TARGET_PORT, &hint, &res) != 0) {
        return -1;
    }

    for (int i = 0; i < NTHREADS; ++i) {
        pthread_create(&threads[i], NULL, thread, NULL);
    }

    /* wait for all threads to be set up */
    pthread_mutex_lock(&main_ready);
    fprintf(stderr, "Main thread is ready...\n");

    {
        double start, end;
        int fd;

        start = get_wall_time();
        fd = socket(res->ai_family, SOCK_STREAM, res->ai_protocol);
        if (connect(fd, res->ai_addr, res->ai_addrlen) != 0) {
            socket_close(fd);
            fd = -1;
        }
        end = get_wall_time();

        if (fd > 0) {
            fprintf(stderr, "Took %f ms\n", (end - start) * 1000);
            socket_close(fd);
        }
    }

    /* Stop all running threads */
    stop();

    /* Waiting for termination */
    for (int i = 0; i < NTHREADS; ++i) {
        pthread_join(threads[i], NULL);
    }

    fprintf(stderr, "Performed %u successfull writes\n", (unsigned)n_writes);

    /* Lol.. */
    freeaddrinfo(res);
    return 0;
}
SIGPIPE comes when I try to connect to the kqueue server (after 10 connections are made, the server is "stuck"?). And when too many users are writing stuff, the server cannot open a new connection. (kqueue server code from http://eradman.com/posts/kqueue-tcp.html)
SIGPIPE means you're trying to write to a socket (or pipe) whose other end has already been closed (so no one will be able to read it). If you don't care about that, you can ignore SIGPIPE signals (call signal(SIGPIPE, SIG_IGN)) and the signals won't be a problem. Of course the write (or send) calls on the sockets will still fail (with EPIPE), so you need to make your code robust enough to deal with that.
The reason SIGPIPE normally kills the process is that it's too easy to write programs that ignore errors on write/send calls and otherwise run amok using up 100% of the CPU time. As long as you always carefully check for errors and deal with them, you can safely ignore SIGPIPE.
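A minimal sketch of that approach (the helper names are mine; partial writes are not handled):

#include <errno.h>
#include <signal.h>
#include <stdio.h>
#include <unistd.h>

/* call once at program start */
void disable_sigpipe(void)
{
    signal(SIGPIPE, SIG_IGN);   /* writes to dead sockets now fail with EPIPE instead */
}

/* returns 0 on success, -1 if the peer is gone or another error occurred */
int robust_write(int fd, const void *buf, size_t len)
{
    ssize_t n = write(fd, buf, len);
    if (n < 0) {
        if (errno == EPIPE)
            fprintf(stderr, "peer closed connection on fd %d\n", fd);
        else
            perror("write");
        return -1;
    }
    return 0;
}

Where available, send() with the MSG_NOSIGNAL flag (Linux) or the SO_NOSIGPIPE socket option (BSD/macOS, where kqueue lives) gives the same effect per call or per socket without changing the process-wide signal disposition.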
Or is it my fault?
It was your fault. TCP works. Most probably you didn't read all the data that was sent.
And when too many users are writing stuff, the server cannot open a new connection
Servers don't open connections. Clients open connections. Servers accept connections. If your server stops doing that, there is something wrong with your accept loop. It should only do two things: accept a connection, and start a thread.
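A minimal sketch of such a loop (listen_fd is assumed to be a bound, listening socket; handle_client is a placeholder worker):

#include <pthread.h>
#include <stdlib.h>
#include <sys/socket.h>
#include <unistd.h>

void *handle_client(void *arg)           /* placeholder worker */
{
    int fd = *(int *)arg;
    free(arg);
    /* read requests / write replies until the client closes */
    close(fd);
    return NULL;
}

void accept_loop(int listen_fd)
{
    for (;;) {
        int conn = accept(listen_fd, NULL, NULL);
        if (conn < 0)
            continue;                    /* transient error: keep accepting */

        int *arg = malloc(sizeof(*arg));
        *arg = conn;

        pthread_t tid;
        if (pthread_create(&tid, NULL, handle_client, arg) != 0) {
            free(arg);
            close(conn);
        } else {
            pthread_detach(tid);         /* no join needed */
        }
    }
}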
Unix/C question here.
I have multiple sockets that I am trying to poll for periodic data. I don't want select() to wait indefinitely, so I have a timeout in place and I'm running in a loop. I have found that once a socket is ready to read, it is always reported as ready to read: I cannot get select() to go back to sleep when there is no data to be read on any of the sockets.
for (i = 0; i < n_connections; i++) {
    FD_SET(sockfd[i], &master);
    if (sockfd[i] > fdmax)
        fdmax = sockfd[i];
}

for (;;) {
    int nready = 0;

    timeout.tv_sec = 1;
    timeout.tv_usec = 0;

    read_fds = master;
    if ((nready = select(fdmax + 1, &read_fds, NULL, NULL, NULL)) == -1) {
        fprintf(stderr, "Select Error\n");
        return FAILURE;
    }
    printf("Number of ready descriptors: %d\n", nready);

    for (i = 0; i <= fdmax; i++) {
        if (FD_ISSET(i, &read_fds)) {
            if ((nbytes = recv(i, buf, sizeof(buf), 0)) <= 0) {
                if (nbytes == 0) {
                    // connection closed
                    printf("Socket %d hung up\n", i);
                }
                else {
                    fprintf(stderr, "Recv Error %d\n", nbytes);
                }
            }
            else {
                printf("Data Received on %d: %s\n", i, buf);
            }
        }
    } // end file descriptor loop
It seems that after my first read, the 1 second timeout no longer applies and the sockets are always "ready to read", even if there are 0 bytes available. How can I get select() to sleep until data comes in (either for the one second, or, by switching the final argument to NULL, waiting indefinitely for data to arrive on the socket)?
Output:
Number of Ready Descriptors: 2
Data Received on 4: GreetingsChap
Data Received on 5: HiMatengsChap
Loop...
Number of Ready Descriptors: 2
Socket 4 hung up
Socket 5 hung up
Loop...
Number of Ready Descriptors: 2
Socket 4 hung up
Socket 5 hung up
Loop...
Thank you,
Note: Code updated for clarity
Updated based on #yvesBraumes suggestions - still doesn't work.
If you detect that a connection is closed, remove the socket from the fd set; otherwise select() is going to keep reporting it ("Socket 4 hung up"). select() is not edge-triggered: if you don't handle the event, it is going to report it again.
Indeed, if recv() returns 0 (and not -1 with errno == EWOULDBLOCK), the socket is closed. You should call close() on it as well, and take it out of the select() call. Otherwise it will remain in CLOSE_WAIT and wake up select() every time.
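A minimal sketch combining both pieces of advice (master is the persistent fd_set from the question; the buffer size is arbitrary, and the key point is doing both close() and FD_CLR()):

#include <stdio.h>
#include <sys/select.h>
#include <sys/socket.h>
#include <unistd.h>

/* called when FD_ISSET(fd, &read_fds) is true; master is the persistent set */
void service_socket(int fd, fd_set *master)
{
    char buf[256];
    ssize_t n = recv(fd, buf, sizeof(buf), 0);
    if (n == 0) {
        /* peer closed the connection: reap it completely */
        printf("Socket %d hung up\n", fd);
        close(fd);               /* otherwise the fd lingers in CLOSE_WAIT */
        FD_CLR(fd, master);      /* otherwise select() keeps reporting it */
    } else if (n < 0) {
        perror("recv");
    } else {
        printf("Data received on %d (%zd bytes)\n", fd, n);
    }
}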
You are using FD_ISSET incorrectly. You need to be passing a socket ID to the "fd" parameter, not an index:
if (FD_ISSET(i, &read_fds))...
needs to be
if (FD_ISSET(sockfd[i], &read_fds))...
Likewise with recv.
Possible Duplicate:
Interrupting epoll_wait with a non-IO event, no signals
I have a thread that is currently using epoll_wait to flag the arrival of data on some sockets. The timeout parameter is currently set to zero.
However, the thread also does other tasks. What I want to do is change this so that, if there is no work to be done, the call uses an indefinite or long timeout. This would dramatically reduce the CPU cycles wasted spinning when there is no actual work to do.
The whole thing is driven mostly by the arrival of a message on a thread-safe, lock-free queue.
So what I think should happen is that I should wake the thread from its long timeout using epoll_pwait.
However, I'm unsure what signal to send it and how this is done. I'm not familiar with Linux signals.
The following is similar to what I currently have, dramatically shortened to show the concept. If you spot a bug, don't bother pointing it out; this is just an illustration that I've typed in here to help you understand what I'm trying to achieve.
// Called from another thread...
void add_message_to_queue(struct message_t* msg)
{
    add_msg(msg);
    raise( ? );   // wake the state machine?
}

// different thread to the above.
main_thread()
{
    struct message_t msg;

    while (msg = get_message_from_queue())
        process_message(msg);

    timeout = work_available ? 0 : -1;

    nfds = epoll_pwait(epfd, events, MAX_EPOLL_EVENTS, timeout);

    for (i = 0; i < nfds; ++i)
    {
        if ((events[i].events & EPOLLIN) == EPOLLIN)
        {
            /// do stuff
        }
    }

    run_state_machines();
}
So I guess my question really is: is this the right way of going about it? And if so, what signal do I send, and do I need to define a signal handler, or can I use the "ignore" signal disposition and still be woken?
Instead of signals, consider using a pipe. Create a pipe and add the file descriptor for the read end of the pipe to the epoll. When you want to wake the epoll_wait call, just write 1 character to the write end of the pipe.
int read_pipe;
int write_pipe;

void InitPipe()
{
    int pipefds[2] = {0};
    struct epoll_event ev = {0};

    pipe(pipefds);
    read_pipe = pipefds[0];
    write_pipe = pipefds[1];

    // make read-end non-blocking
    int flags = fcntl(read_pipe, F_GETFL, 0);
    fcntl(read_pipe, F_SETFL, flags | O_NONBLOCK);

    // add the read end to the epoll
    ev.events = EPOLLIN;
    ev.data.fd = read_pipe;
    epoll_ctl(epfd, EPOLL_CTL_ADD, read_pipe, &ev);
}

void add_message_to_queue(struct message_t* msg)
{
    char ch = 'x';
    add_msg(msg);
    write(write_pipe, &ch, 1);
}
main_thread()
{
    struct message_t msg;

    while (msg = get_message_from_queue())
        process_message(msg);

    timeout = work_available ? 0 : -1;

    nfds = epoll_wait(epfd, events, MAX_EPOLL_EVENTS, timeout);

    for (i = 0; i < nfds; ++i)
    {
        if (events[i].data.fd == read_pipe)
        {
            // read all bytes from read end of pipe
            char ch;
            int result = 1;
            while (result > 0)
            {
                result = read(read_pipe, &ch, 1);
            }
        }

        if ((events[i].events & EPOLLIN) == EPOLLIN)
        {
            /// do stuff
        }
    }

    run_state_machines();
}