POSIX select()/poll() and pthread IPC - C

This is a somewhat generic question; however, I have run into this problem several times already and still haven't found a satisfying solution.
Let's imagine you have a program (e.g. an HTTP application server) that is multithreaded and communicates over sockets (TCP, Unix, ...). The main thread uses asynchronous I/O and the select() or poll() POSIX calls to dispatch traffic from/to the sockets. There are also worker threads that process requests and produce responses. To send a response back to the client, a worker thread has to synchronise 'somehow' with the main thread (the one that polls). The core of the question is 'how', in terms of what is efficient. I can use a pipe() - an fd-based IPC mechanism that works with select()/poll() - but this seems like quite a lot of overhead to me. I would rather use pthread IPC techniques like mutexes, condition variables, etc., but these will not work with select() or poll().
Is there a common technique in POSIX (and surroundings) that addresses this conflict?
I guess that on Windows the WaitForMultipleObjects() function allows exactly that.
The example program is crafted to illustrate the issue; I know I could design the master/worker pattern differently, but that is not what I'm asking about. I have other cases where I end up in the same situation.

You could use a signal to poke the worker thread (the one blocked in select()): the signal interrupts the select() call, which then fails with errno set to EINTR. This gets even easier with pselect().
For this to work:
decide on a signal (or allocate a real-time signal)
attach an empty handler function to it (if the signal were ignored, the system call would be automatically restarted)
block the signal, at least in the worker thread.
use the signal mask argument in pselect() to unblock the signal while waiting.
Between threads, you can use pthread_kill to deliver the signal to the worker thread specifically. When another process should send the signal, you can either make sure the signal is blocked in all but the worker thread (so it will be delivered there), or use the signal handler to find out whether the signal was sent to the worker thread, and use pthread_kill to forward it explicitly (the worker thread still doesn't need to do anything in the signal handler).
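As a minimal sketch of those steps, assuming SIGUSR1 as the wakeup signal (the names poller and WAKEUP_SIG are illustrative, not taken from LibreVISA):

```c
#include <errno.h>
#include <pthread.h>
#include <signal.h>
#include <stdio.h>
#include <string.h>
#include <sys/select.h>
#include <unistd.h>

#define WAKEUP_SIG SIGUSR1          /* assumption: any otherwise unused signal works */

static void wakeup_handler(int sig)
{
    (void)sig;                      /* deliberately empty: its only job is to interrupt pselect() */
}

static void *poller(void *arg)
{
    int fd = *(int *)arg;
    sigset_t wait_mask;

    /* Start from the inherited mask (WAKEUP_SIG blocked) and unblock it only inside pselect(). */
    pthread_sigmask(SIG_SETMASK, NULL, &wait_mask);
    sigdelset(&wait_mask, WAKEUP_SIG);

    for (;;) {
        fd_set rfds;
        FD_ZERO(&rfds);
        FD_SET(fd, &rfds);

        int n = pselect(fd + 1, &rfds, NULL, NULL, NULL, &wait_mask);
        if (n < 0 && errno == EINTR) {
            printf("poked by another thread; would now rebuild the fd sets\n");
            return NULL;            /* a real loop would re-read shared state and continue */
        }
        /* ... service ready descriptors ... */
    }
}

int main(void)
{
    /* 1. Block the signal before creating any threads so they all inherit the mask. */
    sigset_t block;
    sigemptyset(&block);
    sigaddset(&block, WAKEUP_SIG);
    pthread_sigmask(SIG_BLOCK, &block, NULL);

    /* 2. Empty handler; SA_RESTART must not be set, or the kernel restarts the call. */
    struct sigaction sa;
    memset(&sa, 0, sizeof sa);
    sa.sa_handler = wakeup_handler;
    sigemptyset(&sa.sa_mask);
    sigaction(WAKEUP_SIG, &sa, NULL);

    int fd = STDIN_FILENO;          /* stand-in for a real socket */
    pthread_t tid;
    pthread_create(&tid, NULL, poller, &fd);

    sleep(1);
    pthread_kill(tid, WAKEUP_SIG);  /* this is the "poke" */
    pthread_join(tid, NULL);
    return 0;
}
```

Note that because the signal stays blocked outside pselect(), a poke sent before the thread reaches pselect() remains pending and interrupts it immediately, so the wakeup cannot be lost.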
Due to laziness on my part, I don't have a source code viewer online, but you can clone the LibreVISA git tree, and take a look at src/messagepump.cpp, where this method is used to poke the worker thread after another thread added a file descriptor to the watch list.

Simon Richter's answer is very good.
Another alternative might be to make the main thread responsible only for listening for new connections and starting up a worker thread with the connection information, so that the worker is responsible for all subsequent 'transactions' from this source.
My understanding is:
Main thread uses select.
Worker threads process requests forwarded to them by the main thread.
So you need to synchronize between the workers and the main thread, e.g. when a worker finishes a transaction it needs to send the response back to the main thread, which in turn forwards the response back to the source.
Why don't you remove the problem of having to synchronize between the worker thread and the main thread by making the worker thread responsible for all transactions from a particular connection?
Thus the main thread is only responsible for listening for new connections and starting up a worker thread with the connection information i.e. the file descriptor for the new connection.
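A minimal sketch of that design, assuming a hypothetical handle_client() that owns the connection descriptor for its whole lifetime:

```c
#include <pthread.h>
#include <stdint.h>
#include <sys/socket.h>
#include <unistd.h>

/* Hypothetical per-connection handler: owns fd until the client goes away. */
static void *handle_client(void *arg)
{
    int fd = (int)(intptr_t)arg;
    char buf[4096];
    ssize_t n;

    while ((n = recv(fd, buf, sizeof buf, 0)) > 0) {
        /* ... parse the request, compute, and reply on the same fd ... */
        send(fd, buf, (size_t)n, 0);     /* echo, as a placeholder */
    }
    close(fd);
    return NULL;
}

/* Accept loop: the main thread only listens and spawns. */
void accept_loop(int listen_fd)
{
    for (;;) {
        int fd = accept(listen_fd, NULL, NULL);
        if (fd < 0)
            continue;
        pthread_t tid;
        if (pthread_create(&tid, NULL, handle_client, (void *)(intptr_t)fd) == 0)
            pthread_detach(tid);         /* no join needed; the worker cleans up itself */
        else
            close(fd);
    }
}
```

Because the worker owns the descriptor, no response ever has to travel back through the main thread.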

First of all, the way to wake another thread is for thread A to wait with the pthread_cond_wait / pthread_cond_timedwait calls and for thread B to wake it with pthread_cond_broadcast / pthread_cond_signal. So, for instance, if B is the producer and A the consumer, the producer might add items to a linked list protected by a mutex. There would be an associated condition variable such that, after adding an item, the producer could wake thread A so it goes to see whether any new items have arrived on the list and, if so, removes them. I say 'associated' because the same mutex that protects the list should be associated with the condition variable.
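For reference, a minimal sketch of that handshake; the node type and list layout are illustrative:

```c
#include <pthread.h>
#include <stddef.h>

struct node { struct node *next; /* ... payload ... */ };

static struct node *head;                      /* list protected by the mutex below */
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  nonempty = PTHREAD_COND_INITIALIZER;

/* Producer (thread B): add an item, then wake the consumer. */
void produce(struct node *item)
{
    pthread_mutex_lock(&lock);
    item->next = head;
    head = item;
    pthread_cond_signal(&nonempty);            /* broadcast if several consumers wait */
    pthread_mutex_unlock(&lock);
}

/* Consumer (thread A): sleep until the list is non-empty, then take an item. */
struct node *consume(void)
{
    pthread_mutex_lock(&lock);
    while (head == NULL)                       /* loop guards against spurious wakeups */
        pthread_cond_wait(&nonempty, &lock);
    struct node *item = head;
    head = item->next;
    pthread_mutex_unlock(&lock);
    return item;
}
```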
So far so good. Now you mention asynchronous I/O. What I've wanted to do several times is select() or poll() on a set of FDs and a set of condition variables, so that the select() or poll() is interrupted when a condition variable is signalled. There is no easy way of doing this directly; you cannot simply mix and match.
You thus need to do one of two things. Either:
work around the problem: for instance, use a self-connected pipe() and send one byte down it to wake the select() up, either instead of the condition variable, in addition to the condition variable, or from some additional thread waiting on the condition variable; or
convert to a more threaded model, i.e. use one thread for sending and one thread for receiving with a producer/consumer model, so the sender thread simply removes items from a list/buffer and sends them (blocking if necessary), and the receiver waits for I/O (blocking if necessary) and adds the data to the list (this is what you put in italics at the end).
The second is a major design change for those of us brought up on asynchronous I/O, and the first is ugly. You are not the first to be dismayed by this, but I've not found an easy way around it. Regarding the inefficiency of the first: if you only write one character to the self-pipe to wake the select() loop, I don't think you are going to see much overhead.
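A sketch of that self-pipe workaround, under the assumption that the read end is simply added to the select() set alongside the sockets:

```c
#include <errno.h>
#include <fcntl.h>
#include <sys/select.h>
#include <unistd.h>

static int wake_pipe[2];                     /* [0] = read end, [1] = write end */

int init_wake_pipe(void)
{
    if (pipe(wake_pipe) < 0)
        return -1;
    /* Non-blocking, so a flood of wakeups can never block the writer. */
    fcntl(wake_pipe[0], F_SETFL, O_NONBLOCK);
    fcntl(wake_pipe[1], F_SETFL, O_NONBLOCK);
    return 0;
}

/* Called from any worker thread, instead of (or alongside) pthread_cond_signal(). */
void wake_select_loop(void)
{
    (void)write(wake_pipe[1], "x", 1);       /* one byte is enough */
}

/* The select() loop includes the pipe's read end in its fd set. */
void select_loop(int sock_fd)
{
    for (;;) {
        fd_set rfds;
        FD_ZERO(&rfds);
        FD_SET(sock_fd, &rfds);
        FD_SET(wake_pipe[0], &rfds);
        int maxfd = sock_fd > wake_pipe[0] ? sock_fd : wake_pipe[0];

        if (select(maxfd + 1, &rfds, NULL, NULL, NULL) < 0) {
            if (errno == EINTR)
                continue;
            break;
        }
        if (FD_ISSET(wake_pipe[0], &rfds)) {
            char buf[64];
            while (read(wake_pipe[0], buf, sizeof buf) > 0)
                ;                            /* drain, then inspect whatever state changed */
        }
        /* ... handle sock_fd as usual ... */
    }
}
```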

Related

pthread_kill vs pthread_cond_signal for pausing/resuming a thread on a specific point

This question is about pthreads and using conditions or signals to pause/resume a worker thread that runs a continuous cycle.
A while ago, I came across this answer:
https://stackoverflow.com/a/23945651/6421961
Basically, user johnnycrash uses sigwait() to put a thread into a paused state (waiting for an external wakeup) and pthread_kill(thread_id, SIGUSR1) to wake it up. He claims it is faster than the mutex+condition construct, and it appears to be less complex. I am developing a piece of software that does indeed require a thread to sleep until signaled, do its work, and return to sleep, in an infinite cycle (the eater of a feeder-eater paradigm).
I am using this to have a separate thread waiting for the conclusion of worker threads. In my current implementation, worker threads add their handles to a list protected by a mutex, signal the waiting thread with pthread_kill and finish with pthread_join.
My questions are all related:
How valid is it to actually use pthread_kill()+sigwait() instead of mutex+condition?
In case it is an acceptable solution, what pitfalls/race conditions should one be aware of?
Would it be better to use pthread_sigqueue() instead of pthread_kill()? Would it actually be able to catch signals sent while sigwait() is not running and process them as soon as sigwait() is called?
Last question, derived from some contradictory information I found: if two different threads are both paused in sigwait() expecting SIGUSR1, can they be signaled independently, or will only one of them catch the signal regardless of which one was signaled?
I will try to answer points 1 and 4.
pthread_kill()+sigwait() and mutex+condition each have their own purposes. When you're working with data (e.g. a global variable) that is used by multiple threads, a mutex is more appropriate. But when you're waiting for an external event (like a network packet) and want to signal your thread based on that event, pthread_kill() is more appropriate.
It depends on how the signal (SIGUSR1) was sent. If it was sent with pthread_kill() or pthread_sigqueue(), you can specify which thread you're sending the signal to; the only difference is that with pthread_sigqueue() you can send additional information along with the signal. You can also send a signal to a specific pid, or to a whole process group, using kill(). So it largely depends on your needs.
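A minimal sketch of that sigwait() pattern, assuming SIGUSR1 and a single waiting thread (the function names are illustrative):

```c
#include <pthread.h>
#include <signal.h>
#include <stdio.h>
#include <unistd.h>

static sigset_t wait_set;

static void *waiter(void *arg)
{
    (void)arg;
    int sig = 0;
    if (sigwait(&wait_set, &sig) == 0) {     /* blocks until a signal in wait_set arrives */
        printf("woken by signal %d; doing work\n", sig);
        /* ... e.g. drain the mutex-protected handle list, pthread_join() finished workers ... */
    }
    return NULL;                             /* a real eater would loop back into sigwait() */
}

int main(void)
{
    /* Block SIGUSR1 process-wide before creating threads; sigwait() then
       consumes it synchronously instead of a handler running. */
    sigemptyset(&wait_set);
    sigaddset(&wait_set, SIGUSR1);
    pthread_sigmask(SIG_BLOCK, &wait_set, NULL);

    pthread_t tid;
    pthread_create(&tid, NULL, waiter, NULL);

    sleep(1);
    pthread_kill(tid, SIGUSR1);              /* wake that specific thread */

    pthread_join(tid, NULL);
    return 0;
}
```

Because the signal stays blocked outside sigwait(), a standard signal sent before the thread reaches sigwait() remains pending and is consumed on the next call, so wakeups are not lost; however, multiple pending instances of the same standard signal collapse into one, which is one of the pitfalls to be aware of.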

How to synchronise processing from WSARecvFrom() when using CompletionRoutine with multiple sockets

From the MSDN Documentation:
The transport providers allow an application to invoke send and receive operations from within the context of the socket I/O completion routine, and guarantee that, for a given socket, I/O completion routines will not be nested. This permits time-sensitive data transmissions to occur entirely within a preemptive context.
In our system we have one thread calling WSARecvFrom() for multiple sockets, and one completion routine for that thread handling all callbacks from the WSARecvFrom() overlapped I/O.
Our tests showed that this completion routine is called as if triggered by an interrupt: it is invoked for one socket while the completion routine for another socket is still being processed.
How can we prevent this completion routine from being called while it is still processing input from another socket?
What serialisation of data processing can we use?
Note there are hundreds of sockets receiving and sending real-time data. Synchronisation by waiting for multiple objects is not applicable, as there is a maximum of 64 defined by the Win32 API.
We cannot use a semaphore, because when the routine is newly invoked the old, ongoing processing is interrupted, so the semaphore would never be released and the new processing would block forever.
Critical sections or a mutex are not an option either, because the completion routine callback is made from within the same thread, so a CS or mutex would be acquired anyway and would not wait until the old processing is finished.
Does anyone have an idea, or an even better approach, to serialise (synchronise) the data processing?
If you read the WSARecvFrom() documentation again more carefully, it also says:
The completion routine follows the same rules as stipulated for Windows file I/O completion routines. The completion routine will not be invoked until the thread is in an alertable wait state such as can occur when the function WSAWaitForMultipleEvents with the fAlertable parameter set to TRUE is invoked.
The Alertable I/O documentation then states:
When the thread enters an alertable state, the following events occur:
The kernel checks the thread's APC queue. If the queue contains callback function pointers, the kernel removes the pointer from the queue and sends it to the thread.
The thread executes the callback function.
Steps 1 and 2 are repeated for each pointer remaining in the queue.
When the queue is empty, the thread returns from the function that placed it in an alertable state.
So it should be practically impossible for a given thread to overlap multiple pending completion routines on top of each other, because the thread receives and processes the routines in a serialized manner. The only way I could see that being different is if a completion routine is doing something to put the thread into a second alertable state while a previous alertable state is still in effect. I'm not sure what Windows does in that situation, but you should avoid doing it anyway.
Note there are hundreds of sockets receiving and sending real-time data. Synchronisation by waiting for multiple objects is not applicable as there is a maximum of 64 defined by the Win32 API
The WaitForMultipleObjects() documentation tells you how to work around that limitation:
To wait on more than MAXIMUM_WAIT_OBJECTS handles, use one of the following methods:
• Create a thread to wait on MAXIMUM_WAIT_OBJECTS handles, then wait on that thread plus the other handles. Use this technique to break the handles into groups of MAXIMUM_WAIT_OBJECTS.
• Call RegisterWaitForSingleObject to wait on each handle. A wait thread from the thread pool waits on MAXIMUM_WAIT_OBJECTS registered objects and assigns a worker thread after the object is signaled or the time-out interval expires.
I wouldn't wait on the sockets anyway, that is not very efficient. Using completion routines is fine as long as they are doing safe things.
Otherwise, I would suggest you stop using completion routines and switch to using an I/O Completion Port for the socket I/O instead. Then you are in more control of when the completion results are reported to you, because you have to call GetQueuedCompletionStatus() yourself to get the results of each I/O operation. You can have multiple sockets associated with a single IOCP, and then have a small pool of threads (typically one thread per CPU core works best) all calling GetQueuedCompletionStatus() on that IOCP. This way, you can process multiple I/O results in parallel, as they will be in different thread contexts and cannot overlap each other in the same thread. This does mean, however, that you can perform an I/O operation in one thread and the result may show up in a different thread. Just make sure your completion processing is thread-safe.
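To illustrate, a sketch of such an IOCP worker loop. The IO_CONTEXT structure is an assumption about how you might carry per-operation state; the port is presumed to have been created and associated with the sockets via CreateIoCompletionPort(), with each pending operation carrying its own OVERLAPPED:

```c
#include <winsock2.h>
#include <windows.h>

/* Hypothetical per-operation context: the OVERLAPPED must come first so the
   pointer returned by GetQueuedCompletionStatus() can be cast back. */
typedef struct {
    OVERLAPPED ov;
    SOCKET     sock;
    char       buf[4096];
} IO_CONTEXT;

DWORD WINAPI iocp_worker(LPVOID param)
{
    HANDLE iocp = (HANDLE)param;

    for (;;) {
        DWORD bytes = 0;
        ULONG_PTR key = 0;
        OVERLAPPED *ov = NULL;

        /* Blocks until any associated socket completes an I/O operation. */
        BOOL ok = GetQueuedCompletionStatus(iocp, &bytes, &key, &ov, INFINITE);
        if (ov == NULL)
            break;                     /* port closed or fatal error */

        IO_CONTEXT *ctx = (IO_CONTEXT *)ov;
        if (ok && bytes > 0) {
            /* Process ctx->buf linearly; nothing else runs in this thread meanwhile. */
        }
        /* ... re-post WSARecvFrom(ctx->sock, ...) to keep receiving ... */
    }
    return 0;
}
```

With one such thread per CPU core, completions for different sockets are processed in parallel but each individual completion runs to the end without being re-entered.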
First of all, let me say thanks for all the helpful hints and comments on my question.
We have now stopped using completion routines and changed the application to use completion ports.
The biggest problem we had with completion routines is that every time the thread enters an alertable state, the completion routines can (and will) be called again by the OS. As seen in the debugger, even calling WSASendTo() from inside the completion routine puts the thread into an alertable state, so the completion routine is executed again before the previous execution of the completion routine has come to its end.
This makes it nearly impossible to synchronize data processing from multiple different sockets.
The approach using completion ports seems to be the perfect one. You then have control over what you are doing when GetQueuedCompletionStatus() releases you to process a data buffer. You can, and have to, do the synchronisation of data processing yourself, in a linear fashion, without being interrupted and re-entered while trying to process the data.

When to join the created threads in a simple multiclient server application in C?

I am writing a simple multi-client server communication program using POSIX threads in C. I am creating a thread every time a new client is connected, i.e. after the accept(...) routine in main().
I have put the accept(...) and the pthread_create(...) inside a while(1) loop, so that the server continues to accept clients forever. Now, where should I call the pthread_join(...) routine after a thread exits?
More info: Inside the thread's "start routine", I have used poll() and then recv(), again inside a while(1) loop, to continuously poll for availability of the client and to receive data from the client, respectively. The thread exits in the following cases:
1) Either poll() returns some error event or client hangs up.
2) recv() returns a value <= 0.
Language: C
Platform: Suse Linux Enterprise Server 10.3 (x86_64)
First up, starting a new thread for each client is probably wasteful and surely won't scale very far. You should try a design where a thread handles more than one client (i.e. calls poll on more than one socket); indeed, that's what poll(2), epoll etc. were designed for.
That being said, in this design you likely needn't join the threads at all. You don't mention any reason why the main thread would need information from a thread that finished; put another way, there's no need for joining.
Just set them as "detached" (pthread_detach or pthread_attr_setdetachstate) and they will be cleaned up automatically when their function returns.
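For instance, a small sketch of creating such a thread detached from the start (spawn_detached is just an illustrative wrapper):

```c
#include <pthread.h>

/* Create a client thread that cleans itself up when its function returns. */
int spawn_detached(void *(*fn)(void *), void *arg)
{
    pthread_attr_t attr;
    pthread_t tid;
    int rc;

    pthread_attr_init(&attr);
    pthread_attr_setdetachstate(&attr, PTHREAD_CREATE_DETACHED);
    rc = pthread_create(&tid, &attr, fn, arg);
    pthread_attr_destroy(&attr);
    return rc;          /* 0 on success; no pthread_join() needed (or allowed) */
}
```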
The problem is that pthread_join blocks the calling thread until the target thread exits. This means you can't really call it merely hoping the thread has already exited, as the main thread would then be unable to do anything else until that thread exits.
One solution is for each child thread to have a flag that is polled by the main thread, and for the child thread to set that flag just before exiting. When the main thread notices the flag being set, it can join the child thread.
Another possible solution is to use pthread_tryjoin_np, if you have it (which you should, since you're on a Linux system). The main thread can then, in its loop, simply try to join all the child threads in a non-blocking way.
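A sketch of that non-blocking variant; pthread_tryjoin_np() is a GNU extension, so this assumes glibc and _GNU_SOURCE, and the tids/live bookkeeping is illustrative:

```c
#define _GNU_SOURCE
#include <errno.h>
#include <pthread.h>

/* Poll a set of client threads without blocking the accept loop.
   `tids`/`live` are hypothetical bookkeeping owned by the main thread. */
void reap_finished(pthread_t *tids, int *live, int count)
{
    for (int i = 0; i < count; i++) {
        if (!live[i])
            continue;
        int rc = pthread_tryjoin_np(tids[i], NULL);
        if (rc == 0)
            live[i] = 0;        /* joined: the thread had already exited */
        /* rc == EBUSY: still running, check again on the next pass */
    }
}
```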
Yet another solution may be to detach the child threads. Then they will run by themselves and do not need to be joined.
Ah, the ol' clean shutdown problem.
Assuming that you may want to cleanly disconnect the server from all clients under some circumstance or other, your main thread will have to tell the client threads that they're to disconnect. So how could this be done?
One way would be to have a pipe (one per client thread) between the main thread and client thread. The client thread includes the file descriptor for that pipe in its call to poll(). That way the main thread can easily send a command to the client thread, telling it to terminate. The client thread reads the command when poll() tells it that the pipe has become ready for reading.
So your main thread can then send some sort of command through the pipe to the client thread and then call pthread_join() waiting for the client thread to tidy itself up and terminate.
Similarly another pipe (again one per client thread) can be used by the client thread to send information to the main thread. Instead of being stuck in a call to accept(), the main thread can be using poll() to wait for a new client connection and for messages from the existing client threads. A timeout on poll() also allows the main thread to do something periodically.
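A sketch of the client thread's side of that arrangement, assuming ctl_fd is the read end of the per-thread pipe and 'q' is a made-up terminate command:

```c
#include <poll.h>
#include <unistd.h>

/* Returns when the main thread orders a shutdown or the client disconnects. */
void client_loop(int sock_fd, int ctl_fd)
{
    struct pollfd fds[2] = {
        { .fd = sock_fd, .events = POLLIN },
        { .fd = ctl_fd,  .events = POLLIN },
    };

    for (;;) {
        if (poll(fds, 2, -1) < 0)
            break;

        if (fds[1].revents & POLLIN) {
            char cmd;
            if (read(ctl_fd, &cmd, 1) == 1 && cmd == 'q')
                break;                     /* main thread said: terminate */
        }
        if (fds[0].revents & (POLLIN | POLLHUP)) {
            char buf[4096];
            ssize_t n = read(sock_fd, buf, sizeof buf);
            if (n <= 0)
                break;                     /* client hung up, or error */
            /* ... handle n bytes of client data ... */
        }
    }
    close(sock_fd);                        /* tidy up; the main thread then pthread_join()s */
}
```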
Welcome to the world of the actor model of concurrent programming.
Of course, if you don't need a clean shut down then you can just let threads terminate as and when they want to, and just ctrl c the program to close it...
As other people have said getting the balance of work per thread is important for efficient scaling.

Interrupting two blocking pthreads by signals

In my application the main thread creates two joined threads; one which waits for user input by calling scanf() in a loop and another one which listens for incoming socket connections by calling accept() in a loop. New connections are handled in separate threads which are detached.
I want the program to shut itself down gracefully when a SIGINT signal is received. That means that the listening thread should stop accepting new connections, wait for the threads currently serving connections to close, and then end. The thread waiting for user input should also end, thereby allowing the main thread to end.
scanf() and accept() both block but can be interrupted by a signal; however, signals sent to the process can be handled by only one thread (AFAIK). My idea is to block SIGINT in all threads except the main thread, which will wait for it. When it receives a SIGINT, it will then send SIGUSR1 to the two threads (the one blocked in scanf() and the one blocked in accept()) and then join these threads.
Is this a good solution? Is there a better or perhaps standard way to achieve this?
The canonical solution for problems like this is Thread Cancellation. A cancellation request that arrives while a thread is blocked in a function which is a cancellation point will be acted upon immediately; otherwise, it's acted upon the next time the thread calls a function that's a cancellation point.
The only thing that might be painful about using cancellation is that it's built on a model of exception handling rather than failure returns. You need either to keep cancellation disabled most of the time (and only enable it around operations during which you want to handle cancellation) or else install cancellation cleanup handlers (basically, semi-ugly exception handlers in C) at each call-frame level where you might have intermediate state/allocations/etc. to clean up when your thread is cancelled. Personally I prefer the first approach, which generally requires only one cancellation handler to be installed.
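A sketch of that first approach, assuming the worker blocks in read(): cancellation is enabled only around the blocking call, and a single cleanup handler releases the buffer:

```c
#include <pthread.h>
#include <stdlib.h>
#include <unistd.h>

/* Cleanup handler: runs if the thread is cancelled while the buffer is live. */
static void free_buffer(void *p)
{
    free(p);
}

void *worker(void *arg)
{
    int fd = *(int *)arg;

    /* Keep cancellation disabled by default... */
    pthread_setcancelstate(PTHREAD_CANCEL_DISABLE, NULL);

    char *buf = malloc(4096);
    pthread_cleanup_push(free_buffer, buf);

    for (;;) {
        ssize_t n;

        /* ...and enable it only across the blocking call, a cancellation point. */
        pthread_setcancelstate(PTHREAD_CANCEL_ENABLE, NULL);
        n = read(fd, buf, 4096);
        pthread_setcancelstate(PTHREAD_CANCEL_DISABLE, NULL);

        if (n <= 0)
            break;
        /* ... process n bytes; cancellation cannot strike in here ... */
    }

    pthread_cleanup_pop(1);   /* also frees the buffer on a normal exit */
    return NULL;
}
```

The main thread then shuts such a worker down with pthread_cancel() followed by pthread_join().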
Any approach based on interrupting signals inherently has race conditions. If the signal is sent just before the blocking function is called, then when the signal handler returns, the blocking function will be called and will block indefinitely (since the check for "is it time to exit?" was already successfully passed).

What happens to the other thread when one thread gets blocked?

In Linux, if two threads are created and both of them are running, when one of them calls recv() or any IO syscall that blocks when no data is available, what would happen to the whole process?
Will the other thread block as well? I guess this depends on how threading is implemented. If the thread library is in user space and the kernel is totally unaware of the threads within the process, then the process is the scheduling entity and thus both threads get blocked.
Further, if the other thread doesn't block because of this, can it then send() data via the same socket on which the recv thread is blocking? Duplexing?
Any ideas?
You're absolutely right that the blocking behavior will depend on if the thread is implemented in kernel space, or in user space. If threading is implemented purely in user space (that is, the kernel is completely uninvolved with the threading), then any blocking entry point into the kernel will need to be wrapped with some non-blocking variant that can simulate blocking semantics to its calling "thread" (e.g. using AIO to send / recv data instead of blocking, and the completion callback makes the thread runnable, again).
In Linux (and every other extant major OS I can think of), threading is implemented at the kernel level, or similar, and a blocking call into the kernel will not cause all other threads to block.
Yes, you can send() to a socket for which another thread is blocked on recv().
Blocking calls in one thread should not affect other threads.
If the blocked thread locks a mutex prior to entering the blocking call and the second thread attempts to lock the same mutex, then the second thread would need to wait for the blocking call to finish and for the first thread to release the lock.
It's completely possible to implement threads in user space such that one thread can proceed while another thread blocks on I/O.
The non-blocked thread should be able to send on the socket while the other thread is blocking on it (I've written such code).
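A tiny sketch of that full-duplex situation, assuming both threads share an already-connected sock_fd:

```c
#include <pthread.h>
#include <sys/socket.h>

/* Thread A: blocks in recv(); this does not stop thread B from sending. */
void *reader(void *arg)
{
    int sock_fd = *(int *)arg;
    char buf[4096];
    while (recv(sock_fd, buf, sizeof buf, 0) > 0)
        ;                                  /* ... consume incoming data ... */
    return NULL;
}

/* Thread B: sends on the very same socket while A is blocked in recv(). */
void *writer(void *arg)
{
    int sock_fd = *(int *)arg;
    const char msg[] = "ping";
    send(sock_fd, msg, sizeof msg - 1, 0); /* the two directions are independent */
    return NULL;
}
```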
