Cost of using select() in different threads for the same process

In my application, I'm starting to use the select() call in multiple locations, monitoring different things in my process (network connections, IPCs, messaging, files...).
All the calls are using their own set of file descriptors, meaning no descriptor is used twice across selects.
This means that sometimes I have something like 5 select() calls blocking in different threads.
Is there a performance penalty of using select() multiple times in different threads, instead of, maybe, using only one call and dispatching results to the corresponding threads?
Actually, is there a limit to the number of select() calls pending at one time?
Is there a tool to measure this?
Since the application is likely to grow even bigger, I suspect that at some point, if this begins to get problematic, I'll have to code some kind of centralized select() which gathers all FDs to monitor and notifies client threads when data is ready to be gathered/written.
So I figured I'd better ask before...

There is unlikely to be any performance difference you will notice.
Inside the kernel, select adds your thread to a "wait queue" for each descriptor on which you are selecting and puts it to sleep. If you select on n descriptors, your thread gets added to n wait queues. When something pollable happens to the descriptor (e.g., data arrives on the socket), all threads on that wait queue get woken up.
Selecting on a huge number of descriptors would add you to a huge number of wait queues. Once woken up, your thread will have to be removed from all of those wait queues, including the ones on which there was no activity. So on this side, there might be some slight benefit to waiting on a small set of descriptors in multiple threads rather than a huge set of descriptors in one thread.
On the other hand, select itself requires the kernel to loop through all descriptors up to the highest-numbered one you pass in, to see which ones are members of your fd_set. So on this side, there might be some slight advantage to having just one thread making the select call...
Overall I would guess it is a wash.
If you are going to deal with a lot of descriptors, you are better off using a more scalable (albeit non-portable) mechanism like epoll. With epoll, multiple threads each handling a pool of descriptors should scale very nicely.
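For reference, a minimal sketch of what one such epoll loop might look like, assuming Linux; handle_event() is a hypothetical stand-in for your per-descriptor dispatch, and error handling is trimmed:

#include <stdio.h>
#include <stdlib.h>
#include <sys/epoll.h>

#define MAX_EVENTS 64

extern void handle_event(int fd);   /* hypothetical per-descriptor handler */

void event_loop(int *fds, int nfds)
{
    int epfd = epoll_create1(0);
    if (epfd < 0) { perror("epoll_create1"); exit(1); }

    /* Register each descriptor once; no per-call fd_set rebuilding. */
    for (int i = 0; i < nfds; i++) {
        struct epoll_event ev = { .events = EPOLLIN, .data.fd = fds[i] };
        if (epoll_ctl(epfd, EPOLL_CTL_ADD, fds[i], &ev) < 0)
            perror("epoll_ctl");
    }

    struct epoll_event events[MAX_EVENTS];
    for (;;) {
        int n = epoll_wait(epfd, events, MAX_EVENTS, -1);
        if (n < 0) break;                /* real code: handle EINTR */
        for (int i = 0; i < n; i++)
            handle_event(events[i].data.fd);
    }
}

Each thread can run its own event_loop() over its own epoll instance, which is the multiple-threads-each-handling-a-pool arrangement suggested above.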

The select call should be handled in the OS, and as you said, it's blocking, not polling, so it won't cause a performance penalty for your application. I also don't believe there is any limit on using it, aside from any limits on the number of file descriptors your OS can have open in general, which has nothing to do with select.

Related

The posix C write() and thread-safety

Is there a way to serialize the C write() call so that I can write bytes to a socket shared between k threads with no data loss? I imagine that a solution to this problem involves user-space locking, but what about scalability? Thank you in advance.
I think the right answer depends on whether your threads need to synchronously wait for a response or not. If they just need to write some message to a socket and not wait for the peer to respond, I think the best answer is to have a single thread that is dedicated to writing messages from a queue that the other threads place messages on. That way, the worker threads can simply place their messages on the queue and get on with doing something else.
Of course, the queue has to be protected by a mutex but any one thread only has to hold the lock for as long as it is manipulating the queue (guaranteed to be quite a short time). The more obvious alternative of letting every thread write directly to the socket requires each thread to hold the lock for as long as it takes the write operation to complete. This will always be much longer than just adding an item to a queue since write is a system call and potentially, it could block for a long period.
Even if your threads need a response to their messages, it may still pay to do something similar. Your socket servicing thread becomes more complex because you'll have to do something like select() on the socket for reads and writes to stop it from blocking and you'll also need a way to match up messages to responses and a way to inform the threads when their responses have arrived.
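As a rough illustration of the queue approach described above, here is a sketch using POSIX threads; msg_t, enqueue() and writer_thread() are hypothetical names, and error handling is trimmed for brevity:

#include <pthread.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

typedef struct msg {
    struct msg *next;
    size_t      len;
    char        data[];          /* flexible array member: the payload */
} msg_t;

static msg_t          *queue_head, *queue_tail;
static pthread_mutex_t queue_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  queue_cond = PTHREAD_COND_INITIALIZER;

/* Called by any worker thread: the lock is held only while linking
   the node, i.e. for a very short time. */
void enqueue(const char *buf, size_t len)
{
    msg_t *m = malloc(sizeof(*m) + len);
    m->next = NULL;
    m->len  = len;
    memcpy(m->data, buf, len);

    pthread_mutex_lock(&queue_lock);
    if (queue_tail) queue_tail->next = m; else queue_head = m;
    queue_tail = m;
    pthread_cond_signal(&queue_cond);
    pthread_mutex_unlock(&queue_lock);
}

/* The single writer thread: the only caller of write() on the socket,
   so messages are never interleaved. */
void *writer_thread(void *arg)
{
    int sock_fd = *(int *)arg;
    for (;;) {
        pthread_mutex_lock(&queue_lock);
        while (!queue_head)
            pthread_cond_wait(&queue_cond, &queue_lock);
        msg_t *m = queue_head;
        queue_head = m->next;
        if (!queue_head) queue_tail = NULL;
        pthread_mutex_unlock(&queue_lock);

        /* write() may write less than requested; loop until done. */
        size_t off = 0;
        while (off < m->len) {
            ssize_t n = write(sock_fd, m->data + off, m->len - off);
            if (n < 0) break;    /* real code: handle EINTR and errors */
            off += (size_t)n;
        }
        free(m);
    }
    return NULL;
}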
Since POSIX does not seem to specify atomicity guarantees on send(2), you will likely have to use a mutex. Scalability of course goes down the drain with this sort of serialization.
One possible approach would be to use a locking mechanism. Every thread should wait for a lock before writing anything to the socket and should release the lock once it is done.
If all of your threads send exactly the same kind of message, the receiving end will have no problem reading the data. But if different threads can send different kinds of data, possibly with different info, you should associate a unique message id with each kind of data, and it is better to send the thread id as well (not strictly necessary, but it might help you debug small issues).
You can have a structure like:
typedef struct my_socket_data_st
{
    int    msg_id;
#ifdef __debug_build__
    int    thread_id;
#endif
    size_t data_size_in_bytes;
    char   data[];   /* flexible array member: your data follows here */
} my_socket_data_t;
Scalability depends on a lot of things, including the hardware resources your application will run on. Since it is a network application, you will have to think about network bandwidth as well. Although the OS places essentially no limitation on sending/receiving data over a socket (there are a few, but I think you can ignore them for your application), you will have to decide whether sends should be synchronous or asynchronous based on your requirements. Also, since you are taking a lock, you will have to think about lock contention: if the lock is not readily available to other threads, that will degrade performance by a huge factor.
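For contrast with the queue-based answer above, a minimal sketch of this direct-locking scheme, assuming pthreads; every thread funnels its writes through one global mutex, and each caller holds the lock for the full duration of the (possibly blocking) write, which is exactly where the contention mentioned above comes from:

#include <pthread.h>
#include <unistd.h>

static pthread_mutex_t write_lock = PTHREAD_MUTEX_INITIALIZER;

/* Serializes whole messages: no two threads' bytes can interleave. */
ssize_t locked_write(int fd, const void *buf, size_t len)
{
    pthread_mutex_lock(&write_lock);
    ssize_t n = write(fd, buf, len);   /* real code: loop on short writes */
    pthread_mutex_unlock(&write_lock);
    return n;
}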

Will calling select()/pselect() in secondary thread cause primary thread to block?

I have an application that I'm working on that requires a couple of secondary threads, and each will be responsible for a number of file handles (at least 1, upwards of 10). The file handles are not shared amongst the threads, so I don't have to worry about one secondary thread blocking the other when selecting to see what is ready to read/write. What I want to be sure of is that neither of the secondary threads will cause the main thread to stop executing while the select/pselect call is executing.
I would imagine that this is not a problem - one would imagine that such things would be done in, say, a web server - but I couldn't find anything that specifically said "yes, you can do this" when I Googled. Am I correct in my assumption that this will not cause any problems?
For clarification, what I have looks something like:
Main thread of execution ( select() loop handling incoming command messages and outgoing responses )
Secondary thread #1 ( select() loop providing a service )
Secondary thread #2 ( select() loop providing another service )
As I previously mentioned, none of the file handles are shared amongst the threads - they are created, used, and destroyed within an individual thread, with the other threads ignorant of their existence.
No, you don't have to worry about them blocking the main thread. I have used select in multiple threads in various projects. As long as they have distinct fd_sets, you're fine, and each one can be used as an independent event loop.
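A minimal sketch of one such independent per-thread loop, assuming pthreads; struct service and handle_fd() are hypothetical names:

#include <stdio.h>
#include <sys/select.h>

extern void handle_fd(int fd);    /* hypothetical per-descriptor logic */

struct service {
    int *fds;                     /* this thread's private descriptors */
    int  nfds;
};

void *service_thread(void *arg)
{
    struct service *svc = arg;
    for (;;) {
        /* Rebuild the fd_set each iteration from this thread's fds only. */
        fd_set readfds;
        FD_ZERO(&readfds);
        int maxfd = -1;
        for (int i = 0; i < svc->nfds; i++) {
            FD_SET(svc->fds[i], &readfds);
            if (svc->fds[i] > maxfd) maxfd = svc->fds[i];
        }
        if (select(maxfd + 1, &readfds, NULL, NULL, NULL) < 0) {
            perror("select");     /* real code: handle EINTR */
            continue;
        }
        for (int i = 0; i < svc->nfds; i++)
            if (FD_ISSET(svc->fds[i], &readfds))
                handle_fd(svc->fds[i]);
    }
    return NULL;
}

Since no descriptor appears in more than one thread's set, the loops never interact, and the main thread's own select() loop is unaffected.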
Isn't select supposed to block the whole process?
Have you tried setting non-blocking mode on the socket?
Also, see select_tut manpage for some help.
Here's a relevant section from the select_tut manpage:
So what is the point of select()? Can't I just read and write to my descriptors whenever I want? The point of select() is that it watches multiple descriptors at the same time and properly puts the process to sleep if there is no activity.

Asynchronous File I/O using threads in C

I'm trying to understand how asynchronous file operations are emulated using threads. I've found next to no material to read on the subject.
Is it possible that:
a process uses a thread to open a regular file (HDD).
the parent gets the file descriptor from the thread, now it may close the thread.
the parent uses the file descriptor with a new thread, reading X bytes from the file.
the parent gets the file descriptor with the seek-position of the current file state.
the parent may repeat these operations, without the need to open, or seek, every time it wishes to "continue" reading a new chunk of the file?
This is just a wild guess of mine; I would appreciate it if anybody could shed more light on how it's emulated efficiently.
UPDATE:
By efficient I actually mean that I don't want the thread to "wait" from the moment the file is opened. Think of a non-blocking HTTP daemon serving a client a huge file: you want to use a thread to read chunks of the file without blocking the daemon, but you don't want to keep that thread busy "waiting" for the actual transfer to take place; you want to use it for other blocking operations of other clients.
To understand asynchronous I/O better, it may be helpful to think in terms of overlapping operations. That is, the number of pending operations (operations that have been started but not yet completed) can simultaneously go above one.
A diagram that explains asynchronous I/O might look like this: http://msdn.microsoft.com/en-us/library/aa365683(VS.85).aspx
If you are using the asynchronous I/O capabilities provided by the underlying Operating System, then it is possible to asynchronously read from multiple files without spawning an equal number of threads.
If your underlying Operating System does not provide asynchronous I/O, or if you decide not to use it (in other words, you wish to emulate asynchronous operation using only blocking I/O, the regular Read/Write provided by the Operating System), then it is necessary to spawn as many threads as the number of simultaneous I/O operations. This is because when a thread makes a blocking I/O call, it cannot continue executing until the operation finishes. In order to start another blocking I/O operation, that operation has to be issued from another thread that is not already occupied.
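As one concrete example of OS-provided asynchronous I/O, here is a sketch using POSIX AIO (aio_read); the file name is hypothetical, older glibc needs -lrt, and real code would use aio_suspend() or a completion notification instead of polling in a loop:

#include <aio.h>
#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
    char buf[4096];
    int fd = open("data.bin", O_RDONLY);   /* hypothetical file name */
    if (fd < 0) { perror("open"); return 1; }

    struct aiocb cb;
    memset(&cb, 0, sizeof(cb));
    cb.aio_fildes = fd;
    cb.aio_buf    = buf;
    cb.aio_nbytes = sizeof(buf);
    cb.aio_offset = 0;                     /* read the chunk at offset 0 */

    if (aio_read(&cb) < 0) { perror("aio_read"); return 1; }

    /* The read is now pending; this thread is free to do other work. */
    while (aio_error(&cb) == EINPROGRESS) {
        /* ... do other useful work here instead of blocking ... */
    }

    ssize_t n = aio_return(&cb);           /* bytes read, or -1 on error */
    printf("read %zd bytes\n", n);
    close(fd);
    return 0;
}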
When you open/create a file fire up a thread. Now store that thread id/ptr as your file handle.
Basically the thread will do nothing except sit in a loop waiting for an "event"; a semaphore would be good here. When you want to do a read, you add the read command to a queue (remember to protect the queue addition with a critical section), return a unique id, and then increment the semaphore. If the thread is asleep, it will now wake up, grab the first message off the queue, and process it. When it has completed, you remove the command from the queue.
To poll whether a file read has completed, you can simply check whether it is still in the command queue. If it's not there, the command has completed.
Furthermore, if you want to allow synchronous reads as well, you can wait after sending the message through for an "event" to be triggered by the completion. You then check whether the unique id is in the queue, and if it isn't, you return control. If it still is, you go back into a wait state until the relevant unique id has been processed.
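A sketch of the worker side of this emulation, assuming POSIX threads and semaphores; cmd_t and file_ctx are hypothetical names, the enqueue side is omitted, and completion is signalled through a done flag rather than by scanning the queue:

#include <pthread.h>
#include <semaphore.h>
#include <unistd.h>

typedef struct cmd {
    struct cmd *next;
    int         id;          /* unique id returned to the caller */
    void       *buf;
    size_t      len;
    int         done;        /* set when the read has completed */
} cmd_t;

typedef struct {
    int             fd;      /* the file this thread owns */
    cmd_t          *head;    /* command queue */
    pthread_mutex_t lock;    /* protects the queue (the critical section) */
    sem_t           pending; /* counts queued commands */
} file_ctx;

/* Per-file worker: sleeps on the semaphore until a command arrives,
   then performs the blocking read on the caller's behalf. */
void *file_worker(void *arg)
{
    file_ctx *ctx = arg;
    for (;;) {
        sem_wait(&ctx->pending);       /* sleep until work is queued */

        pthread_mutex_lock(&ctx->lock);
        cmd_t *c = ctx->head;          /* non-NULL: paired with enqueue */
        ctx->head = c->next;
        pthread_mutex_unlock(&ctx->lock);

        read(ctx->fd, c->buf, c->len); /* blocking read, off-thread */
        c->done = 1;                   /* caller polls or waits on this */
    }
    return NULL;
}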

Nonblocking sockets with Select

I do not understand what the difference is between calling recv() on a non-blocking socket vs a blocking socket after waiting to call recv() after select returns that it is ready for reading. It would seem to me like a blocking socket will never block in this situation anyway.
Also, I have heard that one model for using non-blocking sockets is to try to make calls (recv/send/etc.) on them after some amount of time has passed, instead of using something like select. This technique seems slow and wasteful compared to using something like select (but then I don't get the purpose of non-blocking at all, as described above). Is this common in network programming today?
There's a great overview of all of the different options for doing high-volume I/O called The C10K Problem. It has a fairly complete survey of a lot of the different options, at least as of 2006.
Quoting from it, on the topic of using select on non-blocking sockets:
Note: it's particularly important to remember that readiness notification from the kernel is only a hint; the file descriptor might not be ready anymore when you try to read from it. That's why it's important to use nonblocking mode when using readiness notification.
And yes, you could use non-blocking sockets and then have a loop that waits if nothing is ready, but that is fairly wasteful compared to using something like select or one of the more modern replacements (epoll, kqueue, etc.). I can't think of a reason why anyone would actually want to do this; all of the select-like options have the ability to set a timeout, so you can be woken up after a certain amount of time to perform some regular action. I suppose if you were doing something fairly CPU-intensive, like running a video game, you might want to never sleep but instead keep computing, while periodically checking for I/O using non-blocking sockets.
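To make the "readiness is only a hint" point concrete, a small sketch: put the socket in non-blocking mode, and when recv() after select() comes up empty, treat EWOULDBLOCK/EAGAIN as "go back to select()" rather than as an error:

#include <errno.h>
#include <fcntl.h>
#include <sys/socket.h>

/* Switch a descriptor to non-blocking mode. */
void set_nonblocking(int fd)
{
    int flags = fcntl(fd, F_GETFL, 0);
    fcntl(fd, F_SETFL, flags | O_NONBLOCK);
}

/* Call after select() reports fd readable. Returns bytes read, 0 on
   EOF, or -1 when the readiness hint turned out to be stale. */
ssize_t read_ready_socket(int fd, char *buf, size_t len)
{
    ssize_t n = recv(fd, buf, len, 0);
    if (n < 0 && (errno == EWOULDBLOCK || errno == EAGAIN))
        return -1;   /* hint was stale; just go back to select() */
    return n;        /* real code: handle other errno values too */
}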
The select, poll, epoll, kqueue, etc. facilities target multiple socket/file descriptor handling scenarios. Imagine a heavily loaded web server with hundreds of simultaneously connected sockets. How would you know when to read, and from which socket, without blocking everything?
If you call read on a non-blocking socket, it will return immediately if no data has been received since the last call to read. If you only had read, and you wanted to wait until there was data available, you would have to busy wait. This wastes CPU.
poll and select (and friends) allow you to sleep until there's data to read (or write, or a signal has been received, etc.).
If the only thing you're doing is sending and receiving on that socket, you might as well just use a non-blocking socket. Being asynchronous is important when you have other things to do in the meantime, such as update a GUI or handle other sockets.
For your first question, there's no difference in that scenario. The only difference is what they do when there is nothing to be read. Since you're checking that before calling recv() you'll see no difference.
For the second question, the way I see it done in all the libraries is to use select, poll, epoll, kqueue for testing if data is available. The select method is the oldest, and least desirable from a performance standpoint (particularly for managing large numbers of connections).

How to signal select() to return immediately?

I have a worker thread that is listening to a TCP socket for incoming traffic, and buffering the received data for the main thread to access (let's call this socket A). However, the worker thread also has to do some regular operations (say, once per second), even if there is no data coming in. Therefore, I use select() with a timeout, so that I don't need to keep polling. (Note that calling receive() on a non-blocking socket and then sleeping for a second is not good: the incoming data should be immediately available for the main thread, even though the main thread might not always be able to process it right away, hence the need for buffering.)
Now, I also need to be able to signal the worker thread to do some other stuff immediately; from the main thread, I need to make the worker thread's select() return right away. For now, I have solved this as follows (approach basically adopted from here and here):
At program startup, the worker thread creates for this purpose an additional socket of the datagram (UDP) type, and binds it to some random port (let's call this socket B). Likewise, the main thread creates a datagram socket for sending. In its call to select(), the worker thread now lists both A and B in the fd_set. When the main thread needs to signal, it sendto()'s a couple of bytes to the corresponding port on localhost. Back in the worker thread, if B remains in the fd_set after select() returns, then recvfrom() is called and the bytes received are simply ignored.
This seems to work very well, but I can't say I like the solution, mainly as it requires binding an extra port for B, and also because it adds several additional socket API calls which may fail I guess – and I don't really feel like figuring out the appropriate action for each of the cases.
I think ideally, I would like to call some function which takes A as input, and does nothing except makes select() return right away. However, I don't know such a function. (I guess I could for example shutdown() the socket, but the side effects are not really acceptable :)
If this is not possible, the second best option would be creating a B which is much dummier than a real UDP socket, and doesn't really require allocating any limited resources (beyond a reasonable amount of memory). I guess Unix domain sockets would do exactly this, but: the solution should not be much less cross-platform than what I currently have, though some moderate amount of #ifdef stuff is fine. (I am targeting mainly for Windows and Linux – and writing C++ by the way.)
Please don't suggest refactoring to get rid of the two separate threads. This design is necessary because the main thread may be blocked for extended periods (e.g., doing some intensive computation – and I can't start periodically calling receive() from the innermost loop of calculation), and in the meanwhile, someone needs to buffer the incoming data (and due to reasons beyond what I can control, it cannot be the sender).
Now that I was writing this, I realized that someone is definitely going to reply simply "Boost.Asio", so I just had my first look at it... Couldn't find an obvious solution, though. Do note that I also cannot (easily) affect how socket A is created, but I should be able to let other objects wrap it, if necessary.
You are almost there. Use the "self-pipe" trick. Open a pipe, add its read end to your select() read fd_set, and write to its write end from the main thread to unblock the worker thread. It is portable across POSIX systems.
I have seen a variant of a similar technique for Windows in one system (in fact used together with the method above, separated by #ifdef WIN32). Unblocking can be achieved by adding a dummy (unbound) datagram socket to the fd_set and then closing it. The downside is that, of course, you have to re-open it every time.
However, in the aforementioned system, both of these methods are used rather sparingly, and for unexpected events (e.g., signals, termination requests). Preferred method is still a variable timeout to select(), depending on how soon something is scheduled for a worker thread.
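For what it's worth, a minimal sketch of the self-pipe trick on POSIX; wake_pipe, signal_worker() and wait_for_data() are hypothetical names, and error handling is omitted:

#include <sys/select.h>
#include <unistd.h>

static int wake_pipe[2];             /* [0] = read end, [1] = write end */

void init_wakeup(void)  { pipe(wake_pipe); }

/* Main thread: make the worker's select() return right away. */
void signal_worker(void) { write(wake_pipe[1], "x", 1); }

/* Worker thread: wait on socket A and the pipe's read end together. */
int wait_for_data(int sock_a, struct timeval *timeout)
{
    fd_set readfds;
    FD_ZERO(&readfds);
    FD_SET(sock_a, &readfds);
    FD_SET(wake_pipe[0], &readfds);
    int maxfd = sock_a > wake_pipe[0] ? sock_a : wake_pipe[0];

    int n = select(maxfd + 1, &readfds, NULL, NULL, timeout);
    if (n > 0 && FD_ISSET(wake_pipe[0], &readfds)) {
        char c;
        read(wake_pipe[0], &c, 1);   /* drain the wake-up byte */
    }
    return n;                        /* caller checks sock_a as usual */
}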
Using a pipe rather than socket is a bit cleaner, as there is no possibility for another process to get hold of it and mess things up.
Using a UDP socket definitely creates the potential for stray packets to come in and interfere.
An anonymous pipe will never be available to any other process (unless you give it to it).
You could also use signals, but in a multithreaded program you'll want to make sure that all threads except for the one you want have that signal masked.
On Unix it is straightforward using a pipe. If you are on Windows and want to keep using select() to keep your code compatible with Unix, the trick of creating an unbound UDP socket and closing it works well and is easy. But you have to make it thread-safe.
The only way I found to make this thread-safe is to close and recreate the socket in the same thread the select statement is running in. Of course this is difficult if that thread is blocked in the select. That is where the Windows call QueueUserAPC comes in. When Windows is blocking in the select statement, the thread can handle Asynchronous Procedure Calls, which you can schedule from a different thread using QueueUserAPC. Windows interrupts the select, executes your function in the same thread, and continues with the select statement. Your APC method can now close and recreate the socket. Guaranteed thread-safe, and you will never lose a signal.
To keep it simple: save the socket handle in a global variable, then close that global socket; the select() will return immediately:
closesocket(g_socket);
