I've seen a few write-ups comparing select() with poll() or epoll(), and I've seen many guides discussing the actual usage of select() with multiple sockets.
However, what I can't seem to find is a comparison to a non-blocking recv() call without select(). In the event of only having 1 socket to read from and 1 socket to write to, is there any justification for using the select() call? The recv() method can be set up not to block and to return an error (WSAEWOULDBLOCK) when there is no data available, so why bother to call select() when you have no other sockets to examine? Is the non-blocking recv() call much slower?
You wouldn't want a non-blocking call to recv() without some other means of waiting for data on the socket, because you would poll in an infinite loop, eating up CPU time.
If you have no other sockets to examine and nothing else to do in the same thread, a blocking call to read is likely to be the most efficient solution. Although in such a situation, worrying about the efficiency of this is likely to be premature optimisation.
These kinds of considerations only tend to come into play as the socket count increases.
Nonblocking calls are only faster in the context of handling multiple sockets on a single thread.
If there is no data available, and you use non-blocking IO, recv() will return immediately.
Then what should the program do? You would need to call recv() in a loop until data becomes available - this just uses CPU for pretty much no reason.
Spinning on recv() and burning CPU in that manner is very undesirable; you'd rather have the process wait until data becomes available and then get woken up, which is what select()/poll() and similar calls do.
Calling sleep() in the loop in order to not burn CPU is not a good solution either: it introduces high latency, because the program cannot process data as soon as the data becomes available.
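To make the contrast concrete, here is a minimal sketch in C of the spin loop described above versus letting select() wait; it assumes a connected socket descriptor fd that has already been put into non-blocking mode:

    #include <errno.h>
    #include <sys/select.h>
    #include <sys/socket.h>
    #include <sys/types.h>

    /* Anti-pattern: spin on a non-blocking recv(), burning CPU while
     * there is nothing to read. */
    ssize_t recv_spinning(int fd, char *buf, size_t len)
    {
        for (;;) {
            ssize_t n = recv(fd, buf, len, 0);
            if (n >= 0)
                return n;                       /* got data (or EOF) */
            if (errno != EAGAIN && errno != EWOULDBLOCK)
                return -1;                      /* real error */
            /* otherwise loop immediately: 100% CPU for nothing */
        }
    }

    /* Preferred: sleep inside select() until the kernel says the socket
     * is readable, then do the same recv(). */
    ssize_t recv_waiting(int fd, char *buf, size_t len)
    {
        fd_set rfds;
        FD_ZERO(&rfds);
        FD_SET(fd, &rfds);
        if (select(fd + 1, &rfds, NULL, NULL, NULL) < 0)
            return -1;
        return recv(fd, buf, len, 0);
    }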
select() and friends let you design the workflow in such a way that slowness of one socket does not impede the speed at which you can serve another. Imagine that data arrives fast from the receiving socket and you want to accept it as fast as possible and store in memory buffers. But the sending socket is slow. When you've filled up the sending buffers of the OS and send() gave you EWOULDBLOCK, you can issue select() to wait on both receiving and sending sockets. select() will fall through if either new data on the receiving socket arrived, or some buffers are freed and you can write more data to the sending socket, whichever happens first.
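A rough sketch of that pattern, with hypothetical non-blocking descriptors in_fd (receiving) and out_fd (sending) and the actual buffer management omitted:

    #include <sys/select.h>
    #include <sys/socket.h>

    /* Wait until either the receiving socket has data or the sending
     * socket has buffer space again, whichever happens first. */
    void relay_step(int in_fd, int out_fd, int want_write)
    {
        fd_set rfds, wfds;
        FD_ZERO(&rfds);
        FD_ZERO(&wfds);
        FD_SET(in_fd, &rfds);
        if (want_write)                  /* only ask for writability after */
            FD_SET(out_fd, &wfds);       /* send() returned EWOULDBLOCK    */

        int maxfd = (in_fd > out_fd ? in_fd : out_fd) + 1;
        if (select(maxfd, &rfds, &wfds, NULL, NULL) < 0)
            return;

        if (FD_ISSET(in_fd, &rfds)) {
            /* recv() into the in-memory buffers as fast as possible ... */
        }
        if (want_write && FD_ISSET(out_fd, &wfds)) {
            /* ... and send() queued data now that the OS buffers drained */
        }
    }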
Of course a more realistic use case for select() is when you have multiple sockets to read from and/or to write to, or when you must pass the data between your two sockets in both directions.
In fact, select() tells you when the next read or write operation on a socket is known to succeed, so if you only try to read and write when select() allows you, your program will almost work even if you didn't make the sockets non-blocking! It is still unwise to do, because there are edge cases in which the next operation may still block even though select() reported the socket as "ready".
On the other hand, making the sockets non-blocking and not using select() is almost never advisable because of the reason explained by #Troy.
Related
I need to "wake up" a process that is waiting on epoll() from another process.
I've created a UDS (AF_UNIX) type SOCK_DGRAM where:
The client every few ms might send one char to the server
The server is waiting with epoll() on the socket for read
I don't need the data from the client, only to "wake up" from it
How can I do this most efficiently?
Do I have to read() the data?
Can the server somehow ignore the data without overloading the socket's memory?
Do I have to read() the data? Can the server somehow ignore the data without overloading the socket's memory?
If you're receiving data on a socket on an ongoing basis then yes, you need to read that data, else the socket buffer will eventually fill. After it does, you will not receive any more data. You don't need to do anything with the data you read, and you can consume many bytes at a time if you wish, but reading the data is how you remove them from the socket buffer.
You will also find that epoll_wait() does not behave as you want if you do not read the data. If you are watching the socket fd in level-triggered mode, and there are already data available to read, then epoll_wait() will not block. If you are watching the socket fd in edge-triggered mode, and there are already data ready to read, then receiving more data will not cause epoll_wait() to unblock.
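For illustration, a minimal sketch of that drain, assuming the wakeup socket sock_fd has been made non-blocking and registered (level-triggered, with data.fd set to the socket) on the epoll instance ep_fd:

    #include <sys/epoll.h>
    #include <sys/socket.h>

    /* Block in epoll_wait() until the wakeup socket becomes readable,
     * then read and discard everything queued on it so its receive
     * buffer cannot fill up and level-triggered epoll blocks again. */
    void wait_for_wakeup(int ep_fd, int sock_fd)
    {
        struct epoll_event ev;
        if (epoll_wait(ep_fd, &ev, 1, -1) > 0 && ev.data.fd == sock_fd) {
            char scratch[64];
            while (recv(sock_fd, scratch, sizeof scratch, 0) > 0)
                ;                        /* discard; stops at EAGAIN */
        }
        /* ... "woken up": do whatever the wakeup was for ... */
    }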
How can I do this most efficiently?
Are you really worried about single-byte read() calls at a rate not exceeding one every few milliseconds? Is this for some low-power embedded system?
I don't really see a lot of room for improvement if you've settled on using epoll for this. If it turns out not to perform well enough for you, then you could consider alternatives such as process-shared semaphores or signals, though it is by no means clear that either of these would be superior. This is what performance testing is for.
My application has ONLY 1 Unix TCP socket that it uses to recv() and send(). The socket is non-blocking. Given this, is there an advantage in doing a select() before a send()/recv()?
If the underlying TCP pipe is not ready for an I/O, the send()/recv() should immediately return with an EWOULDBLOCK or EAGAIN. So, what's the point of doing a select()? Seems like, it might only cause an additional system call overhead in this case. Am I missing anything?
EDIT: Forgot to mention: The application is single-threaded.
If your socket is non-blocking, then you need select (or preferably poll, which does not have the broken FD_SETSIZE limit and associated dangers) to block for you in place of the blocking that would be taking place (if the socket were not non-blocking) in send and recv. Otherwise you will spin, using 100% CPU time to do nothing. In most cases, you could just as easily make the socket blocking and do away with select/poll. However, there is one interesting case to consider: blocking IO could deadlock if your program is blocked in send and the program at the other end of the socket is also blocked in send (or the opposite). With non-blocking IO and select/poll, you naturally detect this situation and process the pending input when writing your output is not possible.
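A minimal sketch of that idea with poll(), assuming a single non-blocking socket fd and a flag saying whether unsent output is currently queued:

    #include <poll.h>
    #include <sys/socket.h>

    /* One poll() round: always watch for input, watch for output only
     * while we have unsent data.  This is what lets the program keep
     * consuming input even when send() would block, avoiding the
     * mutual-send deadlock described above. */
    void io_step(int fd, int have_pending_output)
    {
        struct pollfd p = { .fd = fd, .events = POLLIN };
        if (have_pending_output)
            p.events |= POLLOUT;

        if (poll(&p, 1, -1) < 0)
            return;

        if (p.revents & POLLIN) {
            /* recv() and process incoming data */
        }
        if (p.revents & POLLOUT) {
            /* send() as much pending output as the kernel will accept */
        }
    }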
You could just do recv() in a loop, but then you'll be consuming a lot of CPU time.
Better to wait on select() to avoid the extra CPU overhead. If you need to be doing background tasks, add a timeout to select() so you can wake periodically, even with no network traffic.
If your application is latency sensitive then it may be justified to spin in a tight recv() loop without select() and give it a dedicated CPU (otherwise the scheduler will punish it and you end up having massive latency). If your app cannot afford that but still gives a thread to serve this socket, then just make the socket blocking on the read side and let the scheduler wake your thread up when data is available. On the sending side it again depends on what you need: either make the socket blocking or spin.
Only if your application is single-threaded and the logic is "receive-process-reply" do you absolutely need a non-blocking read/write socket, a selector, and a write queue: receive when data is there, process it, put the response on the queue, register for writability, flush the queue to the socket when it becomes writable, and unregister from writability. Readability should stay registered the whole time.
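As a small illustration of the register/unregister-for-writability step, here is a sketch using epoll (the same idea applies to any selector); ep_fd, fd, and queue_nonempty are hypothetical names:

    #include <sys/epoll.h>

    /* Keep read interest registered all the time; add write interest
     * only while the per-connection write queue is non-empty. */
    void update_interest(int ep_fd, int fd, int queue_nonempty)
    {
        struct epoll_event ev;
        ev.events  = EPOLLIN | (queue_nonempty ? EPOLLOUT : 0);
        ev.data.fd = fd;
        epoll_ctl(ep_fd, EPOLL_CTL_MOD, fd, &ev);
    }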
I am writing a client that receives UDP datagrams from a single sender. All IO will be done in a single thread. Generally, there will either be no data, or a 30 MBit/s stream. My primary concern is in keeping latency as low as possible.
The plan is to block, waiting for data, in a loop with a short-ish timeout, so that the IO thread can be responsive to shutdown requests, etc.
I am inclined to use a blocking socket, set a timeout on it, and do a recvfrom() call. However, this seems to be much less common than a select()/poll() and recvfrom() combination on a nonblocking socket.
Given that I am only working with a single socket, it seems that the nonblocking approach is needlessly complicated. Am I missing something else? Is there a reason to prefer nonblocking sockets in this particular case?
If you have a dedicated thread for handling the socket then asynchronous I/O, select, etc. are useless. What you want is simply recvfrom(2) and to handle the data as quickly as possible.
Any fancy mechanisms (epoll, libaio, etc.) won't help you get more speed out of your application.
With only a few peers (and 'one' is surely in this set :)), a thread with a blocking socket should be fine. The code is easier to write since state can be maintained in the dedicated thread - no need for the state-machines that are usually required with a non-blocking system.
Short timeout - do you need this? Do you shut down this subsystem before app close? If not, could you just let it be killed by the OS?
If you have to shut down the thread system, you could set some 'terminate' flag and send yourself a UDP message to unblock the thread so it realises it has to die.
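A rough sketch of the blocking-with-timeout receive loop from the question, combined with such a 'terminate' flag (fd and terminate are placeholder names; the self-sent datagram trick just makes the thread notice the flag immediately instead of after the timeout):

    #include <sys/socket.h>
    #include <sys/time.h>
    #include <sys/types.h>

    /* Receive loop on a blocking UDP socket with a short receive
     * timeout, so the thread can check for a shutdown request between
     * datagrams. */
    void rx_loop(int fd, volatile int *terminate)
    {
        struct timeval tv = { .tv_sec = 0, .tv_usec = 100 * 1000 }; /* 100 ms */
        setsockopt(fd, SOL_SOCKET, SO_RCVTIMEO, &tv, sizeof tv);

        char buf[65536];
        while (!*terminate) {
            ssize_t n = recvfrom(fd, buf, sizeof buf, 0, NULL, NULL);
            if (n > 0) {
                /* process the datagram with minimum latency */
            }
            /* n < 0 with EAGAIN/EWOULDBLOCK just means the timeout expired */
        }
    }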
I do not understand what the difference is between calling recv() on a non-blocking socket vs a blocking socket after waiting to call recv() after select returns that it is ready for reading. It would seem to me like a blocking socket will never block in this situation anyway.
Also, I have heard that one model for using non-blocking sockets is to try to make calls (recv/send/etc) on them after some amount of time has passed instead of using something like select. This technique seems slow and wasteful compared to using something like select (but then I don't get the purpose of non-blocking at all as described above). Is this common in network programming today?
There's a great overview of all of the different options for doing high-volume I/O called The C10K Problem. It has a fairly complete survey of a lot of the different options, at least as of 2006.
Quoting from it, on the topic of using select on non-blocking sockets:
Note: it's particularly important to remember that readiness notification from the kernel is only a hint; the file descriptor might not be ready anymore when you try to read from it. That's why it's important to use nonblocking mode when using readiness notification.
And yes, you could use non-blocking sockets and then have a loop that waits if nothing is ready, but that is fairly wasteful compared to using something like select or one of the more modern replacements (epoll, kqueue, etc). I can't think of a reason why anyone would actually want to do this; all of the select-like options have the ability to set a timeout, so you can be woken up after a certain amount of time to perform some regular action. I suppose if you were doing something fairly CPU intensive, like running a video game, you may want to never sleep but instead keep computing, while periodically checking for I/O using non-blocking sockets.
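As a small illustration of the warning in the quote above, here is a sketch (error handling trimmed) where the socket is put into non-blocking mode so a spurious readiness report cannot hang the loop:

    #include <errno.h>
    #include <fcntl.h>
    #include <sys/select.h>
    #include <sys/socket.h>
    #include <sys/types.h>

    ssize_t guarded_recv(int fd, char *buf, size_t len)
    {
        /* make the socket non-blocking, so a "ready" socket that turns
         * out to have nothing to read returns EAGAIN instead of blocking */
        fcntl(fd, F_SETFL, fcntl(fd, F_GETFL, 0) | O_NONBLOCK);

        fd_set rfds;
        FD_ZERO(&rfds);
        FD_SET(fd, &rfds);
        if (select(fd + 1, &rfds, NULL, NULL, NULL) < 0)
            return -1;

        ssize_t n = recv(fd, buf, len, 0);
        if (n < 0 && (errno == EAGAIN || errno == EWOULDBLOCK)) {
            /* readiness was only a hint; nothing to read after all,
             * but at least we did not get stuck here */
        }
        return n;
    }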
The select, poll, epoll, kqueue, etc. facilities target multiple socket/file descriptor handling scenarios. Imagine a heavy loaded web-server with hundreds of simultaneously connected sockets. How would you know when to read and from what socket without blocking everything?
If you call read on a non-blocking socket, it will return immediately if no data has been received since the last call to read. If you only had read, and you wanted to wait until there was data available, you would have to busy wait. This wastes CPU.
poll and select (and friends) allow you to sleep until there's data to read (or write, or a signal has been received, etc.).
If the only thing you're doing is sending and receiving on that socket, you might as well just use a non-blocking socket. Being asynchronous is important when you have other things to do in the meantime, such as update a GUI or handle other sockets.
For your first question, there's no difference in that scenario. The only difference is what they do when there is nothing to be read. Since you're checking that before calling recv() you'll see no difference.
For the second question, the way I see it done in all the libraries is to use select, poll, epoll, kqueue for testing if data is available. The select method is the oldest, and least desirable from a performance standpoint (particularly for managing large numbers of connections).
What happens if I have one socket, s, there is no data currently available on it, it is a blocking socket, and I call recv on it from two threads at once? Will one of the threads get the data? Will both get it? Will the 2nd call to recv return with an error?
One thread will get it, and there's no way to tell which.
This doesn't seem like a reasonable design. Is there a reason why you need two threads calling recv() on the same socket?
Socket implementations should be thread-safe, so exactly one thread should get the data when it becomes available. The other call should just block.
I can't find a reference for this, but here's my understanding:
A vendor's guarantee of thread-safety may mean only that multiple threads can each safely use their own sockets; it does not guarantee atomicity across a single call, and it doesn't promise any particular allocation of the socket's data among multiple threads.
Suppose thread A calls recv() on a socket that's receiving TCP data streaming in at a high rate. If recv() needs to be an atomic call, then thread A could block all other threads from executing, because it needs to be running continuously to pull in all the data (until its buffer is full, anyway.) That wouldn't be good. Hence, I would not assume that recv() is immune to context switching.
Conversely, suppose thread A makes a blocking call to recv() on a TCP socket, and the data is coming in slowly. Hence the call to recv() returns with errno set to EAGAIN.
In either of these cases, suppose thread B calls recv() on the same socket while thread A is still receiving data. When does thread A stop getting data handed to it so that thread B can start receiving data? I don't know of a Unix implementation that will try to remember that thread A was in the middle of an operation on the socket; instead, it's up to the application (threads A and B) to negotiate their use of it.
Generally, it's best to design the app so that only one of the threads will call recv() on a single socket.
From the man page on recv:
A recv() on a SOCK_STREAM socket returns as much available information as the size of the buffer supplied can hold.
Let's assume you are using TCP, since it was not specified in the question. So suppose you have thread A and thread B both blocking on recv() for socket s. Once s has some data to be received it will unblock one of the threads, let's say A, and return the data. The data returned will be of some random size as far as we are concerned. Thread A inspects the data received and decides if it has a complete "message", where a message is an application-level concept.
Thread A decides it does not have a complete message, so it calls recv() again. BUT in the meantime B was already blocking on the same socket, and has received the rest of the "message" that was intended for thread A. I am using intended loosely here.
Now both thread A and thread B have an incomplete message, and will, depending on how the code is written, throw the data away as invalid, or cause weird and subtle errors.
I wish I could say I didn't know this from experience.
So while recv() itself is technically thread safe, it is a bad idea to have two threads calling it simultaneously if you are using it for TCP.
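If two threads really must share one TCP socket, one way to avoid that interleaving is to serialize the whole read-one-message sequence; the sketch below assumes a hypothetical 4-byte length-prefix framing and a blocking socket:

    #include <arpa/inet.h>
    #include <pthread.h>
    #include <stdint.h>
    #include <sys/socket.h>
    #include <sys/types.h>

    static pthread_mutex_t rx_lock = PTHREAD_MUTEX_INITIALIZER;

    /* Read exactly len bytes from a blocking TCP socket. */
    static int recv_all(int fd, void *buf, size_t len)
    {
        char *p = buf;
        while (len > 0) {
            ssize_t n = recv(fd, p, len, 0);
            if (n <= 0)
                return -1;              /* error or peer closed mid-message */
            p   += n;
            len -= (size_t)n;
        }
        return 0;
    }

    /* Receive one complete length-prefixed message while holding the
     * lock, so another thread can never pick up the tail of it. */
    int recv_message(int fd, void *buf, uint32_t maxlen, uint32_t *outlen)
    {
        int rc = -1;
        pthread_mutex_lock(&rx_lock);
        uint32_t len_net;
        if (recv_all(fd, &len_net, sizeof len_net) == 0) {
            uint32_t len = ntohl(len_net);
            if (len <= maxlen && recv_all(fd, buf, len) == 0) {
                *outlen = len;
                rc = 0;
            }
        }
        pthread_mutex_unlock(&rx_lock);
        return rc;
    }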
As far as I know it is completely safe when you are using UDP.
I hope this helps.