I have this thread in my application that monitors a set of client sockets. I use select() to block until a client makes a request, so that I can handle it efficiently without multiplying threads.
Now, the problem is that when I add a new client to the list of clients, I have to wait for select()'s timeout (set to 10 seconds) before the new socket is actually added to the listened set.
So I'd like to make select() return before its timeout so that a new client can be listened to immediately.
I already have a solution for this: create a dummy socketpair that I always include in my listened-sockets list, and write to it to wake select() up, but I'm hoping there's a better solution out there.
Edit:
I have no access to eventfd() because the GLibc I use is too old (and I have no means to update it). So I might have to use a FIFO or a socket.
Do you know any?
Thanks!
The usual way of waking up a select loop is to add the read end of a pipe() fd pair to the select's watching set. When you need to wake up the select loop, write some dummy data to the write end of the file descriptor.
Note that on linux you might also want to consider using an eventfd() instead of a pipe() - it may be somewhat more efficient (although less portable).
You could also handle the listen socket in the select loop instead of handing it off to another thread - this would wake up the select loop implicitly when a new client comes.
You can use the same select() call to wait for the incoming connection by including the listener socket in the FD set; this way, when it indicates that a connection is waiting, you can accept the connection without blocking and add the new file descriptor to the set.
You can raise a signal to force select() to fail with EINTR, but signal handling in multithreaded programs is black magic, and socketpair() is simpler and more reliable.
On Windows, you can use WSAEventSelect to associate a socket with an event object (WSAEVENT), and then wait on that event alongside other handles using WaitForMultipleObjects.
The following strategies seem to work well:
Using a single thread/process with a nonblocking accept() call on the listener socket, regardless of how the program handles the accepted request.
Using multiple threads/processes with a blocking accept() call in each process. When a connection comes in, this wakes up exactly one accept().
What doesn't work well is having every thread/process watch the listener socket with EPOLLIN and call accept() in a callback. This wakes every thread/process up, though only one can succeed in actually accept()ing. It's just like the bad old days when a blocking accept() in every process caused a thundering-herd stampede whenever a connection came in.
Is there a way to only have a single thread/process wake up to accept() while still using EPOLLIN? Or should I rewrite to use blocking accept()s, just isolated using threads?
It's not an option to have only a single thread/process run accept() because I'm trying to manage the processes as a pool in a way where each process doesn't need to know whether it's the only daemon accept()ing on the listener socket.
You need to use EPOLLET or EPOLLONESHOT so that exactly one thread is woken by the EPOLLIN event when a new connection comes in. The handling thread then needs to call accept() in a loop until it returns EAGAIN (for EPOLLET), or to re-arm the descriptor with epoll_ctl() (for EPOLLONESHOT), in order for more connections to be handled.
In general when using multiple threads and epoll, you want to use EPOLLET or EPOLLONESHOT. Otherwise when an event happens, multiple threads will be woken to handle it and they may interfere with each other. At best, they'll just waste time figuring out that some other thread is handling the event before waiting again. At worst they'll deadlock or corrupt stuff.
How about multiple sockets listening on the same proto+address+port? This can be accomplished with the Linux-specific SO_REUSEPORT option: https://lwn.net/Articles/542629/ . I have not tried it, but I think it should work even with epoll, since only one socket gets the actual event.
Caveat emptor: this is a non-portable, Linux-only solution. SO_REUSEPORT also suffers from some bugs/features that are detailed in the linked article.
I'm new to linux programming and not entirely familiar with all the synchronization facilities so I'd like to ask more knowledgeable people how they might go about solving this problem.
I have a single thread that I would like to run through a loop. The stopping point in the loop will be a read operation on a socket. I want the read operation to block for some period of time and then time out. However, I need a way to unblock the thread from the read if some event needs attention. The "event" could be any one of a number of different things, so I need some way to tell the thread what caused the read to unblock.
I know that you can unblock a blocked read with a signal but I'm not sure how that's done.
See the select() system call.
This is especially useful for waiting for multiple file channels.
You can set a timeout on the socket operation itself. Example:
struct timeval timeout;
timeout.tv_sec = TIMEOUT_SEC;
timeout.tv_usec = TIMEOUT_USEC; /* tv_usec is in microseconds, not milliseconds */
setsockopt(sock_fd, SOL_SOCKET, SO_RCVTIMEO, &timeout, sizeof(timeout));
/* now receive msg */
recvmsg(sock_fd, &msg, 0);
When you want to make your socket blocking again, set both fields to zero:
timeout.tv_sec = 0;
timeout.tv_usec = 0;
setsockopt(sock_fd, SOL_SOCKET, SO_RCVTIMEO, &timeout, sizeof(timeout));
epoll seems to be the way to go:
The epoll API performs a similar task to poll(2): monitoring multiple file descriptors to see if I/O is possible on any of them. The epoll API can be used either as an edge-triggered or a level-triggered interface and scales well to large numbers of watched file descriptors. The following system calls are provided to create and manage an epoll instance:
See man epoll for more info. You might want to read the "Example for Suggested Usage" section of the manual.
See also epoll vs select
Sounds like you want to use select() as others have mentioned, but you also want a way to interrupt it when a "message" of some sort is available. A typical way of interrupting a select() is to use the self-pipe trick. Basically you create a pipe() and also select() on the read file descriptor of the pipe. When a message arrives in the queue maintained by your program, write a byte to the pipe. This will cause your select call to return, and you'll be able to check whether your pipe is ready for reading. If it is, then you know you have a message to process (whatever that is in your context), so you process it and then go back to select().

Better yet, you could have your pipe actually be your message queue. If you just use the pipe as a way to signal that messages are on your queue, make sure you actually read() the bytes out of your pipe each time through, or it will fill up eventually and block you from writing more notifications to it.
Although, as others have mentioned, why not just have one thread service your queue and do your writes to the socket, while another thread does the reads? Probably a lot simpler.
Perhaps these two libraries may be of use to you:
libev
libuv
They both use the event-driven paradigm on one or more threads (if so desired). Of course, you can implement your own event-driven framework using the already-mentioned APIs and condition variables, but that might be more work than necessary.
I want to write an asynchronous socket server in C, but before I do I'm doing some research. Looking at the select() socket example shown here: http://www.gnu.org/s/hello/manual/libc/Server-Example.html#Server-Example I can see that the example program will only accept one client per select loop (if I'm reading it right). So if there are 20 clients and two more try to connect, will it only accept the 21st client, then process the other 20 (worst case, assuming all 20 others require reading), and THEN accept the 22nd? Would it be better if I break the loop after accepting a client so it can select() again and take care of all pending clients before processing connected ones? Or does that defeat the purpose of using select()? Thanks.
The server pattern shown in the example you linked to is fine; there isn't any significant problem introduced by the loop only accepting one socket per iteration.
The key point to keep in mind is that in a well-designed select() loop, the only place the process should ever block is inside the select() call. In particular, if coded correctly, the server will never block inside send(), recv(), or accept(). It's best to set all sockets to non-blocking mode (via fcntl(fd, F_SETFL, O_NONBLOCK)) in order to guarantee this behavior.
Given that, the precise ordering of "which clients get serviced first within any particular event loop iteration" doesn't matter, because all clients' sockets get handled very soon after they have data ready-for-read (or buffer space ready-for-write), and all new connections get accepted quickly.
select() leaves it up to the programmer to decide how to handle notifications. One call to select() can indicate that any or all sockets have bytes to be read and some connection requests to process. The application can process all the notifications before calling select() or process one notification before calling select() again.
You can use poll() on the listening socket after accept() to see if there are more clients waiting to connect.
Note that the number of concurrent connect-attempts is handled by the backlog parameter to listen(server_sock, backlog) - backlog is 1 in the sample you reference.
A situation I have under Windows XP (SP3) has been driving me nuts, and I'm reaching the end of my tether, so maybe someone can provide some inspiration.
I have a C++ networking program (non-GUI). This program is built to compile and run under Windows, MacOS/X, and Linux, so it uses select() and non-blocking I/O as the basis for its event loop.
In addition to its networking duties, this program needs to read text commands from stdin, and exit gracefully when stdin is closed. Under Linux and MacOS/X, that's easy enough -- I just include STDIN_FILENO in my read fd_set to select(), and select() returns when stdin is closed. I check to see that FD_ISSET(STDIN_FILENO, &readSet) is true, try to read some data from stdin, read() returns 0 (EOF), and so I exit the process.
Under Windows, on the other hand, you can't select on STDIN_FILE_HANDLE, because it's not a real socket. You can't do non-blocking reads on STDIN_FILE_HANDLE, either. That means there is no way to read stdin from the main thread, since ReadFile() might block indefinitely, causing the main thread to stop serving its network function.
No problem, says I, I'll just spawn a thread to handle stdin for me. This thread will run in an infinite loop, blocking in ReadFile(stdinHandle), and whenever ReadFile() returns data, the stdin-thread will write that data to a TCP socket. That socket's connection's other end will be select()'d on by the main thread, so the main thread will see the stdin data coming in over the connection, and handle "stdin" the same way it would under any other OS. And if ReadFile() returns false to indicate that stdin has closed, the stdin-thread just closes its end of the socket-pair so that the main thread will be notified via select(), as described above.
Of course, Windows doesn't have a nice socketpair() function, so I had to roll my own using listen(), connect(), and accept() (as seen in the CreateConnectedSocketPair() function here). But I did that, and it seems to work, in general.
The problem is that it doesn't work 100%. In particular, if stdin is closed within a few hundred milliseconds of when the program starts up, about half the time the main thread doesn't get any notification that the stdin-end of the socket-pair has been closed. What I mean by that is, I can see (by my printf()-debugging) that the stdin-thread has called closesocket() on its socket, and I can see that the main thread is select()-ing on the associated socket (i.e. the other end of the socket-pair), but select() never returns as it should... and if it does return, due to some other socket selecting ready-for-whatever, FD_ISSET(main_thread_socket_for_socket_pair, &readSet) returns 0, as if the connection wasn't closed.
At this point, the only hypothesis I have is that there is a bug in Windows' select() implementation that causes the main thread's select() not to notice that the other end of the socket-pair has closed by the stdin-thread. Is there another explanation? (Note that this problem has been reported under Windows 7 as well, although I haven't looked at it personally on that platform)
Just for the record, this problem turned out to be a different issue entirely, unrelated to threading, Windows, or stdin. The actual problem was an inter-process deadlock, where the parent process was blocked, waiting for the child processes to quit, but sometimes the child processes would be simultaneously blocked, waiting on the parent to supply them with some data, and so nothing would move forward.
Apologies to all for wasting your time on a red herring; if there's a standard way to close this case as unwarranted, let me know and I'll do it.
-Jeremy
Is it possible you have a race condition? Eg. Do you ensure that the CreateConnectedSocketPair() function has definitely returned before the stdin-thread has a chance to try closing its socket?
I am studying your code. In CreateConnectedSocketPair(), socket1 is used for listen(), and newfd is used to send/receive data. So why is "socket1 = newfd" done? How can the listening fd be closed then?
Not a solution, but as a workaround, couldn't you send some magic "stdin has closed" message across the TCP socket and have your receiving end disconnect its socket when it sees that and run whatever 'stdin has closed' handler?
Honestly your code is too long and I don't have time right now to spend on it.
Most likely the problem is in some cases closing the socket doesn't cause a graceful (FIN) shutdown.
Checking the exception set returned from your select may catch the remainder of cases. There is also the (slim) possibility that no notification is actually being sent to the socket that the other end has closed. In that case, there is no way other than timeouts or "keep alive"/ping messages between the endpoints to know that the socket has closed.
If you want to figure out exactly what is happening, break out Wireshark and look for FINs and RSTs (and the absence of anything). If you see the proper FIN sequence going across when your socket is closed, then the problem must be in your code. If you see RST, it may show up in the exception set, and if you don't see anything, you'll need to devise a way in your protocol to 'ping' each side of the connection to make sure it is still alive, or set a sufficiently short timeout for more data.
Rather than chasing perceived bugs in select(), I'm going to address your original fallacy that drove you away from simple, reliable, single-threaded design.
You said "You can't do non-blocking reads on STDIN_FILE_HANDLE, either. That means there is no way to read stdin from the main thread, since ReadFile() might block indefinitely" but this simply isn't the whole story. Look at ReadConsoleInput, WSAEventSelect, and WaitForMultipleObjects. The stdin handle will be signalled only when there is input and ReadConsoleInput will return immediately (pretty much the same idea behind select() in Unix).
Or, use ReadFileEx and WaitForMultipleObjectsEx to have the console reads fire off an APC (which isn't all that asynchronous, it runs on the main thread and only during WaitForMultipleObjectsEx or another explicit wait function).
If you want to stick with using a second thread to get async I/O on stdin, then you might try closing the handle being passed to select instead of doing a socket shutdown (via closesocket on the other end). In my experience select() tends to return really quickly when one of the fds it is waiting on gets closed.
Or, maybe your problem is the other way around. The select docs say "For connection-oriented sockets, readability can also indicate that a request to close the socket has been received from the peer". Typically you'd send that "request to close the socket" by calling shutdown(), not closesocket().
I have a worker thread that is listening to a TCP socket for incoming traffic, and buffering the received data for the main thread to access (let's call this socket A). However, the worker thread also has to do some regular operations (say, once per second), even if there is no data coming in. Therefore, I use select() with a timeout, so that I don't need to keep polling. (Note that calling receive() on a non-blocking socket and then sleeping for a second is not good: the incoming data should be immediately available for the main thread, even though the main thread might not always be able to process it right away, hence the need for buffering.)
Now, I also need to be able to signal the worker thread to do some other stuff immediately; from the main thread, I need to make the worker thread's select() return right away. For now, I have solved this as follows (approach basically adopted from here and here):
At program startup, the worker thread creates for this purpose an additional socket of the datagram (UDP) type, and binds it to some random port (let's call this socket B). Likewise, the main thread creates a datagram socket for sending. In its call to select(), the worker thread now lists both A and B in the fd_set. When the main thread needs to signal, it sendto()'s a couple of bytes to the corresponding port on localhost. Back in the worker thread, if B remains in the fd_set after select() returns, then recvfrom() is called and the bytes received are simply ignored.
This seems to work very well, but I can't say I like the solution, mainly as it requires binding an extra port for B, and also because it adds several additional socket API calls which may fail I guess – and I don't really feel like figuring out the appropriate action for each of the cases.
I think ideally, I would like to call some function which takes A as input, and does nothing except makes select() return right away. However, I don't know such a function. (I guess I could for example shutdown() the socket, but the side effects are not really acceptable :)
If this is not possible, the second best option would be creating a B which is much dummier than a real UDP socket, and doesn't really require allocating any limited resources (beyond a reasonable amount of memory). I guess Unix domain sockets would do exactly this, but: the solution should not be much less cross-platform than what I currently have, though some moderate amount of #ifdef stuff is fine. (I am targeting mainly for Windows and Linux – and writing C++ by the way.)
Please don't suggest refactoring to get rid of the two separate threads. This design is necessary because the main thread may be blocked for extended periods (e.g., doing some intensive computation – and I can't start periodically calling receive() from the innermost loop of calculation), and in the meanwhile, someone needs to buffer the incoming data (and due to reasons beyond what I can control, it cannot be the sender).
Now that I was writing this, I realized that someone is definitely going to reply simply "Boost.Asio", so I just had my first look at it... Couldn't find an obvious solution, though. Do note that I also cannot (easily) affect how socket A is created, but I should be able to let other objects wrap it, if necessary.
You are almost there. Use the "self-pipe" trick: open a pipe, add its read end to your select() read fd_set, and write to the write end from the main thread to unblock the worker thread. It is portable across POSIX systems.
I have seen a variant of similar technique for Windows in one system (in fact used together with the method above, separated by #ifdef WIN32). Unblocking can be achieved by adding a dummy (unbound) datagram socket to fd_set and then closing it. The downside is that, of course, you have to re-open it every time.
However, in the aforementioned system, both of these methods are used rather sparingly, and for unexpected events (e.g., signals, termination requests). Preferred method is still a variable timeout to select(), depending on how soon something is scheduled for a worker thread.
Using a pipe rather than a socket is a bit cleaner, as there is no possibility of another process getting hold of it and messing things up.
Using a UDP socket definitely creates the potential for stray packets to come in and interfere.
An anonymous pipe will never be available to any other process (unless you give it to it).
You could also use signals, but in a multithreaded program you'll want to make sure that all threads except for the one you want have that signal masked.
On Unix it is straightforward using a pipe. If you are on Windows and want to keep using select() so your code stays compatible with Unix, the trick of creating an unbound UDP socket and closing it works well and easily. But you have to make it thread-safe.
The only way I found to make this thread-safe is to close and recreate the socket in the same thread the select statement is running in. Of course this is difficult if that thread is blocking on the select. And then comes in the Windows call QueueUserAPC. When Windows is blocking in the select statement, the thread can handle asynchronous procedure calls (APCs). You can schedule one from a different thread using QueueUserAPC. Windows interrupts the select, executes your function in the same thread, and continues with the select statement. You can now close the socket and recreate it in your APC method. Guaranteed thread-safe, and you will never lose a signal.
To keep it simple: save the socket handle in a global variable, then close that global socket; select() will return immediately: closesocket(g_socket);