Non-blocking sockets when using I/O multiplexing - C

Should I use non-blocking or blocking TCP sockets when using an I/O multiplexing API like poll(2) or epoll(7)?
Some people suggest using non-blocking sockets here, but the I/O multiplexing APIs inform you anyway when there is data to read, so what is wrong with a blocking socket here?

If your TCP server is single-threaded and uses blocking I/O, then it's likely that any client that connects to it will be able to deny service to all of the other clients, simply by sending only a partial message, or alternatively by refusing to read any data from its TCP socket after the server sends data. In the former case, the server may block for a long time (perhaps forever) waiting for the entire message to be received from the client; during that time, the server will not be able to respond to other clients. In the latter case, the server will block for a long time (perhaps forever) waiting for the client to read some TCP data, so that the server socket's send buffer can be drained enough to fit some more outgoing data to that client.
One way to avoid that problem is to set all of the server's sockets to non-blocking I/O mode; that way the server knows it can never get "stuck" inside a recv() or a send() call, and thus can remain responsive to all clients regardless of whether any particular client is behaving nicely, or not. In the non-blocking design, the only place the server ever blocks is inside select() or poll() or similar, because those calls are designed to return whenever any client needs service, rather than blocking on only a single client. (the tradeoff is that with non-blocking I/O your server's buffering/queueing logic will need to be a bit more elaborate, since you can no longer assume that any particular fixed number of bytes will be sent or received during any given send or receive operation)
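For illustration, a minimal sketch of that approach (the helper names here are mine, not from any particular server): put the socket into non-blocking mode with fcntl(), and treat EWOULDBLOCK/EAGAIN from send() as "keep the rest queued and retry after the next poll()" rather than as an error.

    #include <errno.h>
    #include <fcntl.h>
    #include <sys/types.h>
    #include <sys/socket.h>

    /* Put an existing socket into non-blocking mode. */
    int set_nonblocking(int fd)
    {
        int flags = fcntl(fd, F_GETFL, 0);
        if (flags < 0)
            return -1;
        return fcntl(fd, F_SETFL, flags | O_NONBLOCK);
    }

    /* Attempt a send without ever blocking: returns the number of bytes
     * the kernel accepted (possibly fewer than len), 0 if the send
     * buffer is full right now, or -1 on a real error. The caller keeps
     * any unsent bytes queued and retries once poll() reports the
     * socket writable again. */
    ssize_t send_some(int fd, const void *buf, size_t len)
    {
        ssize_t n = send(fd, buf, len, 0);
        if (n < 0 && (errno == EWOULDBLOCK || errno == EAGAIN))
            return 0;
        return n;
    }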
The other way to avoid the problem is to make a multi-threaded server; that has the advantage that each client gets its own thread, and therefore a badly-behaved client will block only its own thread and not the threads servicing other clients. The disadvantage is that now your server is multi-threaded, with all of the additional pitfalls that multithreading introduces.
(and, for completeness, the third approach is simply to ignore the possibility of badly-behaved/poorly-connected clients, and use a single-threaded/blocking model. That works fine for toy examples where clients are expected to be non-hostile, and where the network they are connecting over is reliable, but doesn't work so well in real life)

Non-blocking IO is used when you prefer an error response (EWOULDBLOCK / EAGAIN) over your thread waiting (blocking) until an IO operation becomes possible.
This leads to the question of how the IO multiplexing is achieved.
If you're using a thread-per-connection model (or a process-per-connection), using blocking IO might be more comfortable.
However, if the same thread is serving multiple IO objects, blocking IO would be hazardous and could bring the whole application to a halt.
It is better to use non-blocking IO when a single thread serves multiple IO objects.
Note that the issue might not be noticeable at first when polling (using select/poll or epoll/kqueue), since the IO operations are only performed by a code path that already "knows" that the IO operation will not block (it was polled and reported as ready).
This masks the underlying problem: somewhere in the code an IO operation might be called directly, without polling first, resulting in a blocking IO call that grinds the application to a halt.
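A sketch of the defensive pattern, assuming the descriptor was put into O_NONBLOCK mode when it was created: even a read performed right after poll() reports readiness still handles EAGAIN, because readiness can evaporate (the Linux select(2) man page gives the example of a datagram discarded after a failed checksum).

    #include <errno.h>
    #include <poll.h>
    #include <sys/socket.h>

    /* fd is assumed to already be in O_NONBLOCK mode. */
    void read_when_ready(int fd)
    {
        struct pollfd pfd = { .fd = fd, .events = POLLIN };
        char buf[4096];

        if (poll(&pfd, 1, -1) > 0 && (pfd.revents & POLLIN)) {
            ssize_t n = recv(fd, buf, sizeof buf, 0);
            if (n < 0 && (errno == EAGAIN || errno == EWOULDBLOCK)) {
                /* Readiness evaporated between poll() and recv();
                 * on a blocking socket this call could have stalled
                 * the whole thread. */
            }
            /* ... handle n > 0 (data) and n == 0 (peer closed, TCP) ... */
        }
    }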

Related

Synchronizing between UDP and TCP

I'm currently implementing a daemon server that acts as 2 servers. One of the servers is receiving logs via UDP from a collection of producers. The second server is broadcasting every log that was received from a producer to a consumer who is currently connected via TCP.
These are 2 separate sockets. My current (pretty basic) implementation is to use select() on these 2 sockets and handle every read signal accordingly, so my code is basically (NOTE: this is pseudocode):
    for (;;) {
        FD_SET(consumers_server)
        FD_SET(producers_server)
        select()
        if consumers_server is set:
            add new client to the consumers array
        if producers_server is set:
            broadcast the log to every consumer in the array
    }
This works just fine; the problem occurs when this code is put under stress. When multiple producers are sending logs (UDP), the real bottleneck here is the consumers, which are TCP. Sending a log to the consumers can result in blocking, which I can't afford.
I've tried using non-blocking sockets and select()ing the consumers' write fds; the problem is this would result in saving the non-sent logs in a buffer until they can be sent. This results in very inelegant, massive code, and the system is also low on resources (mainly RAM).
I'm running on a Linux distro.
An alternative approach to synchronize between these UDP and TCP connections would be welcomed.
This is doomed to failure. Sooner or later you will be unable to send to the TCP consumer. Whether that manifests itself as blocking or EAGAIN/EWOULDBLOCK isn't really relevant to the underlying problem, which is that the producer is overrunning the consumer. You have to decide what to do about that. You can have a certain amount of internal buffering but at some point you will have to stop reading from the UDP producers. At that point, UDP datagrams will be dropped and your system will lose data, and of course it is liable to lose data anyway by virtue of using UDP.
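To make the "stop reading from the UDP producers" point concrete, one hedged sketch of how the backpressure could look in the select() loop; the structure and field names here are hypothetical, not taken from the question:

    #include <stddef.h>

    /* Hypothetical per-consumer state. */
    struct consumer {
        int    fd;
        char   outbuf[64 * 1024];   /* bounded queue of unsent log bytes */
        size_t pending;             /* bytes queued but not yet written  */
    };

    /* Before each select(): is there room to queue one more log for
     * every consumer? If not, the caller leaves the UDP socket out of
     * the read set, stops consuming datagrams, and lets the kernel
     * drop them -- the data loss at that point is unavoidable. */
    int all_consumers_have_room(const struct consumer *c, size_t n,
                                size_t logsize)
    {
        for (size_t i = 0; i < n; i++)
            if (sizeof c[i].outbuf - c[i].pending < logsize)
                return 0;
        return 1;
    }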
Don't do this. Use TCP for the producers: or else just accept the data loss and use blocking mode. Non-blocking mode only moves the problem slightly and complicates your code.

Reading multiple UDP messages without polling

I would like to use the recvmmsg call to read multiple UDP messages from ONE single socket at once. I'm reading data from a single multicast group.
When I read TCP data, I usually use poll/select with a non-blocking socket (and a timeout) to be notified when data is ready to be read. I follow this approach as I am aware of the issue of spurious wakeup and the potential troubles of having a blocking socket.
As my application must be very quick, if I follow the same approach with recvmmsg I will introduce an extra system call (poll/select) that might slow down the execution.
So my two questions are the following:
With UDP, can I safely read from BLOCKING sockets using recvmmsg without poll/select or do I have to apply the same principle I've used for TCP (non-blocking+poll)?
Suppose I have a huge amount of multicast traffic, would you go for non-blocking socket + recvmmsg only (no poll) and burn a lot of CPU?
I am using Linux: CentOS 7 and Oracle Linux.
You can always use blocking mode, with both TCP and UDP sockets.
If you want to impose a read timeout there is setsockopt() with the SO_RCVTIMEO option.
"I follow this approach as I am aware of the issue of spurious wakeup"
What spurious wakeup? Never seen it in 25 years of network programming.
"and potential troubles of having a blocking socket"
Never heard of those either.
Using select() and non-blocking mode with a single socket is pointless unless your platform doesn't support SO_RCVTIMEO. It's an extra system call, for a start.
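A sketch of that combination for a blocking Linux UDP socket (the batch and buffer sizes here are arbitrary): SO_RCVTIMEO bounds the wait, and a single recvmmsg() call then drains up to VLEN datagrams.

    #define _GNU_SOURCE             /* recvmmsg() is a Linux extension */
    #include <string.h>
    #include <sys/socket.h>
    #include <sys/time.h>

    enum { VLEN = 32, BUFSZ = 2048 };

    /* Returns the number of datagrams read, or -1 with errno EAGAIN /
     * EWOULDBLOCK if the 100 ms timeout expired with nothing to read. */
    int read_batch(int fd)
    {
        /* One-time setup in real code: bound the blocking wait. */
        struct timeval tv = { .tv_sec = 0, .tv_usec = 100 * 1000 };
        setsockopt(fd, SOL_SOCKET, SO_RCVTIMEO, &tv, sizeof tv);

        static char bufs[VLEN][BUFSZ];
        struct mmsghdr msgs[VLEN];
        struct iovec iov[VLEN];

        memset(msgs, 0, sizeof msgs);
        for (int i = 0; i < VLEN; i++) {
            iov[i].iov_base            = bufs[i];
            iov[i].iov_len             = BUFSZ;
            msgs[i].msg_hdr.msg_iov    = &iov[i];
            msgs[i].msg_hdr.msg_iovlen = 1;
        }

        /* Blocks until at least one datagram arrives (or the timeout
         * fires), then also picks up whatever else is already queued,
         * up to VLEN messages in one system call. */
        return recvmmsg(fd, msgs, VLEN, 0, NULL);
    }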
The option of using blocking or non-blocking depends on what the final purpose of the application is.
- Say it's just a sample chat application showing the usage of UDP combined with TCP; then you can use either.
- But if you are planning to make this module part of a highly used application with lots of data flowing, then creating multiple threads/processes to handle different tasks will come in handy. The parent thread will wait for the message, but for processing it will spawn a different child thread, making the parent available for the next message.
But in a nutshell, I don't see any issue with your first option of using a blocking socket without poll/select for a UDP application, considering it's just for homework purposes.

select() equivalence in I/O Completion Ports

I am developing a proxy server using WinSock 2.0 in Windows. If I were developing it with the blocking model, select() would be the way to wait for data from the client or the remote server. Is there an applicable way to do this using I/O Completion Ports?
I used to have two contexts for the two directions of data using I/O Completion Ports, but with a WSARecv pending I couldn't receive any data from the remote server, and I couldn't find the problem.
Thanks in advance.
EDIT: Here's the worker-thread code of my current I/O Completion Ports implementation. But what I am asking about is how to implement the equivalent of select().
I/O Completion Ports provide an indication of when an I/O operation completes, they do not indicate when it is possible to initiate an operation. In many situations this doesn't actually matter. Most of the time the overlapped I/O model will work perfectly well if you assume it is always possible to initiate an operation. The underlying operating system will, in most cases, simply do the right thing and queue the data for you until it is possible to complete the operation.
However, there are some situations when this is less than ideal. For example you can always send to a socket using overlapped I/O. You can do this even when the remote peer is not reading and the TCP stack has started to use flow control and has filled the TCP window... This simply uses resources on your local machine in a completely uncontrolled manner (not entirely uncontrolled, but controlled by the peer, which is not ideal). I write about this here and in many situations you DO need to actively manage this kind of thing by tracking how many outstanding I/O write requests you have and using that as an indication of 'readiness to send'.
Likewise if you want a 'readiness to recv' indication you could issue a 'zero byte' read on the socket. This is a read which is issued with a zero length buffer. The read returns when there is data to read but no data is returned. This would give you the indication that there is data to be read on the connection but is, IMHO, pointless unless you are suffering from the very unlikely situation of hitting the I/O page lock limit, as you may as well read the data when it becomes available rather than forcing multiple kernel to user mode transitions.
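For reference, a zero-byte read is just an overlapped WSARecv() whose single buffer has length zero; the per-connection context below is hypothetical:

    #include <winsock2.h>

    struct conn {                   /* hypothetical per-connection context */
        SOCKET     sock;
        OVERLAPPED ov;
    };

    /* Post a zero-byte read: it completes when data is available to
     * read, without the kernel having to lock any buffer pages. */
    static int post_zero_byte_read(struct conn *c)
    {
        WSABUF zb = { 0, NULL };    /* WSABUF is { len, buf }: length 0 */
        DWORD flags = 0;

        int rc = WSARecv(c->sock, &zb, 1, NULL, &flags, &c->ov, NULL);
        if (rc == SOCKET_ERROR && WSAGetLastError() != WSA_IO_PENDING)
            return -1;              /* real failure */
        return 0;                   /* completion will arrive on the IOCP;
                                       only then issue a real WSARecv */
    }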
In summary, you don't really need an answer to your question. You need to look at how the API works and write your code to work with it rather than trying to force the API to work in a way that other APIs that you are familiar with work.

Use of Listen() sys call in a multi threaded TCP server

I am in the middle of a multi-threaded TCP server design using the Berkeley socket API under Linux, in system-independent C. The server has to perform I/O multiplexing, as it is a centralized controller that manages the clients (which maintain a persistent connection with the server forever, unless a machine on which a client is running fails). The server needs to handle a minimum of 500 clients.
I have a 16-core machine, and what I want is to spawn 16 threads (one per core) plus a main thread. The main thread will listen() for connections and then dispatch each pending connection to a thread, which will call accept() and then use the select() sys call to perform I/O multiplexing. Now the problem is: how do I know when to dispatch a thread to call accept()? That is, how do I find out in the main thread that a connection is pending at the listen()ing socket, so that I can assign a thread to handle it? All help much appreciated.
Thanks.
The listen() function call prepares a socket to accept incoming connections. You then use select() on that socket and get a notification that a new connection has arrived. You then call accept on the server socket and a new socket id will be returned. If you like you can then pass that socket id onto your thread.
What I would do is have a single thread for accepting connections and receiving data which then dispatches the data to a queue as a work item for processing.
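In outline (dispatch_to_worker() below is a stand-in for whatever handoff mechanism you choose, e.g. a work queue):

    #include <sys/select.h>
    #include <sys/socket.h>

    void dispatch_to_worker(int connfd);    /* your handoff mechanism */

    /* Main thread: sleep in select() until a connection is pending on
     * the listening socket, accept it, and hand the new socket id off. */
    void accept_loop(int listenfd)
    {
        listen(listenfd, SOMAXCONN);
        for (;;) {
            fd_set rfds;
            FD_ZERO(&rfds);
            FD_SET(listenfd, &rfds);

            if (select(listenfd + 1, &rfds, NULL, NULL, NULL) <= 0)
                continue;                   /* interrupted; retry */

            if (FD_ISSET(listenfd, &rfds)) {
                int connfd = accept(listenfd, NULL, NULL);
                if (connfd >= 0)
                    dispatch_to_worker(connfd);
            }
        }
    }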
Note that if each of your 16 threads is going to be running select (or poll, or whatever) anyway, there is no problem with them all adding the server socket to their select sets.
More than one may wake when the server socket has an incoming connection, but only one will successfully call accept(), so it should work.
Pro: easy to code.
Cons:
- a naive implementation doesn't balance load (it would need e.g. global stats on the number of accepted sockets handled by each thread, with high-load threads removing the server socket from their select sets)
- thundering herd behaviour could be problematic at high accept rates
epoll or aio/asio. I suspect you got no replies to your earlier post because you didn't specify Linux when you asked for a scalable, high-performance solution. Asynchronous solutions on different OSes are implemented with substantial kernel support, and Linux aio, Windows IOCP etc. are different enough that "system independent" does not really apply - nobody could give you an answer.
Now that you have narrowed the OS down to Linux, look up the appropriate asynchronous solutions.

Non-blocking vs select() call in socket

I have to implement a game server in C which handles multiple clients and continuously exchanges information with them. The clients may not be sending information at all times. Should I assign a thread with a non-blocking socket to each of them, or use the select() call?
Which one is better?
Both will work just as well in most cases. Note that the thread version will use blocking sockets, and the select-based version uses non-blocking sockets. In the case of a server, you may feel that events for received data are a good model.
The threaded version will have the memory-overhead of allocating a stack for each thread (often the size of a page), but you can program as if you have only one client.
The evented version needs to maintain state between callbacks, which can be more work, but again, in servers it feels quite natural.
select() is the way to go; that's what it's made for. If you go for the threaded approach with non-blocking sockets, you will have to implement a sleep after each tick or the thread will use all available CPU time. So the worst-case response time, when one client is sending data, is your sleep value. You could also implement one thread per socket and make it blocking, but depending on how many sockets you have, that will be a lot of overhead.
With select() you can watch all sockets at once (no matter if they are blocking or not, btw) and only have to process those which are active.
If you are on Linux and have many sockets to watch, you can take a look at epoll().
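For reference, a minimal single-threaded epoll() skeleton (level-triggered) looks like this:

    #include <sys/epoll.h>

    enum { MAX_EVENTS = 64 };

    void event_loop(int listenfd)
    {
        int epfd = epoll_create1(0);
        struct epoll_event ev = { .events = EPOLLIN, .data.fd = listenfd };
        epoll_ctl(epfd, EPOLL_CTL_ADD, listenfd, &ev);

        for (;;) {
            struct epoll_event events[MAX_EVENTS];
            int n = epoll_wait(epfd, events, MAX_EVENTS, -1);
            for (int i = 0; i < n; i++) {
                if (events[i].data.fd == listenfd) {
                    /* accept() here, then EPOLL_CTL_ADD the new client */
                } else {
                    /* recv() from the active client here */
                }
            }
        }
    }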
