I have an application that is going to work like P2P software where all peers talk to each other. Since the communication will be over TCP, I thought I could use epoll(7) so that multiple connections can be handled. Since each peer will send data very often, I thought I would establish a persistent connection to each peer that is used for the application's lifetime.
Now, one thing that I don't know how to handle: since the connection is never closed, how do I know when I should stop receiving data with read() and call epoll_wait() again to listen for more packets? Or is there a better way of dealing with persistent TCP connections?
You should set the socket to non-blocking, and when epoll indicates there is data to read, you should call read() in a loop until read() returns -1 with errno set to EWOULDBLOCK (or EAGAIN, which is the same value on Linux). That is, your read loop could look something like:
for (;;) {
    ssize_t ret = read(fd, buf, sizeof buf);   // fd: the peer socket, buf: a receive buffer
    if (ret == 0) {
        // client disconnected; handle it and remove the fd from the epoll set
        break;
    } else if (ret == -1) {
        if (errno == EWOULDBLOCK || errno == EAGAIN) {
            // no more data for now; return to the epoll loop
        } else {
            // error occurred; handle it and remove the fd from the epoll set
        }
        break;
    }
    // handle the ret bytes of read data
}
If you're not using edge-triggered mode with epoll, you don't really need the loop: you could get away with doing just one read and returning to the epoll loop. But handle the return values just like the code above does.
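For reference, registering the peer socket with epoll might look roughly like the sketch below. This is my own illustration, not part of the original answer; epfd (an epoll instance from epoll_create1()) and fd (the connected peer socket) are placeholder names.

#include <sys/epoll.h>
#include <fcntl.h>

/* Make the socket non-blocking so the read loop above can safely drain it. */
int flags = fcntl(fd, F_GETFL, 0);
fcntl(fd, F_SETFL, flags | O_NONBLOCK);

/* Watch for incoming data; EPOLLET selects edge-triggered mode.
   Drop EPOLLET if you prefer level-triggered behaviour. */
struct epoll_event ev;
ev.events = EPOLLIN | EPOLLET;
ev.data.fd = fd;
if (epoll_ctl(epfd, EPOLL_CTL_ADD, fd, &ev) == -1)
    perror("epoll_ctl");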
I'm not familiar with epoll, but have a look here at Beej's guide to see an example of sockets using poll(): section 7.2 shows how it is done, and section 9.17 covers the usage of poll().
Hope this helps,
Best regards,
Tom.
read() reads as much data as is immediately available (but no more than you request). Just call read() on the active socket with a big-enough buffer (you probably don't need it bigger than your MTU… 2048 bytes will do) and call epoll_wait() again when it completes.
Let's suppose I've created a listening socket:
sock = socket(...);
bind(sock,...);
listen(sock, ...);
Is it possible to do epoll_wait on sock to wait for an incoming connection? And how do I get the client's socket fd after that?
The thing is, on the platform I'm writing for, sockets cannot be made non-blocking, but there is a working epoll implementation with timeouts, and I need to accept a connection and work with it in a single thread, so that it doesn't hang if something goes wrong and the connection never comes.
Without knowing what this non-standard platform is it's impossible to know exactly what semantics they gave their epoll call. But on the standard epoll on Linux, a listening socket will be reported as "readable" when an incoming connection arrives, and then you can accept the connection by calling accept.
If you leave the socket in blocking mode, and always check for readability using epoll's level-triggered mode before each call to accept, then this should work – the only risk is that if you somehow end up calling accept when no connection has arrived, then you'll get stuck.
For example, this could happen if there are two processes sharing a listening socket, and they both try to accept the same connection. Or maybe it could happen if an incoming connection arrives, and then is closed again before you call accept. (Pretty sure in this case Linux still lets the accept succeed, but this kind of edge case is exactly where I'd be suspicious of a weird platform doing something weird.) You'd want to check these things.
Non-blocking mode is much more reliable because in the worst case, accept just reports that there's nothing to accept. But if that's not available, then you might be able to get away with something like this...
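To make the level-triggered approach concrete, here is a rough sketch of my own (not from the original answer), assuming standard Linux epoll with a blocking listening socket; epfd and listen_fd are placeholder names:

#include <sys/epoll.h>
#include <sys/socket.h>

struct epoll_event ev = {0};
ev.events = EPOLLIN;                 /* level-triggered is the default */
ev.data.fd = listen_fd;
epoll_ctl(epfd, EPOLL_CTL_ADD, listen_fd, &ev);

for (;;) {
    struct epoll_event ready;
    int n = epoll_wait(epfd, &ready, 1, 5000);    /* 5 second timeout */
    if (n > 0 && ready.data.fd == listen_fd) {
        /* Readability on a listening socket means a connection is pending,
           so this accept() should not block. */
        int client_fd = accept(listen_fd, NULL, NULL);
        if (client_fd != -1) {
            /* client_fd is the newly connected client's socket */
        }
    } else if (n == 0) {
        /* timeout: no incoming connection yet, decide whether to keep waiting */
    }
}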
Since this answer comes up first in the DuckDuckGo results, I will just chime in to say that under GNU/Linux 4.18.0-18-generic (Ubuntu 18.10), to asynchronously accept an incoming connection one has to watch for the errno value EWOULDBLOCK (11) and then add the socket to the epoll read set.
Here is a small snippet of Scheme code that achieves that:
(define (accept fd)
  (let ((out (socket:%accept fd 0 0)))
    (if (= out -1)
        (let ((code (socket:errno)))
          (if (= code EWOULDBLOCK)
              (begin
                (abort-to-prompt fd 'read)
                (accept fd))
              (error 'accept (socket:strerror code))))
        out)))
In the above, (abort-to-prompt fd 'read) will pause the coroutine and add fd to the epoll read set, done as follows:
(epoll-ctl epoll EPOLL-CTL-ADD fd (make-epoll-event-in fd))
When the coroutine is unpaused, the code proceeds after the abort and calls itself recursively (in tail-call position).
The code I am working on is in Scheme, so it is a bit more involved since I rely on call/cc to avoid callbacks. The full code is at SourceHut.
That is all.
I am building a webserver which can accept and handle multiple client connections. I'm using select() for this.
Now while this is going on, if a particular connected socket hasn't had any activity (send or recv) on it, I want to close it. So if no requests come from a connected client for a period of time, I'll close the socket. There are multiple such connected sockets and I need to do this monitoring for each.
I need this functionality to create persistent connections, as my webserver has to support HTTP/1.1.
What is the best way to do this?
I'd suggest setting the timeout of the call to select to be the minimum time until the next socket will time out. Then, if select timed out, close the idle sockets and repeat. Something like this pseudocode:
timeout = default_timeout;
foreach(socket s)
{
timeout = min(timeout, (s.last_send_or_recv_time + IDLE_TIMEOUT - now()));
}
result = select(..., timeout);
if(result == 0)
{
foreach(socket s)
{
if(now() - s.last_send_or_recv_time >= IDLE_TIMEOUT)
{
close(s);
remove_from_socket_list(s);
}
}
}
else
{
// Handle received data, update last_send_or_recv_time, etc.
}
If you're using threads and blocking mode, you just need to set a suitable SO_RCVTIMEO on the socket; when read() returns -1 with errno == EAGAIN/EWOULDBLOCK, the timeout has expired, so you just close the socket and exit the thread.
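As an illustration (my own, not from the original answer), setting the receive timeout with setsockopt() might look like this; client_fd and the 30-second value are just examples:

#include <sys/socket.h>
#include <sys/time.h>
#include <errno.h>
#include <unistd.h>

/* Give up on an idle connection after 30 seconds without incoming data. */
struct timeval tv = { .tv_sec = 30, .tv_usec = 0 };
setsockopt(client_fd, SOL_SOCKET, SO_RCVTIMEO, &tv, sizeof tv);

char buf[2048];
ssize_t n = read(client_fd, buf, sizeof buf);
if (n == -1 && (errno == EAGAIN || errno == EWOULDBLOCK)) {
    /* Receive timeout expired: treat the connection as idle and drop it. */
    close(client_fd);
}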
If you're using select() it's more complicated. Adam Rosenfield's suggestion looks plausible on the surface, but in practice, if the other sockets are busy enough, the select() timeout might never fire at all, or at least not for many minutes after a socket actually went idle. If you want to enforce the timeout strictly, as you should in a server, you have to associate a last-read time with each socket and, at the bottom of the select() loop, iterate over the fd set, pick out the fds that were not ready, check their last-read time, and close the ones that have been idle too long.
I'm currently working on a project which involves multiple clients connected to a server and waiting for data. I'm using select and monitoring the connection for incoming data. However, the client just continues to print nothing, acting as if select has discovered incoming data. Perhaps I'm attacking this wrong?
For the first piece of data the server does send, it is displayed correctly. However, the server then disconnects and the client continues to spew blank lines.
FD_ZERO(&readnet);
FD_SET(sockfd, &readnet);

while (1) {
    rv = select(socketdescrip, &readnet, NULL, NULL, &timeout);
    if (rv == -1) {
        perror("select"); // error occurred in select()
    } else if (rv == 0) {
        printf("Connection timeout! No data after 10 seconds.\n");
    } else {
        // one or both of the descriptors have data
        if (FD_ISSET(sockfd, &readnet)) {
            numbytes = recv(sockfd, buf, sizeof buf, 0);
            printf("Data Received\n");
            buf[numbytes] = '\0';
            printf("client: received '%s'\n", buf);
            sleep(10);
        }
    }
}
I think you need to check the result of recv. If it returns zero, I believe it means the server has closed the socket.
Also, you need to pass socketdescrip + 1 (the highest file descriptor plus one) as the first argument to select.
If I remember correctly, you need to reinitialise the set of fds before each call to select(), because select() modifies it in place (it clears the descriptors that aren't ready).
So move FD_ZERO() and FD_SET() inside the loop, just before select().
"acting as if select has discovered incoming data. Perhaps I'm attacking this wrong?"
In addition to what was said before, I'd like to note that select()/poll() don't tell you that "data are there" but rather that the next corresponding system call will not block. That's it. As was said above, recv() doesn't block and properly returns 0, which means EOF: the connection was closed by the other side.
Note that on some systems a later recv() on that socket may return -1 with an error (for example if the connection is reset) instead of 0, so when using async I/O rigorous error checking is a must!
And personally, I would strongly suggest using poll() instead. Unlike select(), it doesn't destroy its arguments and it works fine with high-numbered socket descriptors.
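For illustration, the question's receive loop rewritten around poll() might look roughly like the sketch below (my own, reusing sockfd and buf from the original code and its 10-second timeout):

#include <poll.h>
#include <stdio.h>
#include <sys/socket.h>

struct pollfd pfd;
pfd.fd = sockfd;
pfd.events = POLLIN;

for (;;) {
    int rv = poll(&pfd, 1, 10000);               /* timeout in milliseconds */
    if (rv == -1) {
        perror("poll");
        break;
    } else if (rv == 0) {
        printf("Connection timeout! No data after 10 seconds.\n");
    } else if (pfd.revents & (POLLIN | POLLHUP | POLLERR)) {
        ssize_t numbytes = recv(sockfd, buf, sizeof buf - 1, 0);
        if (numbytes <= 0)
            break;                               /* 0 = peer closed, -1 = error */
        buf[numbytes] = '\0';
        printf("client: received '%s'\n", buf);
    }
}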
When the server closes the connection, it sends a packet with the FIN flag set to announce to the client that it will send no more data. The packet is processed by the TCP/IP stack on the client side and carries no data for the application level. The application is notified (select triggers) because something happened on the monitored file descriptor, and recv() returns 0 bytes because no data was sent by the server.
Is this true of your code?
select(highest_file_descriptor+1, &readnet, NULL, NULL, &timeout);
In your simple example (with FD_ZERO and FD_SET moved inside the while(1) loop as qrdl said) it should look like this:
select(sockfd+1, &readnet, NULL, NULL, &timeout);
Also, please note that when recv returns 0 bytes read, it means the connection was closed: no more data! Your code is also buggy: when something bad happens in recv (it returns < 0 in that case) you will have serious trouble, because something like buf[-1] may lead to unpredictable results. Please handle this case properly.
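For example, a defensive version of the recv handling (my sketch, reusing the variables from the question's code) could be:

numbytes = recv(sockfd, buf, sizeof buf - 1, 0);   /* leave room for the terminator */
if (numbytes == 0) {
    printf("server closed the connection\n");
    /* leave the loop and close sockfd */
} else if (numbytes < 0) {
    perror("recv");
    /* handle the error; do not touch buf[numbytes] here */
} else {
    buf[numbytes] = '\0';
    printf("client: received '%s'\n", buf);
}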
While I respect the fact that you are trying to use the low-level BSD sockets API, I must say that I find it awfully inefficient to work with. That's why I recommend, if possible, using ACE, which is a very efficient and productive framework that has a lot of things already implemented when it comes to network programming (ACE_Reactor, for example, makes it easier to do what you're trying to achieve here).
If I have a file descriptor (socket fd), how do I check whether this fd is available for read/write?
In my situation, the client has connected to the server and we know the fd.
However, the server will disconnect the socket; are there any clues for detecting that?
You can use fcntl() to check the read/write mode flags on the fd:
#include <unistd.h>
#include <fcntl.h>

int r = fcntl(fd, F_GETFL);
if (r == -1) {
    /* Error */
}

/* O_RDONLY is defined as 0, so the access mode must be masked out
   with O_ACCMODE rather than tested with '&'. */
int mode = r & O_ACCMODE;
if (mode == O_RDONLY) {
    /* Read Only */
} else if (mode == O_WRONLY) {
    /* Write Only */
} else if (mode == O_RDWR) {
    /* Read/Write */
}
But this is a separate issue from detecting when the socket is no longer connected. If you are already using select() or poll() then you're almost there: poll() will report the status nicely if you check revents for POLLERR and POLLHUP (these are reported even if you didn't ask for them in events).
If you're doing normal blocking I/O then just handle the read/write errors as they come in and recover gracefully.
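A quick non-blocking status check along those lines might look like this (my own sketch; fd is the connected socket from the question):

#include <poll.h>
#include <sys/socket.h>

struct pollfd pfd = { .fd = fd, .events = POLLIN };
int rv = poll(&pfd, 1, 0);               /* timeout 0: just sample the current state */

if (rv > 0 && (pfd.revents & (POLLERR | POLLHUP))) {
    /* the connection is broken or the peer hung up */
} else if (rv > 0 && (pfd.revents & POLLIN)) {
    char c;
    /* MSG_PEEK leaves the byte in the receive queue */
    if (recv(fd, &c, 1, MSG_PEEK) == 0) {
        /* orderly shutdown: the server closed the connection */
    }
}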
You can use select() or poll() for this.
In C#, this question is answered here
In general, socket disconnection is asynchronous and needs to be polled for in some manner. An async read on the socket will typically return when it's closed as well, giving you a chance to pick up on the status change sooner. Winsock (Windows) can register to receive notification of a disconnect, but again, this may not happen for a long time after the other side "goes away" unless you use some kind of keepalive: either SO_KEEPALIVE (which by default may not notice for hours) or an application-level heartbeat.
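On Linux, for instance, enabling keepalive and tightening its timing might look something like the sketch below; the interval values are arbitrary examples and fd is assumed to be the connected socket:

#include <sys/socket.h>
#include <netinet/in.h>
#include <netinet/tcp.h>

int on = 1;
setsockopt(fd, SOL_SOCKET, SO_KEEPALIVE, &on, sizeof on);

/* Linux-specific knobs: start probing after 60s of idle time,
   probe every 10s, give up after 5 unanswered probes. */
int idle = 60, intvl = 10, cnt = 5;
setsockopt(fd, IPPROTO_TCP, TCP_KEEPIDLE,  &idle,  sizeof idle);
setsockopt(fd, IPPROTO_TCP, TCP_KEEPINTVL, &intvl, sizeof intvl);
setsockopt(fd, IPPROTO_TCP, TCP_KEEPCNT,   &cnt,   sizeof cnt);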
I found that recv can check this: when the socket fd is bad, errno is set accordingly.
ret = recv(socket_fd, buffer, bufferSize, MSG_PEEK);
if (ret == -1 && errno == EPIPE) {
    // something wrong
}
Well, you could call select(). If the server has disconnected, I believe you'll eventually get an error code returned... If not, you can use select() to tell whether your network stack is ready to send more data (or receive it).
In one thread I have the following code:
int next_stuff(char **code) {
    ...
    len = read(file_desc, buffer + end_len, packet_size - end_len);
    if (len <= 0)
    {
        if (len == -1 && errno == EAGAIN) return 0;
        else return -1;
    }
    ...
}
while (next_stuff(&buff) == 0)
{
...
}
From another thread I'd like to shut that socket down and stop this operation, but just doing a
close(file_desc);
does not cause the blocked read() to return. Am I missing something?
EDIT:
shutdown does not work either. I am trying this on Linux 2.6.23:
shutdown(fd, SHUT_RD);
$ man -s 2 shutdown

NAME
     shutdown -- shut down part of a full-duplex connection

SYNOPSIS
     #include <sys/socket.h>

     int shutdown(int socket, int how);

DESCRIPTION
     The shutdown() call causes all or part of a full-duplex connection on the
     socket associated with socket to be shut down.

     If how is SHUT_RD, further receives will be disallowed. If how is SHUT_WR,
     further sends will be disallowed. If how is SHUT_RDWR, further sends and
     receives will be disallowed.

RETURN VALUES
     The shutdown() function returns the value 0 if successful; otherwise the
     value -1 is returned and the global variable errno is set to indicate the
     error.
In general, if you do not want blocking socket calls, you would use select() to see if the socket is ready to read, write, or is in an error state. In addition, pass a timeout value to select() so that the call doesn't block forever. After select() returns, you can check whether the application wants to quit and, if so, do the "right" thing (that's for you to decide).
If your read() call is non-blocking, it should return fairly quickly, as all it does is copy whatever data is already buffered into memory.
To prevent doing any damage, use a mutex around your calls to read() and close() so that they can't both run at the same time.
If your socket is blocking, I think you should make it non-blocking.
The other answers about non-blocking sockets are good, and I recommend recoding to use that approach.
As a direct answer to your question, though, try calling shutdown() and see if that will break the other thread out of read. I'm afraid close() just decrements the descriptor's usage count, while shutdown() actively tears down the connection.
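For example, a sketch of that approach (my own, not from the original answer), using file_desc from the question and SHUT_RDWR rather than SHUT_RD:

#include <sys/socket.h>
#include <unistd.h>

/* In the controlling thread: wake up the reader without freeing the fd yet. */
shutdown(file_desc, SHUT_RDWR);   /* the blocked read() should now return 0 or an error */

/* Only once the reading thread is done with the fd: */
close(file_desc);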
I don't think you've provided enough information to answer this question yet. For example, if the socket you've opened is UDP, then a close on the sending side will have no effect on the receiving side. If it is TCP, then something else is broken. I suggest that, if you are really dealing with sockets, you use recv or recvfrom instead of read.
In the case of TCP, your read will return 0 bytes, an indication that the other side has closed the connection.
If you are really doing this between two threads instead of two processes, a pipe may be more appropriate. That's not to say that a pipe could not also be used between two separate processes; it just takes a bit more setup in that case.
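If you go the pipe route between two threads, the usual pattern is the "self-pipe trick": add the read end of a pipe to your poll()/select() set and have the other thread write a byte to it when it wants the reader to stop. A rough sketch of my own, with illustrative names:

#include <unistd.h>
#include <poll.h>

int wake_pipe[2];                 /* [0] = read end, [1] = write end */
pipe(wake_pipe);

/* Reading thread: wait on both the socket and the pipe. */
struct pollfd fds[2] = {
    { .fd = file_desc,    .events = POLLIN },
    { .fd = wake_pipe[0], .events = POLLIN },
};
poll(fds, 2, -1);
if (fds[1].revents & POLLIN) {
    char c;
    read(wake_pipe[0], &c, 1);    /* the other thread asked us to stop */
    /* clean up and exit the read loop */
}

/* Controlling thread: request shutdown of the read loop. */
write(wake_pipe[1], "x", 1);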