I am a newbie in C. I just noticed that the connect() function on the client side can return as soon as the TCP three-way handshake is finished; connect() can even return before accept() is called on the server side (correct me if I am wrong). Based on this, my question is: when I afterwards call select() on the client side and watch the file descriptor until it becomes writable, does a successful return from select() mean that the server side has already called accept(), and that I can now safely write to the server? Many thanks for your time.
int flags = fcntl(fd, F_GETFL);
flags |= O_NONBLOCK;
fcntl(fd, F_SETFL, flags);
if (connect(fd, (struct sockaddr *)saptr, salen) < 0)
{
if (errno != EINPROGRESS)
/* error_return */
}
fd_set set;
FD_ZERO (&set);
FD_SET (fd, &set);
select(FD_SETSIZE, NULL, &set, NULL, &timeout);
/* Here, if select returns 1, that means accept() is already called
on the server side, and now I can safely write to the server, right? */
when select() successfully returns, that means the server side has already called accept()
No, not necessarily. connect() returns when the connection attempt is complete, having either succeeded or failed. On the remote side, this is handled by the network stack, outside the context of any application. The subsequent accept() itself produces no additional communication.
and now I can safely write to the server side, right?
There are all kinds of things you could mean by "safely", but if you mean that the local side can write at least one byte without blocking then yes, select() promises you that. Whatever you successfully write will be sent over the wire to the remote side. It may be buffered there for a time, depending on the behavior of the software on the remote end. Whether that software has yet accept()ed the connection or not is not directly relevant to that question.
Update: note also that the network stack maintains a per-socket queue of established connections that have not yet been accept()ed (its backlog). This queuing behavior is one reason why a server might not accept() connections immediately after they are established, especially under heavy load.
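As an aside, writability after a non-blocking connect() only means that the connection attempt has completed; it may have completed with an error. A common follow-up (a minimal sketch, not part of the original question) is to query SO_ERROR once select() reports the descriptor writable:

#include <errno.h>
#include <sys/socket.h>

/* Call after select() reports fd writable following a non-blocking connect().
 * Returns 0 if the connection is established, otherwise the pending error
 * (e.g. ECONNREFUSED or ETIMEDOUT). */
static int check_connect_result(int fd)
{
    int err = 0;
    socklen_t len = sizeof(err);

    if (getsockopt(fd, SOL_SOCKET, SO_ERROR, &err, &len) < 0)
        return errno;   /* getsockopt() itself failed */
    return err;
}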
'I mean connect() can even return before the accept() on the server side is called'
Yes, it can, and does.
when I call select() afterwards on the client side, and watch the file descriptor to wait for it to be writeable, when select() successfully returns, that means the server side has already called accept() and now I can safely write to the server side, right?
Sure. Write away:)
I'm working on this calendar/reminders server-client thing. I've stumbled across a bug where I may have misunderstood the behaviour of the accept() function while trying to handle new connections.
The client basically specifies a day to view and sends it to the server, then gets some text back and closes the connection. (I have been using telnet and netcat to test this so far, though.)
After I send the command, receive the message, and hit Ctrl+D in netcat, the server goes into an infinite output loop of "New connection\n".
The way I understood accept() was that, when it is called, it returns a socket descriptor for a connection on the listen() backlog, or waits until there is a connection before returning. So either I am mistaken or I am doing something wrong:
bind(client_socket, (struct sockaddr*)&server_address, sizeof(server_address));
listen(client_socket, 5);
//start main loop. first level checks for commands
while (1)
{
client_socket = accept(client_socket, NULL, NULL);
printf("New connection.\n");
recv(client_socket, buffer, sizeof(buffer), 0);
/*Bunch of code here that interprets what was sent with some string manipulation
and serves back parts of a text file. No socket functions other than send() twice here*/
recv(client_socket, buffer, sizeof(buffer), 0);
}
The idea I had in mind was: once the job is done, wait until the client closes the connection, which shows up as a received message of length 0 (hence the recv() at the end), then loop back to accept(), which accepts or waits for the next connection. What am I missing here?
In your second call to accept(), you are passing the descriptor for the first connection rather than the listening socket. You really need to check for errors on all these calls; otherwise, your code will be impossible to debug.
client_socket = accept(client_socket, NULL, NULL);
This is fine the first time. But it leaks the descriptor for the listening socket. So you can't accept another connection.
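A fixed version of the loop might look roughly like this (a sketch only: listen_socket is a name I introduced, the existing buffer and server_address are reused, and error handling is kept minimal):

int listen_socket = socket(AF_INET, SOCK_STREAM, 0);
if (listen_socket < 0) { perror("socket"); exit(1); }

if (bind(listen_socket, (struct sockaddr *)&server_address, sizeof(server_address)) < 0) {
    perror("bind"); exit(1);
}
if (listen(listen_socket, 5) < 0) { perror("listen"); exit(1); }

while (1)
{
    /* accept() returns a NEW descriptor; the listening socket stays untouched. */
    int client_socket = accept(listen_socket, NULL, NULL);
    if (client_socket < 0) { perror("accept"); continue; }
    printf("New connection.\n");

    ssize_t n = recv(client_socket, buffer, sizeof(buffer), 0);
    if (n > 0) {
        /* ... interpret the request and send() the replies here ... */
    }

    /* Wait for the client to close: recv() returning 0 means EOF. */
    while ((n = recv(client_socket, buffer, sizeof(buffer), 0)) > 0)
        ;

    close(client_socket);   /* release the per-connection descriptor */
}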
Let's suppose I've created a listening socket:
sock = socket(...);
bind(sock,...);
listen(sock, ...);
Is it possible to do epoll_wait on sock to wait for incoming connection? And how do I get client's socket fd after that?
The thing is, on the platform I'm writing for, sockets cannot be non-blocking, but there is a working epoll implementation with timeouts. I need to accept the connection and work with it in a single thread, so that it doesn't hang if something goes wrong and the connection never comes.
Without knowing what this non-standard platform is, it's impossible to know exactly what semantics they gave their epoll call. But on the standard epoll on Linux, a listening socket will be reported as "readable" when an incoming connection arrives, and then you can accept the connection by calling accept.

If you leave the socket in blocking mode, and always check for readability using epoll's level-triggered mode before each call to accept, then this should work. The only risk is that if you somehow end up calling accept when no connection has arrived, you'll get stuck.

For example, this could happen if there are two processes sharing a listening socket and they both try to accept the same connection. Or maybe it could happen if an incoming connection arrives and is then closed again before you call accept. (Pretty sure in this case Linux still lets the accept succeed, but this kind of edge case is exactly where I'd be suspicious of a weird platform doing something weird.) You'd want to check these things.
Non-blocking mode is much more reliable because in the worst case, accept just reports that there's nothing to accept. But if that's not available, then you might be able to get away with something like this...
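Here is a rough sketch of what that could look like with the standard Linux epoll API (level-triggered, blocking listening socket; the function name and timeout parameter are mine). Whether your platform's epoll behaves the same way is exactly what you would need to verify:

#include <stdio.h>
#include <sys/epoll.h>
#include <sys/socket.h>
#include <unistd.h>

/* Returns a connected socket, or -1 on timeout/error.
 * Assumes listen_fd is a bound, listening, blocking socket. */
int accept_with_timeout(int listen_fd, int timeout_ms)
{
    int epfd = epoll_create1(0);
    if (epfd < 0) {
        perror("epoll_create1");
        return -1;
    }

    struct epoll_event ev = {0};
    ev.events = EPOLLIN;            /* level-triggered: "readable" = connection pending */
    ev.data.fd = listen_fd;
    if (epoll_ctl(epfd, EPOLL_CTL_ADD, listen_fd, &ev) < 0) {
        perror("epoll_ctl");
        close(epfd);
        return -1;
    }

    struct epoll_event out;
    int n = epoll_wait(epfd, &out, 1, timeout_ms);
    close(epfd);

    if (n <= 0)                     /* 0 = timeout, -1 = error */
        return -1;

    /* A connection should now be pending, so this accept() should not block. */
    return accept(listen_fd, NULL, NULL);
}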
Since this answer is the first result on DuckDuckGo, I will just chime in with what I found under GNU/Linux 4.18.0-18-generic (Ubuntu 18.10).
To asynchronously accept an incoming connection, one has to watch for the errno value EWOULDBLOCK (11) and then add the socket to the epoll read set.
Here is a small snippet of Scheme code that achieves that:
(define (accept fd)
(let ((out (socket:%accept fd 0 0)))
(if (= out -1)
(let ((code (socket:errno)))
(if (= code EWOULDBLOCK)
(begin
(abort-to-prompt fd 'read)
(accept fd))
(error 'accept (socket:strerror code))))
out)))
In the above, (abort-to-prompt fd 'read) will pause the coroutine and add fd to the epoll read set, done as follows:
(epoll-ctl epoll EPOLL-CTL-ADD fd (make-epoll-event-in fd))
When the coroutine is unpaused, the code proceeds past the abort and calls itself recursively (in tail-call position).
Since I am working in Scheme, the code is a bit more involved, because I rely on call/cc to avoid callbacks. The full code is on sourcehut.
That is all.
[I asked something similar before. This is a more focused version.]
What can cause a server's select() call on a TCP socket to consistently time out rather than "see" the client's close() of the socket?

On the client's side, the socket is a regular socket()-created blocking socket that successfully connects to the server and successfully transmits a round-trip transaction.

On the server's side, the socket is created via an accept() call, is blocking, is passed to a child server process via fork(), is closed by the top-level server, and is successfully used by the child server process in the initial transaction. When the client subsequently closes the socket, the select() call of the child server process consistently times out (after 1 minute) rather than indicating a read-ready condition on the socket. The select() call looks for read-ready conditions only: the write-ready and exception arguments are NULL.
Here's the simplified but logically equivalent select()-using code in the child server process:
int one_svc_run(
const int sock,
const unsigned timeout)
{
struct timeval timeo;
fd_set fds;
timeo.tv_sec = timeout;
timeo.tv_usec = 0;
FD_ZERO(&fds);
FD_SET(sock, &fds);
for (;;) {
fd_set readFds = fds;
int status = select(sock+1, &readFds, 0, 0, &timeo);
if (status < 0)
return errno;
if (status == 0)
return ETIMEDOUT;
/* This code not reached when client closes socket */
/* The time-out structure, "timeo", is appropriately reset here */
...
}
...
}
Here's the logical equivalent of the sequence of events on the client-side (error-handling not shown):
struct sockaddr_in *raddr = ...;
int sock = socket(AF_INET, SOCK_STREAM, IPPROTO_TCP);
(void)bindresvport(sock, (struct sockaddr_in *)0);
connect(sock, (struct sockaddr *)raddr, sizeof(*raddr));
/* Send a message to the server and receive a reply */
(void)close(sock);
fork(), exec(), and system() are never called. The code is considerably more complex than this, but this is the sequence of relevant calls.
Could Nagle's algorithm cause the FIN packet to not be sent upon close()?
The most likely explanation is that you're not actually closing the client end of the connection when you think you are, probably because you have some other file descriptor somewhere that references the client socket and that is not being closed.
If your client program ever does a fork (or related calls that fork, such as system or popen), the forked child might well have a copy of the file descriptor which would cause the behavior you're seeing.
One way to test/workaround the problem is to have the client do an explicit shutdown(2) prior to closing the socket:
shutdown(sock, SHUT_RDWR);
close(sock);
If this causes the problem to go away then that is the problem -- you have another copy of the client socket file descriptor somewhere hanging around.
If the problem is due to children getting the socket, the best fix is probably to set the close-on-exec flag on the socket immediately after creating it:
fcntl(sock, F_SETFD, fcntl(sock, F_GETFD) | FD_CLOEXEC);
or on some systems, use the SOCK_CLOEXEC flag to the socket creation call.
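On Linux, for instance, that looks like this (a minimal sketch):

/* Linux: create the socket with close-on-exec already set, so children
 * that fork() and exec() never inherit the descriptor. */
int sock = socket(AF_INET, SOCK_STREAM | SOCK_CLOEXEC, IPPROTO_TCP);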
Mystery solved.
#nos was correct in the first comment: it's a firewall problem. A shutdown() by the client isn't needed; the client does close the socket; the server does use the right timeout; and there's no bug in the code.
The problem was caused by the firewall rules on our Linux Virtual Server (LVS). A client connects to the LVS and the connection is passed to the least-loaded of several backend servers. All packets from the client pass through the LVS; all packets from the backend server go directly to the client. The firewall rules on the LVS caused the FIN packet from the client to be discarded. Thus, the backend server never saw the close() by the client.
The solution was to remove the "-m state --state NEW" options from the iptables(8) rule on the LVS system. This allows the FIN packets from the client to be forwarded to the backend server. This article has more information.
Thanks to all of you who suggested using wireshark(1).
The select() call on Linux will modify the value of the timeout argument. From the man page:
On Linux, select() modifies timeout to reflect the amount of time not slept
So your timeo will eventually count down to zero, and when it is zero, select() returns immediately (usually with a return value of zero).
The following change may help:
for (;;) {
struct timeval timo = timeo;
fd_set readFds = fds;
int status = select(sock+1, &readFds, 0, 0, &timo);
I have an AF_INET/SOCK_STREAM server written in C running on Android/Linux which looks more or less like this:
...
for (;;) {
client = accept(...);
read(client, &message, sizeof(message));
response = process(&message);
write(client, response, sizeof(*response));
close(client);
}
As far as I know, the call to close should not terminate the connection to the client immediately, but it apparently does: the client reports "Connection reset by peer" before it has had a chance to read the server's response.
If I insert a delay between write() and close() the client can read the response as expected.
I got a hint that it might have to do with the SO_LINGER option, but I checked its value and both members of struct linger (l_onoff, l_linger) have a value of zero.
Any ideas?
Stevens describes a configuration in which this can happen, but it depends on the client sending more data after the server has called close() (after the client should “know” that the connection is being closed). UNP 2nd ed s5.12.
Try tcpdumping the conversation to find out what’s really going on. If there's any possibility that a “clever” gateway (e.g. NAT) is between the two endpoints, tcpdump both ends and look for discrepancies.
The connection gets reset when you call close() on a connection that still has data in flight. Specifically for this case, the sequence of shutdown() with the SHUT_WR flag followed by a blocking read() is used.
Shutting down the writing end of the socket sends a FIN and returns immediately, and that read() then blocks and returns 0 as soon as your peer replies with its own FIN in due turn. Basically, this is what you need in place of the delay between write() and close() you are talking about.
You do not need to do anything with the linger options in this case; leave them at the default.
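Applied to the server loop in the question, that would look roughly like this (a sketch; error handling omitted):

write(client, response, sizeof(*response));

/* Announce we are done sending: this queues a FIN but keeps the socket open. */
shutdown(client, SHUT_WR);

/* Drain until the peer closes its side; read() returning 0 means its FIN arrived. */
char drain[128];
while (read(client, drain, sizeof(drain)) > 0)
    ;

close(client);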
SO_LINGER should be enabled (i.e. l_onoff set to 1, not 0) if you want queued data to be sent before the close takes effect.
SO_LINGER
Lingers on close() if data is present. This option controls the action taken when unsent messages queue on a socket and close() is performed. If SO_LINGER is set, the system shall block the calling thread during close() until it can transmit the data or until the time expires. If SO_LINGER is not specified, and close() is issued, the system handles the call in a way that allows the calling thread to continue as quickly as possible. This option takes a linger structure, as defined in the <sys/socket.h> header, to specify the state of the option and linger interval.
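For reference, enabling it would look something like this (a sketch using the question's client descriptor; the 10-second interval is an arbitrary illustrative value):

struct linger lg;
lg.l_onoff  = 1;    /* linger on close() */
lg.l_linger = 10;   /* block in close() for up to 10 seconds while unsent data drains */

if (setsockopt(client, SOL_SOCKET, SO_LINGER, &lg, sizeof(lg)) < 0)
    perror("setsockopt(SO_LINGER)");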
I'm currently working on a project which involves multiple clients connected to a server and waiting for data. I'm using select and monitoring the connection for incoming data. However, the client just continues to print nothing, acting as if select has discovered incoming data. Perhaps I'm attacking this wrong?
The first piece of data the server sends is displayed correctly. However, the server then disconnects and the client continues to spew blank lines.
FD_ZERO(&readnet);
FD_SET(sockfd, &readnet);
while(1){
rv = select(socketdescrip, &readnet, NULL, NULL, &timeout);
if (rv == -1) {
perror("select"); // error occurred in select()
} else if (rv == 0) {
printf("Connection timeout! No data after 10 seconds.\n");
} else {
// one or both of the descriptors have data
if (FD_ISSET(sockfd, &readnet)) {
numbytes = recv(sockfd, buf, sizeof buf, 0);
printf("Data Received\n");
buf[numbytes] = '\0';
printf("client: received '%s'\n",buf);
sleep(10);
}
}
}
I think you need to check the result of recv. If it returns zero, I believe it means the server has closed the socket.
Also (depending on the implementation), you may need to pass socketdescrip+1 to select.
If I remember correctly, you need to initialise set of fds before each call to select() because select() corrupts it.
So move FD_ZERO() and FD_SET() inside the loop, just before select().
acting as if select has discovered incoming data. Perhaps I'm attacking this wrong?
In addition to what was said before, I'd like to note that select()/poll() do not tell you that "data are there", but rather that the next corresponding system call will not block. That's it. As was said above, recv() doesn't block and properly returns 0, which means EOF: the connection was closed by the other side.
Though on most *nix systems, in that case only the first call to recv() would return 0; subsequent calls would return -1. When using async I/O, rigorous error checking is a must!
And personally, I would strongly suggest using poll() instead. Unlike select(), it doesn't destroy its arguments, and it works fine with high-numbered socket descriptors.
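To illustrate both points, here is a rough sketch of the receive loop rewritten with poll() and with recv()'s return value checked; it reuses the question's sockfd and buf and is not meant as a drop-in replacement:

#include <poll.h>
#include <stdio.h>
#include <sys/socket.h>

struct pollfd pfd;
pfd.fd = sockfd;
pfd.events = POLLIN;

for (;;) {
    int rv = poll(&pfd, 1, 10000);          /* timeout in milliseconds */
    if (rv == -1) {
        perror("poll");
        break;
    }
    if (rv == 0) {
        printf("Connection timeout! No data after 10 seconds.\n");
        continue;
    }
    if (pfd.revents & (POLLIN | POLLHUP)) {
        ssize_t numbytes = recv(sockfd, buf, sizeof(buf) - 1, 0);
        if (numbytes == 0) {                /* orderly close by the server */
            printf("Server closed the connection.\n");
            break;
        }
        if (numbytes < 0) {                 /* real error */
            perror("recv");
            break;
        }
        buf[numbytes] = '\0';
        printf("client: received '%s'\n", buf);
    }
}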
When the server closes the connection, it sends a packet carrying the FIN flag to the client side to announce that it will no longer send data. The packet is processed by the TCP/IP stack on the client side and carries no data for the application level. The application level is notified, which triggers select() because something happened on the monitored file descriptor, and recv() returns 0 bytes because no data was sent by the server.
Is this true when talking about your code?
select(highest_file_descriptor+1, &readnet, NULL, NULL, &timeout);
In your simple example (with FD_ZERO and FD_SET moved inside the while(1) loop as qrdl said) it should look like this:
select(sockfd+1, &readnet, NULL, NULL, &timeout);
Also, please note that when recv() returns 0 bytes read, it means the connection was closed: no more data! Your code is also buggy: when something bad happens in recv() (it returns < 0 when that happens) you will have serious trouble, because something like buf[-1] may lead to unpredictable results. Please handle this case properly.
While I respect the fact that you are trying to use the low-level BSD sockets API, I must say that I find it awfully inefficient. That's why I recommend, if possible, using ACE, which is a very efficient and productive framework that has a lot of things already implemented when it comes to network programming (ACE_Reactor, for example, makes it easier to do what you're trying to achieve here).