TCP connection - delayed close() and RST

I have a TCP client and a TCP server running on RHEL 5.3 on different machines.
I kill the server and a FIN is sent to the client. The client's OS sends an ACK back immediately.
The client discovers the close (by read() returning zero) but performs its close() only after 90 seconds.
At this stage I checked netstat on both sides and it shows what is expected (FIN_WAIT_2 on the server and CLOSE_WAIT on the client).
When the client calls close() after 90 seconds, the client's OS sends a FIN to the server, but in response we receive an RST from the server instead of the expected ACK.
I have also seen several times that, due to the "delayed" close(), the client's OS sent an RST instead of a FIN.
Please note that in both cases there are no unread packets pending on either side, and the SO_LINGER option is not enabled.
Any ideas?

The RST indicates that some "data" was lost. In this case, the "data" is the information that the client side closed the socket cleanly - the FIN from the client was not reported to the server-side application (because it had been killed).
In other words, the RST tells the client that the server never saw end-of-stream from the client.
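For reference, here is a minimal sketch (not the actual client code; names and error handling are illustrative) of the client-side read loop described above, reacting to read() returning zero by closing promptly instead of waiting 90 seconds:

    /* Minimal sketch: treat read() == 0 as end-of-stream and close promptly. */
    #include <stdio.h>
    #include <unistd.h>

    static void handle_connection(int fd)   /* fd: an already-connected TCP socket */
    {
        char buf[4096];
        for (;;) {
            ssize_t n = read(fd, buf, sizeof buf);
            if (n > 0) {
                /* ... process n bytes of data ... */
                continue;
            }
            if (n == 0)            /* peer sent FIN: end-of-stream */
                break;
            perror("read");        /* n < 0: real error */
            break;
        }
        close(fd);                 /* sends our FIN while the peer's state is still fresh */
    }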

Related

What happens when a server closes the connection and the client sends some data at the same time?

I have a server written in C that closes the connection if it has been sitting idle for a specific time. I have an issue (that rarely happens): a read fails on the client side with "Connection broken". I suspect the server is closing the connection at the same time the client is sending data.
Consider the following scenario (A is the server, B is the client):
B initiates the connection and the connection between A and B is established.
B sits idle and the idle timeout is reached.
A initiates the close.
Before B receives the FIN from A, it starts sending a request to A.
After B sends the request, it reads the response.
Since A has already closed the connection, B is not able to read.
My questions are:
Is this a possible situation?
How do I handle idle timeouts for clients?
How do I close the connection between A and B properly (and avoid B sending a request during the process)? In short, how do I close the connection atomically?
Based on my little more than rudimentary network experience, and assuming you are talking about a connection-oriented protocol like TCP/IP, as opposed to connectionless UDP/IP:
Yes, of course. You cannot avoid it.
There are multiple ways to do it, but all of them boil down to: send something from the client before the server's timeout elapses. If the client has no data to send, let it send something like a "life sign". This could be an empty data message; it all depends on your application protocol (a rough sketch follows this answer). Or make the timeout as long as necessary, including some margin; some protocols time out only after three times the allowed idle time.
You cannot close the connection atomically, because the client and server are separate. Every packet on the network needs some time to be transmitted, and both sides can start sending at the very same moment: the server its closing message, and the client a new data message. There is nothing you can do about this.
You need to make the client handle this situation properly. For example, it can accept such a broken connection and interpret it as closed. You should already have some reaction for the case where the server closes the connection while the client is idle.
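A hedged sketch of the "life sign" idea mentioned above, using a hypothetical one-byte heartbeat message and an illustrative interval; alternatively, TCP-level keepalive can be enabled with SO_KEEPALIVE, but note that it only detects dead peers and usually does not count as activity for an application-level idle timer:

    /* Sketch only: periodic application-level heartbeat from the client.
     * HEARTBEAT_BYTE and HEARTBEAT_SECS are made-up values for illustration. */
    #include <sys/socket.h>

    #define HEARTBEAT_BYTE  '\0'   /* hypothetical "empty" message */
    #define HEARTBEAT_SECS  30     /* must be well below the server's idle timeout */

    static int send_heartbeat(int fd)
    {
        char b = HEARTBEAT_BYTE;
        return send(fd, &b, 1, MSG_NOSIGNAL) == 1 ? 0 : -1;
    }

    /* Optionally, TCP-level keepalive instead of (or in addition to) the above:
     *     int on = 1;
     *     setsockopt(fd, SOL_SOCKET, SO_KEEPALIVE, &on, sizeof on);
     */

Whether the heartbeat resets the server's idle timer is an application-protocol decision; the server has to recognize it as activity.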
How to close the connection between A and B properly (avoid B sending request during the process).
Server detects timeout
Server sends timeout detection message to the Client
Server waits for a reply (if timeout, assume Client dead)
if Client receives a timeout detection from the Server, it replies with ACK (or something like that)
if Server receives an ACK from the Client, then 'gracefully' closes the connection
from now on, neither the Server nor the Client should send or receive any messages (after sending the ACK, do not immediately close the connection on the client side; linger for the agreed timeout - see setsockopt: SO_LINGER, sketched below)
On top of that, as the other answers suggest, the Client should send a heartbeat while idle (to avoid timeout detections).
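Since the last step mentions SO_LINGER, here is an illustrative sketch of how it is typically set; the timeout argument is a placeholder for the "agreed timeout", not a value from the question:

    /* Illustrative only: make close() block until queued data is sent and
     * acknowledged, or until the linger timeout expires. */
    #include <sys/socket.h>
    #include <unistd.h>

    static int close_with_linger(int fd, int seconds)
    {
        struct linger lg;
        lg.l_onoff  = 1;          /* enable lingering close */
        lg.l_linger = seconds;    /* e.g. the agreed timeout; placeholder */
        if (setsockopt(fd, SOL_SOCKET, SO_LINGER, &lg, sizeof lg) < 0)
            return -1;
        return close(fd);         /* now blocks up to 'seconds' for pending data */
    }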

Purpose of an orphan TCP socket in FIN_WAIT_2 state?

I implemented a hello-world-like client and echo server on Linux, and used tcpdump to watch the packet exchanges between the two. The server forks a child process for each accepted connection, nothing fancy.
Once the child process serving the connection is killed, the server's TCP socket goes into FIN-WAIT-2 (after sending a FIN and receiving the ACK), as shown by the command ss -tap. ss also shows it is orphaned, since the Process column for this entry is empty.
Then I sent one more message from the client, which triggered two more TCP segments:
the client pushes the message to the server
the server responds with an RST
and then ss shows the server socket is gone; I assume it went back to the CLOSED state and was reclaimed.
My question is this:
I can understand that for a non-orphaned socket, FIN_WAIT_2 serves the purpose of a half-closed connection. But for an orphaned socket, what is the point? Why not go back to CLOSED directly? I read in this post that FIN_WAIT_2 helps prevent a future connection from being mistakenly closed, but if that is the reason, then in my case the server should NOT close the socket after receiving a regular message - it should wait forever until the client sends a FIN, correct?
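For context, a stripped-down sketch of the kind of fork-per-connection echo server described above (the port number and buffer size are arbitrary placeholders, not details from the question):

    /* Sketch of a fork-per-connection echo server, as described above. */
    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <signal.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
        signal(SIGCHLD, SIG_IGN);                 /* reap children automatically */

        int ls = socket(AF_INET, SOCK_STREAM, 0);
        struct sockaddr_in addr;
        memset(&addr, 0, sizeof addr);
        addr.sin_family      = AF_INET;
        addr.sin_addr.s_addr = htonl(INADDR_ANY);
        addr.sin_port        = htons(5555);       /* placeholder port */
        bind(ls, (struct sockaddr *)&addr, sizeof addr);
        listen(ls, 16);

        for (;;) {
            int cs = accept(ls, NULL, NULL);
            if (cs < 0)
                continue;
            if (fork() == 0) {                    /* child serves this connection */
                close(ls);
                char buf[1024];
                ssize_t n;
                while ((n = read(cs, buf, sizeof buf)) > 0)
                    write(cs, buf, n);            /* echo back */
                close(cs);
                _exit(0);
            }
            close(cs);                            /* parent keeps only the listener */
        }
    }

Killing the child that owns cs leaves the kernel to finish the TCP shutdown on its own, which is how the socket ends up orphaned in FIN-WAIT-2 as observed.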

TCP RST sent by kernel when client is killed or crashed

I have a client and a server running on the same machine (a Linux machine) with a TCP connection between them.
I have observed that when I kill the client, the kernel/OS sends an RST packet exactly 2 seconds after the client is killed.
My question is: which kernel parameter or socket option governs this timer (2 seconds)?
An RST isn't ordinarily sent between peers in a normal connection termination. A FIN is. When you kill the client, a FIN is sent on the connection to indicate to the server that the client won't be sending any more data.
But the server is apparently not paying attention to the FIN it receives when the client is killed (i.e. it would need to attempt a recv on the socket and react appropriately to the end-of-file indication it will get - usually that means closing its own socket). Subsequently, the server attempts to send data to the client, but the connection is closed. That is what results in an RST packet being sent.
RST means (roughly) "there is no active connection available to receive the data you're sending; it's pointless to send more."
And so the timing of that RST is likely based on when the server next attempts to send to the client, not on any kernel / OS configuration setting. If the server doesn't attempt to send and it doesn't close, the connection should just sit there idle forever, and no RST will be sent.
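As a sketch of what "paying attention to the FIN" means (the function and buffer names are illustrative, not the actual server code): check the result of recv() and close on end-of-file, rather than discovering the dead connection later through a failed send():

    /* Sketch: notice the peer's FIN via recv() == 0 instead of discovering it
     * later as an RST/EPIPE when sending. */
    #include <sys/socket.h>
    #include <unistd.h>

    static void serve(int fd)
    {
        char buf[4096];
        for (;;) {
            ssize_t n = recv(fd, buf, sizeof buf, 0);
            if (n == 0)            /* peer closed (client was killed): FIN received */
                break;
            if (n < 0)             /* error (e.g. ECONNRESET if an RST arrived) */
                break;
            /* ... handle the request; a send() after the peer is gone is what
             * provokes the RST described above ... */
            if (send(fd, buf, (size_t)n, MSG_NOSIGNAL) < 0)
                break;
        }
        close(fd);
    }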
As mentioned in UNIX Network Programming, Volume 1 (in the section on generic socket options), if a client is killed, TCP will send a FIN across the connection.

Connection loss detection with poll()

I am making a client-server application. Previously, if the client went down, the server would try to reconnect (i.e. if recv() on the server side returned zero, the server would go back to accepting connections). Now I want to modify the server to allow it to handle multiple clients. I thought of using poll() so the server can check on each client periodically. I want to know: with poll(), how can I check whether the connection to a client has been lost?
When using multiplexed I/O with poll(), you can handle connection shutdown with the following events:
POLLIN when there is data to read; when you make the read or recv call, make sure you check the return value - a return value of 0 indicates that the connection has been shut down. This is the same as your previous single-client version.
POLLRDHUP, which indicates that the peer has closed the connection, or shut down the writing half of the connection.
POLLERR for other errors.
When any of these three events is triggered, it means the client has closed the connection or there is an error on the socket, and you typically close the socket.
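A condensed sketch of that event handling, assuming fds[] has already been filled with connected client sockets (on Linux, POLLRDHUP requires _GNU_SOURCE):

    /* Sketch: detect lost client connections with poll(). */
    #define _GNU_SOURCE            /* for POLLRDHUP */
    #include <poll.h>
    #include <sys/socket.h>
    #include <unistd.h>

    static void poll_clients(struct pollfd *fds, nfds_t nfds)
    {
        for (nfds_t i = 0; i < nfds; i++)
            fds[i].events = POLLIN | POLLRDHUP;

        if (poll(fds, nfds, 1000) <= 0)        /* 1 s timeout, placeholder */
            return;

        for (nfds_t i = 0; i < nfds; i++) {
            if (fds[i].revents & (POLLRDHUP | POLLERR | POLLHUP)) {
                close(fds[i].fd);              /* peer gone or socket error */
                fds[i].fd = -1;                /* poll() ignores negative fds */
                continue;
            }
            if (fds[i].revents & POLLIN) {
                char buf[4096];
                ssize_t n = recv(fds[i].fd, buf, sizeof buf, 0);
                if (n <= 0) {                  /* 0 = orderly shutdown, <0 = error */
                    close(fds[i].fd);
                    fds[i].fd = -1;
                }
                /* else: handle n bytes of client data */
            }
        }
    }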

Is it guaranteed that an RST packet will be sent when a process terminates?

If I have a process with a connected socket, and I terminate this process, then Windows will cause an RST packet to be sent.
Is it guaranteed (is it documented somewhere) that an RST packet will always be sent when a process terminates, or could a FIN packet be sent instead?
TCP is not supposed to send an RST packet when a connection is closed. To close a connection, TCP goes through the following states on the client side:
Send a FIN packet. This action will change TCP state to FIN_WAIT_1.
In FIN_WAIT_1, TCP waits for an acknowledgement (ACK) from the server.
Once the acknowledgement is received, TCP enters FIN_WAIT_2.
In FIN_WAIT_2, TCP waits for a FIN packet from the server.
Once the FIN arrives, the client sends an ACK and enters TIME_WAIT.
TIME_WAIT is exited after a while (typically 30 seconds or 1 minute). The purpose of this state is to make it possible to resend the final ACK to the server in case it was lost.
There is no RST packet anywhere. RST is used to respond to unexpected traffic, not to close a connection.
For example, if you send a TCP packet to port 80 and the server is not running an HTTP server (and assuming the packet makes it all the way to the server and is not blocked or ignored), then an RST reply is sent back to the client.
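For illustration, a hedged sketch of a client-side close that walks the states listed above: shutdown() sends our FIN (FIN_WAIT_1), draining until read() returns zero corresponds to receiving the server's FIN (FIN_WAIT_2, then TIME_WAIT), and close() releases the descriptor:

    /* Sketch: orderly client-side close matching the state sequence above. */
    #include <stdio.h>
    #include <sys/socket.h>
    #include <unistd.h>

    static void graceful_close(int fd)
    {
        char buf[4096];
        ssize_t n;

        if (shutdown(fd, SHUT_WR) < 0)    /* send our FIN -> FIN_WAIT_1, then FIN_WAIT_2 */
            perror("shutdown");

        /* Drain anything the server still sends; read() == 0 means its FIN arrived,
         * after which the kernel ACKs it and the socket sits in TIME_WAIT. */
        while ((n = read(fd, buf, sizeof buf)) > 0)
            ;                             /* discard remaining data */

        close(fd);                        /* release the descriptor */
    }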
