I have written a TCP client and server in C. Both programs run on the same computer ONLY.
1) My TCP client sends a command to the server (localhost).
2) The server works hard and gives a response.
The problem: if the TCP client closes the connection, I am unable to detect the closed connection WHILE the server is doing its long work. I can only detect it with the SEND function, but by then it is too late, because the server has already done the work.
I understand that this detection must be very hard if the machines are remote. In my case it is the same machine, which should make the task easier, but I have not found a solution ... Can it be done with the select function?
Thank you.
You can do it with select like this:
Use select to wait for read events on the socket, so that select unblocks when the socket becomes readable.
When a request arrives at the server, start the work in a different thread.
When the client closes the connection, select will unblock and recv will read 0 bytes. When that happens, you can stop the worker thread.
If the worker thread finishes the task without being interrupted, it can send the result.
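A minimal sketch of that scheme, assuming one pthread worker per request on a connected client_fd; the placeholder work, flag names, and reply are illustrative, not from the question:

```c
#include <pthread.h>
#include <stdatomic.h>
#include <stdbool.h>
#include <sys/select.h>
#include <sys/socket.h>
#include <sys/types.h>
#include <unistd.h>

/* One connection at a time in this sketch; use per-connection state otherwise. */
static atomic_bool cancelled = false;  /* set when the client goes away */
static atomic_bool done      = false;  /* set when the worker finishes  */

static void *worker(void *arg)
{
    int fd = *(int *)arg;
    /* result = do_long_work(&cancelled);  -- placeholder: the long task
     * should poll `cancelled` periodically and abort early when set.   */
    if (!atomic_load(&cancelled)) {
        const char reply[] = "done\n";
        send(fd, reply, sizeof reply - 1, 0);
    }
    atomic_store(&done, true);
    return NULL;
}

void handle_request(int client_fd)
{
    pthread_t tid;
    pthread_create(&tid, NULL, worker, &client_fd);

    /* Watch the socket while the worker runs: if it becomes readable
     * and recv() returns 0, the client performed an orderly close. */
    while (!atomic_load(&done)) {
        fd_set rfds;
        FD_ZERO(&rfds);
        FD_SET(client_fd, &rfds);
        struct timeval tv = { .tv_sec = 0, .tv_usec = 200000 };

        if (select(client_fd + 1, &rfds, NULL, NULL, &tv) > 0 &&
            FD_ISSET(client_fd, &rfds)) {
            char buf[256];
            ssize_t r = recv(client_fd, buf, sizeof buf, 0);
            if (r <= 0) {                 /* 0: client closed, <0: error */
                atomic_store(&cancelled, true);
                break;
            }
            /* r > 0: extra data from the client; handle per protocol */
        }
    }
    pthread_join(tid, NULL);
    close(client_fd);
}
```

Compile with -pthread. On localhost the close is detected almost immediately, since the FIN does not have to cross a real network.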
I have a server written in C that closes a connection if it sits idle for a specific time. I have an issue (that rarely happens): read fails on the client side, saying "Connection broken". I suspect the server is closing the connection while the client is sending some data at the same time.
Consider the following scenario (A is the server, B is the client):
B initiates the connection and the connection between A and B is established.
B is sitting idle and the idle timeout is reached.
A initiates the close
Before B receives the FIN from A, it starts sending a request to A
After B sends the request, it will read the response
Since A has already closed the connection, B is not able to read.
My questions are
Is this a possible situation?
How to handle idle timeout for clients?
How to close the connection between A and B properly (avoid B sending request during the process). In short, how to close the connection atomically?
Speaking from only little more than rudimentary network experience... and assuming that you are talking about a connection-oriented protocol like TCP, in contrast to connectionless UDP.
Yes, of course. You cannot avoid it.
There are multiple ways to do it, but all of them include: send something from the client before the server's timeout elapses. If the client has no data to send, let it send something like a "life sign". This could be an empty data message; it all depends on your application protocol. Or make the timeout as long as necessary, including some margin. Some protocols time out only after three times the allowed idle time.
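For the life-sign approach, here is a sketch of a client loop that sends a one-byte heartbeat whenever it has been idle too long; the interval and the empty-message format are assumptions that depend on your application protocol:

```c
#include <sys/select.h>
#include <sys/socket.h>
#include <sys/types.h>

#define IDLE_LIMIT_SECS 10   /* assumed: well below the server's timeout */

void client_loop(int sock)
{
    for (;;) {
        fd_set rfds;
        FD_ZERO(&rfds);
        FD_SET(sock, &rfds);
        struct timeval tv = { .tv_sec = IDLE_LIMIT_SECS, .tv_usec = 0 };

        int n = select(sock + 1, &rfds, NULL, NULL, &tv);
        if (n == 0) {
            /* Idle too long: send a one-byte "life sign" so the server's
             * idle timer resets. The empty message is protocol-specific. */
            const char hb = 0;
            if (send(sock, &hb, 1, 0) < 0)
                break;                      /* connection is gone */
        } else if (n > 0 && FD_ISSET(sock, &rfds)) {
            char buf[256];
            ssize_t r = recv(sock, buf, sizeof buf, 0);
            if (r <= 0)
                break;      /* 0: server closed, <0: error */
            /* otherwise process the server's data per your protocol */
        } else if (n < 0) {
            break;          /* select() failed */
        }
    }
}
```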
You cannot close the connection atomically, because client and server are separated. Each packet on the network needs some time to be transmitted, and both sides can start sending at the very same moment: the server its closing message, and the client a new data message. There is nothing you can do about this.
You need to make the client handle this situation properly. For example, it can accept such a broken connection and interpret it as closed. You should already have some reaction for the case where the server closes the connection while the client is idle.
How to close the connection between A and B properly (avoid B sending request during the process).
Server detects timeout
Server sends timeout detection message to the Client
Server waits for a reply (if timeout, assume Client dead)
if Client receives a timeout detection from the Server, it replies with ACK (or something like that)
if Server receives an ACK from the Client, then 'gracefully' closes the connection
from now on, neither the Server nor the Client should send/receive any messages (after sending the ACK, do not immediately close the connection from the client side; linger for the agreed timeout - see setsockopt: SO_LINGER)
On top of that, like the other answers suggested, the Client should send a heartbeat if idle (to avoid timeout detections).
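For the linger step mentioned above, here is a sketch of the client side using SO_LINGER; the 5-second value is an assumption standing in for the agreed timeout:

```c
/* After replying with the ACK, let close() block until pending data has
 * drained instead of returning immediately. */
#include <sys/socket.h>
#include <unistd.h>

void ack_and_close(int sock)
{
    struct linger lg = {
        .l_onoff  = 1,  /* enable lingering close() */
        .l_linger = 5   /* wait up to 5 s for unsent data to drain */
    };
    setsockopt(sock, SOL_SOCKET, SO_LINGER, &lg, sizeof lg);

    /* ... send the ACK message agreed with the server here ... */

    close(sock);        /* blocks until data is sent or 5 s elapse */
}
```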
My server's main thread runs an infinite while loop to accept connections from clients. After the server connects with a client, it allocates a thread to handle the client's task, then closes the connection. After the task finishes, I want the allocated thread to send data back to the client. How can I achieve this? Thank you so much.
client1 --connect--> server --ask--> thread A to do a task that client1 asked for
close connection
Thread A finishes the task and wants to send back the result >>>> How?
Don't close the connection before sending the response back.
When a client opens a connection, there are really two communication channels involved. One is made by the client to the server; the second is made by the server to the client. When you "accept" a connection on the server side, the remainder of the client communication channel is also fully established.
By closing the connection, you destroy both channels. If your server then attempts to respond, the client has already received the close indicators and has torn down its ability to receive your data.
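Here is a sketch of the corrected flow, assuming one detached pthread per client; the task and the result string are placeholders:

```c
#include <pthread.h>
#include <stdint.h>
#include <sys/socket.h>
#include <unistd.h>

static void *handle_client(void *arg)
{
    int fd = (int)(intptr_t)arg;
    /* ... do the long task the client asked for (placeholder) ... */
    const char result[] = "task finished\n";   /* illustrative result */
    send(fd, result, sizeof result - 1, 0);    /* socket is still open */
    close(fd);            /* close only AFTER the response went out */
    return NULL;
}

/* In the accept loop: hand the fd to the thread and do NOT close it here. */
void on_accept(int client_fd)
{
    pthread_t tid;
    pthread_create(&tid, NULL, handle_client, (void *)(intptr_t)client_fd);
    pthread_detach(tid);
}
```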
I implemented a hello-world-like client and echo server on Linux, and used tcpdump to watch the packet exchange between the two. The server forks a child process for each accepted connection, nothing fancy.
Once the child process serving the connection is killed, the server's TCP socket goes into FIN-WAIT-2 (after sending a FIN and receiving the ACK), as shown by the command ss -tap. ss also shows it is orphaned, since the Process column for this entry is empty.
Then I sent one more message from the client, which triggered two more TCP messages:
the client pushes the message to the server
the server responds with an RST
and then ss shows the server socket has gone; I assume it went back to the CLOSED state and was reclaimed.
My question is this:
I can understand that for an un-orphaned socket, FIN_WAIT_2 can serve the purpose of a half-closed connection. But for an orphaned socket, what is the point? Why not go back to CLOSED directly? I read from this post that FIN_WAIT_2 helps to prevent a future connection from being mistakenly closed, but if that's the reason, then in my case the server should NOT close the socket after receiving a regular message - it should wait forever until the client sends a FIN, correct?
I am making a client-server application. Previously, if the client went down, the server would try to reconnect (i.e. if recv() on the server side returned 0, the server would go back to accepting connections). Now I want to modify the server to allow it to connect to multiple clients. I thought of using poll() so the server could check on each client for some time. I wanted to know: with poll, how can I check if the connection to a client is lost?
When using multiplexed I/O with poll, you can handle connection shutdown with the following events:
POLLIN: there is data to read. When you call read or recv, make sure you check the return value; a return value of 0 typically indicates that the connection has been shut down. This is the same as in your previous single-client version.
POLLRDHUP: the peer has closed the connection, or shut down the writing half of the connection.
POLLERR: some other error occurred.
When any of these events is triggered, it means the client has closed the connection or there is an error on the socket; you typically just close the socket.
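A sketch of such a poll loop follows; note that POLLRDHUP is Linux-specific and requires _GNU_SOURCE, so on other systems you would rely on POLLIN plus recv() returning 0 alone:

```c
#define _GNU_SOURCE   /* for POLLRDHUP on Linux */
#include <poll.h>
#include <sys/socket.h>
#include <sys/types.h>
#include <unistd.h>

/* fds[0..nfds-1] are connected client sockets. */
void serve(struct pollfd *fds, nfds_t nfds)
{
    for (;;) {
        for (nfds_t i = 0; i < nfds; i++)
            fds[i].events = POLLIN | POLLRDHUP;  /* POLLERR is always reported */

        if (poll(fds, nfds, -1) < 0)
            break;

        for (nfds_t i = 0; i < nfds; i++) {
            if (fds[i].revents & (POLLRDHUP | POLLERR | POLLHUP)) {
                close(fds[i].fd);                /* peer closed, or error */
                fds[i].fd = -1;                  /* poll() now ignores it */
            } else if (fds[i].revents & POLLIN) {
                char buf[512];
                ssize_t r = recv(fds[i].fd, buf, sizeof buf, 0);
                if (r == 0) {                    /* orderly shutdown */
                    close(fds[i].fd);
                    fds[i].fd = -1;
                } else if (r > 0) {
                    /* process buf[0..r) per your protocol */
                }
            }
        }
    }
}
```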
I have a standard client server situation. Clients connect to the server, and the server manages multiple client connections using select() (or similar). Everything uses the POSIX system level networking API.
Sometimes the server needs to close a connection. I want the server to be able to send a message to the client before closing the socket, to inform the client about the reason for the connection being closed.
What is the best way to do that?
The straightforward approach would be to simply have the server write the message and then close the socket (close()), but I imagine that this is problematic: the client might then get a write error, due to the connection having been closed by the server, before it gets a chance to read the final message written by the server.
Is that the case, or can I be sure that the client does not get a write error until after it has read everything?
Is there a better way to do it?
If possible, I would prefer a solution that is based only on the POSIX specification.
The server should:
Write the message.
Shut down the write side of the connection with a call to shutdown.
Continue reading from the connection until it detects a normal shutdown so that all data sent by the client is read.
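A sketch of that sequence using only POSIX calls; the final message content is illustrative:

```c
#include <stddef.h>
#include <sys/socket.h>
#include <unistd.h>

void close_with_message(int sock, const char *msg, size_t len)
{
    send(sock, msg, len, 0);      /* 1. write the final message            */
    shutdown(sock, SHUT_WR);      /* 2. send FIN; the read side stays open */

    /* 3. keep reading until recv() returns 0, i.e. until the client has
     * read everything and closed its end in turn. */
    char buf[256];
    while (recv(sock, buf, sizeof buf, 0) > 0)
        ;                         /* discard any data still in flight      */

    close(sock);                  /* 4. now release the socket             */
}
```

Because the FIN is queued behind the message, the client sees all the data before end-of-file; draining before close() also avoids the RST that close() can trigger when unread data is still pending on the socket.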