When the server's child closes the socket - C

On a concurrent server, the server spawns many children (assume I am using multiple processes, one per connected client). So, if a client closes its socket (with close()), it sends a FIN to the server and receives an ACK from the server.
Finally, the server's read() returns 0 and exit() is called. That causes the server's child to terminate, close its socket, and send a FIN to its client.
In this situation, how can the server receive the ACK even though the child's socket is closed? And how can the server re-send the FIN if the client doesn't receive it, even though there is no connected socket because the child has terminated?
Does the kernel keep the terminated process's socket until the final four-way handshake finishes, even though it has been closed?

Yes it does. close() is normally asynchronous.

Yes, close() on sockets is normally asynchronous, and sockets can linger after the application has terminated. You can easily see them in netstat output in their appropriate state (for example, TIME_WAIT or FIN_WAIT2).
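As a concrete illustration, here is a minimal sketch of the fork-per-connection server described in the question (assumed layout, arbitrary port, error handling trimmed). When a client closes, the child's read() returns 0 and the child exits; the kernel keeps the terminated child's socket around until the remaining FIN/ACK exchange completes:

#include <arpa/inet.h>
#include <netinet/in.h>
#include <signal.h>
#include <stdlib.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int listenfd = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in addr;

    memset(&addr, 0, sizeof addr);
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(12345);              /* arbitrary example port */

    bind(listenfd, (struct sockaddr *)&addr, sizeof addr);
    listen(listenfd, 16);
    signal(SIGCHLD, SIG_IGN);                  /* auto-reap terminated children */

    for (;;) {
        int connfd = accept(listenfd, NULL, NULL);
        if (connfd < 0)
            continue;

        if (fork() == 0) {                     /* child: handle one client */
            char buf[4096];
            ssize_t n;

            close(listenfd);
            while ((n = read(connfd, buf, sizeof buf)) > 0)
                write(connfd, buf, (size_t)n); /* simple echo */

            /* n == 0: the client sent its FIN. exit() closes connfd and
             * sends our FIN; the kernel finishes the four-way close on
             * behalf of the (now terminated) child. */
            exit(0);
        }
        close(connfd);                         /* parent keeps only listenfd */
    }
}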

Related

TCP server stop connection

I have a classic TCP server that accepts connections with the system call accept().
In a specific situation I have n connections accepted and so n children created.
I need to stop a connection when a generic event A occurs on the server. What is the best way to close the socket on the server and on the client?
Right now I do it this way: when event A occurs, the server sends a specific message to the client and then calls close(). The client reads the socket and, if the message is the special message, it closes too.
But if I need to do something else in the client, this approach is very bad. What can I do?
By 'n children created' I assume you mean n child processes, with each child process handling the TCP connection to one client program?
If so, one way to handle it would be to have your server's parent-process keep track of the process IDs (as returned by fork()) of the child processes it has spawned (via a list or lookup-table or similar). Then when you need a particular connection to go away, your server's parent-process can call kill(the_child_process_id, SIGTERM) or similar to get the child-process to go away ASAP. (SIGTERM asks nicely; if you want to go full-nuclear you could specify SIGKILL instead).
In any case, once the child-process on the server has received the signal and exited, the OS will make sure that the server's side of the TCP connection is closed, and the client's OS will then be notified that the TCP connection is closed, and based on that, the client (if it is coded correctly to handle remote-closed connections, i.e. to react to recv() returning 0 by closing its own TCP socket) will then do the right thing as well.
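A rough sketch of that bookkeeping might look like the following (the structure and helper names are purely illustrative):

#include <signal.h>
#include <sys/types.h>

#define MAX_CLIENTS 64

struct client_slot {
    pid_t pid;        /* value returned by fork() in the parent */
    int   client_id;  /* whatever key identifies the connection */
};

static struct client_slot clients[MAX_CLIENTS];

/* Called in the parent right after fork() returns the child's PID. */
static void remember_child(pid_t pid, int client_id)
{
    for (int i = 0; i < MAX_CLIENTS; i++) {
        if (clients[i].pid == 0) {
            clients[i].pid = pid;
            clients[i].client_id = client_id;
            return;
        }
    }
}

/* Called when event A occurs for a given client. */
static void drop_client(int client_id)
{
    for (int i = 0; i < MAX_CLIENTS; i++) {
        if (clients[i].pid != 0 && clients[i].client_id == client_id) {
            kill(clients[i].pid, SIGTERM);   /* or SIGKILL as a last resort */
            clients[i].pid = 0;
            return;
        }
    }
}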

TCP: What happens when client connects, sends data and disconnects before accept

I'm testing some code in C and I've found strange behaviour with TCP socket calls.
I've defined one listening thread which accepts clients synchronously and, after accepting a client, processes it in a loop until it disconnects. Thus only one client at a time is handled. So I call accept in a loop and then recv in an inner loop until I receive an empty buffer.
I fire 5 threads with clients; each calls connect, send, and finally close.
I get no error in any call. Everything seems to be fine.
However, when I print the received messages on the server side, it turns out that only the first client got through to the server, i.e. accept never fires for the other clients.
So my questions are:
Shouldn't connect wait until the server calls accept? Or is the kernel taking care of buffering under the hood?
If that's not the case, shouldn't the server be able to accept the socket anyway, even if it is in a disconnected state? I mean, is it expected to lose all the incoming data?
Or should I assume that there's a bug in my code?
The TCP state machine performs a synchronized dance with the client's state machine. All of this is handled at the OS level (the TCP/IP stack); the userspace process can only make the occasional system call to influence this machinery. Once the server calls listen(), this machinery is started, and new connections will be established.
Remember the second argument of listen(int fd, int backlog)? The whole 3-way handshake is completed (by the TCP stack) before accept() delivers the fd to the server in userland. So: the sockets are in the connected state, but the user process hasn't picked them up yet (by calling accept()).
Not calling accept() will cause the new connections to be queued up by the kernel. These connections are fully functional, but obviously the data buffers could fill up and the connection would get throttled.
Suggested reading: Comer & Stevens, Internetworking with TCP/IP, sections 10.6-10.7 (containing the TCP state diagram).
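To see this in action, a single-process sketch like the one below (loopback address, arbitrary port, error handling omitted) shows a client's connect() and send() succeeding before the server ever calls accept():

#include <arpa/inet.h>
#include <netinet/in.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    struct sockaddr_in addr;
    int srv = socket(AF_INET, SOCK_STREAM, 0);
    int cli = socket(AF_INET, SOCK_STREAM, 0);
    char buf[64];

    memset(&addr, 0, sizeof addr);
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
    addr.sin_port = htons(23456);

    bind(srv, (struct sockaddr *)&addr, sizeof addr);
    listen(srv, 5);                    /* backlog of queued connections */

    /* No accept() yet, but the 3-way handshake still completes ... */
    connect(cli, (struct sockaddr *)&addr, sizeof addr);
    send(cli, "hello", 5, 0);          /* ... and the data is buffered */
    close(cli);

    /* The queued connection (and its data) is picked up later. */
    int conn = accept(srv, NULL, NULL);
    read(conn, buf, sizeof buf);       /* reads "hello" */
    close(conn);
    close(srv);
    return 0;
}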

How to handle socket read in C when the remote server closes socket possibly before read is finished?

Client blocks on the read call waiting to read n bytes.
Server writes n bytes and closes the connection immediately.
Can the read call return negative or zero in this case, if the socket gets closed before the read is finished or due to some other issue? (The client and server are running on the same Linux box in this case.)
I am facing such a scenario but am not sure how this works in the TCP/IP subsystem and how to resolve it.
Server:
write
close
Client:
read
close
The safe way to close a socket connection is to first call shutdown() to signal that you won't be writing, keep reading the data that the remote side sends, and then shut down the reading side and close the socket. If you close the socket before reading data sent to you, the OS resets the connection (sends a packet with the RST flag set) and the remote side interprets this as an error.
TCP treats the connection serially, and the reader processes everything in the order that the sender transmitted. When the sender closes the connection, the reader will get an EOF after it has read all the data that was sent, not before.
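Sketched in code, the closing sequence described above (for an already-connected socket fd, error handling trimmed) might look like this:

#include <sys/socket.h>
#include <unistd.h>

static void close_gracefully(int fd)
{
    char buf[4096];
    ssize_t n;

    shutdown(fd, SHUT_WR);             /* send our FIN, but keep reading */

    /* Drain until the peer's FIN arrives (read() returns 0) or an error occurs. */
    while ((n = read(fd, buf, sizeof buf)) > 0)
        ;                              /* discard or process the remaining data */

    close(fd);                         /* closing now cannot trigger an RST */
}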

Graceful Shutdown Server Socket in Linux

I want to be able to stop listening on a server socket in linux and ensure that all connections that are open from a client's point of view are correctly handled and not abruptly closed (ie: receive ECONNRESET).
ie:
sock = create_socket();
listen(sock, non_zero_backlog);
graceful_close(sock);
I thought calling close() and handling already-accept()'d sockets would be enough, but there can be connections open in the kernel backlog which will be abruptly closed if you call close() on the server socket.
The only working way to do that (that I have found) is to:
prevent accept() from adding more clients
have a list of the open sockets somewhere and wait until they are all properly closed, which means:
using shutdown() to tell the client that you will no longer work on that socket
calling read() for a while to make sure that everything the client has sent in the meantime has been pulled
then using close() to free each client socket.
THEN, you can safely close() the listening socket.
You can (and should) use a timeout to make sure that idle connections will not last forever.
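A rough sketch of that shutdown loop, assuming the server keeps its accepted client sockets in an array (names illustrative, error handling and bookkeeping trimmed):

#include <poll.h>
#include <sys/socket.h>
#include <unistd.h>

#define DRAIN_TIMEOUT_MS 5000          /* don't wait forever on idle clients */

static void shutdown_server(int listenfd, int *clientfds, int nclients)
{
    char buf[4096];

    for (int i = 0; i < nclients; i++) {
        int fd = clientfds[i];
        struct pollfd pfd = { .fd = fd, .events = POLLIN };

        shutdown(fd, SHUT_WR);         /* tell the client we will no longer write */

        /* Pull in whatever the client has sent in the meantime, with a timeout. */
        while (poll(&pfd, 1, DRAIN_TIMEOUT_MS) > 0) {
            ssize_t n = read(fd, buf, sizeof buf);
            if (n <= 0)                /* 0: client's FIN arrived; <0: error */
                break;
        }
        close(fd);                     /* free this client socket */
    }

    close(listenfd);                   /* only now close the listening socket */
}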
You are looking at a limitation of the TCP socket API. You can look at ECONNRESET as the socket version of EOF, or you can implement a higher-level protocol over TCP which informs the client of an impending disconnection.
However, if you attempt the latter alternative, be aware of the intractable Two Armies Problem which makes graceful shutdown impossible in the general case; this is part of the motivation for the TCP connection reset mechanism as it stands. Even if you could write graceful_close() in a way that worked most of the time, you'd probably still have to deal with ECONNRESET unless the server process can wait forever to receive a graceful_close_ack from the client.

Does connect() block for TCP socket?

Hi I am reading TLPI (The Linux Programming Interface), I have a question about connect().
As I understand it, connect() will return immediately if the number of pending connections from listen() hasn't reached the backlog,
and it will block otherwise (according to figure 56-2).
But for a TCP socket, it will always block until accept() is called on the server side (according to figure 61-5).
Am I correct?
Because I saw that in the example code (p. 1265), it calls listen() to listen on a specific port and then calls connect() to that port BEFORE calling accept().
So connect() blocks forever in this case, doesn't it?
Thanks!!
There's hardly any "immediately" where networking is concerned: stuff can be lost on the way, an operation that should complete immediately in theory might not do so in practice, and in any case there's the end-to-end transmission time.
However
connect() on a TCP socket is a blocking operation unless the socket descriptor is put into non-blocking mode.
The OS takes care of the TCP handshake; when the handshake is finished, connect() returns (that is, connect() does not block until the other end calls accept()).
A successful TCP handshake will be queued to the server application, and can be accept()'ed any time later.
connect() blocks until the TCP 3-way handshake has finished. The handshake on the listening side is handled by the TCP/IP stack in the kernel and completes without notifying the user process. Only after the handshake is complete (by which time the initiator may already have returned from its connect() call) can accept() in the user process pick up the new socket and return. No waiting accept() is needed to complete the handshake.
The reason is simple: if you had a single-threaded process listening for connections and a waiting accept() were required to establish them, you couldn't respond to TCP SYNs while processing another request. The TCP stack on the initiating side would retransmit, but on a moderately loaded server chances are high that the retransmitted packet would again arrive while no accept() is pending and would be dropped again, resulting in ugly delays and connection timeouts.
connect() is a blocking call by default, but you can make it non-blocking by passing the SOCK_NONBLOCK flag to socket().
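On Linux, a non-blocking connect along those lines (address and port are just examples) typically looks like this: create the socket with SOCK_NONBLOCK, expect EINPROGRESS, wait for writability, then check SO_ERROR:

#include <arpa/inet.h>
#include <errno.h>
#include <netinet/in.h>
#include <poll.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int connect_nonblocking(const char *ip, unsigned short port, int timeout_ms)
{
    struct sockaddr_in addr;
    int fd = socket(AF_INET, SOCK_STREAM | SOCK_NONBLOCK, 0);
    if (fd < 0)
        return -1;

    memset(&addr, 0, sizeof addr);
    addr.sin_family = AF_INET;
    addr.sin_port = htons(port);
    inet_pton(AF_INET, ip, &addr.sin_addr);

    if (connect(fd, (struct sockaddr *)&addr, sizeof addr) == 0)
        return fd;                             /* connected immediately */

    if (errno != EINPROGRESS) {                /* immediate, real failure */
        close(fd);
        return -1;
    }

    struct pollfd pfd = { .fd = fd, .events = POLLOUT };
    if (poll(&pfd, 1, timeout_ms) <= 0) {      /* timeout or poll error */
        close(fd);
        return -1;
    }

    int err = 0;
    socklen_t len = sizeof err;
    getsockopt(fd, SOL_SOCKET, SO_ERROR, &err, &len);
    if (err != 0) {                            /* handshake failed */
        close(fd);
        return -1;
    }
    return fd;                                 /* connected, still non-blocking */
}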

Resources