Detecting connection close on AF_UNIX, SOCK_SEQPACKET socket without using poll - c

For a bound and connected (to an unbound peer) socket created using socket(AF_UNIX, SOCK_SEQPACKET, 0), is there a way to detect that the remote end has hung up, without using poll (the POLLHUP flag will be set), select (for some reason, appearing in the write fd set), or the like?
Unlike SOCK_STREAM, 0-length payloads are valid here, and they are indistinguishable from the end of the connection (as far as I can tell; I keep getting 0-length chunks when calling recvmsg on a socket for which the remote end has called shutdown or close). Also, there aren't any flags known to me in struct msghdr that signal the end of the connection. The only thing I see in the Linux kernel source (net/unix/af_unix.c:unix_seqpacket_recvmsg) is the kernel returning -ENOTCONN, presumably when the socket is not connected, and then performing the same work as for SOCK_DGRAM.
Note that it is important to use recvmsg rather than recv, recvfrom, etc, because I need to check the flags for MSG_TRUNC.
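For reference, here is a minimal sketch of the kind of recvmsg() call described above (buffer size and error handling are placeholders); it illustrates why the 0-length case is ambiguous:

#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/uio.h>

/* Sketch: receive one SOCK_SEQPACKET message and check for truncation.
 * A return of 0 is ambiguous here: it may be a genuine empty packet or
 * the peer having shut down / closed the connection. */
ssize_t read_packet(int fd, void *buf, size_t len)
{
    struct iovec iov = { .iov_base = buf, .iov_len = len };
    struct msghdr msg;

    memset(&msg, 0, sizeof msg);
    msg.msg_iov = &iov;
    msg.msg_iovlen = 1;

    ssize_t n = recvmsg(fd, &msg, 0);
    if (n < 0)
        return -1;                          /* hard errors show up here */
    if (msg.msg_flags & MSG_TRUNC)
        fprintf(stderr, "message truncated to %zd bytes\n", n);
    return n;                               /* 0: empty packet *or* EOF */
}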

Related

Is it possible to do epoll on accept event?

Let's suppose I've created a listening socket:
sock = socket(...);
bind(sock,...);
listen(sock, ...);
Is it possible to do epoll_wait on sock to wait for an incoming connection? And how do I get the client's socket fd after that?
The thing is, on the platform I'm writing for, sockets cannot be non-blocking, but there is a working epoll implementation with timeouts, and I need to accept a connection and work with it in a single thread so that it doesn't hang if something goes wrong and the connection doesn't come.
Without knowing what this non-standard platform is it's impossible to know exactly what semantics they gave their epoll call. But on the standard epoll on Linux, a listening socket will be reported as "readable" when an incoming connection arrives, and then you can accept the connection by calling accept. If you leave the socket in blocking mode, and always check for readability using epoll's level-triggered mode before each call to accept, then this should work – the only risk is that if you somehow end up calling accept when no connection has arrived, then you'll get stuck. For example, this could happen if there are two processes sharing a listening socket, and they both try to accept the same connection. Or maybe it could happen if an incoming connection arrives, and then is closed again before you call accept. (Pretty sure in this case Linux still lets the accept succeed, but this kind of edge case is exactly where I'd be suspicious of a weird platform doing something weird.) You'd want to check these things.
Non-blocking mode is much more reliable because in the worst case, accept just reports that there's nothing to accept. But if that's not available, then you might be able to get away with something like this...
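A rough, untested sketch of that idea in C, assuming the blocking listening socket sock from the question and epoll's level-triggered mode:

#include <stdio.h>
#include <sys/epoll.h>
#include <sys/socket.h>
#include <unistd.h>

/* Sketch: only call accept() after epoll reports the listening socket
 * readable, and bound the wait with a timeout so we never hang forever. */
int accept_with_timeout(int sock, int timeout_ms)
{
    int epfd = epoll_create1(0);
    if (epfd < 0)
        return -1;

    struct epoll_event ev = { .events = EPOLLIN, .data.fd = sock };
    if (epoll_ctl(epfd, EPOLL_CTL_ADD, sock, &ev) < 0) {
        close(epfd);
        return -1;
    }

    int client = -1;
    int n = epoll_wait(epfd, &ev, 1, timeout_ms);   /* level-triggered wait */
    if (n > 0)
        client = accept(sock, NULL, NULL);          /* a connection is pending */
    else if (n == 0)
        fprintf(stderr, "timed out waiting for a connection\n");

    close(epfd);
    return client;          /* client fd, or -1 on timeout/error */
}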
Since this answer is the first result on DuckDuckGo, I will chime in with what I observed under GNU/Linux 4.18.0-18-generic (Ubuntu 18.10).
To asynchronously accept an incoming connection, one has to watch for the errno value EWOULDBLOCK (11) and then add the socket to the epoll read set.
Here is a small snippet of Scheme code that achieves that:
(define (accept fd)
  (let ((out (socket:%accept fd 0 0)))
    (if (= out -1)
        (let ((code (socket:errno)))
          (if (= code EWOULDBLOCK)
              (begin
                (abort-to-prompt fd 'read)
                (accept fd))
              (error 'accept (socket:strerror code))))
        out)))
In the above, (abort-to-prompt fd 'read) will pause the coroutine and add fd to the epoll read set, done as follows:
(epoll-ctl epoll EPOLL-CTL-ADD fd (make-epoll-event-in fd))
When the coroutine is resumed, the code proceeds past the abort and calls itself recursively (in tail-call position).
The code I am working on is in Scheme, so it is a bit more involved, since I rely on call/cc to avoid callbacks. The full code is at SourceHut.
That is all.

Active close vs passive close in terms of socket API?

In TCP we say one side of the connection performs an "active close" and the other side performs a "passive close".
In terms of the Linux sockets API, how do you differentiate the active close and the passive close?
For example, suppose we have two connected Linux TCP sockets, A and P, that have exchanged information over the application-level protocol and they are both aware that it is time to close their sockets (neither expect to send or receive any more data to or from each other).
We want socket A to perform the active close, and for P to be the passive close.
There are a few things A and P could do. For example:
call shutdown(SHUT_WR)
call recv and expect to get 0 back
call close.
something else
What combination of these things and in what order should A do?... and what combination of these things and in what order should P do?
In terms of the Linux sockets API, how do you differentiate the active close and the passive close?
The 'active' close is simply whichever side of the socket sends a FIN or RST packet first, typically by calling close().
What combination of these things and in what order should A do?... and what combination of these things and in what order should P do?
In practice, most of this is application- and application-protocol specific. I will describe the minimum/typical requirement to answer your question, but your mileage may vary depending on what you are specifically trying to accomplish.
You may first call shutdown() on Socket A if you want to terminate communication in one direction or the other (or both) on Socket A. From your description, both programs already know they're done, perhaps due to application protocol messages, so this may not be necessary.
You must call close() on Socket A in order to close the socket and release the file descriptor.
On Socket P, you simply keep reading until recv() returns 0, and then you must call close() to close the socket and release the file descriptor.
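As a concrete (untested) sketch of that minimal sequence, with sock_a and sock_p being the already-connected descriptors and error handling omitted:

#include <sys/socket.h>
#include <unistd.h>

/* Socket A: the active closer. Optionally announce "no more data to send"
 * with shutdown(), then release the file descriptor with close(). */
void active_close(int sock_a)
{
    shutdown(sock_a, SHUT_WR);   /* optional half close; sends a FIN */
    close(sock_a);               /* closes the socket, releases the fd */
}

/* Socket P: the passive closer. Keep reading until recv() returns 0
 * (the peer's FIN), then close to release the file descriptor. */
void passive_close(int sock_p)
{
    char buf[4096];
    while (recv(sock_p, buf, sizeof buf, 0) > 0)
        ;                        /* drain any remaining application data */
    close(sock_p);
}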
For further reading, there are a number of good tutorials out there, and Beej's Guide to Network Programming is quite popular.
Active open is when you issue connect(2) explicitly to make a connection to a remote site. The call blocks until you get the socket opened on the other side (except if you issued an O_NONBLOCK fcntl(2) call before calling connect(2)).
Passive open is when you have a socket listen(2)ing on a connection and you have not yet issued an accept(2) system call. The accept(2) call normally blocks until you have a completely open connection and gives you a socket descriptor to communicate over it, or it immediately gives you a socket descriptor if the connection handshake has already finished by the time you issue the accept(2) syscall (this is a passive open). The limit on the number of passively opened connections the kernel can accept on your behalf while you prepare to make the accept(2) system call is the listen(2) backlog value.
Active close is what happens when you explicitly call the shutdown(2) or close(2) system calls. As with passive open, there's nothing you can do to make a passive close (it's something that happens behind the scenes, a product of the other side's actions). You detect a passive close when the socket generates an end-of-file condition (that is, read(2) always returns 0 bytes), meaning the other end has done a shutdown(2) (or close(2)) and the connection is half (or fully) closed. When you explicitly shutdown(2) or close(2) your side, it's an active close.
NOTE
If the other end does an explicit close(2) and you continue writing on the socket, you'll get an error due to the impossibility of sending that data (in this case we can talk about a passive close(2), one that has occurred without any explicit action from our side), but the other end can instead do a half close by calling shutdown(2). This makes TCP send only a FIN segment and keeps the socket descriptor, so the thread can still receive any pending data in transit or not yet sent. Only when it receives and acknowledges the other end's FIN segment will it signal you that no more data remains in transit.

How to notify an abnormal client termination to server?

As the title already says, I'm looking for a way to get notified when a client terminates its session abnormally.
I'm using the FreeBSD OS.
The server runs with X threads (depending on the number of CPU cores), so I'm not forking and there isn't a separate process for each client.
That's why sending a 'death package' every time_t seconds in order to receive a SIGPIPE isn't an option for me.
But I need to remove departed clients from the kqueue, because otherwise, after too many accept()s, my code will obviously run into memory trouble.
Is there a way to check whether each client is still connected, without a big performance loss per client?
Or is there any event notification that would trigger when this happens? Or maybe a way of letting a program send some signal to a port, even in the abnormal-termination case, before the client process exits?
Edit: this answer misses the question, because it's not about using kqueue. But if someone else finds the question by its title, it may be helpful anyway...
I've often seen the following behaviour: if a client dies and the server does a select() on the client's socket descriptor, select() returns with a return code > 0 and FD_ISSET(fd) is true for that descriptor. But when you then try to read from the socket, read() (or recv()) returns an error.
For a 'normal' connection, using that to detect a client's death works fine for us, but there seems to be different behaviour when the socket connection is tunneled; we haven't yet managed to figure that out completely.
According to the kqueue man page, kevent() should create an event when the socket has been shut down. From the description of the EVFILT_READ filter:
EVFILT_READ
Takes a descriptor as the identifier, and returns whenever there is data available to read. The behavior of the filter is slightly different depending on the descriptor type.
Sockets
Sockets which have previously been passed to listen() return when there is an incoming connection pending. data contains the size of the listen backlog.
Other socket descriptors return when there is data to be read, subject to the SO_RCVLOWAT value of the socket buffer. This may be overridden with a per-filter low water mark at the time the filter is added by setting the NOTE_LOWAT flag in fflags, and specifying the new low water mark in data. On return, data contains the number of bytes of protocol data available to read.
If the read direction of the socket has shutdown, then the filter also sets EV_EOF in flags, and returns the socket error (if any) in fflags. It is possible for EOF to be returned (indicating the connection is gone) while there is still data pending in the socket buffer.
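For example, an untested sketch following that description, where a client that disconnected (normally or abnormally) shows up as EV_EOF on its descriptor:

#include <sys/types.h>
#include <sys/event.h>
#include <sys/time.h>
#include <unistd.h>

/* Sketch: register a client fd for read events on an existing kqueue. */
void watch_client(int kq, int client_fd)
{
    struct kevent change;
    EV_SET(&change, client_fd, EVFILT_READ, EV_ADD, 0, 0, NULL);
    kevent(kq, &change, 1, NULL, 0, NULL);
}

/* Sketch: one pass of the event loop; EV_EOF means the peer is gone. */
void handle_one_event(int kq)
{
    struct kevent ev;
    if (kevent(kq, NULL, 0, &ev, 1, NULL) <= 0)
        return;

    int fd = (int)ev.ident;
    if (ev.flags & EV_EOF) {
        /* Read side has shut down: drop the client. Closing the fd
         * also removes its filters from the kqueue automatically. */
        close(fd);
    } else if (ev.filter == EVFILT_READ) {
        /* ev.data bytes of protocol data are available to read on fd */
    }
}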

Windows socket seems to be Non Duplex

I'm writing a client-server program, where the client is C++/WinAPI and the server is C#/.NET.
The client has a loop where it reads from the server (and may block the calling thread [call it t1], which is fine with me). It also has another thread [call it t2] that waits on an Event object with a timeout.
If the timeout is reached (and the Event has not yet been signaled), the t2 thread will write (exactly one byte) on the same socket.
The problem I have is that the write seems not to return until the read on t1 returns (and in some legitimate scenarios that will never happen), as if the socket were not full-duplex.
P.S.: the socket is AF_INET/SOCK_STREAM and I'm using ReadFile and WriteFile for socket I/O.
thanks.
Neither sockets nor read() and write(), nor send() and recv(), behave that way. You must have some synchronization of your own.
I have been programming with WinSock for over a decade, and I can assure you that sockets are always full duplex.
The only way WriteFile() (or send() or WSASend()) would block the calling thread for any amount of time is if the socket is running in blocking mode and its outbound queue of data waiting to be transmitted is completely full (the size of the queue is controlled by the SO_SNDBUF socket option). That indicates that the other party (your C# server) is not reading inbound data from its socket endpoint and acknowledging the received data in a timely manner, so your socket endpoint cannot remove that data from its outbound queue and accept new data for transmission.
If you don't want your call to WriteFile() to block, you can either:
enable the SO_SNDTIMEO socket option to specify a timeout for blocking writes.
use select(), WSAAsyncSelect(), or WSAEventSelect() to detect when the socket is actually writable (i.e., when it can accept data without blocking) before writing anything new to the socket.
switch to non-blocking I/O, asynchronous overlapped I/O, or I/O completion ports.
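As a sketch of the second option (untested; the one-second timeout is just illustrative), checking writability with select() before sending:

#include <winsock2.h>

/* Sketch: send one byte only if the socket can take it without blocking. */
int send_byte_if_writable(SOCKET s, char byte)
{
    fd_set writefds;
    FD_ZERO(&writefds);
    FD_SET(s, &writefds);

    struct timeval tv = { 1, 0 };   /* wait at most one second */
    /* On Winsock the first select() argument is ignored. */
    int n = select(0, NULL, &writefds, NULL, &tv);
    if (n > 0 && FD_ISSET(s, &writefds))
        return send(s, &byte, 1, 0);    /* outbound buffer has room now */
    return -1;                           /* not writable yet (or error) */
}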

close vs shutdown socket?

In C, I understood that if we close a socket, it means the socket will be destroyed and can be re-used later.
How about shutdown? The description said it closes half of a duplex connection to that socket. But will that socket be destroyed like close system call?
This is explained in Beej's networking guide. shutdown is a flexible way to block communication in one or both directions. When the second parameter is SHUT_RDWR, it will block both sending and receiving (like close). However, close is the way to actually destroy a socket.
With shutdown, you will still be able to receive pending data the peer already sent (thanks to Joey Adams for noting this).
None of the existing answers tell people how shutdown and close work at the TCP protocol level, so it is worth adding this.
A standard TCP connection gets terminated by a 4-way finalization:
Once a participant has no more data to send, it sends a FIN packet to the other side.
The other party returns an ACK for the FIN.
When the other party has also finished transferring data, it sends another FIN packet.
The initial participant returns an ACK and finalizes the transfer.
However, there is another "emergent" way to close a TCP connection:
A participant sends an RST packet and abandons the connection
The other side receives the RST and then abandons the connection as well
In my test with Wireshark, with default socket options, shutdown sends a FIN packet to the other end, but that is all it does. Until the other party sends you its FIN packet, you are still able to receive data. Once that has happened, your Receive will get a 0-size result. So if you are the first one to shut down "send", you should close the socket once you have finished receiving data.
On the other hand, if you call close whilst the connection is still active (the other side is still active and you may have unsent data in the system buffer as well), an RST packet will be sent to the other side. This is good for errors. For example, if you think the other party provided wrong data or it refused to provide data (DOS attack?), you can close the socket straight away.
My opinion of rules would be:
Consider shutdown before close when possible
If you have finished receiving (received 0-size data) before you decided to shut down, close the connection after the last send (if any) has finished.
If you want to close the connection normally, shut down the connection (with SHUT_WR, and, if you don't care about receiving data after this point, with SHUT_RD as well), wait until you receive 0-size data, and then close the socket.
In any case, if any other error occurred (timeout for example), simply close the socket.
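Putting those rules together, a rough sketch of the "close normally" path (buffer size and error handling are placeholders):

#include <sys/socket.h>
#include <unistd.h>

/* Sketch: graceful close. Announce that we have nothing more to send,
 * drain whatever the peer still sends until recv() returns 0 (its FIN),
 * then close the socket. */
void graceful_close(int fd)
{
    char buf[4096];

    shutdown(fd, SHUT_WR);                        /* our FIN goes out */
    while (recv(fd, buf, sizeof buf, 0) > 0)
        ;                                         /* consume until peer's FIN */
    close(fd);                                    /* release the descriptor */
}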
Ideal implementations for SHUT_RD and SHUT_WR
The following haven't been tested, trust at your own risk. However, I believe this is a reasonable and practical way of doing things.
If the TCP stack receives a shutdown with SHUT_RD only, it shall mark this connection as expecting no more data. Any pending and subsequent read requests (regardless of which thread they are in) will then return with a zero-sized result. However, the connection is still active and usable; you can still receive OOB data, for example. Also, the OS will drop any data it receives for this connection. But that is all: no packets will be sent to the other side.
If the TCP stack receives a shutdown with SHUT_WR only, it shall mark this connection as unable to send more data. All pending write requests will be finished, but subsequent write requests will fail. Furthermore, a FIN packet will be sent to the other side to inform it that we have no more data to send.
There are some limitations with close() that can be avoided if one uses shutdown() instead.
close() will terminate both directions on a TCP connection. Sometimes you want to tell the other endpoint that you are finished with sending data, but still want to receive data.
close() decrements the descriptor's reference count (maintained in the file table entry, counting the number of currently open descriptors referring to the file/socket) and does not close the socket/file if the count is not 0. This means that if you are forking, the cleanup happens only after the reference count drops to 0. With shutdown() one can initiate the normal TCP close sequence regardless of the reference count.
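To illustrate the reference-count point, a hedged sketch: after fork() both processes hold the descriptor, so close() in one of them does not yet send a FIN, while shutdown() starts the close sequence regardless:

#include <sys/socket.h>
#include <unistd.h>

/* Sketch: 'fd' is a connected socket duplicated by a fork(). close() here
 * only drops this process's reference; the other process still holds the
 * socket, so no FIN is sent yet. shutdown() ignores the reference count
 * and initiates the TCP close sequence immediately. */
void finish_sending(int fd, int force_fin)
{
    if (force_fin)
        shutdown(fd, SHUT_WR);   /* FIN goes out even though another process keeps fd open */
    close(fd);                   /* always release this process's descriptor */
}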
Parameters are as follows:
int shutdown(int s, int how); // s is socket descriptor
int how can be:
SHUT_RD (or 0): further receives are disallowed.
SHUT_WR (or 1): further sends are disallowed.
SHUT_RDWR (or 2): further sends and receives are disallowed.
This may be platform specific (though I somehow doubt it), but the best explanation I've seen is on this MSDN page, which explains shutdown, the linger option, socket closure, and general connection-termination sequences.
In summary, use shutdown to send a shutdown sequence at the TCP level and use close to free up the resources used by the socket data structures in your process. If you haven't issued an explicit shutdown sequence by the time you call close then one is initiated for you.
I've also had success under linux using shutdown() from one pthread to force another pthread currently blocked in connect() to abort early.
Under other OSes (OS X at least), I found that calling close() was enough to make connect() fail.
"shutdown() doesn't actually close the file descriptor—it just changes its usability. To free a socket descriptor, you need to use close()."1
Close
When you have finished using a socket, you can simply close its file descriptor with close. If there is still data waiting to be transmitted over the connection, normally close tries to complete this transmission. You can control this behavior using the SO_LINGER socket option to specify a timeout period; see Socket Options.
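For reference, a sketch of the SO_LINGER option mentioned above (the five-second timeout is just illustrative): it makes close() wait for pending data to be transmitted, up to the given timeout.

#include <sys/socket.h>
#include <unistd.h>

/* Sketch: block in close() for up to 5 seconds while pending data drains. */
void close_with_linger(int fd)
{
    struct linger lg = { .l_onoff = 1, .l_linger = 5 };
    setsockopt(fd, SOL_SOCKET, SO_LINGER, &lg, sizeof lg);
    close(fd);
}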
ShutDown
You can also shut down only reception or transmission on a connection by calling shutdown.
The shutdown function shuts down the connection of the socket. Its argument how specifies what action to perform:
0: Stop receiving data for this socket. If further data arrives, reject it.
1: Stop trying to transmit data from this socket. Discard any data waiting to be sent. Stop looking for acknowledgement of data already sent; don't retransmit it if it is lost.
2: Stop both reception and transmission.
The return value is 0 on success and -1 on failure.
In my test:
close will send a FIN packet and destroy the fd immediately when the socket is not shared with other processes.
After shutdown SHUT_RD, the process can still recv data from the socket, but recv will return 0 if the TCP buffer is empty. After the peer sends more data, recv will return data again.
shutdown SHUT_WR will send a FIN packet to indicate that further sends are disallowed. The peer can still recv data, but it will recv 0 if its TCP buffer is empty.
shutdown SHUT_RDWR (equal to using both SHUT_RD and SHUT_WR) will send an RST packet if the peer sends more data.
Linux: shutdown() causes the listener thread's select() to wake up and produce an error. shutdown(); close(); will lead to an endless wait.
Winsock: vice versa; shutdown() has no effect, while close() is successfully caught.

Resources