Can you pass a TCP connection from one process to another? - c

I am doing some network testing, and I am connecting between Linux boxes with two small C programs that just call connect().
After the connection is made, some small calculations are done and recorded to a local file; then I instruct one of the programs to close the connection and run a netcat listener on the same port. The first program then retries the connection and connects to netcat.
I wondered if someone could advise whether it is possible to maintain the initial connection while freeing the port, and to pass the connection to netcat on that port (so that the initial connection is not closed).

Each TCP connection is defined by the four-tuple (target IP address, target port, source IP address, source port), so there is no need to "free up" the port on either machine.
It is very common for a server process to fork() immediately after accept()ing a new connection. The parent process closes its copy of the connection descriptor (returned by accept()), and waits for a new connection. The child process closes the original socket descriptor, and executes the desired program or script that should handle the actual connection. In many cases the child moves the connection descriptor to standard input and standard output (using dup2()), so that the executed script or program does not even need to know it is connected to a remote client: everything it writes to standard output is sent to the remote client, and everything the remote client sends is readable from standard input.
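For illustration, a minimal sketch of that fork()-after-accept() pattern, with error handling trimmed and "/bin/cat" standing in for whatever program should handle the connection:

#include <sys/socket.h>
#include <unistd.h>
#include <stdlib.h>

static void serve(int listen_fd)
{
    for (;;) {
        int conn_fd = accept(listen_fd, NULL, NULL);
        if (conn_fd == -1)
            continue;

        pid_t pid = fork();
        if (pid == 0) {                    /* child: owns the connection */
            close(listen_fd);
            dup2(conn_fd, STDIN_FILENO);   /* remote data arrives on stdin */
            dup2(conn_fd, STDOUT_FILENO);  /* stdout goes back to the peer */
            close(conn_fd);
            execl("/bin/cat", "cat", (char *)NULL);
            _exit(EXIT_FAILURE);           /* only reached if exec fails */
        }
        close(conn_fd);                    /* parent: drop its copy, keep listening */
    }
}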
If there is an existing process that should handle the connection, and there is a Unix domain socket connection (stream, datagram or seqpacket socket; makes no difference) between the two processes, it is possible to transfer the connection descriptor as an SCM_RIGHTS ancillary message. See man 2 sendmsg, man 2 recvmsg, man 3 cmsg, and man 7 unix for details. This only works on the same machine over a Unix domain socket, because the kernel actually duplicates the descriptor from one process to the other; really, the kernel does some funky magic to make this happen.
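A sketch of the sending side, assuming unix_fd is an already connected AF_UNIX stream socket shared with the receiving process (the receiver does the mirror-image recvmsg() and reads the descriptor out of CMSG_DATA()):

#include <sys/socket.h>
#include <sys/uio.h>
#include <string.h>

/* Send fd_to_pass to the process at the other end of unix_fd.
   Returns 0 on success, -1 on failure. */
int send_fd(int unix_fd, int fd_to_pass)
{
    char dummy = 0;                          /* at least one byte of real data must be sent */
    struct iovec iov = { .iov_base = &dummy, .iov_len = 1 };
    union {                                  /* properly aligned ancillary buffer */
        struct cmsghdr hdr;
        char buf[CMSG_SPACE(sizeof(int))];
    } control = { 0 };
    struct msghdr msg = { 0 };

    msg.msg_iov = &iov;
    msg.msg_iovlen = 1;
    msg.msg_control = control.buf;
    msg.msg_controllen = sizeof control.buf;

    struct cmsghdr *cmsg = CMSG_FIRSTHDR(&msg);
    cmsg->cmsg_level = SOL_SOCKET;
    cmsg->cmsg_type  = SCM_RIGHTS;
    cmsg->cmsg_len   = CMSG_LEN(sizeof(int));
    memcpy(CMSG_DATA(cmsg), &fd_to_pass, sizeof(int));

    return sendmsg(unix_fd, &msg, 0) == 1 ? 0 : -1;
}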
If your server-side logic is something like
For each incoming connection:
Do some calculations
Store calculations into a file
Store incoming data from the connection into a file (or standard output)
then I recommend using pthreads. Just create the desired number of threads, have all of them wait for an incoming connection by calling accept() on the listening socket, and have each thread handle the connection by themselves. You can even use stdio.h I/O for the file I/O. For more complex output -- multiple statements per chunk --, you'll need a pthread_mutex_t per output stream, and remember to fflush() it before releasing the mutex. I suspect a single multithreaded program that does all that, and exits nicely if interrupted (SIGINT aka CTRL+C), should not exceed three hundred lines of C.
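A rough sketch of that threaded layout, with placeholder names such as handle_connection(); the listening socket is assumed to have been set up elsewhere with socket()/bind()/listen():

#include <pthread.h>
#include <sys/socket.h>
#include <unistd.h>

static int listen_fd;                       /* created elsewhere: socket()/bind()/listen() */

static void handle_connection(int conn_fd)
{
    /* do the calculations, write the results to the shared file, ... */
    close(conn_fd);
}

static void *worker(void *arg)
{
    (void)arg;
    for (;;) {
        int conn_fd = accept(listen_fd, NULL, NULL);  /* all threads block here */
        if (conn_fd != -1)
            handle_connection(conn_fd);
    }
    return NULL;
}

/* in main():
   for (int i = 0; i < NUM_THREADS; i++)
       pthread_create(&tid[i], NULL, worker, NULL);
*/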

If you only need to read data from a stream socket, or only need to write data to it, you can treat it as a file handle.
So in a POSIX program you can use dup2() to duplicate the socket descriptor onto file descriptor 1, which is standard output, then close the original descriptor. Then use exec() to replace your program with "cat", which copies its standard input to its standard output, i.e. to file descriptor 1, which is now your socket.
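A minimal sketch of that, assuming sock_fd is the connected socket:

#include <unistd.h>
#include <stdlib.h>

static void exec_cat_over_socket(int sock_fd)
{
    dup2(sock_fd, STDOUT_FILENO);   /* fd 1 now refers to the socket */
    close(sock_fd);                 /* drop the original descriptor */
    execlp("cat", "cat", (char *)NULL);  /* cat copies stdin to fd 1, i.e. the socket */
    _exit(EXIT_FAILURE);            /* reached only if exec fails */
}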

Related

Active close vs passive close in terms of socket API?

In TCP we say one side of the connection performs an "active close" and the other side performs a "passive close".
In terms of the Linux sockets API, how do you differentiate the active close and the passive close?
For example, suppose we have two connected Linux TCP sockets, A and P, that have exchanged information over the application-level protocol and they are both aware that it is time to close their sockets (neither expects to send or receive any more data to or from the other).
We want socket A to perform the active close, and for P to be the passive close.
There are a few things A and P could do. For example:
call shutdown(SHUT_WR)
call recv and expect to get 0 back
call close.
something else
What combination of these things and in what order should A do?... and what combination of these things and in what order should P do?
In terms of the Linux sockets API, how do you differentiate the active close and the passive close?
The 'active' close is simply whichever side of the socket sends a FIN or RST packet first, typically by calling close().
What combination of these things and in what order should A do?... and what combination of these things and in what order should P do?
In practice, most of this is application- and application-protocol specific. I will describe the minimum/typical requirement to answer your question, but your mileage may vary depending on what you are specifically trying to accomplish.
You may first call shutdown() on Socket A if you want to terminate communication in one direction or the other (or both) on Socket A. From your description, both programs already know they're done, perhaps due to application protocol messages, so this may not be necessary.
You must call close() on Socket A in order to close the socket and release the file descriptor.
On Socket P, you simply keep reading until recv() returns 0, and then you must call close() to close the socket and release the file descriptor.
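A sketch of those two sequences, with error handling omitted; A performs the active close, P the passive close:

#include <sys/socket.h>
#include <unistd.h>

void active_close(int a)          /* side A */
{
    char buf[512];

    shutdown(a, SHUT_WR);         /* optional: announce "no more data from me" (sends FIN) */
    while (recv(a, buf, sizeof buf, 0) > 0)
        ;                         /* drain until the peer's FIN arrives (recv() == 0) */
    close(a);                     /* release the file descriptor */
}

void passive_close(int p)         /* side P */
{
    char buf[512];

    while (recv(p, buf, sizeof buf, 0) > 0)
        ;                         /* keep reading until recv() returns 0 */
    close(p);                     /* then close to release the file descriptor */
}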
For further reading, there are a number of good tutorials out there, and Beej's Guide to Network Programming is quite popular.
Active open is when you issue connect(2) explicitly to make a connection to a remote site. The call blocks until you get the socket opened on the other side (unless you issued an O_NONBLOCK fcntl(2) call before calling connect(2)).
Passive open is when you have a socket listen(2)ing for connections and you have not yet issued an accept(2) system call. The accept(2) call normally blocks until you have a completely open connection and gives you a socket descriptor to communicate over it, or gives you a socket descriptor immediately if the connection handshake has already finished when you issue the accept(2) syscall (this is a passive open). The limit on the number of passively opened connections the kernel can accept on your behalf, while you prepare yourself to make the accept(2) system call, is the listen(2) backlog value.
Active close is what happens when you explicitly call the shutdown(2) or close(2) system calls. As with passive open, there's nothing you can do to make a passive close; it's something that happens behind the scenes, the product of the other side's actions. You detect a passive close when the socket generates an end-of-file condition (that is, read(2) always returns 0 bytes), meaning the other end has done a shutdown(2) (or close(2)) and the connection is half (or fully) closed. When you explicitly shutdown(2) or close(2) your side, it's an active close.
NOTE
if the other end does an explicit close(2) and you continue writing to the socket, you'll get an error due to the impossibility of sending that data (in this case we can talk about a passive close(2), one that has occurred without any explicit action from our side). But the other end can instead do a half close by calling shutdown(2). This makes TCP send only a FIN segment and keeps the socket descriptor, so the thread can still receive any data pending in transit or not yet sent. Only when it receives and acknowledges the other end's FIN segment will it signal you that no more data remains in transit.
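A small illustrative sketch of the write-after-peer-close case, assuming SIGPIPE is ignored so the failure shows up as an errno value rather than a fatal signal (the first write after the peer's close may still succeed; a later one fails):

#include <signal.h>
#include <errno.h>
#include <unistd.h>
#include <stdio.h>

void write_or_report(int sock_fd, const void *buf, size_t len)
{
    signal(SIGPIPE, SIG_IGN);               /* get EPIPE instead of being killed by SIGPIPE */
    if (write(sock_fd, buf, len) == -1 &&
        (errno == EPIPE || errno == ECONNRESET))
        fprintf(stderr, "peer closed the connection\n");
}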

Can we make a non-blocking server with blocking sockets?

I have to make a simple IRC client/server programs for my IT school. The subject asks us to use select(2) for socket polling but forbids us to use O_NONBLOCK sockets.
Your server will accept multiple simultaneous connections.
Attention, the use of fork is prohibited. So you should imperatively use select
Your server must not be blocking.
This has nothing to do with non-blocking sockets, which are prohibited (so do not use fcntl(s, O_NONBLOCK))
I’m wondering if it is even possible to design a non-blocking server (which does not fork) with blocking sockets even using select(2).
Here is a simple example: let's say we have a simple text protocol with one command per line. Each client has a buffer. When select(2) tells us a client is ready for read(2), we read until we find a \n in the client buffer, then we process the command. With non-blocking sockets, we would read until EAGAIN.
Let's imagine now that we are using blocking sockets and a malicious client sends text with no line break. select(2) tells us data is available, so we then read(2) from the client. But we will never read the expected \n. Instead of returning EAGAIN, the syscall will block indefinitely. This is a denial-of-service attack.
Is it really possible to design a non-blocking server with blocking sockets and select(2) (no fork(2))?
Yes, you read once from the socket that select tells you is ready. If the read contains the \n, then process that line. Otherwise, store any data that was received, and immediately go back to the select.
This means of course, that for every open socket, you must maintain state information, and a buffer of data read so far. This allows the code to process each read independently, without the need to finish a full line before going back to the select.
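A sketch of that state-per-client approach, with illustrative structures and a placeholder process_line() left as a comment: exactly one read() per readable descriptor, then back to select():

#include <sys/select.h>
#include <string.h>
#include <unistd.h>

#define MAX_CLIENTS 64
#define BUF_SIZE    4096

struct client {
    int    fd;                  /* -1 if slot unused */
    char   buf[BUF_SIZE];
    size_t len;                 /* bytes accumulated so far */
};

void poll_clients(struct client clients[MAX_CLIENTS])
{
    fd_set rfds;
    int maxfd = -1;

    FD_ZERO(&rfds);
    for (int i = 0; i < MAX_CLIENTS; i++) {
        if (clients[i].fd == -1)
            continue;
        FD_SET(clients[i].fd, &rfds);
        if (clients[i].fd > maxfd)
            maxfd = clients[i].fd;
    }
    if (maxfd == -1 || select(maxfd + 1, &rfds, NULL, NULL, NULL) <= 0)
        return;

    for (int i = 0; i < MAX_CLIENTS; i++) {
        struct client *c = &clients[i];
        if (c->fd == -1 || !FD_ISSET(c->fd, &rfds))
            continue;
        if (c->len == BUF_SIZE) {          /* line too long: drop misbehaving client */
            close(c->fd);
            c->fd = -1;
            continue;
        }
        ssize_t n = read(c->fd, c->buf + c->len, BUF_SIZE - c->len);  /* one read only */
        if (n <= 0) {                      /* EOF or error: drop the client */
            close(c->fd);
            c->fd = -1;
            continue;
        }
        c->len += (size_t)n;
        char *nl;
        while ((nl = memchr(c->buf, '\n', c->len)) != NULL) {
            /* process_line(c, c->buf, nl - c->buf);  -- placeholder */
            size_t consumed = (size_t)(nl - c->buf) + 1;
            memmove(c->buf, c->buf + consumed, c->len - consumed);
            c->len -= consumed;
        }
    }
}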
It's impossible.
select() blocks, and therefore so does any program that calls it.
The behaviour defined by POSIX for send() in blocking mode is that it blocks until all the data supplied has been transferred to the socket send buffer. Unless you're going to delve into low-water marks and so on, it is impossible to know in advance whether there is enough room in the socket send buffer for any given send() to complete without blocking, and therefore impossible for any program that calls send() not to block.
Note that select() doesn't help you with this. It can tell you when there is some room, but not when there is enough.

C Sockets: write() followed by close() results in incomplete data transfer

I'm attempting to write a rudimentary file server that takes a filename from a client and responds by sending the data over TCP to the client. I have a working client and server application for the most part, but I'm observing some odd behavior. Consider the following:
while ((num_read = read (file_fd, file_buffer, sizeof (file_buffer))) > 0)
{
    if (num_read != write (conn_fd, file_buffer, num_read))
    {
        perror ("write");
        goto out;
    }
}
out:
close (file_fd); close (conn_fd);
file_fd is a file descriptor to the file being sent over the network, conn_fd is a file descriptor to a connect()ed TCP socket.
This seems to work for small files, but when my files get larger (a megabyte or more), it seems that an inconsistent amount of data at the end of the file fails to transfer.
I suspected the immediate close() statements after write might have something to do with it so I tried a 1 second sleep() before both close() statements and my client successfully received all of the data.
Is there any better way to handle this than doing a sleep() on the server side?
A successful "write" on a socket does not mean the data has been successfully sent to the peer.
If you are on a Unix derivative, you can run "man 7 socket" and examine SO_LINGER as a potential solution.
edit: Due to EJP's comment (thank you), I reread what Stevens has to say in "Unix Network Programming" about ensuring delivery of all data to a peer. He says the following (in Volume 1 of the second edition, page 189):
... we see that when we close our end of the connection, depending on the function called (close or shutdown) and whether the SO_LINGER socket option is set, the return can occur at three different times.
close returns immediately, without waiting at all (the default; Figure 7.6)
close lingers until the ACK of our FIN is received (Figure 7.7), or
shutdown followed by a read waits until we receive the peer's FIN (Figure 7.8)
His figures and commentary indicate that, other than application-level acknowledgement, the combination of shutdown() followed by a read() waiting for a zero return code (i.e. notification that the socket has been closed) is the only way to ensure the client application has received the data.
If, however, it is only important that the data has been successfully delivered (and acknowledged) to the peer's computer, then SO_LINGER would be sufficient.
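A sketch of that shutdown()-then-read() pattern, assuming conn_fd is the connected socket:

#include <sys/socket.h>
#include <unistd.h>

void finish_sending(int conn_fd)
{
    char buf[256];
    ssize_t n;

    shutdown(conn_fd, SHUT_WR);                 /* we are done writing: send our FIN */
    while ((n = read(conn_fd, buf, sizeof buf)) > 0)
        ;                                       /* discard (or handle) any trailing data */
    /* n == 0: the peer closed its side, so it has received everything we sent */
    close(conn_fd);
}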

How to notify an abnormal client termination to server?

As the title already says, I'm looking for a way to get notified when a client closes its session abnormally.
I'm using the freeBSD OS.
The server is running with X threads (depending on the number of CPU cores). So I'm not forking, and there isn't a separate process for each client.
That's why periodically sending a probe packet to provoke a SIGPIPE isn't an option for me.
But I need to remove departed clients from the kqueue, because otherwise, after too many accept()s, my code will obviously run into memory trouble.
Is there a way I can check, without a large performance cost per client, whether they are still connected?
Or any event notification that would trigger when this happens? Or maybe there is a way of having a program send some signal to a port, even in the abnormal-termination case, before the client process exits?
Edit: that answer misses the question, because it's not about using kqueue. But if someone else finds the question by the title, it may be helpful anyway ...
I've often seen the following behaviour: if a client dies, and the server does a select() on the client's socket descriptor, select() returns with a return code > 0 and FD_ISSET(fd) is true for that descriptor. But when you then try to read from the socket, read() (or recv()) returns an error.
For a 'normal' connection, using that to detect a client's death works fine for us, but there seems to be different behaviour when the socket connection is tunneled; we haven't yet managed to figure that out completely.
According to the kqueue man page, kevent() should create an event when the socket has shut down. From the description of the filter EVFILT_READ:
EVFILT_READ
Takes a descriptor as the identifier, and returns whenever there is data available to read. The behavior of the filter is slightly different depending on the descriptor type.
Sockets
Sockets which have previously been passed to listen() return when there is an incoming connection pending. data contains the size of the listen backlog.
Other socket descriptors return when there is data to be read, subject to the SO_RCVLOWAT value of the socket buffer. This may be overridden with a per-filter low water mark at the time the filter is added by setting the NOTE_LOWAT flag in fflags, and specifying the new low water mark in data. On return, data contains the number of bytes of protocol data available to read.
If the read direction of the socket has shutdown, then the filter also sets EV_EOF in flags, and returns the socket error (if any) in fflags. It is possible for EOF to be returned (indicating the connection is gone) while there is still data pending in the socket buffer.
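A sketch of how that might look with kqueue on FreeBSD, with error handling trimmed; note that closing the descriptor also removes its events from the kqueue:

#include <sys/types.h>
#include <sys/event.h>
#include <sys/time.h>
#include <unistd.h>

void watch_client(int kq, int client_fd)
{
    struct kevent change;

    EV_SET(&change, client_fd, EVFILT_READ, EV_ADD, 0, 0, NULL);
    kevent(kq, &change, 1, NULL, 0, NULL);      /* register the read filter */
}

void handle_events(int kq)
{
    struct kevent ev;

    while (kevent(kq, NULL, 0, &ev, 1, NULL) > 0) {
        int fd = (int)ev.ident;
        if (ev.flags & EV_EOF) {     /* peer shut down: read any leftovers if needed, ... */
            close(fd);               /* ... then close, which drops its events from the kqueue */
            continue;
        }
        /* otherwise ev.data bytes of protocol data are available to read on fd */
    }
}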

What's the better way: 1 pipe and 1 socket, or 1 socket?

I have a server program which processes audio data and passes it through to the audio drivers.
The server program copies the audio data and puts the copy into a named FIFO in a second thread.
If there is no client reading on the other side of the FIFO, it does not matter, because it just blocks the FIFO thread.
Now I would like to add "control" functionality like "increase volume, play faster, etc.", so a client, if one is connected, can control the server program.
The important thing is: if the client disconnects (through close() or an abort), the server has to detect this, fall back into normal mode, and forget all the commands from the client.
I have never used sockets until now, so I'm not sure what's the best way:
use the FIFO from server->client as it is and add a socket just for client->server communication?
use one socket to stream server->client and give commands from client->server (in byte-format?)
I would use "AF_UNIX, SOCK_STREAM" for the socket. Is #2 the better variant? And how can I determine if the client disconnected without a close()?
I vote for option no. 2, and a possible way to implement it is:
1. create the socket [SOCK_STREAM, ...];
2. fork() [the child inherits the socket descriptor];
- parent: uses it to read;
- child: uses it to write;
You can detect a client disconnection when read() on the socket descriptor returns 0 bytes.
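A sketch of that check, assuming client_fd is the connected AF_UNIX stream socket carrying the control commands:

#include <unistd.h>

/* Returns 1 if a command byte was read into *cmd,
   0 if the client disconnected (or the read failed). */
int read_command(int client_fd, unsigned char *cmd)
{
    ssize_t n = read(client_fd, cmd, 1);

    if (n <= 0) {                /* 0: orderly close or client crash; -1: error */
        close(client_fd);
        return 0;                /* caller: fall back to normal mode, forget commands */
    }
    return 1;
}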

Resources