I am writing a TCP server application that interacts with terminals. It accepts incoming TCP connections directly and then handles I/O all within the process. One requirement is to be able to disable echo when necessary, nothing too out of the ordinary.
I was initially trying to call tcgetattr directly on the TCP socket fd, which failed, since tcgetattr and tcsetattr can only be called on TTY/PTY fds, not on sockets. So I created a pseudoterminal pair and inserted it in the middle; the master fd relays to the TCP socket fd.
Now I am able to disable canonical mode, but for some reason I can't disable echo.
I have the following architecture:
[Client] <--- TCP socket ---> [Server] socket fd <---> PTY master <---> PTY slave
This is all in a single process. I'm not doing forkpty type stuff like most PTY examples do. I'm simply launching another thread to handle the PTY master, which relays between the PTY master FD and the socket FD. The actual application thread reads and writes only using the slave FD.
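For reference, the PTY pair is set up with the usual posix_openpt() sequence, roughly like this (simplified sketch, error handling mostly trimmed):

#define _XOPEN_SOURCE 600
#include <fcntl.h>
#include <stdlib.h>
#include <unistd.h>

/* Create the master/slave pair: the relay thread gets *master_fd,
   the application thread reads and writes *slave_fd. */
static int open_pty_pair(int *master_fd, int *slave_fd)
{
    *master_fd = posix_openpt(O_RDWR | O_NOCTTY);
    if (*master_fd == -1)
        return -1;
    if (grantpt(*master_fd) == -1 || unlockpt(*master_fd) == -1)
        return -1;
    *slave_fd = open(ptsname(*master_fd), O_RDWR | O_NOCTTY);
    return (*slave_fd == -1) ? -1 : 0;
}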
All the examples I see of putting the PTY into "raw" mode (to disable echo, etc.) do so on STDIN_FILENO, which in my case would be where the socket fd is (for example, The Linux Programming Interface, ch. 62-4):
if (ttySetRaw(STDIN_FILENO, &userTermios) == -1)
    errExit("ttySetRaw");
These examples all assume the program is launched from a terminal, but in my case, since my program is the network login service, it creates a pseudoterminal from scratch and relays directly to the socket: there's no "STDIN" anywhere in the middle, and since this is a multithreaded program, using STDIN would make no sense anyway.
tcsetattr succeeds for term.c_lflag &= ~ECHO; on the slave FD, but clearly that isn't sufficient. Since terminal operations can't be done directly on socket FDs, how do I properly disable echo here? When I debug the data relayed in the PTY master thread, I don't see the input characters being relayed back to the socket, yet I still see them on the terminal. Calling tcsetattr on the master FD not only fails to help, it also has the effect of re-enabling input buffering, making the problem worse. These are the only fds I can touch, so I'm confused about what else to try.
I ended up finding that this was due to Telnet's local echo, not anything pseudoterminal-related after all.
It can't be disabled using termios; you have to send Telnet escape sequences to do so:
#include <arpa/telnet.h>

/* IAC WILL ECHO: tell the client that the server will do the echoing,
   which makes a conforming Telnet client turn off its local echo. */
unsigned char echo_ctl[] = {IAC, WILL, TELOPT_ECHO};
write(fd, echo_ctl, sizeof(echo_ctl));
Source: http://www.verycomputer.com/174_d636f401932e1db5_1.htm
Similarly, if using a tool like netcat to test, you'd want to do something like stty -echo && nc 127.0.0.1 23, since netcat doesn't interpret Telnet negotiation and the echo you see there is the local terminal driver's.
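One more note: a real Telnet client answers the WILL ECHO with its own three-byte reply (IAC DO ECHO or IAC DONT ECHO), which arrives on the socket and shouldn't be forwarded to the PTY. A rough sketch of skipping such option negotiations in the relay thread (it ignores sub-negotiations and sequences split across reads):

#include <arpa/telnet.h>
#include <stddef.h>

/* Remove simple 3-byte IAC option negotiations (e.g. the client's
   IAC DO ECHO reply) from a buffer read off the socket, in place.
   Returns the new length. */
static size_t strip_iac(unsigned char *buf, size_t len)
{
    size_t out = 0;
    for (size_t i = 0; i < len; ) {
        if (buf[i] == IAC && i + 2 < len &&
            (buf[i + 1] == WILL || buf[i + 1] == WONT ||
             buf[i + 1] == DO || buf[i + 1] == DONT)) {
            i += 3;                      /* drop IAC <verb> <option> */
        } else {
            buf[out++] = buf[i++];
        }
    }
    return out;
}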
Related
I am operating in a Fedora environment, using C code to open a USB port to talk to a serial device.
For the most part, my program works fine. However, as part of testing, I regularly open a terminal window, run the screen command, and manually send commands to the serial device. That works fine as well, but afterwards the port is no longer accessible to the C program. I've closed the screen instance either with ctrl-a k, OR with ctrl-a d followed by the appropriate sudo kill -9 <ID>. Afterwards there seems to be no evidence of the screen instance (sudo lsof /dev/tty* shows no "screen"), yet running my C program fails. As far as I can tell, the open(...) call just hangs. The only way to restore connectivity is to remove and reinsert the USB cable to the device.
So,
Is there a better way to close the "screen" instance than the two I've been using?
Why would "open" not return?
1) Is there a better way to close the "screen" instance than the two I've been using?
That should be irrelevant if your program is made to be independent of any prior configuration and performs its own full initialization.
2) Why would "open" not return?
"A printf after the open() never fires when the port is locked up in this mode"
An open() syscall for a serial terminal could block if the DCD (Data Carrier Detect) line from a modem is not asserted.
Your program can ignore the state of the DCD line during an open() by specifying the O_NONBLOCK option, e.g.:
fd = open("/dev/ttyUSB0", O_RDWR | O_NOCTTY | O_NONBLOCK);
However that option will also put the serial terminal in nonblocking mode, which will force your application to (inefficiently) poll the system for reading data instead of using the preferred event-driven capability.
Your program can revert to blocking mode by issuing an fcntl() call to clear the non-blocking option, e.g.:
fcntl(fd, F_SETFL, 0);
Note that the above actually clears all five modifiable file-status flags, i.e. O_APPEND, O_ASYNC, O_DIRECT, O_NOATIME, and O_NONBLOCK.
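If you would rather not disturb the other flags, you can read the current flags and clear only O_NONBLOCK, e.g. this sketch:

#include <fcntl.h>

/* Clear only O_NONBLOCK, leaving the other file-status flags untouched. */
static int set_blocking(int fd)
{
    int flags = fcntl(fd, F_GETFL);
    if (flags == -1)
        return -1;
    return fcntl(fd, F_SETFL, flags & ~O_NONBLOCK);
}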
The Linux kernel code that blocks the open() of the serial terminal from continuing is the while (1) loop in tty_port_block_til_ready().
Note that if the previous open of the serial terminal had set the CLOCAL termios flag, then no modem is presumed to be connected, and the check for DCD is abandoned.
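For completeness, CLOCAL is set on an already-open descriptor with termios, along these lines (sketch):

#include <termios.h>

/* Sketch: tell the driver to ignore modem control lines (DCD) so that a
   later blocking open() of this terminal will not wait for carrier. */
static int set_clocal(int fd)
{
    struct termios tio;
    if (tcgetattr(fd, &tio) == -1)
        return -1;
    tio.c_cflag |= CLOCAL;
    return tcsetattr(fd, TCSANOW, &tio);
}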
I want to use a named FIFO and I want to implement a timeout when I write to this FIFO.
fd = open(pipe, O_WRONLY);
write(fd, msg, len);
The program blocks in the open() call, so using select() will not work.
Thanks.
Use select() and its timeout argument.
Read pipe(7), fifo(7), and poll(2).
You might set up a timer or an alarm with a signal handler (see time(7) & signal(7)) before your call to open(2) - but I wouldn't do that - or you could use the O_NONBLOCK flag, since fifo(7) says:
A process can open a FIFO in nonblocking mode. In this case, opening for read-only will succeed even if no one has opened on the write side yet; opening for write-only will fail with ENXIO (no such device or address) unless the other end has already been opened.
However, you need something (some other process reading) on the other side of the FIFO or pipe.
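Putting that together, one crude way to get a write-side timeout is to retry the nonblocking open() until it stops failing with ENXIO, e.g. this sketch (the 50 ms retry interval and the function name are just placeholders):

#include <errno.h>
#include <fcntl.h>
#include <time.h>

/* Try to open the FIFO for writing without blocking, retrying until a
   reader shows up or roughly timeout_ms milliseconds have elapsed.
   Returns the fd, or -1 with errno set (ETIMEDOUT on timeout). */
static int open_fifo_write_timeout(const char *path, int timeout_ms)
{
    const struct timespec delay = { 0, 50 * 1000 * 1000 };   /* 50 ms */
    for (int waited = 0; waited <= timeout_ms; waited += 50) {
        int fd = open(path, O_WRONLY | O_NONBLOCK);
        if (fd != -1)
            return fd;          /* a reader has the other end open */
        if (errno != ENXIO)
            return -1;          /* real error, not "no reader yet" */
        nanosleep(&delay, NULL);
    }
    errno = ETIMEDOUT;
    return -1;
}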
Perhaps you should consider using unix(7) sockets, i.e. the AF_UNIX address family. It looks more relevant to your case: change your code above (which tries to open a FIFO for writing) to use an AF_UNIX socket on the client side (with a connect()), and change the other process to become an AF_UNIX socket server.
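The client side of that would then boil down to something like this sketch (the socket path is whatever you choose):

#include <string.h>
#include <sys/socket.h>
#include <sys/un.h>
#include <unistd.h>

/* Connect to a listening AF_UNIX stream socket at `path` instead of
   opening a FIFO for writing; the fd can then be written to as before. */
static int unix_connect(const char *path)
{
    int fd = socket(AF_UNIX, SOCK_STREAM, 0);
    if (fd == -1)
        return -1;

    struct sockaddr_un addr;
    memset(&addr, 0, sizeof(addr));
    addr.sun_family = AF_UNIX;
    strncpy(addr.sun_path, path, sizeof(addr.sun_path) - 1);

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) == -1) {
        close(fd);
        return -1;
    }
    return fd;
}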
As 5gon12eder commented, you might also look into inotify(7). Or even perhaps D-Bus!
I'm guessing that FIFOs or pipes are not the right solution in your situation. You should explain more and give a broader picture of your concerns and goals.
I am doing some network testing, connecting between Linux boxes with two small C programs that just use the function:
connect()
After connecting, some small calculations are made and recorded to a local file, and then I instruct one of the programs to close the connection and run a netcat listener on the same port. The first program then retries the connection and connects to netcat.
I wondered if someone could advise whether it is possible to maintain the initial connection while freeing the port, and pass that connection to netcat on the same port (so that the initial connection is never closed).
Each TCP connection is defined by the four-tuple (target IP address, target port, source IP address, source port), so there is no need to "free up" the port on either machine.
It is very common for a server process to fork() immediately after accept()ing a new connection. The parent process closes its copy of the connection descriptor (returned by accept()), and waits for a new connection. The child process closes the original socket descriptor, and executes the desired program or script that should handle the actual connection. In many cases the child moves the connection descriptor to standard input and standard output (using dup2()), so that the executed script or program does not even need to know it is connected to a remote client: everything it writes to standard output is sent to the remote client, and everything the remote client sends is readable from standard input.
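A stripped-down sketch of that pattern (/path/to/handler is just a placeholder, and SIGCHLD/zombie handling is omitted):

#include <sys/socket.h>
#include <unistd.h>

/* Classic accept()/fork() loop: the parent keeps listening, the child
   wires the connection to stdin/stdout and execs the real handler. */
static void serve_forever(int listen_fd)
{
    for (;;) {
        int conn_fd = accept(listen_fd, NULL, NULL);
        if (conn_fd == -1)
            continue;

        if (fork() == 0) {                  /* child: serve this client   */
            close(listen_fd);
            dup2(conn_fd, STDIN_FILENO);    /* remote data -> stdin       */
            dup2(conn_fd, STDOUT_FILENO);   /* stdout -> remote client    */
            close(conn_fd);
            execl("/path/to/handler", "handler", (char *)NULL);
            _exit(1);                       /* only reached if exec fails */
        }
        close(conn_fd);                     /* parent: drop its copy      */
    }
}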
If there is an existing process that should handle the connection, and there is a Unix domain socket connection (stream, datagram or seqpacket socket; makes no difference) between the two processes, it is possible to transfer the connection descriptor as an SCM_RIGHTS ancillary message. See man 2 sendmsg, man 2 recvmsg, man 3 cmsg, and man 7 unix for details. This only works on the same machine over a Unix domain socket, because the kernel actually duplicates the descriptor from one process to the other; really, the kernel does some funky magic to make this happen.
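The sending side of that descriptor passing looks roughly like this (sketch; the receiving process does the mirror image with recvmsg()):

#include <string.h>
#include <sys/socket.h>
#include <sys/uio.h>

/* Pass fd_to_send over an already connected AF_UNIX socket unix_fd
   as SCM_RIGHTS ancillary data; the receiver gets a duplicate fd. */
static int send_fd(int unix_fd, int fd_to_send)
{
    char byte = 'F';                        /* at least one byte of normal data */
    struct iovec iov = { .iov_base = &byte, .iov_len = 1 };

    union {                                 /* correctly aligned cmsg buffer */
        char buf[CMSG_SPACE(sizeof(int))];
        struct cmsghdr align;
    } u;
    memset(&u, 0, sizeof(u));

    struct msghdr msg = { 0 };
    msg.msg_iov = &iov;
    msg.msg_iovlen = 1;
    msg.msg_control = u.buf;
    msg.msg_controllen = sizeof(u.buf);

    struct cmsghdr *cmsg = CMSG_FIRSTHDR(&msg);
    cmsg->cmsg_level = SOL_SOCKET;
    cmsg->cmsg_type = SCM_RIGHTS;
    cmsg->cmsg_len = CMSG_LEN(sizeof(int));
    memcpy(CMSG_DATA(cmsg), &fd_to_send, sizeof(int));

    return (sendmsg(unix_fd, &msg, 0) == 1) ? 0 : -1;
}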
If your server-side logic is something like
For each incoming connection:
Do some calculations
Store calculations into a file
Store incoming data from the connection into a file (or standard output)
then I recommend using pthreads. Just create the desired number of threads, have all of them wait for an incoming connection by calling accept() on the listening socket, and have each thread handle the connection by themselves. You can even use stdio.h I/O for the file I/O. For more complex output -- multiple statements per chunk --, you'll need a pthread_mutex_t per output stream, and remember to fflush() it before releasing the mutex. I suspect a single multithreaded program that does all that, and exits nicely if interrupted (SIGINT aka CTRL+C), should not exceed three hundred lines of C.
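A sketch of such a worker thread (the shared listening socket and output stream are passed in; the actual calculations are elided):

#include <pthread.h>
#include <stdio.h>
#include <sys/socket.h>
#include <unistd.h>

/* Each worker thread blocks in accept() on the shared listening socket and
   handles its own client; multi-line output to the shared stream is
   serialized with a mutex and flushed before the mutex is released. */
struct worker_args {
    int listen_fd;
    FILE *out;
    pthread_mutex_t *out_lock;
};

static void *worker(void *arg)
{
    struct worker_args *w = arg;
    for (;;) {
        int conn_fd = accept(w->listen_fd, NULL, NULL);
        if (conn_fd == -1)
            continue;

        /* ... read from conn_fd, do the calculations ... */

        pthread_mutex_lock(w->out_lock);
        fprintf(w->out, "connection on fd %d handled\n", conn_fd);
        fflush(w->out);
        pthread_mutex_unlock(w->out_lock);

        close(conn_fd);
    }
    return NULL;
}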
If you only need to read data from a stream socket, or if you only need to write data to it, you can treat it as a file handle.
So in a POSIX program you can use dup2() to duplicate the socket descriptor onto descriptor 1, which is standard output, then close the original descriptor, and then use exec() to replace your program with "cat", which will copy everything from its standard input to file handle 1, i.e. your socket.
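A minimal sketch of that idea (sock_fd is assumed to be the connected stream socket):

#include <unistd.h>

/* After this, cat copies its standard input to standard output, and
   standard output is now the socket. */
static void exec_cat_to_socket(int sock_fd)
{
    dup2(sock_fd, STDOUT_FILENO);   /* stdout now refers to the socket */
    close(sock_fd);                 /* drop the original descriptor    */
    execlp("cat", "cat", (char *)NULL);
    _exit(1);                       /* only reached if exec fails      */
}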
I have a server program that processes audio data and passes it through to the audio drivers.
The server program copies the audio data and puts the copy into a named FIFO in a second thread.
If there is no client reading on the other side of the FIFO, that does not matter, because it just blocks the FIFO thread.
Now I would like to add "control" functionality (increase volume, play faster, etc.) so that a connected client, if there is one, can control the server program.
The important thing is: if the client disconnects at some point (through close() or an abort), the server has to detect this, fall back into normal mode, and forget all the commands from the client.
I have never used sockets until now, so I'm not sure what's the best way:
1. use the FIFO from server->client as it is and add a socket just for client->server communication?
2. use one socket to stream server->client and give commands from client->server (in byte format)?
I would use "AF_UNIX, SOCK_STREAM" for the socket. Is #2 the better variant? And how can I determine if the client disconnected without a close()?
I vote for option #2, and a possible way to implement it is:
1. Create the socket (AF_UNIX, SOCK_STREAM).
2. fork(); the child inherits the socket descriptor:
   - the parent uses it for reading;
   - the child uses it for writing.
You can detect a client disconnection when read() on the socket descriptor returns 0 bytes.
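A minimal sketch of that check (ctl_fd being the accepted control socket):

#include <unistd.h>

/* Returns 1 if a command of n > 0 bytes was read, 0 if the client
   disconnected (read() returned 0 bytes, i.e. EOF), -1 on error. */
static int read_command(int ctl_fd, char *buf, size_t len)
{
    ssize_t n = read(ctl_fd, buf, len);
    if (n > 0)
        return 1;
    if (n == 0)
        return 0;   /* client closed its end: fall back to normal mode */
    return -1;      /* error, check errno */
}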
I have a network daemon (poll()/accept()/fork() style) which is leaking socket file descriptors, one per client in the TIME_WAIT state.
As far as I can see, I am shutdown()ing and then close()ing the definitely-no-longer-needed sockets. Other sockets (for example the server socket on the client side of the fork) are just close()d. All sockets have SO_REUSEADDR set and SO_LINGER is off. I use _exit() to exit the program, and I use non-blocking, polling socket operations so that my signal handler only sets a "dying" flag -- this lets me pick up the flag later and do the free(), shutdown(), and close() work outside the handler, where it would otherwise be dangerous.
But there is still an fd leak -- what is the best way to debug this kind of problem? It would help to know which sockets are loitering at exit, as there are many fds involved in the process.
Cheers!
Sockets in TIME_WAIT mode are NOT leaking -- TIME_WAIT means that the application has finished with the socket and has closed it and cleaned it up, but the kernel is still remembering the socket so as to respond properly to late/orphan/duplicate packets that might be floating around in the network. After a little while, the kernel will automatically delete the TIME_WAIT sockets, but until then, they remain as a reminder to the kernel to not reuse the port unless an app specifically asks for it with SO_REUSEADDR.
I figured this out.
In fact I had already fixed the bug by closing cli_fd on the server side of the fork; however, I did not notice the bug was fixed because I was using netstat wrongly to count open fds.
For the record, the output of netstat -n | grep TIME_WAIT | wc -l should not be used to count file descriptors for sockets that are hanging around -- this is what I was doing wrong. Use lsof or fstat instead.
Anyway - the server is no longer running out of fds under considerable load.
Cheers