C: set up a pseudoterminal and open it with xterm

The following simplified piece of code is executed by a thread in the background. The thread runs until it is told to exit (by user input).
In the code below I have removed some error checking for better readability. Even with error checking the code works well, and both the master and the slave are created and/or opened.
...
int master, slave;
char *slavename;

master = posix_openpt(O_RDWR);   /* open the pseudoterminal master */
grantpt(master);                 /* fix ownership/permissions of the slave */
unlockpt(master);                /* allow the slave to be opened */
slavename = ptsname(master);     /* path of the slave device */
slave = open(slavename, O_RDWR);

printf("master: %d\n", master);
printf("slavename: %s\n", slavename);
On my machine the output is the following:
master: 3
slavename: /dev/pts/4
So I thought that opening an xterm with the command xterm -S4/3 (4 = pt-slave, 3 = pt-master) while my program is running should open a new xterm window for the created pseudoterminal. But xterm just starts running without giving an error or any further information, and it does not open a window at all. Any suggestions?
EDIT:
Now, with Wumpus Q. Wumbley's help, xterm starts normally, but I can't redirect any output to it. I tried:
dup2(slave, 1);
dup2(slave, 2);
printf("Some test message\n");
and opening the slave with fopen and then using fprintf. Neither worked.

The xterm process needs to get access to the file descriptor somehow. The intended usage of this feature is probably to launch xterm as a child process of the one that created the pty. There are other ways, though. You could use SCM_RIGHTS file descriptor passing (pretty complicated) or, if you have a Linux-style /proc filesystem, try this:
xterm -S4/3 3<>/proc/$PID_OF_YOUR_OTHER_PROGRAM/fd/3
You've probably seen shell redirection operators before: < for stdin, > for stdout, 2> for stderr (file descriptor 2). Maybe you've also seen other file descriptors being opened for input or output with things like 3<inputfile 4>outputfile. Well, the 3<> operator here is another one: it opens file descriptor 3 in read/write mode. And /proc/PID/fd/NUM is a convenient way to access files opened by another process.
I don't know about the rest of the question. I haven't tried to use this mode of xterm before.
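For reference, the SCM_RIGHTS approach mentioned above looks roughly like this. This is a sketch only: the names send_fd/recv_fd are mine, error handling is minimal, and setting up the AF_UNIX socketpair between the two processes is left out.

#include <string.h>
#include <sys/socket.h>
#include <sys/uio.h>

/* Send a file descriptor over a connected AF_UNIX socket. */
static int send_fd(int sock, int fd)
{
    struct msghdr msg = {0};
    struct iovec iov = { .iov_base = "F", .iov_len = 1 };  /* dummy payload byte */
    char cbuf[CMSG_SPACE(sizeof(int))];
    memset(cbuf, 0, sizeof cbuf);
    msg.msg_iov = &iov;
    msg.msg_iovlen = 1;
    msg.msg_control = cbuf;
    msg.msg_controllen = sizeof cbuf;
    struct cmsghdr *cm = CMSG_FIRSTHDR(&msg);
    cm->cmsg_level = SOL_SOCKET;
    cm->cmsg_type = SCM_RIGHTS;        /* this is what transfers the descriptor */
    cm->cmsg_len = CMSG_LEN(sizeof(int));
    memcpy(CMSG_DATA(cm), &fd, sizeof(int));
    return sendmsg(sock, &msg, 0) < 0 ? -1 : 0;
}

/* Receive a file descriptor; returns -1 on failure. */
static int recv_fd(int sock)
{
    struct msghdr msg = {0};
    char b;
    struct iovec iov = { .iov_base = &b, .iov_len = 1 };
    char cbuf[CMSG_SPACE(sizeof(int))];
    int fd = -1;
    msg.msg_iov = &iov;
    msg.msg_iovlen = 1;
    msg.msg_control = cbuf;
    msg.msg_controllen = sizeof cbuf;
    if (recvmsg(sock, &msg, 0) < 0)
        return -1;
    struct cmsghdr *cm = CMSG_FIRSTHDR(&msg);
    if (cm && cm->cmsg_level == SOL_SOCKET && cm->cmsg_type == SCM_RIGHTS)
        memcpy(&fd, CMSG_DATA(cm), sizeof(int));
    return fd;
}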
OK, the trick with /proc was a bad idea. It's equivalent to a fresh open of /dev/ptmx, creating a new unrelated pty.
You're going to have to make the xterm a child of your pty-creating program.
Here's the test program I used to explore the feature. It's sloppy, but it revealed some interesting things. One is that xterm writes its window ID to the pty master after successful initialization. This is something you'll need to deal with: it appears as a line of input on the tty before the actual user input begins.
Another interesting thing is that xterm (the version in Debian at least) crashes if you use -S/dev/pts/2/3 in spite of that being specifically mentioned in the man page as an allowed format.
#include <stdio.h>
#include <string.h>
#include <stdlib.h>
#include <unistd.h>
#include <fcntl.h>

int main(void)
{
    int master;
    char *slavename, window[64], buf[64];
    FILE *slave;

    master = posix_openpt(O_RDWR);
    grantpt(master);
    unlockpt(master);
    slavename = ptsname(master);
    printf("master: %d\n", master);
    printf("slavename: %s\n", slavename);

    /* build "-SXX/FD" from the slave name's last component and the master fd */
    snprintf(buf, sizeof buf, "-S%s/%d", strrchr(slavename, '/') + 1, master);
    if (!fork()) {
        execlp("xterm", "xterm", buf, (char *)0);
        _exit(1);
    }

    slave = fopen(slavename, "r+");
    fgets(window, sizeof window, slave);   /* first line: xterm's window ID */
    printf("window: %s\n", window);
    fputs("say something: ", slave);
    fgets(buf, sizeof buf, slave);
    fprintf(slave, "you said %s\nexiting in 3 seconds...\n", buf);
    sleep(3);
    return 0;
}


Prevent read() system call from returning 0 when run as a background process

I have a piece of software that is able to read commands from stdin, for debug purposes, in a separate thread. When my software runs as a foreground process, read behaves as expected: it blocks and waits for input by the user, i.e. the thread sleeps.
When the software is run as a background process, read constantly returns 0 (possible EOF detected?).
The problem here is that this specific read is in a while(true) loop. It runs as fast as it can and steals precious CPU load on my embedded device.
I tried redirecting /dev/null to the process, but the behavior was the same. I am running my custom Linux on an ARM Cortex-A5 board.
The problematic piece of code follows and is run inside its own thread:
char bufferUserInput[256];
const int sizeOfBuffer = SIZE_OF_ARRAY(bufferUserInput);
while (1)
{
    int n = read(0, bufferUserInput, sizeOfBuffer); // filedes = 0 equals reading from stdin
    printf("n is: %d\n", n);
    printf("Errno: %s", strerror(errno));
    if (n == 1)
    {
        continue;
    }
    if ((1 < n)
        && (n < sizeOfBuffer)
        && ('\n' == bufferUserInput[n - 1]))
    {
        printf("\r\n");
        bufferUserInput[n - 1] = '\0';
        ProcessUserInput(&bufferUserInput[0]);
    }
    else
    {
        n = 0;
    }
}
I am looking for a way to prevent read from returning constantly when the program runs in the background, and to have it wait for user input (which of course will never come).
If you start your program in the background (as ./program &) from a shell script, its stdin will be redirected from /dev/null (with some exceptions).
Trying to read from /dev/null will always return 0 (EOF).
Example (on Linux):
sh -c 'ls -l /proc/self/fd/0 & wait'
... -> /dev/null
sh -c 'dd & wait'
... -> 0 bytes copied, etc
The fix from the link above should also work for you:
#! /bin/sh
...
exec 3<&0
./your_program <&3 &
...
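(What the script does: exec 3<&0 duplicates the script's stdin, normally the terminal, onto file descriptor 3; the program is then started with its stdin redirected from that descriptor, so it stays connected to the terminal even though it runs in the background.)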
When stdin is not a terminal, read is returning with 0 because you are at the end of the file. read only blocks after reading all available input when there could be more input in the future, which is considered to be possible for terminals, pipes, sockets, etc. but not for regular files nor for /dev/null. (Yes, another process could make a regular file bigger, but that possibility isn't considered in the specification for read.)
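A minimal program demonstrating this behavior (a sketch):

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    char buf[16];
    int fd = open("/dev/null", O_RDONLY);
    /* no blocking here: /dev/null is permanently at end of file */
    ssize_t n = read(fd, buf, sizeof buf);
    printf("read returned %zd\n", n);   /* prints: read returned 0 */
    close(fd);
    return 0;
}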
Ignoring the various problems with your read loop that other people have pointed out (which you should fix anyway, as that will make reading debug commands from the user more reliable), the simplest change to your code that will fix the problem you're having right now is: check on startup whether stdin is a terminal, and don't launch the debug thread if it isn't. You do that with the isatty function, declared in unistd.h.
#include <stdio.h>
#include <unistd.h>

// ...

int main(void)
{
    if (isatty(fileno(stdin)))
        start_debug_thread();
    // ...
}
(Depending on your usage context, it might also make sense to run the debug thread when stdin is a pipe or a socket, but I would personally not bother, I would rely on ssh to provide a remote (pseudo-)terminal when necessary.)
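If you did want to cover the pipe and socket cases too, a sketch using fstat could look like this (start_debug_thread is the same hypothetical function as above):

#include <stdio.h>
#include <sys/stat.h>
#include <unistd.h>

void start_debug_thread(void);   /* hypothetical, as above */

int main(void)
{
    struct stat st;
    /* accept a terminal, a pipe, or a socket on stdin */
    if (isatty(STDIN_FILENO) ||
        (fstat(STDIN_FILENO, &st) == 0 &&
         (S_ISFIFO(st.st_mode) || S_ISSOCK(st.st_mode))))
        start_debug_thread();
    /* ... */
}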
read() doesn't return 0 when reading from the terminal in a backgrounded process.
Instead, it either continues to block while causing a SIGTTIN to be sent to the process (a handler may interrupt the blocking read and make it return -1 with errno set to EINTR), or it returns -1 with errno set to EIO if SIGTTIN is ignored.
The snippet below demonstrates this:
#include <unistd.h>
#include <stdio.h>
#include <signal.h>

int main()
{
    char c[256];
    ssize_t nr;
    signal(SIGTTIN, SIG_IGN);   /* ignore SIGTTIN so the read fails with EIO */
    nr = read(0, c, sizeof c);
    printf("%zd\n", nr);
    if (0 > nr) perror(0);
    fflush(stdout);
}
The code snippet you've shown can't reveal 0 returns anyway, since you never test the return value for zero.

Perl, how do I create a pipe to my exec'd child?

I am trying to pass data from my Perl script to my C program using a pipe (unidirectional).
I need to find a way to do this without touching the child program's STDIN or STDOUT, so I try creating a new handle and passing the fd.
I create two IO::Handles and create a pipe. I write to one end of the pipe and attempt to pass the file descriptor of the other end to the child program being exec'd. I pass the file descriptor by setting an environment variable. Why does this not work? (It does not print 'hello world'.) As far as I know, file descriptors and pipes are inherited by the child across exec.
Perl script:
#!/opt/local/bin/perl
use IO::Pipe;
use IO::Handle;
my $reader = IO::Handle->new();
my $writer = IO::Handle->new();
$reader->autoflush(1);
$writer->autoflush(1);
my $pipe = IO::Pipe->new($reader, $writer);
print $writer "hello world";
my $fh = $reader->fileno;
$ENV{'MY_FD'} = $fh;
exec('./child') or print "error opening app\n";
# No more code after this since exec replaces the current process
C Program, app.c (Compiled with gcc app.c -o child):
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

int main(int argc, char **argv) {
    int fd = atoi(getenv("MY_FD"));
    char buf[12];
    read(fd, buf, 11);
    buf[11] = '\0';
    printf("fd: %d\n", fd);
    printf("message: %s\n", buf);
}
Output:
fd: 3
message:
The message is never passed through the pipe to the C program. Any suggestions?
Your pipe file descriptors are set FD_CLOEXEC, and so are closed upon exec().
Perl's $^F controls this behavior. Try something like this, before you call IO::Pipe->new:
$^F = 10; # Assumes we don't already have a zillion FDs open
Alternatively, you can clear the FD_CLOEXEC flag yourself with Fcntl after creating the pipe.
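(Per the perlvar documentation, $^F is the maximum "system" file descriptor, ordinarily 2; descriptors above it get their close-on-exec flag set when Perl opens them, which is why raising $^F before creating the pipe lets the descriptors survive the exec.)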
I found the solution. Some people said that it was not possible with exec, that the child would not see the pipes or file descriptors, but that is not correct.
It turns out that Perl closes/invalidates all fds > 2 automatically unless you say otherwise.
Adding the following flags to the fd fixes the problem (where READ is the pipe handle here, NOT STDIN):
my $flags = fcntl(READ, F_GETFD, 0);
fcntl(READ, F_SETFD, $flags & ~FD_CLOEXEC);
Your program is failing because exec calls another program and never returns. It isn't designed for communication with another process at all.
You probably wrote the above code based on the IO::Pipe documentation, which says "ARGS are passed to exec". That isn't what it means, though. IO::Pipe is for communication between two processes within your Perl script, which are created by fork. They mean the execution of the new process, rather than a call to exec in your own code.
Edit: for one-directional communication, all you need is open with a pipe:
open my $prog, '|-', './child' or die "can't run program: $!";
print {$prog} "Hello, world!";
Rodrigo, I can tell you that your file descriptor is no longer valid when you exec into the C app.
Please be aware that I only say it is INVALID; the variable still exists in the environment, and MY_FD=3 will keep existing until the whole process ends.
You can check the fd with fcntl. The code is listed below:
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <fcntl.h>

int main(int argc, char **argv) {
    int fd = atoi(getenv("MY_FD"));
    char buf[12] = {0};   /* zero-filled so strlen is safe if read returns nothing */
    read(fd, buf, 11);
    buf[11] = '\0';
    printf("fd: %d, if fd still valid: %d\n", fd, fcntl(fd, F_GETFD));
    printf("strlen %d\n", (int)strlen(buf));
    printf("message: %s\n", buf);
}
You can see that MY_FD=3 will always be in the environment, since the process doesn't destroy it, so you still get fd 3. But that file descriptor is invalid after the exec, so the result of fcntl(fd, F_GETFD) will be -1, and the length you read from fd will be 0.
That's why you will never see the "hello world" sentence.
One more thing: @dan1111 is right, but you don't need to open a new pipe, as you have already done so.
All you need to do is set MY_FD=0, like
$ENV{'MY_FD'} = 0;
STDIN/STDOUT are kept open across the exec, so the pipe is not broken when your Perl app execs into the C app. That's why you can read whatever you type into the app.
If you need to write through another file handle, make sure that handle, like STDIN, stays open across the exec.

Is there an API (like dup) to duplicate fstream so it can

I want to write a stream into one FILE *fp, and at the same time the stream should be copied onto another fp. Is there a better way to write my debug function, eliminating one fprintf?
#include <stdio.h>

const int logflag = 1;
#define debug(args ...) if (logflag) { FILE *flog = fopen("test.log", "a+"); fprintf(flog, args); fclose(flog); } fprintf(stderr, args);

int main()
{
    debug("test");  // writes test into both stderr and flog
    debug("test2");
}
The short answer is no: they are two different file pointers, and you can only write to one at a time. dup doesn't actually help you here either, because dup2 closes the descriptor it duplicates onto:
"dup2() makes newfd be the copy of oldfd, closing newfd first if necessary"
from the dup2 man page.
However, if your goal is to log both to the screen and to a file, you are better served by the tools Linux already provides. A generally good practice (I don't remember the source for this) is to have a program print its output and debugging to stdout/stderr and let the calling user decide how to handle the output.
Following this, if all of your output goes to stderr, you can do the following when executing the program:
$ ./program 2>&1 | tee file.log
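If you would rather keep everything inside the program, one alternative to the macro is a small variadic helper that writes the same message to both streams. A sketch (the name debug_log is mine, and the log stream is opened once rather than per call):

#include <stdarg.h>
#include <stdio.h>

/* Write one formatted message to stderr and to an already-opened log stream. */
static void debug_log(FILE *flog, const char *fmt, ...)
{
    va_list ap, ap2;
    va_start(ap, fmt);
    va_copy(ap2, ap);            /* each vfprintf consumes its own va_list */
    vfprintf(stderr, fmt, ap);
    vfprintf(flog, fmt, ap2);
    va_end(ap2);
    va_end(ap);
}

int main(void)
{
    FILE *flog = fopen("test.log", "a+");
    if (!flog) return 1;
    debug_log(flog, "test %d\n", 42);   /* appears on stderr and in test.log */
    fclose(flog);
    return 0;
}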

Named pipe written content life

I have created and written to a named pipe in C under Linux. For how long is the text that is written there kept in the named pipe?
From what I have done, and from the size of the pipe file after my program has run, I suppose that the text is not preserved in the pipe after the program ends. The mkfifo manual has no information about this. I know that ordinary pipes are destroyed after the process that created them exits. But what about named pipes, which are still in your file system after the program has finished?
This is the code I use to create a named pipe and to write/read from it.
#include <stdio.h>
#include <unistd.h>
#include <sys/types.h>
#include <string.h>
#include <sys/stat.h>
#include <fcntl.h>

int main(int argc, char *argv[]) {
    int FIFOFileDescriptorID;
    FIFOFileDescriptorID = mkfifo(argv[1], 0660);   /* note: mkfifo returns a status, not an fd */

    int ProccesID = fork();
    if (ProccesID == 0) {
        /* child: write one buffer into the FIFO */
        int TempFileDescriptor = 0;
        char buffer[512] = "Some random text goes here...";
        TempFileDescriptor = open(argv[1], O_WRONLY);
        write(TempFileDescriptor, buffer, sizeof(buffer));
        close(TempFileDescriptor);
    } else {
        /* parent: read it back */
        int TempFileDescriptor = 0;
        char buffer[512];
        TempFileDescriptor = open(argv[1], O_RDONLY);
        read(TempFileDescriptor, buffer, sizeof(buffer));
        close(TempFileDescriptor);
        printf("Received string: %s\n", buffer);
    }
    return 0;
}
After I had run this program (which creates the pipe and does the write/read), I ran another one just to read the text from the given pipe. Indeed, there was no text there.
I will examine this more closely, because there is a good chance the program deletes/recreates the pipe when it starts.
It'll not save anything. When you read/write something to the named pipe, the process will be blocked unless some other process writes/reads from the same named pipe.
The file stays in the file system, but the content goes away when reading/writing finishes.
From the Linux manual:
Once you have created a FIFO special file in this way, any process
can open it for reading or writing, in the same way as an ordinary file.
However, it has to be open at both ends simultaneously before you can
proceed to do any input or output operations on it. Opening a FIFO for
reading normally blocks until some other process opens the same FIFO for
writing, and vice versa.
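You can observe that nothing is stored by opening the FIFO again after the program has finished. A sketch (opening non-blocking so the open doesn't wait for a writer):

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(int argc, char *argv[])
{
    if (argc < 2) { fprintf(stderr, "usage: %s <fifo>\n", argv[0]); return 1; }
    /* O_NONBLOCK: a read-only open succeeds immediately even with no writer */
    int fd = open(argv[1], O_RDONLY | O_NONBLOCK);
    if (fd < 0) { perror("open"); return 1; }
    char buf[64];
    ssize_t n = read(fd, buf, sizeof buf);
    printf("read returned %zd\n", n);   /* 0: EOF, nothing was kept in the FIFO */
    close(fd);
    return 0;
}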
Here is some code I wrote up to test named pipes. I made sure to handle all errors and to clean up on SIGPIPE.
Look at Wikipedia: http://en.wikipedia.org/wiki/Named_pipe - named pipes persist beyond the lifetime of the process that created or used them, until they are explicitly deleted.

Open/close strategy for /proc pseudo-file

I have written a C utility for Linux that checks the contents of /proc/net/dev once every second. I open the file using fopen("/proc/net/dev", "r") and close it with fclose() when I'm done.
Since I'm using a 'pseudo' file rather than a real one, does it matter whether I open/close the file on each read, or should I open it once when my app starts and keep it open the whole time? The utility is launched as a daemon process and so may run for a long time.
It shouldn't matter, no. However, there might be issues with caching/buffering, which would mean it's actually safest to do what you're doing and re-open the file every time. Since you do it so seldom, there's no performance to be gained by keeping it open, so I would recommend sticking with your current solution.
What you want is unbuffered reading. Assuming you can't just switch to read() calls, open the device, and then set the stream to unbuffered mode. This has the additional advantage that there is no need to close the stream when you're done. Just rewind it, and start reading again.
FILE *f = fopen("/proc/net/dev", "r");
setvbuf(f, NULL, _IONBF, 0);
while (running)
{
    rewind(f);
    /* ...do your reading... */
}
The pseudo-files in "/proc" are dangerous for daemons: if the kernel decides to drop them, they just vanish, leaving you with an invalid FILE * struct. That means your strategy is the only correct one for a file in "/proc" (although no one expects "/proc/net/dev" to be removed by the kernel at runtime).
In general (especially for files in "/proc/[PID]") one should open files in "/proc" right before an operation and close them as soon as possible after the operation is done.
See this example code. It forks and reads the "/proc/[PID]/status" file of the child, once before the child has exited and once during the cleanup of the child.
#include <unistd.h>
#include <time.h>
#include <stdio.h>
#include <sys/types.h>
#include <sys/wait.h>

int main(int argc, char **argv) {
    pid_t child = fork();
    if (child == 0) {
        sleep(1);
    } else {
        char path[256], buffer[256];
        int status, read_length;
        sprintf(path, "/proc/%i/status", child);

        /* do a read while the child is alive */
        FILE *fd = fopen(path, "r");
        if (fd != 0) {
            read_length = fread(buffer, 1, 255, fd);
            printf("Read: %i\n", read_length);
            fclose(fd);
        }

        /* repeat it while the child is cleaned up */
        fd = fopen(path, "r");
        wait(&status);
        if (fd != 0) {
            read_length = fread(buffer, 128, 1, fd);
            printf("Read: %i\n", read_length);
            fclose(fd);
        }
    }
}
The result is as follows
f5:~/tmp # ./a.out
Read: 255
Read: 0
So you see, you can easily get an unexpected result from files in "/proc" if they are deleted by the kernel during your program's runtime.
