How to make a child process use another terminal's input and output? - C

I have googled a lot but haven't found any real solution that satisfies my needs.
I need the forked child process to use the stdin and stdout of another terminal rather than the one that launched it. Here is an example of what I want to do:
#include <stdio.h>
#include <unistd.h>
#include <signal.h>

int main()
{
    pid_t pid;
    printf("process parent pid is %d\n", getpid());
    pid = fork();
    printf("process child pid is %d\n", pid);
    if (pid == 0)
    {
        //int exit_status = system("gnome-terminal");
        char a[20];
        while (1)
        {
            scanf(" %s", a);
            printf("child %s \n", a);
        }
    }
    while (1)
    {
        char b[20];
        scanf(" %s", b);
        printf("parent %s \n", b);
    }
}
For example, I need the child to interact with the user through another terminal.

As I understand your question, you wish to fork a child process that does all its interaction with the user through a virtual terminal, namely gnome-terminal.
Then, first of all, read this article about the difference between terminals and shells. Then read this article about Unix Pseudo Terminals. Then read this one about the /dev/pts filesystem. This will give you the background for my answer.
Here is one way to create a child process that connects to a different terminal, a virtual terminal. I admit, I've not done this in a long time, and then it was with a physical TTY, not a pseudo-terminal. But the background information should help you get past any hangups you may run into.
The overall approach is to fork two child processes: one for the virtual terminal and one for the worker process. You will then exec the virtual terminal process in such a way that it does not shut down when the shell exits. You really don't even want it to launch a shell, or any program for that matter, because you are going to supply the running process that will interact with it. With gnome-terminal, it is not very easy to tell it to hang around after the process exits; you should read this to get suggestions on how to keep it around. An alternative would be to use xterm, which has the "-hold" option, which appears suited to that purpose. xterm also has the "-S" option, which sounds like exactly what you need. To use the "-S" option, you will need to read about PTS.
Since XTERM has the options you need, I will describe the approach based upon XTERM instead of gnome-terminal.
In your child program you will need to open /dev/ptmx to get a master pseudo-terminal file descriptor. You then call ptsname() on the FD to get the name of the PTS. You need the name to tell XTERM which slave PTS to use. You have to grant access and unlock the slave by calling grantpt() and unlockpt() on the master FD. Next, fork another process and exec() XTERM with the -S option, which takes the PTS name and file descriptor number in the form "-S/dev/pts/123/42" or equivalently "-S123/42". In this case, I don't think you need "-hold", but add it if it turns out that you do. (Refer to the XTERM man page for more information on using -S.)
This establishes the terminal as the user I/O device on your child process's master pseudo-terminal file descriptor. So, next you would dup() the file descriptor onto fd 0 and fd 1 (and fd 2 if you want stderr to go there).
I am sorry this is so general. The approach should work, but you may have to tweak it for your specific flavor of Linux/Unix. Please let me know how you do, and if I get a chance, I will get a Linux up and try it out myself.
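To make that concrete, here is a rough, untested sketch of the xterm -S route. The exact -S syntax varies between xterm versions, and xterm may first write a line containing its X window id to the pty, which you would need to read and discard, so treat this as a starting point rather than a finished program:

#define _XOPEN_SOURCE 600
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
    /* Open the pseudo-terminal master and prepare the slave side. */
    int mfd = posix_openpt(O_RDWR);           /* roughly: open("/dev/ptmx", O_RDWR) */
    if (mfd < 0 || grantpt(mfd) < 0 || unlockpt(mfd) < 0) {
        perror("pty setup");
        return 1;
    }

    const char *slave = ptsname(mfd);          /* e.g. "/dev/pts/3" */

    /* Build xterm's -S argument as "<pts-number>/<master-fd>";
       check your xterm man page, older versions expect a different form. */
    char sopt[32];
    snprintf(sopt, sizeof sopt, "-S%s/%d", strrchr(slave, '/') + 1, mfd);

    if (fork() == 0) {
        execlp("xterm", "xterm", sopt, (char *)NULL);
        perror("execlp xterm");                /* only reached if exec fails */
        _exit(127);
    }

    /* Make the master fd this process's stdin/stdout (and stderr if you like). */
    dup2(mfd, 0);
    dup2(mfd, 1);
    dup2(mfd, 2);

    /* Everything below now talks to the xterm window. */
    printf("hello from the other terminal\n");
    char buf[128];
    if (fgets(buf, sizeof buf, stdin) != NULL)
        printf("you typed: %s", buf);
    return 0;
}

In the question's setting, the pty setup and the dup2() calls would go in the forked child rather than in main().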

stdin is file descriptor 0. To attach it to another file or stream (or device), your child process can first close file descriptor 0 and then open the other file (or device); open() will return fd 0 (if successful) because it is now the lowest available descriptor. Alternatively, dup2() will close and replace fd 0 for you in a single call.
You will need permission to open the device.
For example, let's say your input device of interest is /dev/tty1...
if (childpid == 0)
{
    int fd = open("/dev/tty1", O_RDONLY);
    if (fd >= 0)
    {
        /* Close stdin and duplicate the fd just opened onto it */
        dup2(fd, 0);
        // etc...
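A slightly fuller, hedged sketch of the same idea, wrapped in a helper that redirects both stdin and stdout (the device name in the usage comment is only an example, and you need permission to open it):

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

/* In the child: point stdin and stdout at another terminal device. */
static int attach_to_terminal(const char *dev)
{
    int fd = open(dev, O_RDWR);
    if (fd < 0) {
        perror(dev);
        return -1;
    }
    dup2(fd, 0);   /* stdin now reads from dev */
    dup2(fd, 1);   /* stdout now writes to dev */
    close(fd);     /* the duplicates keep the device open */
    return 0;
}

/* Usage in the forked child, e.g.: attach_to_terminal("/dev/tty1"); */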

Related

C - Redirecting IO of Child Process

I am trying to redirect the IO of a child process (after fork()) into a file, and I can't figure out why it isn't working.
Here's what I've done:
if (fork() == 0) {
    execv(exe, (char*[]){ exe, "> temp.exe" });
    ...
And the executable runs, but it doesn't redirect to the file. I would appreciate it if anyone could explain what I am doing wrong and how I should do it. I have a feeling I need to redirect before the execv(), but I have no idea how.
Thanks in advance!
Shell redirections (like > file) are implemented by the shell. By using execv(), you are bypassing the shell; the child process will see "> temp.exe" in argv and will attempt to process it as an ordinary argument.
If you want to redirect output to a file, the easiest approach will be to implement that redirection yourself by opening the file after forking and using dup2() to move its file descriptor to standard output:
if (fork() == 0) {
    int fd = open("temp.exe", O_CREAT | O_WRONLY, 0666);
    if (fd < 0) { /* handle error... */ exit(255); }
    dup2(fd, 1);
    close(fd);
    execv(exe, ...);
}
The execX() family of calls does not have the same flexibility as, say, system() or popen(). These latter functions invoke the shell to interpret the command line.
The arguments to an execX() call are the exact path of the program you want to run and the arguments you want to give to that program. Any "shell" features such as redirection you have to implement yourself before calling execX().
Alternatively, you can let the shell do the work, e.g. execlp("sh", "sh", "-c", "myexe > test.txt", (char *)NULL);, but that is lazy, and then why not just use system() anyway?
Two very useful functions are pipe() and dup2(): pipe() lets you create pipes to your host program; dup2() lets you set up a scenario where the program being executed thinks it is writing to stdout (1) or reading from stdin (0), but is actually writing to or reading from a file or pipe that you created.
You will get a long way by reading the man pages for pipe and dup2, or in google looking for exec pipe and dup2, so I won't take your enjoyment away by writing a full implementation here.

Closing child file descriptors after killing the parent?

How can I close my child file descriptors when killing the parent process?
I've created a program that does the following:
Fork 2 child processes.
Process 1 is a reader. It reads from STDIN_FILENO and writes to STDOUT_FILENO with scanf/printf. But I use dup2 to redirect to a named pipe (let's call it npipe).
Process 2 is a writer. I redirect STDIN_FILENO to npipe so that I can use scanf from that pipe. I redirect STDOUT_FILENO to another file called "output_file" so that I can use printf to write.
Reading / Writing is done in a while(1) loop;
while (1) {
    scanf("%s", word);
    printf("%s", word);
}
If I use CTRL+C (killing the parent with a signal), any data that was written to "output_file" is lost. Nothing is in there. I think it's because I kill the processes while the files are open?
I used a word that stops both child processes when read; in this case, I do have everything in my "output_file".
How can I close the files when using SIGTERM? / Can I somehow force my process to write into "output_file" so that no data is lost when suddenly closing?
I tried closing and reopening the file after each write, but I am still losing all the written data. Is this because of the redirect?
void read_from_pipe(void)
{
    char command[100];
    int new_fd = open(pipe_name, O_RDONLY, 0644);
    int output_file = open("new_file", O_CREAT | O_WRONLY | O_APPEND, 0644);
    dup2(new_fd, 0);
    dup2(output_file, 1);
    while (1)
    {
        scanf("%s", command);
        if (strcmp(command, "stop") == 0)
        {
            close(output_file);
            close(new_fd);
            exit(0);
        }
        else
            printf("%s ", command);
        close(output_file);
        output_file = open("new_file", O_CREAT | O_WRONLY | O_APPEND, 0644);
    }
}
Managed to attach correct signal handlers, yet data is still not written even though I close my files! Why is that?
Your problem is (likely) not to do with closing the file descriptors; it's with flushing the FILE pointers. Note that FILE pointers and file descriptors are two different things.
You call printf, which does NOT write to STDOUT_FILENO (a file descriptor), it writes to stdout (a FILE pointer), which generally just buffers the data until it has a reason to write it to the underlying file descriptor (STDOUT_FILENO).
So you need to ensure that the data you write to stdout is actually written to STDOUT_FILENO. The easiest way to do that is to add an fflush(stdout); after the printf call to flush it manually, giving you fine-grained control over when things are written. Alternatively, you can call setbuf(stdout, 0); to make stdout unbuffered, so every write is flushed immediately.
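For instance, here is a minimal sketch of the read/write loop from the question with the flush added (either the fflush() call or the commented-out setbuf() line on its own is enough):

#include <stdio.h>

int main(void)
{
    /* Option 1: make stdout unbuffered once, up front. */
    /* setbuf(stdout, 0); */

    char word[100];
    while (scanf("%99s", word) == 1) {
        printf("%s ", word);
        /* Option 2: flush explicitly after each write so the data
           reaches the underlying file descriptor immediately. */
        fflush(stdout);
    }
    return 0;
}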

C: Child I/O with parent and keyboard

Shortened Question:
I have a parent process that creates a child process as seen below:
int me2them[2], them2me[2];
pipe(me2them); pipe(them2me);
if (!fork()) {
    close(0); dup2(me2them[0], 0); close(me2them[0]);
    close(1); dup2(them2me[1], 1); close(them2me[1]);
    char *cmds[] = {"wish", "myProg.tcl", NULL};
    execvp(cmds[0], cmds);
    fprintf(stderr, "Unable to exec 1\n");
    exit(-1);
}
close(0); dup2(them2me[0], 0); close(them2me[0]);
close(1); dup2(me2them[1], 1); close(me2them[1]);
But, I need the child process to be able to receive input from the user. With this method, the stdin of the child is changed from the keyboard to the stdout of the parent. How can I maintain communication with both the keyboard and the parent?
Also, the parent is the client of a server, and thus multiple parents can be running on the same or different machines, making a shared file between parent and child difficult, because the child of any parent would be able to access any other parent's file.
NOTE: I'd prefer to keep the parent's stdout mapped to the child's input, because I did not write the C code and I want to re-route its printf statements to the child.
Original Version:
I am using Tcl to make a GUI for a C program. The Tcl is a child process of the C code, and I use I/O redirection so that the stdout of the C program becomes the stdin of the Tcl and the stdout of the Tcl becomes the stdin of the C program. However, there is a part where the C program requests the user's name: it sends the request via stdout to the stdin of the Tcl code, no problems, and then the Tcl requests the name. The Tcl name request presents two problems:
1) Tcl is in effect sending the request to the C code, causing the C code to mistake the request for the actual name (solved by sending the request to stderr instead of stdout).
2) When Tcl attempts to get the user input for the name, it will be checking stdin, which is mapped to receive from the C code, not the keyboard, and it will not be able to read the response from the user.
Is there a way to specify that the response should come from the keyboard? Or should I map the stdout of the C code to a different fd for the Tcl? And if so, how do I specify reading from the keyboard/new fd?
Here is how I make the Tcl a child process of the C code:
int me2them[2], them2me[2];
pipe(me2them); pipe(them2me);
if (!fork()) {
    close(0); dup2(me2them[0], 0); close(me2them[0]);
    close(1); dup2(them2me[1], 1); close(them2me[1]);
    char *cmds[] = {"wish", "myProg.tcl", NULL};
    execvp(cmds[0], cmds);
    fprintf(stderr, "Unable to exec 1\n");
    exit(-1);
}
close(0); dup2(them2me[0], 0); close(them2me[0]);
close(1); dup2(me2them[1], 1); close(me2them[1]);
It sounds as if the child would have a conventional command-line interface, e.g., line-buffered. I suggest these design changes:
modify the two-way pipe to the child to something other than its standard input and output (you can read/write on other streams)
it might be simplest to make that change within the child
you can use dup2, etc., within the child to modify the pipe. That leaves the question of how to get a usable keyboard interface for the child.
you can solve that problem by opening /dev/tty directly, and (again with dup2 and friends) making the file opened on /dev/tty into the child's standard input and output.
As an example, the dialog program has a feature for reading data via a pipe (at the shell level, that is its standard input), and in initialization, changing that into a different stream and opening /dev/tty for a "real" standard input. Your problem is a little more complicated (with both input and output pipes), but reading the dialog source may be helpful. For reference, that is the init_dialog function in util.c (source here).
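Here is a hedged sketch of that rearrangement as it might look on the C side of the child, assuming the pipes arrive on fds 0 and 1 as in the code above and that fds 3 and 4 are free to hold them afterwards:

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

/* Move the parent pipes off stdin/stdout, then make the controlling
   terminal the child's stdin/stdout so the user can type to it. */
static void reclaim_keyboard(void)
{
    dup2(0, 3);                      /* fd 3: read end of the pipe from the parent */
    dup2(1, 4);                      /* fd 4: write end of the pipe to the parent  */

    int tty = open("/dev/tty", O_RDWR);
    if (tty < 0) {
        perror("/dev/tty");
        return;
    }
    dup2(tty, 0);                    /* stdin now comes from the terminal  */
    dup2(tty, 1);                    /* stdout now goes to the terminal    */
    close(tty);
}

The child then talks to the parent on fds 3 and 4 and to the user on stdin/stdout.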

Fork and dup2 - Child process is not terminating - Issues with file descriptors?

I am writing my own shell for a homework assignment, and am running into issues.
My shell program gets the input cat scores | grep 100 from the console and prints the output as expected, but the grep command doesn't terminate, and I can see it running indefinitely in the ps output.
EDIT - There was an error while closing fds. Now the grep command is not executing and the console output is:
grep: (standard input): Bad file descriptor
I am reading the number of commands from the console, creating the necessary pipes, and storing them in a two-dimensional int array fd[][] before forking the first process.
fd[0][0] will contain read end of 1st pipe and fd[0][1] will contain write end of 1st pipe. fd[1][0] will contain read end of 2nd pipe and fd[1][1] will contain write end of 2nd pipe and so on.
Each new process duplicates its stdin with the read end of its pipe with the previous process and duplicates its stdout with the write end of its pipe with the next process.
Below is my function:
void run_cmds(char **args, int count, int pos)
{
    int pid, status;
    pid = fork();
    if (pid == 0)
    {
        if (pos != 0) dup2(fd[pos-1][0], 0);    // not changing stdin for 1st process
        if (pos != count) dup2(fd[pos][1], 1);  // not changing stdout for last process
        close_fds(pos);
        execvp(*args, args);
    }
    else
    {
        waitpid(pid, &status, 0);
        count--;
        pos++;
        // getting next command and storing it in args
        if (count > 0)
            run_cmds(args, count, pos);
    }
}
args will contain the arguments for the command.
count is the number of commands I need to create.
pos is the position of the command in the input
I am not able to figure out the problem. I used this same approach for hard coded values before this and it was working.
What am I missing with my understanding/implementation of dup2/fork and why is the command waiting infinitely?
Any inputs would be greatly helpful. Stuck on this for the past couple of days!
EDIT: The close_fds() function is as below.
For any process, I am closing both of the pipes linking the process.
void close_fds(int pos)
{
    if (pos != 0)
    {
        close(fd[pos-1][0]);
        close(fd[pos-1][1]);
    }
    if (pos != count)
    {
        close(fd[pos][0]);
        close(fd[pos][1]);
    }
}
First diagnosis
You say:
Each new process duplicates its stdin with the read end of its pipe with the previous process and duplicates its stdout with the write end of its pipe with the next process.
You don't mention the magic word close().
You need to ensure that you close both the read and the write end of each pipe when you use dup() or dup2() to connect it to standard input. That means with 2 pipes you have 4 calls to close().
If you don't close the pipes correctly, the process that is reading won't get EOF (because there's a process, possibly itself, that could write to the pipe). It is crucial to have enough (not too few, not too many) calls to close().
I am calling close_fds() after dup2 calls. The function will go through the fd[][2] array and do a close() call for each fd in the array.
OK. That is important. It means my primary diagnosis probably wasn't spot on.
Second diagnosis
Several other items:
You should have code after the execvp() that reports an error and exits if the execvp() returns (which means it fails).
You should not immediately call waitpid(). All the processes in a pipeline should be allowed to run concurrently. You need to launch all the processes, then wait for the last one to exit, cleaning up any others as they die (but not necessarily worrying about everything in the pipeline exiting before continuing).
If you do force the first command to execute in its entirety before launching the second, and if the first command generates more output than will fit into the pipe, you will have a deadlock — the first process can't exit because it is blocked writing, and the second process can't be started because the first hasn't exited. Interrupts and reboots and the end of the universe will all solve the problem somewhat crudely.
You decrement count as well as incrementing pos before you recurse. That might be bad. I think you should just increment pos.
Third diagnosis
After update showing close_fds() function.
I'm back to "there are problems with closing pipes" (though the waiting and error reporting problems are still problems). If you have 6 processes in a pipeline and all 5 connecting pipes are created before any processes are run, each process has to close all 10 pipe file descriptors.
Also, don't forget that if the pipes are created in the parent shell, rather than in a subshell that executes one of the commands in the pipeline, then the parent must close all the pipe descriptors before it waits for the commands to complete.
Please manufacture an MCVE (How to create a Minimal, Complete, and Verifiable Example?) or
SSCCE (Short, Self-Contained, Correct Example) — two names and links for the same basic idea.
You should create a program that manufactures the data structures that you're passing to the code that invokes run_cmds(). That is, you should create whatever data structures your parsing code creates, and show the code that creates the pipe or pipes for the 'cat score | grep 100' command.
I am no longer clear how the recursion works, or whether it is invoked in your example. I think it is, in fact, unused in your example, which is probably just as well, since you would end up with the same command being executed multiple times, AFAICS.
Most probable reasons why grep doesn't terminate:
You don't call waitpid with the proper PID (even though there is such a call in your code, it may not get executed for some reason), so grep becomes a zombie process. Maybe your parent shell process is waiting for another process first (infinitely, because the other one never terminates), and it doesn't call waitpid with the PID of grep. You can find Z in the output of ps if grep is a zombie.
grep doesn't receive an EOF on its stdin (fd 0), some process is keeping the write end of its pipe open. Have you closed all file descriptors in the fd array in the parent shell process? If not closed everywhere, grep will never receive an EOF, and it will never terminate, because it will be blocked (forever) waiting for more data on its stdin.
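For reference, here is a hedged sketch of a launch loop that avoids both problems: fork every stage, close every pipe fd in the parent, and only then wait. It assumes the global fd[][2] array of pre-created pipes from the question; cmds and run_pipeline are made-up names used only for illustration:

#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>

extern int fd[][2];   /* the pipes created by the shell before forking (from the question) */

void run_pipeline(char **cmds[], int count)
{
    /* cmds[i] is the argv vector for stage i; count is the number of stages,
       so there are count-1 pipes in fd[][]. */
    for (int pos = 0; pos < count; pos++) {
        pid_t pid = fork();
        if (pid == 0) {
            if (pos != 0)         dup2(fd[pos-1][0], 0);   /* read from previous stage */
            if (pos != count - 1) dup2(fd[pos][1], 1);     /* write to next stage */
            for (int i = 0; i < count - 1; i++) {          /* close every inherited pipe fd */
                close(fd[i][0]);
                close(fd[i][1]);
            }
            execvp(cmds[pos][0], cmds[pos]);
            perror(cmds[pos][0]);                          /* only reached if exec fails */
            _exit(127);
        }
    }

    /* Parent: close ALL pipe fds, or the readers never see EOF... */
    for (int i = 0; i < count - 1; i++) {
        close(fd[i][0]);
        close(fd[i][1]);
    }

    /* ...and only now wait, so all stages run concurrently. */
    for (int i = 0; i < count; i++)
        wait(NULL);
}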

Capturing command-line output directly in a buffer

I want to execute a command using system() or execl() and capture the output directly in a buffer in C. Is there any way to capture the output in a buffer using the dup() system call or using pipe()? I don't want to use a file in between, via mkstemp or any other temporary file. Please help me with this. Thanks in advance.
I tried it with fork(), creating two processes and piping the output, and it is working. However, I don't want to use the fork system call, since I am going to run the module indefinitely in a separate thread, and it invokes a lot of fork() calls; the system sometimes runs out of resources after a while.
To be clear about what I am doing: I am capturing the output of a shell script in a buffer, processing the output, and displaying it in a window which I have designed using ncurses. Thank you.
Here is some code for capturing the output of a program; it uses exec() instead of system(), but that is straightforward to accommodate by invoking the shell directly:
How can I implement 'tee' programmatically in C?
void tee(const char* fname) {
    int pipe_fd[2];
    check(pipe(pipe_fd));
    const pid_t pid = fork();
    check(pid);
    if (!pid) { // our log child
        close(pipe_fd[1]); // Close unused write end
        FILE* logFile = fname ? fopen(fname, "a") : NULL;
        if (fname && !logFile)
            fprintf(stderr, "cannot open log file \"%s\": %d (%s)\n", fname, errno, strerror(errno));
        char ch;
        while (read(pipe_fd[0], &ch, 1) > 0) {
            //### any timestamp logic or whatever here
            putchar(ch);
            if (logFile)
                fputc(ch, logFile);
            if ('\n' == ch) {
                fflush(stdout);
                if (logFile)
                    fflush(logFile);
            }
        }
        putchar('\n');
        close(pipe_fd[0]);
        if (logFile)
            fclose(logFile);
        exit(EXIT_SUCCESS);
    } else {
        close(pipe_fd[0]); // Close unused read end
        // redirect stdout and stderr
        dup2(pipe_fd[1], STDOUT_FILENO);
        dup2(pipe_fd[1], STDERR_FILENO);
        close(pipe_fd[1]);
    }
}
A simple way is to use popen ( http://www.opengroup.org/onlinepubs/007908799/xsh/popen.html), which returns a FILE*.
You can try popen(), but your fundamental problem is running too many processes. You have to make sure your commands finish, otherwise you will end up with exactly the problems you're having. popen() internally calls fork() anyway (or the effect is as if it did).
So, in the end, you have to make sure that the program you want to run from your threads exits "soon enough".
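As a rough illustration of the popen() route, here is a sketch that reads a command's output into a growing buffer (the "ls -l" command in the usage comment is only an example):

#include <stdio.h>
#include <stdlib.h>

/* Run a shell command and return its output in a malloc'd buffer
   (caller frees). Returns NULL on failure. */
char *capture_command(const char *cmd)
{
    FILE *fp = popen(cmd, "r");
    if (fp == NULL)
        return NULL;

    size_t used = 0, cap = 4096;
    char *buf = malloc(cap);
    size_t n;
    while (buf && (n = fread(buf + used, 1, cap - used - 1, fp)) > 0) {
        used += n;
        if (cap - used < 2) {            /* grow the buffer as needed */
            cap *= 2;
            char *tmp = realloc(buf, cap);
            if (!tmp) { free(buf); buf = NULL; break; }
            buf = tmp;
        }
    }
    if (buf)
        buf[used] = '\0';
    pclose(fp);                          /* reaps the child that popen() forked */
    return buf;
}

/* Example: char *out = capture_command("ls -l"); ... free(out); */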
You want to use a sequence like this:
- Call pipe() once per stream you want to create (e.g. stdin, stdout, stderr).
- Call fork().
- In the child:
  - close the parent's ends of the pipes;
  - close any other handles you have open;
  - set up stdin, stdout, and stderr to be the appropriate child ends of the pipes;
  - exec your desired command;
  - if that fails, die.
- In the parent:
  - close the child's ends of the pipes;
  - read and write the pipes as appropriate;
  - when done, call waitpid() (or similar) to clean up the child process.
Beware of blocking and buffering. You don't want your parent process to block on a write while the child is blocked on a read; make sure you use non-blocking I/O or threads to deal with those issues.
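A compressed, hedged sketch of that sequence for the stdout-only case (error handling trimmed; the ls -l command is just a placeholder):

#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    int out[2];
    pipe(out);                               /* one pipe for the child's stdout */

    pid_t pid = fork();
    if (pid == 0) {                          /* child */
        close(out[0]);                       /* close the parent's (read) end */
        dup2(out[1], STDOUT_FILENO);         /* child's stdout -> pipe */
        close(out[1]);
        execlp("ls", "ls", "-l", (char *)NULL);
        _exit(127);                          /* exec failed */
    }

    close(out[1]);                           /* parent: close the child's (write) end */

    char buf[4096];
    ssize_t n;
    while ((n = read(out[0], buf, sizeof buf - 1)) > 0) {
        buf[n] = '\0';
        fputs(buf, stdout);                  /* or append to your own buffer */
    }
    close(out[0]);
    waitpid(pid, NULL, 0);                   /* clean up the child */
    return 0;
}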
If you have implemented a C program and you want to execute a script, you want to use fork(). Unless you are willing to consider embedding the script interpreter in your program, you have to use fork() (system() uses fork() internally).
If you are running out of resources, most likely you are not reaping your children. Until the parent process gets the exit code, the OS needs to keep the child around as a 'zombie' process. You need to issue a wait() call to get the OS to free up the final resources associated with the child.
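A small sketch of non-blocking reaping that a long-running program can call periodically (or from a SIGCHLD handler) so zombies don't accumulate:

#include <sys/wait.h>

/* Reap any children that have already exited, without blocking. */
static void reap_children(void)
{
    int status;
    while (waitpid(-1, &status, WNOHANG) > 0)
        ;   /* each successful call frees one zombie */
}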
