Separate I/O for child process after fork() - c

I am implementing an application on Linux in C, and I need to do I/O separately in the parent process and each child process. Here is what I am looking for:
User runs the application, the parent process spawns 3 child processes.
Each of the child process will spawn a thread that waits for the user input.
There should be an intuitive method by which the user can specify which of the child process he is interacting with.
Ideally I would like each child process to run in a different terminal, so that it is clear to the user which process they are interacting with.
I saw a similar question in Executing child process in new terminal, but the answer is not very clear about the steps involved. It seems to suggest that this can be done by exec'ing xterm, as in xterm -e sh -c, but that is not confirmed. I also want to set up some IPC between the parent and the children and between the children themselves. So if I launch a child process in a new terminal by exec'ing xterm, what is the child of my parent process? Is it xterm? If so, will the code I actually want to execute in my child process run as a child of xterm?

Assume that you have already spawned the three child processes and that you run your parent on tty1.
tty1: Now contains all the diagnostics information
tty2: Child process 1
tty3: Child process 2
tty4: Child process 3
tty5: User input
So each child process will read from its tty as if it were a file (note: requires root permissions). To give input to, say, child process 2, go to tty5 and type in this command:
cat - >/dev/tty3
Then type in the input to your program, and then press Ctrl-D. Your child process 2 should now receive that input.
EDIT: You do not actually need to run the child processes on different ttys. You only need to run them with root permissions and then read from and write to those tty devices, just as you would read from stdin and write to stdout. Sorry for the confusion.
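To make the idea concrete, here is a minimal sketch of what each child might do. The device path /dev/tty3 is just an example matching the layout above, and opening another tty normally requires root.
#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>

int main(int argc, char *argv[])
{
    /* hypothetical: the parent passes each child its tty path, e.g. /dev/tty3 */
    const char *ttypath = (argc > 1) ? argv[1] : "/dev/tty3";
    int fd = open(ttypath, O_RDWR);
    if (fd < 0) {
        perror("open tty");
        return 1;
    }
    char buf[256];
    ssize_t n;
    while ((n = read(fd, buf, sizeof buf)) > 0) {
        /* handle the input; here we simply echo it back to the same tty */
        write(fd, buf, (size_t)n);
    }
    close(fd);
    return 0;
}
The dup2() variant (redirecting the tty descriptor onto stdin/stdout) works the same way if you prefer reading with scanf()/printf().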

Why does vim crash when it becomes an orphan process?

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void)
{
    int pid = fork();
    if (pid) {
        sleep(5);
        // wait(NULL); // works fine when waited for it.
    } else {
        execlp("vim", "vim", (char *)NULL);
    }
}
When I run this code, vim runs normally and then crashes after 5 seconds (i.e. when its parent exits). When I wait for it (i.e. not letting it become an orphan process), the code works totally fine.
Why does becoming an orphan process become a problem here? Is it something specific to vim?
Why is this even visible to vim? I thought that only the parent is notified when its children die. But here the child somehow notices when it gets adopted, and something makes it crash. Do child processes also get notified when their parent dies?
When I run this code, I get this output after the crash:
Vim: Error reading input, exiting...
Vim: preserving files...
Vim: Finished.
This actually happens because of the shell that is executing the binary that forks Vim!
When the shell runs a foreground command, it creates a new process group and makes it the foreground process group of the terminal attached to the shell. In bash 5.0, you can find the code that transfers this responsibility in give_terminal_to(), which uses tcsetpgrp() to set the foreground process group.
It is necessary to set the foreground process group of a terminal correctly, so that the program running in foreground can get signals from the terminal (for example, Ctrl+C sending an interrupt signal, Ctrl+Z sending a terminal stop signal to suspend the process) and also change terminal settings in ways that full-screen programs such as Vim typically do. (The subject of foreground process group is a bit out of scope for this question, just mentioning it here since it plays part in the response.)
When the process (more precisely, the pipeline) executed by the shell terminates, the shell will take back the foreground process group, using the same give_terminal_to() code by calling it with the shell's process group.
This is usually fine, because at the time the executed pipeline is finished, there's usually no process left on that process group, or if there are any, they typically don't hold on to the terminal (for example, if you're launching a background daemon from the shell, the daemon will typically close the stdin/stdout/stderr streams to relinquish access to the terminal.)
But that's not really the case with the setup you proposed, where Vim is still attached to the terminal and part of the foreground process group. When the parent process exits, the shell assumes the pipeline is finished and it will set the foreground process group back to itself, "stealing" it from the former foreground process group which is where Vim is. Consequently, the next time Vim tries to read from the terminal, the read will fail and Vim will exit with the message you reported.
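To make the mechanism concrete, here is a heavily simplified sketch of the sequence the answer describes. It is not bash's give_terminal_to() itself, just the same setpgid()/tcsetpgrp() pattern (in the spirit of the glibc manual's job-control example), with error handling omitted:
#include <signal.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    pid_t shell_pgid = getpgrp();
    signal(SIGTTOU, SIG_IGN);               /* so tcsetpgrp() from a background group is not stopped */

    pid_t pid = fork();
    if (pid == 0) {
        setpgid(0, 0);                      /* child: leader of a new process group */
        tcsetpgrp(STDIN_FILENO, getpid());  /* that group becomes the terminal's foreground group */
        signal(SIGTTOU, SIG_DFL);           /* restore the default disposition before exec */
        execlp("vim", "vim", (char *)NULL);
        _exit(127);
    }

    setpgid(pid, pid);                      /* same calls in the parent, to avoid a startup race */
    tcsetpgrp(STDIN_FILENO, pid);

    waitpid(pid, NULL, 0);                  /* the foreground job is done...                   */
    tcsetpgrp(STDIN_FILENO, shell_pgid);    /* ...so take the terminal back, as the shell does */
    return 0;
}
If the waitpid() line were replaced by sleep(5), this launcher would reproduce the reported problem: it would take the terminal back while Vim is still using it.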
One way to see for yourself that the parent process exiting does not, by itself, affect Vim is to run it through strace. For example, with the following command (assuming ./vim-launcher is your binary):
$ strace -f -o /tmp/vim-launcher.strace ./vim-launcher
Since strace is running with the -f option to follow forks, it will also start tracing Vim when it's launched. The shell will be executing strace (not vim-launcher), so its foreground pipeline will only end when strace stops running. And strace will not stop running until Vim exits. Vim will work just fine past the 5 seconds, even though it's been reparented to init.
There also used to be an fghack tool, part of daemontools, that accomplished the same task of blocking until all forked children had exited. It did so by creating a new pipe and having the pipe inherited by the process it spawned, in a way that was automatically inherited by all other forked children as well. That way, it could block until all copies of that pipe file descriptor were closed, which typically only happens when all processes exit (unless a background process goes out of its way to close all inherited file descriptors, but that is essentially a statement that it does not want to be tracked, and it will most probably have relinquished its access to the terminal by that point).
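Here is a small sketch of that pipe trick. It is not the daemontools source, just the underlying idea: the parent keeps only the read end, and read() returns EOF once every inherited copy of the write end has been closed, i.e. once every descendant that kept it open has exited. The sh command line is only there to create a background grandchild for the demonstration.
#include <stdio.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void)
{
    int fds[2];
    if (pipe(fds) < 0) { perror("pipe"); return 1; }

    pid_t pid = fork();
    if (pid == 0) {
        close(fds[0]);                    /* keep only the write end; sh and anything it   */
                                          /* forks (here a background sleep) inherit it    */
        execlp("sh", "sh", "-c", "sleep 2 & sleep 1", (char *)NULL);
        _exit(127);
    }

    close(fds[1]);                        /* the parent must drop its own write end */
    char c;
    while (read(fds[0], &c, 1) > 0)       /* EOF only when the last copy of the     */
        ;                                 /* write end has been closed              */
    while (waitpid(-1, NULL, 0) > 0)      /* reap whatever direct children remain   */
        ;
    puts("all descendants holding the pipe have exited");
    return 0;
}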

How to open a new xterm window and run a command in that window using fork-exec calls? (C program only)

Basically, I am learning pipe and dup system calls.
I want to take a command name (like ls, cd, mkdir, etc.) as input from the parent process and pass it to the child process through a pipe; the child process should open a new xterm window and show the man page of the command received over the pipe in that window.
The problem is that exec replaces the whole process image of the child, so any code written after it is simply never reached (if exec was successful). So if I exec the child into "/bin/xterm", any further exec calls in the child block are gone, because the process image has changed to xterm.
So how do I call /bin/man?
There is nothing stopping you from calling fork again - a process can have any number of children, grandchildren, etc.
In this particular instance you need the child to fork and create a grandchild that execs "/bin/xterm" with your man command (a sketch follows below). You also need to handle cleaning them up as they finish.
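A minimal sketch of that structure, assuming the command name has already arrived over the pipe ("ls" stands in for it here):
#include <stdio.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void)
{
    const char *cmd = "ls";                /* in the real program this comes from the pipe */

    pid_t pid = fork();                    /* this process plays the role of the child;    */
    if (pid == 0) {                        /* the new process is the grandchild            */
        execlp("xterm", "xterm", "-e", "man", cmd, (char *)NULL);
        perror("execlp xterm");            /* only reached if the exec failed */
        _exit(127);
    }
    waitpid(pid, NULL, 0);                 /* clean up when the xterm window is closed */
    return 0;
}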

Pipes and forks: output does not get displayed on stdout

I'm working on my homework which is to replicate the unix command shell in C.
I've implemented single-command execution with background execution (&) so far.
Now I'm at the stage of implementing pipes, and I face this issue: for pipelines with more than one pipe, the child commands complete, but the final output does not get displayed on stdout (the last command's stdin is replaced with the read end of the last pipe):
dup2(pipes[lst_cmd], 0);
I tried fflush(STDIN_FILENO) in the parent too.
My program exits on CONTROL-D, and when I press that, the output gets displayed (and the program exits, since my handler for CONTROL-D calls exit(0)).
I think the output of the pipe is stuck in the stdout buffer but does not get displayed. Is there any other means than fflush to get the contents of the buffer to stdout?
Having seen the code (unfair advantage), the primary problem was the process structure combined with not closing pipes thoroughly.
The process structure for a pipeline ps | sort was:
main shell
  - coordinator sub-shell
      - ps
      - sort
The main shell was creating N pipes (N = 1 for ps | sort). The coordinator shell was then created; it would start the N+1 children. It did not, however, wait for them to terminate, nor did it close its copy of the pipes. Nor did the main shell close its copy of the pipes.
The more normal process structure would probably do without the coordinator sub-shell. There are two mechanisms for generating the children. Classically, the main shell would fork one sub-process; it would do the coordination for the first N processes in the pipeline (including creating the pipes in the first place), and then would exec the last process in the pipeline. The main shell waits for the one child to finish, and the exit status of the pipeline is the exit status of the child (aka the last process in the pipeline).
More recently, bash provides a mechanism whereby the main shell gets the status of each child in the pipeline; it does the coordination.
The primary fixes (apart from some mostly minor compilation warnings) were:
main shell closes all pipes after forking coordinator.
main shell waits for coordinator to complete.
coordinator closes all pipes after forking pipeline.
coordinator waits for all processes in pipeline to complete.
coordinator exits (instead of returning to provide duelling prompts).
A better fix would eliminate the coordinator sub-shell (it would behave like the classical system described).
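The essence of those fixes is that every process must close every pipe descriptor it is not itself using before it waits. Here is a sketch of a two-command pipeline without the coordinator, along the lines of the classical structure described above (except that the shell here simply forks both children rather than exec'ing the last one):
#include <stdio.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void)
{
    int fd[2];
    if (pipe(fd) < 0) { perror("pipe"); return 1; }

    pid_t p1 = fork();
    if (p1 == 0) {                    /* first command: ps */
        dup2(fd[1], STDOUT_FILENO);
        close(fd[0]); close(fd[1]);   /* drop both raw pipe ends after dup2 */
        execlp("ps", "ps", (char *)NULL);
        _exit(127);
    }

    pid_t p2 = fork();
    if (p2 == 0) {                    /* last command: sort */
        dup2(fd[0], STDIN_FILENO);
        close(fd[0]); close(fd[1]);   /* if fd[1] stayed open here, sort would never see EOF */
        execlp("sort", "sort", (char *)NULL);
        _exit(127);
    }

    close(fd[0]); close(fd[1]);       /* the shell's own copies: the step that was missing */
    waitpid(p1, NULL, 0);
    int status;
    waitpid(p2, &status, 0);          /* the pipeline's status is the last command's status */
    return WIFEXITED(status) ? WEXITSTATUS(status) : 1;
}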

Fork and wait - how to wait for all grandchildren to finish

I am working on an assignment to build a simple shell, and I'm trying to add a few features that aren't required yet, but I'm running into an issue with pipes.
Once my command is parsed, I fork a process to execute it. This process runs a subroutine that executes the command if there is only one left; otherwise it forks again: the parent executes the first command and the child processes the rest. Pipes are set up and work correctly.
My main process then calls wait(), and then outputs the prompt. When I execute a command like ls -la | cat, the prompt is printed before the output from cat.
I tried calling wait() once for each command that should be executed, but only the first call succeeds; all subsequent calls fail with ECHILD.
How can I force my main thread to wait until all children, including children of children, exit?
You can't. Either make your child process wait for its own children and not exit until they have all been waited for, or fork all the children from the same process.
See this answer on how to wait() for child processes: How to wait until all child processes called by fork() complete?
There is no way to wait for a grandchild; you need to implement the wait logic in each process. That way, each child will only exit after all its children have exited (and that will then include all grandchildren, recursively).
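A minimal sketch of that idea: every process in the chain calls a small helper that loops on wait() until it fails with ECHILD, so grandchildren are always reaped by their own parent before that parent exits.
#include <errno.h>
#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>

/* Loop until wait() reports ECHILD, i.e. this process has no children left. */
static void wait_for_all_children(void)
{
    while (wait(NULL) > 0 || errno == EINTR)
        ;
}

int main(void)
{
    if (fork() == 0) {                /* child */
        if (fork() == 0) {            /* grandchild */
            sleep(1);
            _exit(0);
        }
        wait_for_all_children();      /* the child reaps the grandchild...                */
        _exit(0);
    }
    wait_for_all_children();          /* ...so the parent only returns after both are done */
    puts("all descendants finished");
    return 0;
}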
Since you are talking about grandchildren, you are obviously spawning the children in a cascading manner. That is a possible way to implement a pipe.
But keep in mind that the return value of your pipeline (the one you get when running echo $? in your terminal) is the one returned by the right-most command.
This means that you need to spawn children from right to left in this cascading implementation. You don't want to lose that return value.
Now, assuming we are only talking about builtin commands for the sake of simplicity (no extra calls to fork() and execve() are made), an interesting fact is that in some shells like "zsh", the right-most command is not even forked. We can see that with a simple piped command like:
export stack=OVERFLOW | export overflow=STACK
Running the command env afterwards, we can see that overflow=STACK persists among the environment variables. This shows that the right-most command was not executed in a subshell, whereas export stack=OVERFLOW was.
Note: This is not the case in a shell like "sh".
Now let's use a basic piped command to illustrate a possible logic for this cascading implementation.
cat /dev/random | head
Note: Even though cat /dev/random is supposedly a never-ending command, it will stop as soon as head is done reading the first line output by cat /dev/random. This is because head's stdin (the read end of the pipe) is closed once head is done, and cat /dev/random aborts because it is writing to a broken pipe.
LOGIC:
The parent process (your shell) sees that there is a pipe to execute. It then forks once; the parent remains your shell, waits for the child to return, and stores the return value.
In the context of the first-generation child (which executes the right-most command of the pipe):
It sees that its command is not the only one in the pipeline, so it will fork() again (what I call the "cascading implementation").
Now that the fork is done, this process first executes its own task (head -1), then closes its stdin and stdout, and then wait()s for its child. It is really important to close stdin and stdout first and only then call wait(): closing stdout sends EOF to whoever is reading from it, and closing stdin makes sure that a grandchild still trying to write into the pipe aborts with a "broken pipe" error.
In the context of the grandchild:
It sees that it is the last command to be spawned, so it just executes its command and returns its value (closing its stdin and stdout).
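Here is a rough sketch of that logic for cat /dev/random | head -1. The commands are exec'd external programs rather than builtins here, so the right-most command runs in an extra worker fork instead of in-process, but the cascade and the close-then-wait ordering are the same as described above:
#include <stdio.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void)
{
    pid_t child = fork();
    if (child == 0) {                         /* first generation: handles the right-most command */
        int fd[2];
        if (pipe(fd) < 0) _exit(1);

        pid_t grandchild = fork();
        if (grandchild == 0) {                /* grandchild: left command, writes into the pipe */
            dup2(fd[1], STDOUT_FILENO);
            close(fd[0]); close(fd[1]);
            execlp("cat", "cat", "/dev/random", (char *)NULL);
            _exit(127);
        }

        dup2(fd[0], STDIN_FILENO);            /* right command reads from the pipe */
        close(fd[0]); close(fd[1]);

        pid_t worker = fork();                /* run head -1 without losing this process */
        if (worker == 0) {
            execlp("head", "head", "-1", (char *)NULL);
            _exit(127);
        }
        int status;
        waitpid(worker, &status, 0);
        close(STDIN_FILENO);                  /* last read end gone: cat now dies on a broken pipe */
        waitpid(grandchild, NULL, 0);
        _exit(WIFEXITED(status) ? WEXITSTATUS(status) : 1);
    }

    int status;                               /* the shell: wait and keep the pipeline's status */
    waitpid(child, &status, 0);
    printf("pipeline exited with %d\n", WIFEXITED(status) ? WEXITSTATUS(status) : 1);
    return 0;
}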

Meaning of "Detaching after fork from child process 15***"?

When I develop on the Linux console, I use gdb to trace a program's behavior, and the console often prints "Detaching after fork from child process 15***." Can anybody explain the sentence in quotation marks? Who does what jobs after detaching from the child process? Thanks :)
When GDB is debugging a particular process, and the process forks off a child process, GDB can only follow one of the two processes, so it must detach (stop following) the other. This line informs you of this selective detachment. The child process will run without being debugged by GDB.
You can select which process to follow using the set follow-fork-mode command. Use set follow-fork-mode child to follow child processes, and set follow-fork-mode parent to return to the default behavior. For more details, see this page on the Apple development website.
