I am attempting to write a shell. When a foreground process is run, the forked process pipeline is given its own process group id. The terminal is then given over to this process group id (using tcsetpgrp) and the shell waits for it to terminate before giving itself terminal control once again. This works perfectly fine.
The issue arises when I attempt to run a background process. Again, I give all of the processes in the pipeline a single process group id, but this time I do not give terminal control to this group. When the command runs, its output appears on the terminal (before it has finished executing) while the shell hands the prompt back to the user at the same time. What should happen is that any child process attempting to write to the terminal gets a SIGTTOU and stops, but this clearly doesn't happen. I verified that the forked processes all have the same process group id and that this id is different from the shell's.
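For reference, here is a condensed sketch of the launch path described above (error handling trimmed; foreground and argv stand in for the shell's real state, and it needs <unistd.h> and <sys/wait.h>):

pid_t pid = fork();
if (pid == 0) {
    setpgid(0, 0);                          /* child: move into its own new group */
    execvp(argv[0], argv);
    _exit(127);
}
setpgid(pid, pid);                          /* parent repeats this to close the startup race */
if (foreground) {
    tcsetpgrp(STDIN_FILENO, pid);           /* hand the terminal to the job */
    waitpid(pid, NULL, WUNTRACED);
    tcsetpgrp(STDIN_FILENO, getpgrp());     /* take the terminal back; the shell must
                                               ignore SIGTTOU for this call to succeed
                                               from a background group */
}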
Upon exiting the shell (via Ctrl-C) and returning to the standard bash shell that ran it, because I did not reap the background process upon shell termination, the background process continues running (which is expected). What is weird, though, is that this process continues writing output to the bash shell even though it is not in the foreground. This leads me to conclude that either this background process is not getting any SIGTTOUs because of a POSIX bug (unlikely), it is handling them (so the default action of stopping is overridden), or it is ignoring SIGTTOUs.
Is there a way, before exec'ing a forked process, to ensure that it will stop upon receiving a SIGTTOU (assuming that the exec'd binary does not change anything)?
SIGTTOU is sent to a background process which tries to write to the terminal only if the termios flag TOSTOP is set for that terminal. By default, it is generally not set, in which case the background process can happily write to the terminal. (The TOSTOP flag does not affect read permissions. If the process tries to read, it will be sent a SIGTTIN.)
So, yes, there is something the foreground process can do: use tcsetattr() to set the TOSTOP flag.
The solution was to make the forked process execute the following before calling exec:
#include <termios.h>   /* struct termios, tcgetattr(), tcsetattr() */

struct termios term;
if (tcgetattr(STDIN_FILENO, &term) < 0)
    perror("tcgetattr");
term.c_lflag |= TOSTOP;   /* |= rather than =, so ICANON, ECHO and the
                             other local-mode flags are preserved */
if (tcsetattr(STDIN_FILENO, TCSANOW, &term) < 0)
    perror("tcsetattr");
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/wait.h>

int main()
{
    int pid = fork();
    if (pid) {
        sleep(5);
        // wait(NULL); // works fine when waited for it.
    } else {
        execlp("vim", "vim", (char *)NULL);
    }
}
When I run this code, vim runs normally then crashes after the 5 seconds (i.e. when its parent exits). When I wait for it (i.e. not letting it become an orphan process), the code works totally fine.
Why does becoming an orphan process cause a problem here? Is it something specific to vim?
Why is this even a thing that's visible to vim? I thought that only the parent knows when its children die. But here, the child somehow notices when it gets adopted, and then something happens and it crashes. Do child processes get notified when their parent dies as well?
When I run this code, I get this output after the crash:
Vim: Error reading input, exiting...
Vim: preserving files...
Vim: Finished.
This actually happens because of the shell that is executing the binary that forks Vim!
When the shell runs a foreground command, it creates a new process group and makes it the foreground process group of the terminal attached to the shell. In bash 5.0, you can find the code that transfers this responsibility in give_terminal_to(), which uses tcsetpgrp() to set the foreground process group.
It is necessary to set the foreground process group of a terminal correctly, so that the program running in foreground can get signals from the terminal (for example, Ctrl+C sending an interrupt signal, Ctrl+Z sending a terminal stop signal to suspend the process) and also change terminal settings in ways that full-screen programs such as Vim typically do. (The subject of foreground process group is a bit out of scope for this question, just mentioning it here since it plays part in the response.)
When the process (more precisely, the pipeline) executed by the shell terminates, the shell will take back the foreground process group, using the same give_terminal_to() code by calling it with the shell's process group.
This is usually fine, because at the time the executed pipeline is finished, there's usually no process left on that process group, or if there are any, they typically don't hold on to the terminal (for example, if you're launching a background daemon from the shell, the daemon will typically close the stdin/stdout/stderr streams to relinquish access to the terminal.)
But that's not really the case with the setup you proposed, where Vim is still attached to the terminal and part of the foreground process group. When the parent process exits, the shell assumes the pipeline is finished and it will set the foreground process group back to itself, "stealing" it from the former foreground process group which is where Vim is. Consequently, the next time Vim tries to read from the terminal, the read will fail and Vim will exit with the message you reported.
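One way to watch the handoff (on Linux; pts/N below is a placeholder for the terminal Vim is running on) is to inspect the TPGID column from another terminal, which reports a terminal's current foreground process group:

$ ps -t pts/N -o pid,pgid,tpgid,stat,comm

Run it while Vim is up and again after the launcher exits: the TPGID value changes from Vim's process group back to the shell's.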
One way to see for yourself that the parent process exiting does not affect Vim by itself is running it through strace. For example, with the following command (assuming ./vim-launcher is your binary):
$ strace -f -o /tmp/vim-launcher.strace ./vim-launcher
Since strace is running with the -f option to follow forks, it will also start tracing Vim when it's launched. The shell will be executing strace (not vim-launcher), so its foreground pipeline will only end when strace stops running. And strace will not stop running until Vim exits. Vim will work just fine past the 5 seconds, even though it's been reparented to init.
There also used to be an fghack tool, part of daemontools, that accomplished the same task of blocking until all forked children had exited. It did so by creating a new pipe and having it inherited by the process it spawned, in a way that was automatically passed on to all further forked children. That way, it could block until all copies of that pipe's file descriptor were closed, which typically only happens when all the processes have exited. (A background process could go out of its way to close all inherited file descriptors, but that essentially states that it does not want to be tracked, and it would most probably have relinquished its access to the terminal by that point.)
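A minimal reconstruction of that pipe trick (a sketch of the idea, not fghack's actual source):

#include <unistd.h>

int main(int argc, char *argv[])
{
    int fds[2];
    char buf;

    if (argc < 2 || pipe(fds) < 0)
        return 1;
    if (fork() == 0) {                 /* child: run the requested command */
        close(fds[0]);                 /* it keeps only the write end, which all
                                          its descendants inherit in turn */
        execvp(argv[1], &argv[1]);
        _exit(127);
    }
    close(fds[1]);                     /* the parent must drop its own write end */
    /* read() returns 0 (EOF) only once every inherited copy of the write
       end is closed, i.e. when all forked descendants have exited. */
    while (read(fds[0], &buf, 1) > 0)
        ;
    return 0;
}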
Why does echo hello > /dev/pts/xxx work (here xxx refers to another session's controlling terminal)?
With the default settings, a process in a background process group of this session will get a SIGTTOU signal when it tries to write to stdout (here, stdout refers to the controlling terminal), because the terminal driver checks whether the writing process belongs to the foreground process group.
So how does the terminal driver tolerate output from another session's process? What happens there?
I am creating my own shell in C. So far I have implemented many features, but the thing I am having problems with is Ctrl-Z handling (SIGTSTP). Let me explain the problem by walking through the successful scenarios first:
When I execute a program in my shell (like gedit) and then press Ctrl-Z, the shell executes kill(p_id, SIGTSTP) and stops that process. The shell also adds the process id to the background_processes array so we can reach it later. Then, if I type "fg" in my shell, it brings the process to the foreground and executes kill(p_id, SIGCONT), so we can continue to use the program. The shell also waits for the process to complete by calling waitpid. We close the program by clicking the X button or pressing Ctrl-C. Exactly the same behavior as a regular Linux shell. SUCCESSFUL!
If I execute a program in my shell (like gedit) in the background by specifying & (ampersand), the shell starts the process without waiting for it, but still adds the process id to the background_processes array so we can reach it later. Then, when I type "fg" in my shell, it brings the process to the foreground and waits for it to complete by calling waitpid. It also doesn't matter if I have more than one process in the background; they are brought to the foreground one by one. We close the programs by clicking the X button or pressing Ctrl-C. Exactly the same behavior as a regular Linux shell. SUCCESSFUL!
Let's execute a process in the foreground, send it to the background with Ctrl-Z, and then start another process directly in the background. We now have two processes in the background. If I type "fg", the shell brings the first background process to the foreground and waits for it. If I press the X (close) button to close that program, the shell brings the second process to the foreground and waits for it. Going very well, right? That's what we want. So this scenario also worked.
The problem scenario sets up the processes exactly as in the previous one. When I type "fg", the shell brings the first background process to the foreground and waits for it. But then, if I press Ctrl-C, it closes both processes! It should have closed only the first process and kept waiting for the second!
I searched everywhere and tried everything but couldn't figure it out. The problem seems to be around line 525: when I send the SIGCONT signal, it closes the process, but if I comment that line out, the process isn't closed; then again, I can't use it either, since it stays stopped.
I have the code in my GitHub repo here : https://github.com/EmreKumas/Myshell
Thanks for reading...
It seems the problem was caused by process groups. I had only created separate process groups for background jobs, but since you cannot change the process group of a child after it has executed an exec command, you'd better do it at the very beginning, before the exec call. The problem is now solved, thanks to @that other guy and @John Bollinger.
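For anyone hitting the same thing, the fix boils down to this pattern (a sketch; cmd is a placeholder argv array, and <unistd.h> provides fork/setpgid/execvp):

pid_t pid = fork();
if (pid == 0) {
    setpgid(0, 0);        /* child: move into its own process group... */
    execvp(cmd[0], cmd);  /* ...before exec; afterwards it is too late */
    _exit(127);
}
setpgid(pid, pid);        /* parent repeats the call to close the race;
                             it fails with EACCES once the child has exec'd */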
I am trying to write a program which forks and execs a child process and runs it in the background.
One approach I can see is to redirect the output to /dev/null and return to my main program. Any other ideas?
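The redirection I have in mind would go in the child, between fork() and exec(), roughly like this (a sketch; it needs <fcntl.h> and <unistd.h>):

int devnull = open("/dev/null", O_WRONLY);
if (devnull >= 0) {
    dup2(devnull, STDOUT_FILENO);   /* silence the child's stdout */
    dup2(devnull, STDERR_FILENO);   /* and its stderr */
    close(devnull);                 /* the dup'ed descriptors keep /dev/null open */
}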
Once a process has started, the shell has no further control over that process's file descriptors, so you cannot silence it with a shell command: its stdin, stdout and stderr remain bound to the terminal, and you cannot do anything about that without regaining control over the terminal.
There is a tool called retty; how to use it can be seen at this link: retty. This tool is used to attach to processes running on other terminals.
Besides that, you can also use the built-in disown command to disown the process, which will prevent the shell from sending it a SIGHUP signal when the shell exits.
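For example, in bash (the PID shown is illustrative):

$ sleep 1000 &
[1] 12345
$ disown %1

After disown %1, the job is removed from the shell's job table, so it is not sent SIGHUP when the shell exits.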
This link can be helpful: Link to a similar problem.
I'm working on my homework which is to replicate the unix command shell in C.
I've implemented till single command execution with background running (&).
Now I'm at the stage of implementing pipes, and I face this issue: for pipelines with more than one pipe, the piped child commands complete, but the final output does not get displayed on stdout (the last command's stdin is replaced with the read end of the last pipe):
dup2(pipes[lst_cmd], 0);
I tried fflush(STDIN_FILENO) in the parent too.
My program exits on Ctrl-D, and when I press it, the output finally gets displayed (the program also exits, since my action on Ctrl-D is exit(0)).
I think the output of the pipe is sitting in the stdout buffer but doesn't get displayed. Is there any means other than fflush to get the buffered data out to stdout?
Having seen the code (unfair advantage), the primary problem was the process structure combined with not closing pipes thoroughly.
The process structure for a pipeline ps | sort was:
main shell
- coordinator sub-shell
  - ps
  - sort
The main shell was creating N pipes (N = 1 for ps | sort). The coordinator shell was then created; it would start the N+1 children. It did not, however, wait for them to terminate, nor did it close its copy of the pipes. Nor did the main shell close its copy of the pipes.
The more normal process structure would probably do without the coordinator sub-shell. There are two mechanisms for generating the children. Classically, the main shell would fork one sub-process; it would do the coordination for the first N processes in the pipeline (including creating the pipes in the first place), and then would exec the last process in the pipeline. The main shell waits for the one child to finish, and the exit status of the pipeline is the exit status of the child (aka the last process in the pipeline).
More recently, bash provides a mechanism whereby the main shell gets the status of each child in the pipeline; it does the coordination.
The primary fixes (apart from some mostly minor compilation warnings) were:
main shell closes all pipes after forking coordinator.
main shell waits for coordinator to complete.
coordinator closes all pipes after forking pipeline.
coordinator waits for all processes in pipeline to complete.
coordinator exits (instead of returning and producing duelling prompts).
A better fix would eliminate the coordinator sub-shell (it would behave like the classical system described).
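For illustration, here is a minimal sketch of that classical structure for a fixed ps | sort pipeline (error handling omitted):

#include <unistd.h>
#include <sys/wait.h>

int main(void)
{
    pid_t child = fork();
    if (child == 0) {                   /* the one sub-process: coordinates, then becomes sort */
        int fd[2];
        pipe(fd);
        if (fork() == 0) {              /* first command in the pipeline: ps */
            dup2(fd[1], STDOUT_FILENO);
            close(fd[0]);
            close(fd[1]);               /* close both pipe copies after dup2 */
            execlp("ps", "ps", (char *)NULL);
            _exit(127);
        }
        dup2(fd[0], STDIN_FILENO);
        close(fd[0]);
        close(fd[1]);                   /* crucial: leave no stray write end, or sort never sees EOF */
        execlp("sort", "sort", (char *)NULL);
        _exit(127);
    }
    waitpid(child, NULL, 0);            /* the pipeline's status is the last command's status */
    return 0;
}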