Writing a C program to move a process to the background

I am trying to write a program which forks, execs a child process, and runs it in the background.
One approach I can see is to redirect the child's output to /dev/null and return to my main program. Any other ideas?

After a process is started, the shell has no more control over the process's file descriptors, so you cannot silence it with a shell command; its stdin, stdout and stderr stay bound to the terminal, and you cannot do anything about that without regaining control over the terminal.
There is a tool called retty that can attach to processes running on other terminals; how to use it can be seen at this link: retty.
Besides that, you can use the shell's built-in disown command, which removes a job from the shell's job table and prevents a SIGHUP signal from being sent to the program when the shell exits.
This link may be helpful: Link to a similar problem
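Back to the question's own suggestion: a minimal sketch of fork, redirect to /dev/null, exec, and return to the main program could look like this (the ls -l child command is just a placeholder):

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    pid_t pid = fork();
    if (pid < 0) {
        perror("fork");
        return 1;
    }
    if (pid == 0) {
        /* child: silence stdout and stderr, then exec */
        int fd = open("/dev/null", O_WRONLY);
        if (fd >= 0) {
            dup2(fd, STDOUT_FILENO);
            dup2(fd, STDERR_FILENO);
            close(fd);
        }
        execlp("ls", "ls", "-l", (char *)NULL);  /* placeholder command */
        _exit(127);  /* only reached if exec fails */
    }
    /* parent: continue immediately; the child runs "in the background" */
    printf("started child %d\n", (int)pid);
    return 0;
}

A long-running parent should eventually wait() for the child, or handle SIGCHLD, so the child does not linger as a zombie.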


Why does vim crash when it becomes an orphan process?

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void)
{
    pid_t pid = fork();
    if (pid) {
        /* parent: stay alive for 5 seconds, then exit and orphan the child */
        sleep(5);
        // wait(NULL); // works fine when waited for it.
    } else {
        /* child: replace this process with vim */
        execlp("vim", "vim", (char *)NULL);
    }
    return 0;
}
When I run this code, vim runs normally then crashes after the 5 seconds (i.e. when its parent exits). When I wait for it (i.e. not letting it become an orphan process), the code works totally fine.
Why does becoming an orphan process become a problem here? Is it something specific to vim?
Why is this even visible to vim? I thought that only the parent knows when its children die. But here, the child somehow notices when it gets adopted, and something makes it crash. Do child processes get notified when their parent dies as well?
When I run this code, I get this output after the crash:
Vim: Error reading input, exiting...
Vim: preserving files...
Vim: Finished.
This actually happens because of the shell that is executing the binary that forks Vim!
When the shell runs a foreground command, it creates a new process group and makes it the foreground process group of the terminal attached to the shell. In bash 5.0, you can find the code that transfers this responsibility in give_terminal_to(), which uses tcsetpgrp() to set the foreground process group.
It is necessary to set the foreground process group of a terminal correctly, so that the program running in foreground can get signals from the terminal (for example, Ctrl+C sending an interrupt signal, Ctrl+Z sending a terminal stop signal to suspend the process) and also change terminal settings in ways that full-screen programs such as Vim typically do. (The subject of foreground process group is a bit out of scope for this question, just mentioning it here since it plays part in the response.)
When the process (more precisely, the pipeline) executed by the shell terminates, the shell will take back the foreground process group, using the same give_terminal_to() code by calling it with the shell's process group.
This is usually fine, because at the time the executed pipeline is finished, there's usually no process left on that process group, or if there are any, they typically don't hold on to the terminal (for example, if you're launching a background daemon from the shell, the daemon will typically close the stdin/stdout/stderr streams to relinquish access to the terminal.)
But that's not really the case with the setup you proposed, where Vim is still attached to the terminal and part of the foreground process group. When the parent process exits, the shell assumes the pipeline is finished and it will set the foreground process group back to itself, "stealing" it from the former foreground process group which is where Vim is. Consequently, the next time Vim tries to read from the terminal, the read will fail and Vim will exit with the message you reported.
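To make the sequence concrete, here is a rough sketch of the give_terminal_to() dance described above (not bash's actual code; the sleep child command is arbitrary):

#include <signal.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    /* tcsetpgrp() from a background process group raises SIGTTOU,
       so ignore it; the disposition is inherited across fork */
    signal(SIGTTOU, SIG_IGN);

    pid_t pid = fork();
    if (pid == 0) {
        /* child: put itself in a new process group and make that
           group the foreground group of the terminal */
        setpgid(0, 0);
        tcsetpgrp(STDIN_FILENO, getpid());
        execlp("sleep", "sleep", "10", (char *)NULL);
        _exit(127);
    }

    setpgid(pid, pid);      /* repeated in the parent to avoid a startup race */
    waitpid(pid, NULL, 0);

    /* the pipeline is done: take the terminal back; this is the step
       that "steals" it from any process left in the old group */
    tcsetpgrp(STDIN_FILENO, getpgrp());
    return 0;
}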
One way to see for yourself that the parent process exiting does not, by itself, affect Vim is to run it through strace. For example, with the following command (assuming ./vim-launcher is your binary):
$ strace -f -o /tmp/vim-launcher.strace ./vim-launcher
Since strace is running with the -f option to follow forks, it will also start tracing Vim when it's launched. The shell will be executing strace (not vim-launcher), so its foreground pipeline will only end when strace stops running. And strace will not stop running until Vim exits. Vim will work just fine past the 5 seconds, even though it's been reparented to init.
There also used to be an fghack tool, part of daemontools, that accomplished the same task of blocking until all forked children had exited. It worked by creating a new pipe and having its write end inherited by the process it spawned, and in turn automatically inherited by all other forked children. That way, it could block until all copies of that pipe file descriptor were closed, which typically only happens when all processes exit (unless a background process goes out of its way to close all inherited file descriptors, but that essentially states that it does not want to be tracked, and it has most probably relinquished its access to the terminal by that point).
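A minimal sketch of the fghack trick (not its actual source): the parent holds the read end of a pipe and blocks until every inherited copy of the write end is closed, which normally happens only when all descendants have exited. ./vim-launcher is the same assumed binary as above:

#include <unistd.h>

int main(void)
{
    int fds[2];
    pipe(fds);

    pid_t pid = fork();
    if (pid == 0) {
        /* child: the write end stays open (no close-on-exec), so it is
           inherited by the exec'd program and any processes it forks */
        close(fds[0]);
        execlp("./vim-launcher", "vim-launcher", (char *)NULL);
        _exit(127);
    }

    /* parent: close our copy of the write end, so the only remaining
       copies live in the child and its descendants */
    close(fds[1]);

    /* read() returns 0 (EOF) only once every copy of the write end
       is closed, i.e. once all forked children have exited */
    char buf;
    while (read(fds[0], &buf, 1) > 0)
        ;
    return 0;
}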

Disallowing printf in child process

I've got a command-line app in C under Linux that has to run another process. The problem is that the child process prints a lot to the command line and the whole app gets messy.
Is it possible for the parent process to prevent the child process from printing anything to the command line? It would be very helpful to, for example, be able to define per command whether printing by the child process is allowed.
There's the time-honoured tradition of just redirecting the output to the bit bucket(a), along the lines of:
system("runChild >/dev/null 2>&1");
Or, if you're doing it via fork/exec, simply redirect the file handles using dup2 between the fork and exec.
It won't stop a determined child from writing to your standard output, but the child would have to be very tricky to manage that.
(a) I'm not usually a big fan of that, just in case something goes wrong. I'd prefer to redirect it to a real file which can be examined later if need be (and deleted eventually if not).
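If you take the footnote's advice and keep the child's output in a real file, the dup2-between-fork-and-exec variant looks roughly like this; the log file name child.log is arbitrary, and runChild is assumed to be on your PATH as in the system() example above:

#include <fcntl.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    pid_t pid = fork();
    if (pid == 0) {
        /* child: send stdout and stderr to a log file for later inspection */
        int fd = open("child.log", O_WRONLY | O_CREAT | O_TRUNC, 0644);
        if (fd >= 0) {
            dup2(fd, STDOUT_FILENO);   /* the "between fork and exec" step */
            dup2(fd, STDERR_FILENO);
            close(fd);
        }
        execlp("runChild", "runChild", (char *)NULL);
        _exit(127);
    }
    waitpid(pid, NULL, 0);
    return 0;
}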
Read Advanced Linux Programming, then syscalls(2).
On recent Linux, every executable is in ELF format, and every process (except init or systemd; play with pstree(1) or proc(5)) is started by fork(2) (or clone(2)...) and execve(2).
You might cleverly use dup2(2) with open(2) to redirect STDOUT_FILENO to /dev/null (see null(4), stdout(3), fileno(3)).
I've got a command-line app in C under Linux that has to run another process, the problem is that the child process prints a lot to the command line
I would instead provide a way to selectively redirect the child process' output. You could use program arguments or environment variables (see getenv(3) and/or environ(7)) to provide such an option to your user.
An example of such a program that starts subprocesses and redirects their output is your GCC compiler (see gcc(1); it runs cc1, as(1), ld(1)...). Consider downloading and studying its source code.
Study also -for inspiration- the source code of some shell (e.g. sash), or write your own one.
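As a concrete sketch of the selective-redirection idea, here is a parent that silences its child only when a CHILD_QUIET environment variable is set (the variable name is made up for this example):

#include <fcntl.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    /* CHILD_QUIET is a hypothetical name; pick whatever suits your app */
    const char *quiet = getenv("CHILD_QUIET");

    pid_t pid = fork();
    if (pid == 0) {
        if (quiet && quiet[0] == '1') {
            int fd = open("/dev/null", O_WRONLY);   /* see null(4) */
            if (fd >= 0) {
                dup2(fd, STDOUT_FILENO);
                close(fd);
            }
        }
        execlp("ls", "ls", (char *)NULL);   /* placeholder child command */
        _exit(127);
    }
    waitpid(pid, NULL, 0);
    return 0;
}

Running CHILD_QUIET=1 ./parent silences the child; without the variable, its output is untouched.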

Linux operating system - input-output piping from parent, child, parent processes

I have some C code called a c-shell that does the following. The parent c-shell reads in a Linux command line and forks a child process to perform the command. The child does not exec the command until it receives a signal from the parent that it is ready for it to execute. It can handle input files for giving arguments to commands, or it can just read them from the command line. It can handle sending output to output files rather than just printing the executed command's output to stdout. The way it sends output to an output file is by having the child redirect its stdout to a pipe; the parent reads from this pipe once it receives the SIGCHLD signal indicating that the child process finished running. It can handle multiple commands (separated by semicolons). It can handle piping output from the first command to a second command on the command line.
However (and this is my question) it cannot handle a command where you pipe the output of one command to be the input of the second command, and then send the output of the second command to an output file. I'm baffled, given that all the above cases work perfectly. I can redirect output from an executed child process to the parent when it finishes. I can redirect the output of the first command to be the input of the second command. But I cannot do this if I try to send the output of the second command to an output file. If this question does not make sense, I will post more specifics.
For example: if I enter the command line ls -l | grep lsOut (meaning: a detailed directory listing, filtered by grep so that only files whose names contain the characters "lsOut" remain, such as output files from the ls command), that works just fine when it prints to stdout. A command such as ps > psOut also works: the output of ps is written to the psOut file with no problem. However, if I run ls -l | grep lsOut > lsOutFile, what happens is baffling. The first command, ls -l, prints to stdout, and although my debug print statements show that the second command, grep lsOut, is being run and should be receiving the output of ls -l as its input, that appears to have no effect. The only output is the entire ls -l directory listing with no grep filtering, and although the code claims it writes to the output file, nothing gets there. If you want me to post a link to code, I can do that. Thank you very much! I have spent hours trying to debug this problem.
The way it sends output to an output file is by having the child
redirect its stdout to a pipe; the parent reads from this pipe
once it receives the SIGCHLD signal indicating that the child
process finished running.
Hold it right here. As the saying goes: Do not pass "Go". Do not collect $200.
This part is already not quite right. If the child process produces a sufficient amount of output, you'll end up with both a hung parent and a hung child process here.
Pipe buffers are not unlimited in size. Pipe buffers have a fixed, upper, maximum internal size. My recollection is that the default pipe buffer size is 8,192 bytes. It might actually be something else, but the actual size doesn't matter. Whatever the pipe buffer size is, once the buffer fills up, the process that's writing to the pipe buffer is put to sleep, until the reading process starts emptying the pipe by reading from it. As long as the reader and the writer processes work independently, one's reading, one's writing, everything runs smoothly. If the writer is writing faster than the reader is reading, once the number of unread characters reaches the pipe's maximum size, the kernel quietly puts the writer process to sleep, inside write(), until the reader catches up.
If your parent process waits for the child process to exit before it starts reading from the stdout pipe, and the child process writes more than 8,192 (or whatever the actual size is) bytes, the child process will be paused inside its write() call until the pipe is read from. And since the parent process isn't going to read from the pipe until the child process terminates, both processes will wait for each other, forever.
So, we already know that your application isn't handling this situation correctly. Although you've described a slightly different problem with your application, given that the application is not handling inter-process pipe semantics correctly, it's fairly likely that your actual problem, if not this, is closely related.
You must completely re-engineer how your applications implements inter-process piping, correctly.
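For reference, a minimal sketch of the correct ordering: drain the pipe to EOF first, and only then wait for the child. The command ls -lR /usr is just something that produces more than one pipe buffer's worth of output:

#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    int fds[2];
    pipe(fds);

    pid_t pid = fork();
    if (pid == 0) {
        close(fds[0]);
        dup2(fds[1], STDOUT_FILENO);   /* child's stdout goes into the pipe */
        close(fds[1]);
        execlp("ls", "ls", "-lR", "/usr", (char *)NULL);
        _exit(127);
    }

    close(fds[1]);   /* essential, or the read() loop below never sees EOF */

    /* read until EOF *first*; the child may be asleep in write()
       until we make room in the pipe buffer */
    char buf[4096];
    ssize_t n;
    while ((n = read(fds[0], buf, sizeof buf)) > 0)
        write(STDOUT_FILENO, buf, (size_t)n);
    close(fds[0]);

    /* only now is it safe to block waiting for the child to exit */
    waitpid(pid, NULL, 0);
    return 0;
}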

Terminal Access Control issues

I am attempting to write a shell. When a foreground process is run, the forked process pipeline is given its own process group id. The terminal is then given over to this process group id (using tcsetpgrp) and the shell waits for it to terminate before giving itself terminal control once again. This works perfectly fine.
The issue that arises is when I attempt to run a background process. Again, I give all of the processes in the pipeline a single process group id but this time I do not give terminal control to this group. Upon running, the output of a given background command is output to the terminal (before it is finished executing) and the terminal gives the user back the prompt at the same time. What should have happened is that the child process that attempts to write to the terminal should get a SIGTTOU and it should stop, but this clearly doesn't happen. I verified that the forked processes all have the same process group id and that this id is different from the shell's.
Upon exiting the shell (via ctrl-c) and returning to the standard bash shell that ran it, because I did not reap the background process upon shell termination, the background process continues running (which is expected). What is weird, though, is that this process continues writing output to the bash shell even though it is not the foreground process. This leads me to conclude that either this background process is not getting any SIGTTOUs because of a POSIX bug (unlikely), it is handling them (causing the default action of stopping to be ignored), or it is ignoring SIGTTOUs.
Is there a way to, before exec'ing a forked process, ensure that it will stop upon receiving a SIGTTOU (assuming that the exec binary does not change anything)?
SIGTTOU is sent to a background process which tries to write to the terminal only if the termios flag TOSTOP is set for that terminal. By default, it is generally not set, in which case the background process can happily write to the terminal. (The TOSTOP flag does not affect read permissions. If the process tries to read, it will be sent a SIGTTIN.)
So, yes, there is something the foreground process can do: use tcsetattr() to set TOSTOP.
The solution was to make the forked process execute the following before calling exec:
struct termios term;
if (tcgetattr(STDIN_FILENO, &term) < 0)
    perror("tcgetattr");
term.c_lflag |= TOSTOP;   /* OR in TOSTOP; plain assignment would clobber ICANON, ECHO, etc. */
if (tcsetattr(STDIN_FILENO, TCSANOW, &term) < 0)
    perror("tcsetattr");

Running program in background

I've got my program in C, 6 source files, and the aim is to copy those files to any other Linux computer and run the program in the background (it probably needs compiling first; I'm a newbie, so I'm not sure what is needed here). Something like:
user#laptop:~$ program
Program is running in the background. In order to stop Program, type
XXX.
Any tips on this?
Thanks in advance!
Put a daemon(0,0); call in your C program.
Stopping it is a bit trickier. Assuming there is only one copy of the program running, put the program's PID in a file and write another utility (XXX) that reads the PID from the file and kills it.
Important: daemon() forks, so get the program's PID after calling daemon().
But maybe you are enough of a newbie that you just want to execute your program with program & and later kill it.
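A minimal sketch of the daemon()-plus-PID-file approach; the path /tmp/program.pid is an arbitrary choice for illustration:

#define _DEFAULT_SOURCE   /* for daemon() on glibc */
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    /* daemon() forks and detaches from the terminal, so the PID
       must be recorded *after* this call */
    if (daemon(0, 0) < 0) {
        perror("daemon");
        return 1;
    }

    FILE *f = fopen("/tmp/program.pid", "w");   /* path is illustrative */
    if (f) {
        fprintf(f, "%d\n", (int)getpid());
        fclose(f);
    }

    for (;;)          /* the program's real work would go here */
        sleep(60);
}

The XXX stopper then boils down to kill $(cat /tmp/program.pid).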
I completely misunderstood the question. You need shell scripting for this.
For file copying you can use scp, and you can execute commands on the other host with ssh. It should be something like (not tested):
pid=$(ssh user@host "make >/dev/null 2>&1; nohup ./program >/dev/null 2>&1 & echo \$!")
Later you can stop it with
ssh user@host "kill $pid"
First, you should fork().
In the parent, just exit; in the child process, handle (or ignore) the SIGHUP signal.
That way you have a daemon.
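Hand-rolled, that recipe looks roughly like this minimal sketch; daemon(3) does essentially the same work for you:

#include <signal.h>
#include <stdlib.h>
#include <unistd.h>

int main(void)
{
    pid_t pid = fork();
    if (pid < 0)
        exit(1);          /* fork failed */
    if (pid > 0)
        exit(0);          /* parent exits; the shell gets its prompt back */

    /* child: start a new session, so it has no controlling terminal */
    setsid();
    signal(SIGHUP, SIG_IGN);   /* don't die when the old session leader exits */

    for (;;)              /* daemon work goes here */
        sleep(60);
}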
