Current scenario: I launch a process that forks, and after a while it calls abort().
The thing is that both the child and the original process print to the shell, but once the original one dies, the shell "returns" to the prompt.
I'd like to avoid the shell returning to the prompt and keep things as if the process hadn't died, with the child handling the interaction from there.
I'm trying to figure out how to do it but have nothing yet; my first guess goes somewhere around tty handling, but I'm not sure how that works.
I forgot to mention: the shell takeover by the child could be done at fork time, if that makes it easier, via fd replication or some redirection.
I think you'll probably have to go with a third process that handles user interaction, communicating with the "parent" and "child" through pipes.
You can even make it a fairly lightweight wrapper, just passing data back and forth to the parent and terminal until the parent dies, and then switching to passing to/from the child.
To add a little further, as well, I think the fundamental problem you're going to run into is that the execution of a command by the shell just doesn't work that way. The shell is doing the equivalent of calling system() -- it's going to wait for the process it just spawned to die, and once it does, it's going to present the user with a prompt again. It's not really a tty issue, it's how the shell works.
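A minimal sketch of the wrapper idea, simplified so the parent and child share a single pair of pipes (inherited across the fork) instead of switching between two: the shell waits for the wrapper, and the wrapper only gives the prompt back once every writer on the pipes (parent and child) has gone away. The ./worker program is a made-up stand-in for the process that forks and later aborts:

#include <poll.h>
#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    int to_worker[2], from_worker[2];
    if (pipe(to_worker) < 0 || pipe(from_worker) < 0) { perror("pipe"); return 1; }

    pid_t pid = fork();
    if (pid == 0) {                       /* worker: its own fork()ed child will
                                             inherit these descriptors too */
        dup2(to_worker[0], STDIN_FILENO);
        dup2(from_worker[1], STDOUT_FILENO);
        close(to_worker[1]); close(from_worker[0]);
        execl("./worker", "worker", (char *)NULL);   /* hypothetical program */
        _exit(127);
    }
    close(to_worker[0]); close(from_worker[1]);

    /* Relay terminal <-> pipes.  The read side only hits EOF when both the
     * worker and its child have closed their ends, so the shell's prompt
     * stays away until the whole tree is gone. */
    struct pollfd fds[2] = {
        { .fd = STDIN_FILENO,   .events = POLLIN },
        { .fd = from_worker[0], .events = POLLIN },
    };
    char buf[4096];
    for (;;) {
        if (poll(fds, 2, -1) < 0) break;
        if (fds[0].revents & POLLIN) {
            ssize_t n = read(STDIN_FILENO, buf, sizeof buf);
            if (n <= 0) break;
            write(to_worker[1], buf, n);
        }
        if (fds[1].revents & (POLLIN | POLLHUP)) {
            ssize_t n = read(from_worker[0], buf, sizeof buf);
            if (n <= 0) break;            /* all writers have exited */
            write(STDOUT_FILENO, buf, n);
        }
    }
    waitpid(pid, NULL, 0);
    return 0;
}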
bash (and, I believe, other shells) has the wait command:
wait: wait [n]
Wait for the specified process and report its termination status. If
N is not given, all currently active child processes are waited for,
and the return code is zero. N may be a process ID or a job
specification; if a job spec is given, all processes in the job's
pipeline are waited for.
Have you considered inverting the parent/child relationship?
If the order in which the new processes will die is predictable, run the code that will abort in the "child" and the code that will continue in the parent.
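A minimal sketch of that inversion, assuming the program can be split into a part that may abort() and a part that must keep the shell occupied (risky_work and interactive_work are made-up stand-ins):

#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

/* hypothetical stand-ins for the two halves of the original program */
static void risky_work(void)       { fprintf(stderr, "child: aborting\n"); abort(); }
static void interactive_work(void) { printf("parent: still talking to the shell\n"); }

int main(void)
{
    pid_t pid = fork();
    if (pid == 0) {                 /* child: the part that may die */
        risky_work();
        _exit(0);
    }
    interactive_work();             /* parent: the part that keeps running */
    waitpid(pid, NULL, 0);          /* reap the child whenever it exits */
    return 0;
}

Since the shell waits for the parent, the prompt only comes back when the surviving code path decides to exit.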
I am trying to implement a simple shell in C and I am having a hard time implementing job control. Everything online seems complicated enough, and I think some simplicity is always good. So let me ask this: after fork() is called, can I handle the Ctrl-Z signal with just two functions and just with the PID?
I want to call a function, e.g. put_background(pid_t pid), when I hit Ctrl-Z and make the process with pid = pid run in the background, and finally call another function, e.g. put_foreground(pid_t pid), when I write fg and I want the process with pid = pid to go to the foreground again.
So, is this possible? Any help is appreciated; code even more so.
I am trying to implement a simple shell in C and I am having a hard time implementing job control. Everything online seems complicated enough, and I think some simplicity is always good. So let me ask this: after fork() is called, can I handle the Ctrl-Z signal with just two functions and just with the PID?
Note that Ctrl-Z is meaningful primarily to the terminal driver. It causes a SIGTSTP to be sent to the foreground process group of the terminal in which that character was typed -- that is, the process group that has that terminal as its controlling one, and has permission to read from it. By default, this causes the processes in that group to stop, but that's it. You don't need to do anything to achieve that.*
I want to call a function, e.g. put_background(pid_t pid), when I hit Ctrl-Z and make the process with pid = pid run in the background, and finally call another function, e.g. put_foreground(pid_t pid), when I write fg and I want the process with pid = pid to go to the foreground again.
By definition and design, at most one process group has control of a given terminal at any particular time. Thus, to move a foreground job to the background, all you need to do is move a different one to the foreground. That can be the shell itself or some other job under its control. The tcsetpgrp() library function accomplishes this. Unless it's the shell itself, you would also want to send a SIGCONT to that process group in case it was stopped.
You additionally need a mechanism to resume a stopped background job, but that's easy: just send that process group a SIGCONT.
So, is this possible? Any help is appreciated; code even more so.
Well sure, you could write one function for moving a job to the foreground and resuming it, and one for resuming a background job. The only information these functions need about the jobs they operate on is their process group IDs (which are the same as the process IDs of their initial processes).
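A minimal sketch of those two functions, assuming each job was started in its own process group whose ID equals the PID of its first process, and that the shell ignores SIGTTOU so its own tcsetpgrp() calls are not stopped (the names come from the question):

#include <signal.h>
#include <unistd.h>

void put_foreground(pid_t pgid)
{
    tcsetpgrp(STDIN_FILENO, pgid);       /* hand the terminal to the job...    */
    kill(-pgid, SIGCONT);                /* ...and wake it if it was stopped   */
}

void put_background(pid_t pgid)
{
    tcsetpgrp(STDIN_FILENO, getpgrp());  /* the shell takes the terminal back  */
    kill(-pgid, SIGCONT);                /* let the stopped job run on ("bg")  */
}

The shell would typically notice the Ctrl-Z stop via waitpid() with WUNTRACED and then call put_background(), and call put_foreground() when the user types fg.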
But you also need to maintain some bookkeeping of the current active jobs, and you need to take some care about starting new jobs, and you need to monitor current jobs -- especially the foreground job -- so as to be able to orchestrate all of the transitions appropriately.
The GLIBC manual has an entire chapter on job control, including a substantial section specifically on implementing a job-control shell. This would probably be useful to you even if you are not writing for a GLIBC-based system. The actual code needed is not all that complicated, but getting it right requires a good understanding of a fairly wide range of concepts.
*But you do need to ensure that your shell puts commands it launches into process groups different from its own, else a Ctrl-Z will stop it, too.
" Thus, the common method for launching a daemon involves forking once or twice, and making the parent processes die while the child process begins performing its normal function."
I was going through OS concepts and I didn't understand the lines quoted above.
Why is the parent process made to exit (i.e. why does the parent die) in the process of creating a daemon?
Can someone please explain?
Traditionally, a daemon process is defined as a process whose parent is the system's init process and which runs in the background. For instance, if you were to execute some program in your terminal, your shell would create a process (either in the foreground or background) and the program would run with your shell as its parent. This is an example of a non-daemon process because its parent is your shell process.
So how do you produce a process whose parent is the init process? Well, a process whose parent process dies before it (the child) has exited becomes an orphan process. An orphan process will in turn be re-parented to the init process. Voila, the process now meets the definition of a daemon.
Tying this back to your quote, if you were to fork once and then kill the parent, you achieve the desired effect. Likewise, if you fork once and then have that child fork another process, followed by killing the first child, you also achieve the desired effect while keeping the (now grandparent) process alive.
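A minimal sketch of that double fork, with the daemon's real work reduced to a sleep():

#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    pid_t pid = fork();
    if (pid < 0) { perror("fork"); return 1; }

    if (pid == 0) {                /* first child */
        pid_t grandchild = fork();
        if (grandchild < 0) _exit(1);
        if (grandchild > 0) _exit(0);   /* first child dies immediately... */

        /* ...so the grandchild is orphaned and adopted by init: it is now
         * the daemon.  Real daemons would also call setsid(), chdir("/"),
         * reset the umask and close/redirect stdio here. */
        sleep(60);                 /* stand-in for the daemon's real work */
        _exit(0);
    }

    waitpid(pid, NULL, 0);         /* reap the first child; grandparent continues */
    printf("grandparent %d still running; daemon detached\n", (int)getpid());
    return 0;
}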
This is not a requirement, as any background process could be a daemon. Technically, a daemon is a process that runs to perform some general, non-interactive task. In a Unix environment, a daemon is generally set up as a process with certain characteristics: no controlling terminal, a reset umask, a particular working directory, etc. Forking twice is a common way to get the grandchild inherited by the init process and to obtain the former properties, so as to end up with a process fully detached from any user control (except root, of course).
This applies only when a standard user wants to create a daemon. Some other standard daemons are created almost normally (see init, launchd, etc.).
If the parent exits while the daemon continues running, the daemon is orphaned, and the init process typically adopts it (i.e. becomes the parent).
There are some exceptions, but it is normally expected that a daemon process will be descended from the init process (e.g. the init process will launch daemons during system startup). So, if another process launches a daemon and terminates, it achieves the desired effect.
Note that some other actions are also needed, such as dissociating the daemon from any controlling tty.
Other answers have already explained what happens when the parent dies, i.e. the child is adopted by the init process.
But why is that required to make a process a daemon? A daemon is by definition a non-interacting program, i.e. it should not be associated with a terminal. That ensures the daemon continues to work in the background even when a user sends signals via Ctrl-C, hangup, etc. Now, how do you prevent a process from ever attaching to a terminal? Make init its parent by killing the original parent.
init is a special process because:
It's not attached to any terminal.
It's the first process (PID 1) after the OS boots, and that makes it the leader of its session. Note that every UNIX process belongs to a process group, which in turn belongs to a session. The first process in a session becomes the session leader.
In UNIX, only a session leader can attach to (or control) a terminal. As soon as you make init the parent of your process, it joins init's session. Since init is the session leader, your process can never be the leader and hence can never attach to a terminal. That's what we wanted, right?
There are other ways to detach from the terminal, e.g. calling setsid(), but that's not part of this discussion.
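For completeness, a minimal sketch of that setsid() route: fork once so the process is not a process-group leader, have the parent exit, and start a new session, which has no controlling terminal:

#include <stdio.h>
#include <unistd.h>

int main(void)
{
    pid_t pid = fork();
    if (pid < 0) { perror("fork"); return 1; }
    if (pid > 0) return 0;          /* parent exits; child is orphaned */

    if (setsid() < 0) _exit(1);     /* child becomes leader of a brand-new
                                       session with no controlling tty */
    sleep(60);                      /* stand-in for the daemon's real work */
    return 0;
}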
When I call kill() on a process, it returns immediately, because it just sends a signal. I have code that checks some processes (foreign ones, neither written nor modifiable by me) in an infinite loop, and if they exceed some limits (too much RAM eaten, etc.) it kills them (and writes to syslog, etc.).
The problem is that when the processes are heavily swapped, it takes many seconds to kill them, and because of that my process runs the same check against the same processes multiple times, attempts to send the signal many times to the same process, and writes this to syslog each time. (This is not done on purpose; it's just a side effect I am trying to fix.)
I don't care how many times it sends a signal to a process, but I do care how many times it writes to syslog. I could keep a list of PIDs that were already sent the kill signal, but in theory, even if the probability is low, another process could be spawned with the same PID the previously killed one had, and it might also need to be killed; in that case the log entry would be missing.
I don't know if there is a unique identifier for a process, but I doubt it. How could I either kill a process synchronously, or keep track of which processes already got the signal so they don't need to be logged again?
Even if you could do a "synchronous kill", you still have the race condition where you could kill the wrong process. It can happen whenever the process you want to kill exits by its own volition, or by third-party action, after you see it but before you kill it. During this interval, the PID could be assigned to a new process. There is basically no solution to this problem. PIDs are inherently a local resource that belongs to the parent of the identified process; use of the PID by any other process is a race condition.
If you have more control over the system (for example, controlling the parent of the processes you want to kill) then there may be special-case solutions. There might also be (Linux-specific) solutions based on using some mechanisms in /proc to avoid the race, though I'm not aware of any.
One other workaround may be to use ptrace on the target process as if you're going to debug it. This allows you to partially "steal" the parent role, avoiding invalidation of the PID while you're still using it and allowing you to get notification when the process terminates. You'd do something like the following (a sketch in C follows the steps):
Check the process info (e.g. from /proc) to determine that you want to kill it.
ptrace it, temporarily stopping it.
Re-check the process info to make sure you got the process you wanted to kill.
Resume the traced process.
kill it.
Wait (via waitpid) for notification that the process exited.
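A rough sketch of those steps in C (Linux-specific; the /proc checks are reduced to comments, and the PID is taken from the command line):

#include <signal.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/ptrace.h>
#include <sys/types.h>
#include <sys/wait.h>

int main(int argc, char **argv)
{
    if (argc != 2) { fprintf(stderr, "usage: %s pid\n", argv[0]); return 2; }
    pid_t pid = (pid_t)atoi(argv[1]);
    int status;

    /* 1. check /proc/<pid> here and decide the process should die */

    /* 2. attach; this stops the target and gives us a quasi-parent role,
     *    so the PID cannot be recycled while we hold it */
    if (ptrace(PTRACE_ATTACH, pid, NULL, NULL) == -1) { perror("PTRACE_ATTACH"); return 1; }
    waitpid(pid, &status, 0);                /* wait for the attach stop */

    /* 3. re-check /proc/<pid>: it is still the same process */

    ptrace(PTRACE_CONT, pid, NULL, NULL);    /* 4. resume it */
    kill(pid, SIGKILL);                      /* 5. kill it */

    waitpid(pid, &status, 0);                /* 6. as tracer we get the exit notification */
    if (WIFSIGNALED(status))
        printf("%d terminated by signal %d\n", (int)pid, WTERMSIG(status));
    return 0;
}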
This will make the script wait for process termination.
kill $PID
while kill -0 $PID 2>/dev/null
do
    sleep 1
done
kill -0 [pid] tests the existence of a process
The following solution works for most processes that aren't debuggers or processes being debugged in a debugger.
Use ptrace with argument PTRACE_ATTACH to attach to the process. This stops the process you want to kill. At this point, you should probably verify that you've attached to the right process.
Kill the target with SIGKILL. It's now gone.
I can't remember whether the process is now a zombie that you need to reap or whether you need to PTRACE_CONT it first. In either case, you'll eventually have to call waitpid to reap it, at which point you know it's dead.
If you are writing this in C, you are sending the signal with the kill system call. Rather than repeatedly sending the terminating signal, just send it once and then loop (or somehow periodically check) with kill(pid, 0). A signal value of zero just tells you whether the process is still alive, so you can act appropriately. When it dies, kill will fail with errno set to ESRCH.
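A minimal sketch of that approach, with a made-up kill_and_wait() helper that sends SIGTERM once and then polls with signal 0 until the process is gone:

#include <errno.h>
#include <signal.h>
#include <unistd.h>

/* Send the terminating signal once, then poll until the PID disappears. */
void kill_and_wait(pid_t pid)
{
    if (kill(pid, SIGTERM) == -1 && errno == ESRCH)
        return;                      /* already gone */

    while (kill(pid, 0) == 0)        /* signal 0: existence check only */
        usleep(100 * 1000);          /* still alive, poll again in 100 ms */

    /* kill() now fails with errno == ESRCH: the process has terminated
     * (the PID-recycling race discussed above still applies, though). */
}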
When you spawn these processes yourself, the classical waitpid(2) family can be used.
When not used anywhere else, you can move the processes to be killed into their own cgroup; notifiers can be set on these cgroups which get triggered when a process exits.
To find out whether a process has been killed, you can chdir(2) into /proc/<pid> or open(2) this directory. After process termination, the status files there can no longer be accessed. This method is racy (between your check and the action, the process can terminate and a new one with the same PID can be spawned).
I have a fork occurring in a loop, and above the fork I prompt for a user's input. In my forked process, there's also some printing. Because there's no guarantee to the order the processes will run in, I often (or always) get lines from the child process printing between my prompt to the user and the place where they can enter information.
I.e., I get something like this:
Enter info: <OUTPUT FROM CHILD>
_
(where the _ indicates that the user is free to enter an input.)
Since I'm trying to allow my parent process to fork many child processes (each based on a piece of information given by the user) that run simultaneously, I can't wait for the child to end before letting the parent continue. Is there a way to make the parent wait for part of the child to complete before moving on?
A lot depends on what you're really trying to do, but you can't use waitpid() or wait() to wait for part of a process to finish. The wait family of functions wait on moribund processes, or processes that have been stopped due to a signal (SIGSTOP, SIGTTIN, SIGTTOU, etc).
Some questions:
Should the output from the child processes be sent to the screen, which leads to this confusion, or should it be sent to a file?
Or, should the program have a pipe from each child so that it can read the output from the child and display it on an appropriate portion of the screen when it is convenient (as sketched below)?
Or, in a windowing environment, should the children's messages be sent to a different window (like the console window)?
Or should the children write to the syslog daemon?
Or should the children be made to hang on a SIGTTOU signal?
A lot depends on the purpose of the messages, and the importance of immediate display of the messages.
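If the pipe option fits, here is a minimal sketch: each child's stdout is redirected into a pipe, the parent prompts without interference, and only afterwards drains and displays what the children wrote (run_child() is a made-up stand-in for the real child work):

#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

static void run_child(int id)                 /* hypothetical child work */
{
    printf("child %d: some output\n", id);
}

int main(void)
{
    enum { NCHILD = 3 };
    int pipes[NCHILD];

    for (int i = 0; i < NCHILD; i++) {
        int fd[2];
        if (pipe(fd) < 0) return 1;
        pid_t pid = fork();
        if (pid == 0) {
            dup2(fd[1], STDOUT_FILENO);       /* child's stdout goes to the pipe */
            close(fd[0]); close(fd[1]);
            run_child(i);
            exit(0);                          /* exit() flushes the child's stdio */
        }
        close(fd[1]);
        pipes[i] = fd[0];                     /* parent keeps the read end */
    }

    char info[64];
    printf("Enter info: ");                   /* no child output interleaves here */
    fflush(stdout);
    if (!fgets(info, sizeof info, stdin)) return 1;

    for (int i = 0; i < NCHILD; i++) {        /* now display each child's output */
        char buf[256];
        ssize_t n;
        while ((n = read(pipes[i], buf, sizeof buf)) > 0)
            fwrite(buf, 1, (size_t)n, stdout);
        close(pipes[i]);
        wait(NULL);
    }
    return 0;
}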
The other answers are definitely more general, and the proper way to solve this problem would involve some kind of pipe, but my case was actually very simple, and just needed the parent to wait for a while, so I added a usleep() line, to make the parent wait a few milliseconds for the child to finish printing. It's definitely not perfect, but it worked.
In C, is it possible to keep a fork()ed process alive indefinitely even after the parent exits?
The idea of what I am trying to do is: the parent process forks a child and then exits; the child keeps running in the background until another process sends it a kill signal.
Yes, it is definitely possible to keep the child alive. The other responders are also correct; this is how a "daemon" or background process runs in a Linux environment.
Some call this the "fork off and die" approach. Here's a link describing how to do it:
http://wiki.linuxquestions.org/wiki/Fork_off_and_die
Note that more than just fork()-ing is done. File descriptors are closed to keep the background process from tying up system resources, etc.
Kerrek is right, this is exactly how every daemon is implemented. So, your idea is perfect.
There is a daemon() library function which is very easy to use for that.
The daemon() function call is not without limitations if you want to write a well-behaved daemon. See On Starting Daemons for an explanation.
Briefly: a good daemon should only go into the background when it is ready to field requests, but it should do its setup under its own PID and print startup errors to stderr before detaching.
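A minimal sketch of using daemon() (a BSD/glibc extension declared in <unistd.h>), following the advice above to do the setup and error reporting before detaching; the program name "mydaemon" is made up:

#define _DEFAULT_SOURCE              /* for daemon() with glibc */
#include <stdio.h>
#include <syslog.h>
#include <unistd.h>

int main(void)
{
    /* Do setup here and report failures to stderr while it is still the
     * terminal, as recommended above. */

    if (daemon(0, 0) == -1) {        /* 0,0: chdir to "/", stdio to /dev/null */
        perror("daemon");
        return 1;
    }

    /* From here on we run in the background with no terminal, so messages
     * go to syslog instead of stdout/stderr. */
    openlog("mydaemon", LOG_PID, LOG_DAEMON);
    syslog(LOG_INFO, "started, pid %d", (int)getpid());
    sleep(60);                       /* stand-in for the daemon's real work */
    closelog();
    return 0;
}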