I'm trying to write a mock-shell in c on linux, and got stuck on this problem:
I need to run some processes in the background, and some processes in the foreground.
To prevent the foreground processes from becoming zombies, I can use wait(), but how do I prevent the background processes from becoming zombies?
You cannot prevent any process from becoming a zombie, but you can limit the time that it remains one. A process is a zombie from the time it terminates to the time its parent collects it via a call to wait() or waitpid() or another function serving that purpose. That time can be made very short indeed, for instance if the parent process is already waiting when the child terminates, but termination and subsequent collection are not synchronous.
The distinction between background and foreground processes is primarily about control of a terminal; it has little to do with a parent shell managing child processes. You collect child processes belonging to background jobs via wait(), etc., exactly the same way you collect child processes belonging to foreground jobs. You can collect already-terminated children without waiting for unterminated ones by using waitpid() with the WNOHANG flag, as @Someprogrammerdude already described. It remains to insert such waits at an appropriate time, and it seems common for interactive shells to schedule that around reading commands from the user.
You can poll for them using waitpid() with the WNOHANG flag. Or you could add a SIGCHLD handler, which will be invoked each time a child process ends (or has another status change).
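For example, a minimal sketch of the polling approach (reap_background_jobs() and the main-loop comment are just placeholders for your own shell logic):

#include <stdio.h>
#include <sys/types.h>
#include <sys/wait.h>

/* Reap any background children that have already terminated,
 * without blocking if none have. */
static void reap_background_jobs(void)
{
    int status;
    pid_t pid;

    /* WNOHANG: waitpid() returns 0 right away if no child has exited yet. */
    while ((pid = waitpid(-1, &status, WNOHANG)) > 0)
        printf("[done] pid %d, status %d\n", (int)pid, WEXITSTATUS(status));
}

/* Typical placement in an interactive shell: call it once per prompt,
 * before reading the next command from the user. */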
Related
Well, I'm learning about processes using the C language, and I have seen that when you call the exit function a process is terminated, and without waiting for it, it will become a zombie process. My question is: since the first process created when executing the program is itself a process, is there an OS routine that waits for it after an exit() call, preventing it from becoming a zombie process? I'm curious about it.
For Unix systems at least (and I expect Windows is similar), when the system boots, it creates one special first process. Every process after that is created by some existing process.
When you log into a windowed desktop interface, there is some desktop manager process (that has been created by the first process or one of its descendants) managing windows. When you start a program by clicking on it, that desktop manager or one of its children (maybe some file manager software) creates a process to run the program. When you start a program by executing a command in a terminal window, there is a command line shell process that is interpreting the things you type, and it creates a process to run the program.
So, in all cases, your user program has a parent process, either a command-line shell or some desktop software.
If a child process creates another child of its own (even as its first instruction), then it, being that child's parent, also has to wait for it, or that grandchild becomes a zombie.
Basically, every process becomes a zombie until it is removed from the process table. The OS (via the init process) adopts and wait()s for orphans (processes whose parent has exited), and it does that routinely, so normally you won't have orphaned zombies sticking around for very long.
On Linux, the topmost (ancestor) process is init. This is the only process that has no parent. Every other process (without exception) does have a parent and hence is a child of another process.
See:
init
Section NOTES on wait
A child that terminates, but has not been waited for becomes a "zombie". The kernel maintains a minimal set of information about the zombie process (PID, termination status, resource usage information) in order to allow the parent to later perform a wait to obtain information about the child. As long as a zombie is not removed from the system via a wait, it will consume a slot in the kernel process table, and if this table fills, it will not be possible to create further processes. If a parent process terminates, then its "zombie" children (if any) are adopted by init(1), ... init(1) automatically performs a wait to remove the zombies.
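To see this in practice, here is a minimal sketch (assuming Linux) where the parent deliberately does not wait, so for about 30 seconds the child shows up as a zombie (state Z / <defunct> in ps) until the parent exits and init reaps it:

#include <stdlib.h>
#include <sys/types.h>
#include <unistd.h>

int main(void)
{
    pid_t pid = fork();

    if (pid == 0)
        exit(0);    /* child: terminates immediately, nobody waits for it yet */

    /* Parent: sleeps instead of calling wait(); during this time the
     * child is a zombie. When the parent exits without ever waiting,
     * the zombie is adopted by init, which reaps it. */
    sleep(30);
    return 0;
}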
I am just going to post pseudocode, but my question is: I have a loop like this:
for (i = 0; i < n; i++) {
    createfork();
    if (child)
        /*
         * Exit so I can control the exact amount of forks
         * without children creating more children
         */
        exit(0);
}

void createfork() {
    fork();
    // execute other methods
}
Does my fork create a process, have it do what it is supposed to do and exit, and then create another process and repeat? And if so, what are some ways around this to get the processes running concurrently?
Your pseudocode is correct as written and does not need to be modified.
The processes are already executing in parallel, all six of them or however many you spawn. As written, the parent process does not wait for the children to finish before spawning more children. It calls fork(), checks if (child) (which is skipped), then immediately proceeds to the next for loop iteration and forks again.
Notably, there's no wait() call. If the parent were to call wait() or waitpid() inside the loop to wait for each child to finish, that would introduce the serialization you're trying to avoid. But there is no such call, so you're good.
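As a concrete illustration, here is a small self-contained sketch (not your exact code, since that is pseudocode) that forks n children in a loop; all of them run concurrently with the parent, which only reaps them after the loop:

#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    int n = 6;

    for (int i = 0; i < n; i++) {
        pid_t pid = fork();

        if (pid == 0) {
            /* Child: do the real work, then exit so it never reaches
             * the next loop iteration and forks more children. */
            printf("child %d (pid %d) working\n", i, (int)getpid());
            exit(0);
        }
        /* Parent: falls straight through and forks the next child,
         * so the children run concurrently. */
    }

    /* Only after all children have been spawned does the parent reap them. */
    while (wait(NULL) > 0)
        ;
    return 0;
}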
When a process successfully performs a POSIX fork(), that process and the new child process are initially both eligible to run. In that sense, they will run concurrently until one or the other blocks. Whether there will be any periods of time when both are executing machine instructions (on different processing units) depends at least on details of hardware capabilities, OS scheduling, the work each process is performing, and what other processes there are in the system and what they are doing.
The parent certainly does not, in general, automatically wait for the child to terminate before it proceeds with its own work (there is a family of functions to make it wait when you want that), nor does the child process automatically wait for any kind of signal from the parent. If the next thing the parent does is fork another child, then that will under many circumstances result in the parent running concurrently with both (all) children, in the sense described above.
I cannot speak to specifics of the behavior of your pseudocode, because it's pseudocode.
I have just had a lecture that sums up reaping as:
Reaping
Performed by parent on terminated child (using wait or waitpid)
Parent is given exit status information
Kernel then deletes zombie child process
So I understand that reaping is done by calling wait or waitpid from the parent process, after which the kernel deletes the zombie process. If this actually is the case, that reaping is done only when calling wait or waitpid, why do the child processes actually go away after returning from their entry function? That does indeed seem as if the child processes have been reaped and thus no resources are wasted, even though the parent process may not be waiting.
So is "reaping" only possible by calling wait or waitpid? If processes are "reaped" as long as they return and exit from their entry function (which I assume all processes do), what is the point of talking about "reaping" as if it were something special?
The child process does not fully "go away" when it exits. It ceases to exist as a running process, and most/all of its resources (memory, open files, etc.) are released, but it still remains in the process table. It remains in the process table because that's where its exit status is stored, so that the parent can retrieve it by calling one of the wait variants. If the parent fails to call wait, the process table entry sticks around — and that's what makes it a "zombie".
I said that most/all of its resources are released, but the one resource that's definitely still consumed is that process table slot.
As long as the (dead) child's parent exists, the kernel doesn't know that the parent isn't going to call wait eventually, so the process table slot has to stay there, so that the eventual call to wait (if there is one) can return the proper exit status.
If the parent eventually exits (without ever calling wait), the child will be inherited by the grandparent, which is usually a "master" process like the shell, or init, that does routinely call wait and that will finally "reap" the poor young zombie.
So, yes, it really is true that the only way for the parent to properly "reap" the child is, just as was said in your lecture, to call one of the wait functions. (Or to exit, but that's not an option if the parent is long-running.)
Footnote: I said "the child will be inherited by the grandparent", but I think I was wrong there. Under Unix and Linux, orphaned processes are generally inherited by pid 1, aka init.
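A minimal sketch of the normal case described above, where the parent retrieves the stored exit status via one of the wait functions, at which point the kernel can finally drop the child's process table entry:

#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    pid_t pid = fork();
    int status;

    if (pid == 0)
        exit(42);               /* child: its exit status is kept in the process table */

    waitpid(pid, &status, 0);   /* parent reaps the zombie; the slot is now freed */

    if (WIFEXITED(status))
        printf("child exited with status %d\n", WEXITSTATUS(status));
    return 0;
}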
The purpose of the wait*() call is to allow the child process to report a status back to the parent process. When the child process exits, the operating system holds that status data in a little data structure until the parent reads it. Reaping in that sense is cleaning out that little data structure.
If the parent does not care about waiting for status from the child, the code could be written in a way to allow the parent to ignore the status, and so the reaping occurs semi-automatically. One way is to ignore the SIGCHLD signal.
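A minimal sketch of that first approach: on POSIX systems, explicitly setting SIGCHLD's disposition to SIG_IGN tells the kernel not to turn exiting children into zombies at all (their status is simply discarded):

#include <signal.h>
#include <stdlib.h>
#include <unistd.h>

int main(void)
{
    /* Children are reaped automatically and never become zombies;
     * their exit status is discarded, so wait() is no longer useful. */
    signal(SIGCHLD, SIG_IGN);

    if (fork() == 0)
        exit(0);    /* child exits; no zombie is left behind */

    sleep(1);       /* parent never calls wait() */
    return 0;
}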
Another way is to perform a double-fork to create a grandchild process instead. When doing this, the "parent" does a blocking wait() after a call to fork(). Then, the child performs another fork() to create the grandchild and then immediately exits, causing the parent to unblock. The grandchild now does the real work, and is automatically reaped by the init process.
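A sketch of that double-fork idea (do_real_work() is just a placeholder for whatever the grandchild is supposed to do):

#include <stdlib.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

static void do_real_work(void)
{
    /* ... the long-running work goes here ... */
}

static void spawn_detached(void)
{
    pid_t pid = fork();

    if (pid == 0) {
        /* Intermediate child: fork the grandchild, then exit at once. */
        if (fork() == 0) {
            /* Grandchild: now an orphan, so it is adopted and
             * eventually reaped by init. */
            do_real_work();
            _exit(0);
        }
        _exit(0);
    }

    /* Parent: the intermediate child exits immediately, so this
     * blocking wait returns right away and leaves no zombie. */
    waitpid(pid, NULL, 0);
}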
I was trying to write a basic multiprocessing TCP server, which forks a process for every new accept().
I don't need the parent process to wait on the child processes. I have come across two solutions: forking twice and daemonising.
What's the difference between the two?
Which is more suitable in this scenario?
What are the factors that are to be kept in mind for choosing one amongst these?
There is a subtle difference.
Forking twice: the intermediate child process can't become a zombie, provided it has exited and has been waited for by the parent. The grandchild can't become a zombie either, because its parent (the intermediate child process) has exited, so the grandchild is an orphan. The orphan (grandchild) gets inherited by init, and if it exits now, it is the system's responsibility to clean it up. In this way, the parent process is relieved of the responsibility of waiting to collect the exit status of the child and can be busy doing other work. This also lets the child run for a long time, so a short-lived parent need not wait that long.
Daemon: this is for programs wishing to detach themselves from the controlling terminal and run in the background as system daemons. A daemon has no controlling terminal.
The decision between the two approaches depends on the requirement/scenario at hand.
You do need the parent process to (eventually) wait() for each of its child processes, else the children will hang around as zombies until the parent exits. This is a form of resource leak.
Forking twice, with the intermediate process exiting immediately after forking, allows the original process to collect the child immediately (via wait()), and makes the grandchild process an orphan, which the system has responsibility for cleaning up. This is one way to avoid accumulating zombie processes. The grandchild remains in the same process group (and thus the same session) as the original process.
Daemonizing serves a somewhat different purpose. It puts the resulting (child) process in a new session (and new process group) with no controlling terminal. The same effect can be achieved by forking once, with the parent immediately calling _exit() and the child calling setsid().
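A minimal sketch of that single-fork-plus-setsid() variant (a complete daemonization routine would typically also change directory, reset the umask, and redirect the standard streams):

#include <stdlib.h>
#include <sys/types.h>
#include <unistd.h>

/* Detach from the current session: the parent exits immediately and the
 * child becomes the leader of a new session with no controlling terminal. */
static void daemonize(void)
{
    pid_t pid = fork();

    if (pid < 0)
        exit(EXIT_FAILURE);
    if (pid > 0)
        _exit(0);           /* parent: exit immediately */

    if (setsid() < 0)       /* child: start a new session and process group */
        exit(EXIT_FAILURE);
}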
A system service daemonizes to escape the session in which it was launched, so as not to be shut down when that session ends. This has little to do with multiprocessing, but a lot to do with process management. A process double-forks to avoid process management duties for the (grand)child processes; this has both multiprocessing and process management aspects.
Note, too, that double-forking doesn't just pass off process-management responsibility, it also gives up process-management ability. Whether that's a good trade-off is situation-dependent.
When I call kill() on a process, it returns immediately, because it just sends a signal. I have code where I am checking some (foreign, not written nor modifiable by me) processes in an infinite loop, and if they exceed some limits (too much RAM eaten, etc.) it kills them (and writes to syslog, etc.).
The problem is that when the processes are heavily swapped out, it takes many seconds to kill them, and because of that, my process runs the same check against the same processes multiple times, attempts to send the signal to the same process many times, and writes this to syslog as well. (This is not done on purpose; it's just a side effect which I am trying to fix.)
I don't care how many times it sends a signal to a process, but I do care how many times it writes to syslog. I could keep a list of PIDs that were already sent the kill signal, but in theory, even if the probability is low, another process could be spawned with the same PID as a previously killed one, which might also be supposed to be killed, and in that case the log entry would be missing.
I don't know if there is a unique identifier for a process, but I doubt it. How could I either kill a process synchronously, or keep track of processes that already got the signal and don't need to be logged again?
Even if you could do a "synchronous kill", you still have the race condition where you could kill the wrong process. It can happen whenever the process you want to kill exits of its own volition, or by third-party action, after you see it but before you kill it. During this interval, the PID could be assigned to a new process. There is basically no solution to this problem. PIDs are inherently a local resource that belongs to the parent of the identified process; use of the PID by any other process is a race condition.
If you have more control over the system (for example, controlling the parent of the processes you want to kill) then there may be special-case solutions. There might also be (Linux-specific) solutions based on using some mechanisms in /proc to avoid the race, though I'm not aware of any.
One other workaround may be to use ptrace on the target process as if you're going to debug it. This allows you to partially "steal" the parent role, avoiding invalidation of the PID while you're still using it and allowing you to get notification when the process terminates. You'd do something like:
Check the process info (e.g. from /proc) to determine that you want to kill it.
ptrace it, temporarily stopping it.
Re-check the process info to make sure you got the process you wanted to kill.
Resume the traced process.
kill it.
Wait (via waitpid) for notification that the process exited.
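A sketch of those steps (check_process() is a hypothetical stand-in for whatever /proc inspection you use to confirm the target's identity):

#include <signal.h>
#include <sys/ptrace.h>
#include <sys/types.h>
#include <sys/wait.h>

/* Hypothetical helper: re-inspect /proc/<pid> and return nonzero only
 * if this really is the process we decided to kill. */
static int check_process(pid_t pid)
{
    (void)pid;
    return 1;
}

static int kill_synchronously(pid_t pid)
{
    int status;

    if (ptrace(PTRACE_ATTACH, pid, NULL, NULL) == -1)
        return -1;
    if (waitpid(pid, &status, 0) == -1)         /* wait for the attach stop */
        return -1;

    if (!check_process(pid)) {
        ptrace(PTRACE_DETACH, pid, NULL, NULL); /* wrong process: let it go */
        return 0;
    }

    kill(pid, SIGKILL);                         /* deliver the kill */
    ptrace(PTRACE_CONT, pid, NULL, NULL);       /* resume so it can die */

    return waitpid(pid, &status, 0) == -1 ? -1 : 1;  /* notification of exit */
}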
This will make the script wait for process termination.
kill $PID
while kill -0 $PID 2>/dev/null
do
    sleep 1
done
kill -0 [pid] tests the existence of a process
The following solution works for most processes that aren't debuggers or processes being debugged in a debugger.
Use ptrace with argument PTRACE_ATTACH to attach to the process. This stops the process you want to kill. At this point, you should probably verify that you've attached to the right process.
Kill the target with SIGKILL. It's now gone.
I can't remember whether the process is now a zombie that you need to reap or whether you need to PTRACE_CONT it first. In either case, you'll eventually have to call waitpid to reap it, at which point you know it's dead.
If you are writing this in C, you are sending the signal with the kill system call. Rather than repeatedly sending the terminating signal, just send it once and then loop (or otherwise periodically check) with kill(pid, 0). The zero signal value just tells you whether the process is still alive, and you can act appropriately. When it dies, kill() will fail and set errno to ESRCH.
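A minimal polling sketch of that idea, assuming the target is a foreign process whose own parent (or init) will reap it; the PID-reuse race discussed above still applies:

#include <errno.h>
#include <signal.h>
#include <stdio.h>
#include <sys/types.h>
#include <unistd.h>

/* Send the terminating signal once, then poll with signal 0 until the
 * process is gone, so it is only ever logged a single time. */
static void kill_and_log_once(pid_t pid)
{
    kill(pid, SIGTERM);             /* send the terminating signal once */

    while (kill(pid, 0) == 0)       /* signal 0: existence check only */
        sleep(1);

    if (errno == ESRCH)             /* kill() failed: the process is gone */
        printf("killed pid %d\n", (int)pid);    /* log it (e.g. via syslog) once */
}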
When you spawn these processes yourself, the classical waitpid(2) family can be used.
If cgroups are not used for anything else, you can move the processes that are going to be killed into their own cgroup; there can be notifiers on these cgroups which get triggered when a process exits.
To find out whether a process has been killed, you can chdir(2) into /proc/<pid> or open(2) this directory. After the process has terminated and been reaped, the status files there cannot be accessed anymore. This method is racy (between your check and the action, the process can terminate and a new one with the same PID can be spawned).