Child process got killed while parent was blocked on read - C

What happens when the child process gets killed while the parent is blocked on read() from a pipe? How should I handle this scenario in the parent process?
For clarification, the parent process has two threads. Let's say thread1 was reading from the pipe when thread2 killed the child.
Will read() return -1?
I'd appreciate any help here.

Pipe behavior has nothing to do with process relationships. The same rules apply regardless of whether the reader is the parent, child, sibling, or some other distant relation of the writer. Or even if the reader and writer are the same process.
The short answer is that death of a writing process is just an EOF from the reader's point of view, not an error, and this doesn't depend on whether the writing process voluntarily called _exit() or was killed by a signal.
The whole cause-and-effect chain goes like this:
1. Process X dies -> all of process X's file descriptors are closed.
2. One of process X's file descriptors was the write end of a pipe.
3. A pipe write file descriptor is closed -> was it the last one?
3a. If there are other write file descriptors on the same pipe (e.g. inherited by fork and still open in another process), nothing happens. Stop.
3b. If there are no more write file descriptors for this pipe -> the pipe has hit EOF.
4. Pipe hits EOF -> readers notice.
4a. All read file descriptors for the pipe become readable, waking up any process that was blocking on select or poll or read or another similar syscall.
4b. If there is any leftover data in the pipe buffer (written before the last write file descriptor was closed), that data is returned to the reader(s).
4c. Repeat 4b until the pipe buffer is empty.
4d. Finally, read() returns 0, indicating EOF.
The exit status of a child process is returned to the parent by the wait family of syscalls, and you have to check that if you want to know when your child processes have been killed by a signal.

Related

read() hangs on zombie process

I have a while loop that reads data from a child process using blocking I/O by redirecting stdout of the child process to the parent process. Normally, as soon as the child process exits, a blocking read() in this case will return since the pipe that is read from is closed by the child process.
Now I have a case where the read() call does not return even though the child process has finished. The child process ends up in a zombie state, since the operating system is waiting for my code to reap it, but instead my code is blocking on the read() call.
The child process itself does not have any child processes running at the time of the hang, and I do not see any file descriptors listed when looking in /proc/<child process PID>/fd. The child process did however fork two daemon processes, whose purpose seems to be to monitor the child process (the child process is a proprietary application I do not have any control over, so it is hard to say for sure).
When run from a terminal, the child process I try to read() from exits automatically, and in turn the daemon processes it forked terminate as well.
Linux version is 4.19.2.
What could be the reason of read() not returning in this case?
Follow-up: How to avoid read() from hanging in the following situation?
The child process did however fork two daemon processes ... What could be the reason of read() not returning in this case?
The forked daemon processes still have the write end of the pipe open when the child terminates, hence the read call never returns 0.
Those daemon processes should close all inherited file descriptors and open their own files for logging.
A possible reason (the most common) for read(2) blocking on a pipe with a dead child is that the parent has not closed the writing side of the pipe, so there's still an open (for writing) descriptor for that pipe. Close the writing side of the pipe in the parent process before reading from it. The child is dead (you said zombie), so it cannot be the process holding the writing side open. And don't forget to wait(2) for the child in the parent, or you'll get a system full of zombies :)
Remember, you have to do two closes in your code:
One in the parent process, to close the writing side of the pipe, leaving the parent process with only a reading descriptor.
One in the child process (just before exec(2)ing) closing the reading side of the pipe, leaving the child process only with a writing descriptor.
In case you want to use the pipe to send information to the child, swap reading and writing in the above two points.

Why does the parent process have to close all file descriptors of a pipe before calling wait()?

I do not understand why the parent process needs to close both file descriptors of a pipe before calling wait().
I have a C program which does:
Parent creates child_a, which executes ls -l using execvp and writes to the pipe (after closing the read end of the pipe).
Parent creates another child (without closing any pipe file descriptors), called child_b, which executes wc, reading from the pipe (after closing the write end of the pipe).
Parent waits for both children to complete by calling wait() twice.
I noticed that the program blocks if the parent does not close both file descriptors of the pipe before calling the wait() syscall. After reading a few questions already posted online, it looks like this is the general rule and needs to be done. But I could not find the reason why.
Why does wait() not return if the parent does not close the file descriptors of the pipe?
I was thinking that, in the worst case, if the parent does not close the file descriptor of pipe, then the only consequence would be that the pipe would keep existing (which is a waste of resource). But I never thought this would block the execution of child process (as can be seen because wait() does not return).
Also remember, parent is not using the pipe at all. It is child_a writing in the pipe, and child_b reading from the pipe.
If the parent process doesn't close the write ends of the pipes, the child processes never get EOF (zero bytes read) because there's a process that might (but won't) write to the pipe. The child process must also close the write end of the pipe for the same reason — if it doesn't, there's a process (itself) that might (but won't) write to the pipe, so the read won't return EOF.
If you duplicate one end of a pipe to standard output or standard error, you should close both ends of that pipe. It is a common mistake not to have enough calls to close() in multiprocess code using pipes. Occasionally, you get away with being sloppy, but the details vary by case and usually you don't.

Will a process writing to a pipe block if the pipe is full?

I'm currently diving into the Win32 API and writing myself a wrapper class for CreateProcess and CreatePipe. I was just wondering what will happen if a process that I opened writes too much output for the pipe buffer to hold. Will the process wait until I read from the other end of the pipe? The Remark of the CreatePipe function suggests so:
When a process uses WriteFile to write to an anonymous pipe, the write operation is not completed until all bytes are written. If the pipe buffer is full before all bytes are written, WriteFile does not return until another process or thread uses ReadFile to make more buffer space available.
https://msdn.microsoft.com/en-us/library/windows/desktop/aa365152%28v=vs.85%29.aspx
Let's assume I open a process with CreateProcess, then use WaitForSingleObject to wait until the process exits. Will the process ever exit if it exceeds the buffer size of its standard output pipe?
WaitForSingleObject on a process with redirected output is indeed a deadlock. You need to keep the output pipe drained in order to let the child process run to completion.
Generally you would use overlapped I/O on the pipe and then WaitForMultipleObjects on the handle pair1 (process handle, pipe read event handle) in a loop until the process handle becomes signaled.
Raymond Chen wrote about the scenario when the input is also piped:
Be careful when redirecting both a process's stdin and stdout to pipes, for you can easily deadlock
1 As Hans commented, there can be more than one output stream. stdout, stderr are typical, even more are possible through handle inheritance. Drain all the pipes coming out of the process.

Why should you close a pipe in linux?

When using a pipe for process-process communication, what is the purpose of closing one end of the pipe?
For example: How to send a simple string between two programs using pipes?
Notice that one side of the pipe is closed in the child and parent processes. Why is this required?
If you connect two processes - parent and child - using a pipe, you create the pipe before the fork.
After the fork, both processes have access to both ends of the pipe. This is not desirable.
The reader learns that the writer has finished when it notices an EOF condition. This can only happen once all writing descriptors are closed, so the reader should close its own writing FD as soon as possible.
The writer should close its reading FD simply to avoid holding too many FDs open and hitting any limit on open FDs. Besides, if the then-only reader dies, the writer gets notified by a SIGPIPE signal or at least an EPIPE error (depending on how signals are handled). If there are several readers, the writer cannot detect that "the real one" went away; it keeps writing and gets stuck when the writing FD blocks, waiting for the "unused" reader to read something.
So here in detail what happens:
parent process calls pipe() and gets 2 file descriptors: let's call them rd and wr.
parent process calls fork(). Now both processes have a rd and a wr.
Suppose the child process is supposed to be the reader.
Then
the parent should close its reading end (for not wasting FDs and for proper detection of dying reader) and
the child must close its writing end (in order to be possible to detect the EOF condition).
The number of file descriptors that can be open at a given time is limited. If you keep opening pipes and not closing them pretty soon you'll run out of FDs and can't open anything anymore: not pipes, not files, not sockets, ...
Another reason why it can be important to close the pipe is when the closing itself has a meaning to the application. For example, a common use of pipes is to send the errno from a child process to the parent when using fork and exec to launch an external program:
The parent creates the pipe, calls fork to create a child process, closes its writing end, and tries to read from the pipe.
The child process attempts to use exec to run a different program:
If exec fails, for example because the program does not exist, the child writes errno to the pipe, and the parent reads it and knows what went wrong, and can tell the user.
If exec is successful, the pipe is closed without anything being written (for this to happen at exec time, the write end must be marked close-on-exec with FD_CLOEXEC). The read in the parent then returns 0, indicating the pipe was closed, and the parent knows the program was started successfully.
If the parent did not close its writing end of the pipe before trying to read from the pipe this would not work because the read function would never return when exec is successful.
Closing unused pipe file descriptors is more than a matter of ensuring that a process doesn't exhaust its limited set of file descriptors: it is essential to the correct use of pipes. We now consider why the unused file descriptors for both the read and write ends of the pipe must be closed.
The process reading from the pipe closes its write descriptor for the pipe, so that, when the other process completes its output and closes its write descriptor, the reader sees end-of-file (once it has read any outstanding data in the pipe).
If the reading process doesn't close the write end of the pipe, then after the other process closes its write descriptor, the reader won't see end-of-file, even after it has read all data from the pipe. Instead, a read() would block waiting for data, because the kernel knows that there is still at least one write descriptor open for the pipe. That this descriptor is held open by the reading process itself is irrelevant; in theory, that process could still write to the pipe, even if it is blocked trying to read.
For example, the read() might be interrupted by a signal handler that writes data to the pipe.
The writing process closes its read descriptor for the pipe for a different reason.
When a process tries to write to a pipe for which no process has an open read descriptor, the kernel sends the SIGPIPE signal to the writing process. By default, this signal kills a process. A process can instead arrange to catch or ignore this signal, in which case the write() on the pipe fails with the error EPIPE (broken pipe). Receiving the SIGPIPE signal or getting the EPIPE error is a useful indication about the status of the pipe, and this is why unused read descriptors for the pipe should be closed.
If the writing process doesn't close the read end of the pipe, then even after the other process closes the read end of the pipe, the writing process will fill the pipe, and a further attempt to write will block indefinitely.
One final reason for closing unused file descriptors is that it is only after all file descriptors are closed that the pipe is destroyed and its resources released for reuse by other processes. At this point, any unread data in the pipe is lost.
~ Michael Kerrisk, The Linux Programming Interface

I am using fork and pipe. The child process is not getting killed when the parent process is aborted abruptly (e.g. segmentation fault)

I'm using fork to create child and parent processes, and a pipe to send and receive messages between them. I run the parent process with some inputs and it calls the child process. If the parent process executes successfully, the child process is killed automatically, but if I press Ctrl+C while the parent process is running, or if there is a segmentation fault, the parent process is killed while the child process is not.
Can anybody show me the logic to kill the child process when the parent process aborts?
Since you already use a pipe for communication, this should be easy.
Pipe reads block until there's data to read; if the parent process is the writer and has died, you get EOF immediately. As long as the parent never closes its write end while alive, this gives you a reliable way to detect its death.
Pipe writes raise SIGPIPE (or return EPIPE from the call if the signal is ignored) when there are no readers.
In the child, select() (if you can block) on the pipe's fd and kill the process at the appropriate time. There is no SIGCHLD equivalent for a parent dying.
man 7 pipe gives a good overview. Excerpt:
If all file descriptors referring to the write end of a pipe have been closed, then an attempt to read(2) from the pipe will see end-of-file (read(2) will return 0). If all file descriptors referring to the read end of a pipe have been closed, then a write(2) will cause a SIGPIPE signal to be generated for the calling process. If the calling process is ignoring this signal, then write(2) fails with the error EPIPE. An application that uses pipe(2) and fork(2) should use suitable close(2) calls to close unnecessary duplicate file descriptors; this ensures that end-of-file and SIGPIPE/EPIPE are delivered when appropriate.