Can a parent read from a pipe if a child dies? - c

If a parent and child process communicate via a pipe, what would happen if a parent reads from a pipe after a child process has written to it and died?

When the child dies, its end of the pipe is automatically closed. The parent will read EOF after reading everything the child wrote before it died, just as if the child had called close() explicitly.
Note that the parent can only read data that has actually been written to the pipe. If the child performs buffered output, as is the default when using stdio, some of the data the application has written may not yet be in the pipe when it dies. Stdio buffers are flushed automatically when the process calls exit(), but if the child dies due to a signal, exit() is never called and the buffered data is lost.
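A minimal sketch of the EOF behaviour (illustrative only; the child uses write() directly so stdio buffering does not get in the way):

    #include <stdio.h>
    #include <string.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void)
    {
        int fd[2];
        if (pipe(fd) == -1) { perror("pipe"); return 1; }

        pid_t pid = fork();
        if (pid == -1) { perror("fork"); return 1; }

        if (pid == 0) {                      /* child */
            close(fd[0]);                    /* close unused read end */
            const char *msg = "written before the child dies\n";
            write(fd[1], msg, strlen(msg));  /* unbuffered write(), so the data
                                                is in the pipe even if we die */
            _exit(0);                        /* child's write end closes here */
        }

        close(fd[1]);                        /* parent: close unused write end */
        char buf[128];
        ssize_t n;
        while ((n = read(fd[0], buf, sizeof buf)) > 0)
            fwrite(buf, 1, (size_t)n, stdout);
        /* n == 0 here: EOF, exactly as if the child had called close() itself */
        close(fd[0]);
        wait(NULL);                          /* reap the child */
        return 0;
    }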

Related

read() hangs on zombie process

I have a while loop that reads data from a child process using blocking I/O by redirecting stdout of the child process to the parent process. Normally, as soon as the child process exits, a blocking read() in this case will return since the pipe that is read from is closed by the child process.
Now I have a case where the read() call does not exit for a child process that finishes. The child process ends up in a zombie state, since the operating system is waiting for my code to reap it, but instead my code is blocking on the read() call.
The child process itself does not have any child processes running at the time of the hang, and I do not see any file descriptors listed when looking in /proc/<child process PID>/fd. The child process did however fork two daemon processes, whose purpose seems to be to monitor the child process (the child process is a proprietary application I do not have any control over, so it is hard to say for sure).
When run from a terminal, the child process I try to read() from exits automatically, and in turn the daemon processes it forked terminate as well.
Linux version is 4.19.2.
What could be the reason for read() not returning in this case?
Follow-up: How to avoid read() from hanging in the following situation?
The child process did however fork two daemon processes ... What could be the reason for read() not returning in this case?
The forked daemon processes still have the write end of the pipe open when the child terminates, hence the read() call never returns 0.
Those daemon processes should close all inherited file descriptors and open their own files for logging.
A possible reason (the most common) for read(2) blocking on a pipe with a dead child, is that the parent has not closed the writing side of the pipe, so there's still an open (for writing) descriptor for that pipe. Close the writing side of the pipe in the parent process before reading from it. The child is dead (you said zombie) so it cannot be the process with the writing side of the pipe open. And don't forget to wait(2) for the child in the parent, or you'll get a system full of zombies :)
Remember, you have to do two closes in your code:
One in the parent process, to close the writing side of the pipe, leaving the parent process with only a reading descriptor.
One in the child process (just before exec(2)ing) closing the reading side of the pipe, leaving the child process only with a writing descriptor.
In case you want to use the pipe(2) to send information to the child, swap reading and writing in the above two points.
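A minimal sketch of that layout, assuming the child exec()s some program with its stdout redirected into the pipe (echo here is just a stand-in):

    #include <stdio.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void)
    {
        int fd[2];
        if (pipe(fd) == -1) { perror("pipe"); return 1; }

        pid_t pid = fork();
        if (pid == -1) { perror("fork"); return 1; }

        if (pid == 0) {                 /* child */
            close(fd[0]);               /* close the read end in the child */
            dup2(fd[1], STDOUT_FILENO); /* child's stdout goes into the pipe */
            close(fd[1]);
            execlp("echo", "echo", "hello from the child", (char *)NULL);
            _exit(127);                 /* only reached if exec fails */
        }

        close(fd[1]);                   /* close the write end in the parent:
                                           without this, read() never sees EOF */
        char buf[256];
        ssize_t n;
        while ((n = read(fd[0], buf, sizeof buf)) > 0)
            write(STDOUT_FILENO, buf, (size_t)n);
        close(fd[0]);
        waitpid(pid, NULL, 0);          /* reap the child to avoid a zombie */
        return 0;
    }

Note that if the child forks long-lived daemons of its own, those processes also inherit the write end and must close it, or the parent's read() will still never see EOF.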

Will write() fail in this case using pipes?

If you fork() and create two processes communicating over a pipe, with the child reading from the pipe and the parent writing to it, will a write() in the parent fail if the child closes its copy of the write end before the parent has a chance to write to the pipe?
The child process closing its write end of the pipe only removes its reference to the pipe, it doesn't cause the pipe to "shut down" or any such thing, and thus won't affect the parent's reference to it in any way. This is true for the close(2) call in general.
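A small sketch of this, with an illustrative message: the child drops its copy of the write end immediately, and the parent's write() still succeeds because the child's read end keeps the pipe alive:

    #include <stdio.h>
    #include <string.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void)
    {
        int fd[2];
        if (pipe(fd) == -1) { perror("pipe"); return 1; }

        pid_t pid = fork();
        if (pid == 0) {                       /* child: reader */
            close(fd[1]);                     /* drop the child's write reference */
            char buf[64];
            ssize_t n = read(fd[0], buf, sizeof buf);
            if (n > 0)
                write(STDOUT_FILENO, buf, (size_t)n);
            close(fd[0]);
            _exit(0);
        }

        close(fd[0]);                         /* parent: writer, drop read end */
        const char *msg = "still writable\n";
        if (write(fd[1], msg, strlen(msg)) == -1)   /* succeeds: a read end exists */
            perror("write");
        close(fd[1]);
        wait(NULL);
        return 0;
    }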
Further reading:
File descriptors

child process got killed while parent was blocked on read

What happens when the child process gets killed while the parent is blocked on read() from a pipe? How should I handle this scenario in parent process?
For clarification, the parent process has two threads. Let's say thread1 was reading from the pipe when thread2 killed the child.
Will read() return -1?
Will appreciate any help here.
Pipe behavior has nothing to do with process relationships. The same rules apply regardless of whether the reader is the parent, child, sibling, or some other distant relation of the writer. Or even if the reader and writer are the same process.
The short answer is that death of a writing process is just an EOF from the reader's point of view, not an error, and this doesn't depend on whether the writing process voluntarily called _exit() or was killed by a signal.
The whole cause and effect chain goes like this:
1. Process X dies -> all of process X's file descriptors are closed.
2. One of process X's file descriptors was the write end of a pipe.
3. A pipe write file descriptor is closed -> was it the last one?
3a. There are other write file descriptors on the same pipe (e.g. inherited by fork and still open in another process): nothing happens. Stop.
3b. There are no more write file descriptors for this pipe -> the pipe has hit EOF.
4. Pipe hits EOF -> readers notice.
4a. All read file descriptors for the pipe become readable, waking up any process that was blocking on select or poll or read or another similar syscall.
4b. If there is any leftover data in the pipe buffer (written before the last write file descriptor was closed), that data is returned to the reader(s).
4c. Repeat 4b until the pipe buffer is empty.
4d. Finally, read() returns 0, indicating EOF.
The exit status of a child process is returned to the parent by the wait family of syscalls, and you have to check that if you want to know when your child processes have been killed by a signal.
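A small sketch of the whole chain, using SIGKILL sent from the parent itself as a stand-in for "thread2 killed the child":

    #include <signal.h>
    #include <stdio.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void)
    {
        int fd[2];
        if (pipe(fd) == -1) { perror("pipe"); return 1; }

        pid_t pid = fork();
        if (pid == 0) {            /* child: never writes, just sleeps */
            close(fd[0]);
            pause();
            _exit(0);
        }

        close(fd[1]);              /* parent must close its own write end */
        sleep(1);                  /* let the child block in pause() */
        kill(pid, SIGKILL);        /* stand-in for "thread2 killed the child" */

        char buf[64];
        ssize_t n = read(fd[0], buf, sizeof buf);
        printf("read() returned %zd\n", n);      /* prints 0: EOF, not -1 */

        int status;
        waitpid(pid, &status, 0);
        if (WIFSIGNALED(status))
            printf("child killed by signal %d\n", WTERMSIG(status));
        return 0;
    }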

Read STDIN in child process after exiting parent process

I'm looking for a way to read from STDIN (the console) in the child process after the parent process has exited.
My program should work like this:
The parent process forks and creates a child process.
After creating the child process, the parent MUST exit. I cannot use functions such as wait() etc.
The thing is, once I exit the parent process, I can no longer read from the console. Is there any way to 'pass control' to the child process instead of passing it back to the shell?
Instructions:
Process 1: reads data (single lines) from the standard input stream and passes it via an IPC message queue to process 2.
Process 2: receives the data sent by process 1 and prints it to the standard output stream.
Both processes should be started automatically from one initiative process. After launching the child processes, the initiative process should exit immediately.
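A minimal sketch of that structure, under the assumption that a System V message queue is acceptable (the key, message size and the empty "done" marker are illustrative). Note the caveat in the comment about the terminal, which is exactly the problem described above:

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <sys/ipc.h>
    #include <sys/msg.h>
    #include <unistd.h>

    struct msg { long mtype; char mtext[256]; };

    int main(void)
    {
        int qid = msgget(IPC_PRIVATE, IPC_CREAT | 0600);
        if (qid == -1) { perror("msgget"); return 1; }

        if (fork() == 0) {                         /* process 1: reader/sender */
            /* Once the initiative process exits, whether this orphaned child
             * can still read the terminal depends on the shell's job control:
             * the shell reclaims the foreground, which is the problem the
             * question describes. */
            struct msg m = { .mtype = 1 };
            while (fgets(m.mtext, sizeof m.mtext, stdin) != NULL)
                msgsnd(qid, &m, strlen(m.mtext) + 1, 0);
            m.mtext[0] = '\0';                     /* empty message = "done" */
            msgsnd(qid, &m, 1, 0);
            _exit(0);
        }

        if (fork() == 0) {                         /* process 2: receiver/printer */
            struct msg m;
            while (msgrcv(qid, &m, sizeof m.mtext, 1, 0) > 1)
                fputs(m.mtext, stdout);
            msgctl(qid, IPC_RMID, NULL);           /* remove the queue when done */
            _exit(0);
        }

        exit(0);                                   /* initiative process exits */
    }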

I am using fork and pipe. The child process is not killed when the parent process aborts abruptly (e.g. segmentation fault)

I'm using fork to create child and parent processes and a pipe to send and receive messages between them. I run the parent process with some inputs and it calls the child process. If the parent process completes successfully, the child process is killed automatically, but if I press Ctrl+C while the parent process is running, or if there is a segmentation fault, the parent process is killed while the child process keeps running.
Can anybody suggest the logic to kill the child process when the parent process terminates abruptly?
Since you already use a pipe for communication, this should be easy.
Pipe reads block until there is data to read, and if the parent process is the writer and it dies, you get EOF immediately. If your parent process never closes its write end while it is alive, this gives you a reliable way to detect its death.
Pipe writes raise SIGPIPE when there are no readers, and the call fails with EPIPE if the signal is ignored.
In the child, select() (if you can block) on the pipe's fd and kill the process at the appropriate time. There is no SIGCHLD equivalent for a dying parent.
man 7 pipe for a good overview. Excerpt:
If all file descriptors referring to the write end of a pipe have been closed, then an attempt to read(2) from the pipe will see end-of-file (read(2) will return 0). If all file descriptors referring to the read end of a pipe have been closed, then a write(2) will cause a SIGPIPE signal to be generated for the calling process. If the calling process is ignoring this signal, then write(2) fails with the error EPIPE. An application that uses pipe(2) and fork(2) should use suitable close(2) calls to close unnecessary duplicate file descriptors; this ensures that end-of-file and SIGPIPE/EPIPE are delivered when appropriate.
