What if tail fails while reading from a pipe - C

distinguish stdout from stderr on pipe
So, related to the link above, I have a child that executes tail while the parent reads its output via a pipe.
dup2(pipefd[1], STDOUT_FILENO);
dup2(pipefd[1], STDERR_FILENO);
My question is: if tail somehow fails, what happens to the pipe from which I am reading? Do I get anything on stderr? Does tail terminate itself, or might it hang around as defunct?

SIGPIPE is relevant only to writers: the kernel sends it to a process that writes on a pipe whose read end has been closed (and if no handler is installed, the default action is to terminate that process). Your parent is only reading, so it will not receive SIGPIPE when tail terminates; once the last write end of the pipe is closed, read will simply return 0 (EOF). You will, however, have to call wait or waitpid from the parent to reap the zombie child.

You don't get EPIPE when reading; only write will return EPIPE. You'll get EOF, indicated by read returning 0, and since you redirected stderr into the pipe as well, you'll get the error message too (before the EOF).
The process will become a zombie, and you can use wait/waitpid to get its exit status, which will be non-zero if there was an error.

If tail fails, any read on the read end of the pipe will return EOF. If tail fails, it has already terminated, the definition of "fail" being that it terminated with a non-zero exit status. It will remain in the process table (i.e., "defunct") until the parent waits for it.
But why are you having tail use the same pipe for both stderr and stdout? Why not just make two pipes? It seems that this would eliminate the problem of distinguishing between the two output streams.

Related

popen()ed pipe closed from other extreme kills my program

I have a pipe which I opened with FILE *telnet = popen("telnet server", "w");. If telnet exits after a while because the server is not found, the pipe is closed from the other end.
I would then expect some error, either from the fprintf(telnet, ...) or fflush(telnet) calls, but instead my program suddenly dies at fflush(telnet) without reporting the error. Is this normal behaviour? Why is it?
Converting (expanded) comments into an answer.
If you write to a pipe when there's no process at the other end of the pipe to read the data, you get a SIGPIPE signal to let you know, and the default behaviour for SIGPIPE is to exit (no core dump, but exit with prejudice).
If you examine the exit status in the shell, you should see $? is 141 (128 + SIGPIPE, which is normally 13).
If you don't mind that the process exits, you need do nothing. Alternatively, you can set the signal handler for SIGPIPE to SIG_IGN, in which case your writing operation should fail with an error, rather than terminating the process. Or you can set up more elaborate signal handling.
Note that one of the reasons you need to be careful to close unused pipe file descriptors is this: if the current process is writing to a pipe but also holds the read end of that pipe open, it won't get SIGPIPE. Instead it may simply block, because it can't write more data to the pipe until some process reads from it, and the only process that could read from the pipe is the very one trying to write to it.

child process got killed while parent was blocked on read

What happens when the child process gets killed while the parent is blocked on read() from a pipe? How should I handle this scenario in parent process?
For clarification, the parent process has two threads. Let's say thread1 was reading from the pipe when thread2 killed the child.
Will read() return -1?
I will appreciate any help here.
Pipe behavior has nothing to do with process relationships. The same rules apply regardless of whether the reader is the parent, child, sibling, or some other distant relation of the writer. Or even if the reader and writer are the same process.
The short answer is that death of a writing process is just an EOF from the reader's point of view, not an error, and this doesn't depend on whether the writing process voluntarily called _exit() or was killed by a signal.
The whole cause-and-effect chain goes like this:
1. Process X dies -> all of process X's file descriptors are closed.
2. One of process X's file descriptors was the write end of a pipe.
3. A pipe write file descriptor is closed -> was it the last one?
3a. If there are other write file descriptors on the same pipe (e.g. inherited by fork and still open in another process), nothing happens. Stop.
3b. If there are no more write file descriptors for this pipe, the pipe has hit EOF.
4. Pipe hits EOF -> readers notice.
4a. All read file descriptors for the pipe become readable, waking up any process that was blocking on select or poll or read or another similar syscall.
4b. If there is any leftover data in the pipe buffer (written before the last write file descriptor was closed), that data is returned to the reader(s).
4c. Repeat 4b until the pipe buffer is empty.
4d. Finally, read() returns 0, indicating EOF.
The exit status of a child process is returned to the parent by the wait family of syscalls, and you have to check that if you want to know when your child processes have been killed by a signal.

Why does the parent process have to close all file descriptors of a pipe before calling wait()?

I do not understand why the parent process needs to close both file descriptors of the pipe before calling wait().
I have a C program which does:
Parent creates child_a, which executes ls -l using execvp and writes to the pipe (after closing the read end of the pipe).
Parent creates another child (without the parent closing any pipe file descriptors), called child_b, which executes wc, reading from the pipe (after closing the write end of the pipe).
Parent waits for both children to complete by calling wait() twice.
I noticed that the program blocks if the parent does not close both file descriptors of the pipe before calling wait(). After reading a few questions already posted online, it looks like this is the general rule and needs to be done, but I could not find the reason why.
Why does wait() not return if the parent does not close the file descriptors of the pipe?
I was thinking that, in the worst case, if the parent does not close its pipe file descriptors, the only consequence would be that the pipe keeps existing (a waste of resources). But I never thought this would block the execution of the child process (as can be seen from the fact that wait() does not return).
Also remember, parent is not using the pipe at all. It is child_a writing in the pipe, and child_b reading from the pipe.
If the parent process doesn't close the write ends of the pipes, the child processes never get EOF (zero bytes read) because there's a process that might (but won't) write to the pipe. The child process must also close the write end of the pipe for the same reason — if it doesn't, there's a process (itself) that might (but won't) write to the pipe, so the read won't return EOF.
If you duplicate one end of a pipe to standard output or standard error, you should close both ends of that pipe. It is a common mistake not to have enough calls to close() in multiprocess code using pipes. Occasionally, you get away with being sloppy, but the details vary by case and usually you don't.

I am using fork and pipe. The child process is not killed when the parent process aborts abruptly (e.g. segmentation fault)

I'm using fork to create child and parent processes and a pipe to send and receive messages between them. I run the parent process with some inputs and it calls the child process. If the parent process executes successfully, the child process is killed automatically, but if I press Ctrl+C while the parent process is running, or if there is a segmentation fault, the parent process is killed while the child process is not.
Can anybody suggest the logic to kill the child process when the parent process dies abruptly?
Since you already use a pipe for communication, this should be easy.
Pipe reads block until there is data to read; if the parent process is the writer and has died, you get EOF immediately. So if your parent process never closes its write end while it is alive, you have a reliable way to detect its death.
Pipe writes hit SIGPIPE when there are no readers, or return EPIPE from the call if the signal is ignored.
In the child, block on read (or select on the pipe's fd if you have other work to do) and exit when you see EOF. There is no SIGCHLD equivalent for a parent dying.
man 7 pipe for a good overview. Excerpt:
If all file descriptors referring to the write end of a pipe have been closed, then an attempt to read(2) from the pipe will see end-of-file (read(2) will return 0). If all file descriptors referring to the read end of a pipe have been closed, then a write(2) will cause a SIGPIPE signal to be generated for the calling process. If the calling process is ignoring this signal, then write(2) fails with the error EPIPE. An application that uses pipe(2) and fork(2) should use suitable close(2) calls to close unnecessary duplicate file descriptors; this ensures that end-of-file and SIGPIPE/EPIPE are delivered when appropriate.

Alternatives to popen/pclose?

I'm writing a program that has to execute other external processes; right now the program launches the processes' command lines via popen, grabs any output, and then grabs the exit status via pclose.
What is happening, however, is that for fast-running processes (e.g. the launched process errors out quickly) the pclose call cannot get the exit status (pclose returns -1, errno is ECHILD).
Is there a way for me to mimic the popen/pclose type behavior, except in a manner that guarantees capturing the process end "event" and the resultant return code? How do I avoid the inherent race condition with pclose and the termination of the launched process?
fork/exec/wait
popen is just a wrapper that simplifies the fork/exec calls. If you want to capture the output of the child, you'll need to create a pipe, call fork, dup2 the child's file descriptors onto the pipe, and then exec. The parent can read the output from the pipe and call waitpid with the child's pid to get its exit status; since you know the pid, there is no ECHILD race.
You can use vfork() and execv().
