C: check before writing to a closed pipe

Is there an easy way to check if a pipe is closed before writing to it in C? I have a child and a parent process, and the parent has a pipe for writing to the child. However, if the child closes its end of the pipe and the parent then tries to write, the parent gets a broken pipe error.
So how can I check to make sure I can write to the pipe, so I can handle it as an error if I can't? Thanks!

A simple way to check would be to do a 0-byte write(2) to the pipe and check the return value: if you're catching SIGPIPE or checking for EPIPE, you get the error. But that's no different from going ahead with your real write and checking its error return. So just do the write and handle the failure either in a signal handler (SIGPIPE) or, if the signal is ignored, by checking for an error return from write().
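For instance, a minimal sketch of that approach (not from the original answer): SIGPIPE is ignored once at startup, so a write to a closed pipe fails with EPIPE and can be handled like any other error.

    #include <errno.h>
    #include <signal.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    /* Returns 0 on success, -1 if the reader has gone away. */
    int write_to_child(int fd, const char *msg)
    {
        ssize_t n = write(fd, msg, strlen(msg));
        if (n == -1 && errno == EPIPE) {
            fprintf(stderr, "child closed its end of the pipe\n");
            return -1;
        }
        return 0;
    }

    int main(void)
    {
        signal(SIGPIPE, SIG_IGN);   /* do this once; write() now fails with EPIPE instead */
        /* ... set up the pipe and fork the child, then: */
        /* write_to_child(pipefd[1], "hello\n"); */
        return 0;
    }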

How about just trying the write and dealing with the error, the same way you would for a write to a file or a database? I see no value in the idiom:
check if *this* is going to work
do *this*
You merely introduce a smaller window of opportunity that is harder to catch in testing:
check if *this* is going to work
child thinks "Ha, fooled you, I'm off now!"
do *this*, which now fails!

Related

popen()ed pipe closed from the other end kills my program

I have a pipe which I opened with FILE *telnet = popen("telnet server", "w"). If telnet exits after a while because the server is not found, the pipe gets closed from the other end.
Then I would expect some error, either from the fprintf(telnet, ...) or the fflush(telnet) calls, but instead my program suddenly dies at fflush(telnet) without reporting the error. Is this normal behaviour? Why is it?
Converting (expanded) comments into an answer.
If you write to a pipe when there's no process at the other end of the pipe to read the data, you get a SIGPIPE signal to let you know, and the default behaviour for SIGPIPE is to exit (no core dump, but exit with prejudice).
If you examine the exit status in the shell, you should see $? is 141 (128 + SIGPIPE, which is normally 13).
If you don't mind that the process exits, you need do nothing. Alternatively, you can set the signal handler for SIGPIPE to SIG_IGN, in which case your writing operation should fail with an error, rather than terminating the process. Or you can set up more elaborate signal handling.
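A minimal sketch of that second option, reusing the popen("telnet server", "w") stream from the question: with SIGPIPE ignored, the failure shows up as an error return from fflush() with errno set to EPIPE, rather than the process being killed.

    #include <errno.h>
    #include <signal.h>
    #include <stdio.h>

    int main(void)
    {
        signal(SIGPIPE, SIG_IGN);                    /* writes now fail with EPIPE */

        FILE *telnet = popen("telnet server", "w");
        if (telnet == NULL)
            return 1;

        fprintf(telnet, "some command\n");
        if (fflush(telnet) == EOF && errno == EPIPE)
            fprintf(stderr, "telnet has gone away: broken pipe\n");

        pclose(telnet);
        return 0;
    }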
Note that one of the reasons you need to be careful to close unused pipe file descriptors is this: if the current process writes to a pipe but also holds the read end of that pipe open, it won't get SIGPIPE. Instead it may simply block once the pipe buffer fills, because the only process that could read from the pipe and free up space is the one trying to write to it.

Disable SIGPIPE signal on write(2) call in library

Question
Is it possible to disable the raising of a signal (SIGPIPE) when writing to a pipe() FD, without installing my own signal handler or disabling/masking the signal globally?
Background
I'm working on a small library that occasionally creates a pipe, and fork()s a temporary child/dummy process that waits for a message from the parent. When the child process receives the message from the parent, it dies (intentionally).
Problem
The child process, for circumstances beyond my control, runs code from another (third party) library that is prone to crashing, so I can't always be certain that the child process is alive before I write() to the pipe.
This results in me sometimes attempting to write() to the pipe with the child process' end already dead/closed, and it raises a SIGPIPE in the parent process. I'm in a library other customers will be using, so my library must be as self-contained and transparent to the calling application as possible. Installing a custom signal handler could break the customer's code.
Work so far
I've got around this issue with sockets by using the MSG_NOSIGNAL flag to send(2), but I can't find anything functionally equivalent for pipes. I've looked at temporarily installing a signal handler to catch the SIGPIPE, but I don't see any way to limit its scope to the calling function in my library rather than the entire process (and it's not atomic).
I've also found a similar question here on SO that is asking the same thing, but unfortunately, using poll()/select() won't be atomic, and there's the remote (but possible) chance that the child process dies between my select() and write() calls.
Question (redux)
Is there any way to accomplish what I'm attempting here, or to atomically check-and-write to a pipe without triggering the behavior that will generate the SIGPIPE? Additionally, is it possible to achieve this and know if the child process crashed? Knowing if it crashed lets me build a case for the vendor that supplied the "crashy" library, and lets them know how often it's failing.
Is it possible to disable the raising of a signal (SIGPIPE) when writing to a pipe() FD [...]?
The parent process can keep its copy of the read end of the pipe open. Then there will always be a reader, even if it doesn't actually read, so the condition for a SIGPIPE will never be satisfied.
The problem with that is it's a deadlock risk. If the child dies and the parent afterward performs a blocking write that cannot be accommodated in the pipe's buffer, then you're toast. Nothing will ever read from the pipe to free up any space, and therefore the write can never complete. Avoiding this problem is one of the purposes of SIGPIPE in the first place.
You can also test whether the child is still alive before you try to write, via a waitpid() with option WNOHANG. But that introduces a race condition, because the child could die between waitpid() and the write.
However, if your writes are consistently small, and if you get sufficient feedback from the child to be confident that the pipe buffer isn't backing up, then you could combine those two to form a reasonably workable system.
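A hedged sketch of that combination (names are illustrative, not a drop-in solution): the parent deliberately keeps its copy of the read end open so SIGPIPE can never fire, checks the child with waitpid(WNOHANG) before each small write, and accepts that the race window is narrowed rather than removed.

    #include <sys/wait.h>
    #include <unistd.h>

    /* The parent keeps pipefd[0] (the read end) open on purpose.
     * Returns 0 on success, -1 if the child has already exited/crashed
     * or the write failed. Note that waitpid() reaps the child, so a
     * real implementation would remember that it has done so. */
    int send_to_child(pid_t child, int write_fd, const void *buf, size_t len)
    {
        int status;
        if (waitpid(child, &status, WNOHANG) == child)
            return -1;                 /* child is gone; don't write */

        /* Keep writes small: if the pipe buffer ever fills up, this write
         * blocks forever, because nothing will ever read from the pipe. */
        return write(write_fd, buf, len) == (ssize_t)len ? 0 : -1;
    }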
After going through all the possible ways to tackle this issue, I found there were only two viable avenues:
Use socketpair(PF_LOCAL, SOCK_STREAM, 0, fd), in place of pipes.
Create a "sacrificial" sub-process via fork() which is allowed to crash if SIGPIPE is raised.
I went the socketpair route. I didn't want to, since it involved re-writing a fair bit of pipe logic, but it wasn't too painful.
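For what it's worth, a rough sketch of the socketpair route (not the actual library code): MSG_NOSIGNAL is a per-call flag to send(2), so nothing process-wide changes, and a write to a closed peer fails with EPIPE instead of raising SIGPIPE.

    #include <errno.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <sys/types.h>

    /* In place of pipe():
     *   int fd[2];
     *   socketpair(PF_LOCAL, SOCK_STREAM, 0, fd);
     *   // parent keeps fd[0], child keeps fd[1]
     */
    int notify_child(int fd, const char *msg)
    {
        ssize_t n = send(fd, msg, strlen(msg), MSG_NOSIGNAL);
        if (n == -1 && errno == EPIPE) {
            fprintf(stderr, "child's end is already closed\n");
            return -1;                 /* no SIGPIPE was raised */
        }
        return 0;
    }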
Thanks!
Not sure I follow: you are the parent process, i.e. you write to the pipe. You do so to send a message after a certain period. The child process interprets the message in some way, does what it has to do, and exits. You also need it to be waiting already; you can't get the message ready first and then spawn a child to handle it. Also, just sending a signal would not do the trick, since the child has to act on the content of the message and not just on a bare "do it" call.
The first hack that comes to mind would be to not close the read side of the pipe in the parent. That allows you to freely write to the pipe while not hurting the child's ability to read from it.
If this is not fine, please elaborate on the issue.

read() from pipe guaranteed to provide all atomically written data before EOF?

I'm using a simple fork() parent-child example to have the child generate some data and write() it for the parent. The child atomically writes less than 64 KiB (65536 bytes) of data to the pipe.
The parent reads from the pipe, and when it receives EOF (i.e. assuming that the remote side has been closed), it carries on with some processing logic, closes at its own convenience, and doesn't care how long the child takes to terminate.
Is the parent guaranteed to be able to read all of the client data that was sent before EOF is encountered, or does any potential OS-level logic kick in to trigger the EOF early before all of the data is read?
I have found a very similar question on SO, but it didn't receive an authoritative/cited answer.
Thank you.
Yes, the parent will be able to read all the data. To put your mind at ease, try the following in a shell:
echo test | (sleep 1; cat)
The echo command finishes immediately; the other side of the pipe waits one second and then reads from it. This just works.
The child can also write more than 64 KiB without problems, as long as the parent keeps reading in a loop, although the writes will no longer be atomic.
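A small self-contained C version of the same experiment (illustrative only): the child writes its data and exits immediately, the parent sleeps and then reads until EOF, and every byte still arrives.

    #include <stdio.h>
    #include <string.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void)
    {
        int fd[2];
        char buf[4096];
        if (pipe(fd) == -1)
            return 1;

        if (fork() == 0) {                 /* child: write everything and exit at once */
            close(fd[0]);
            const char *msg = "all of this still arrives before EOF\n";
            write(fd[1], msg, strlen(msg));
            _exit(0);
        }

        close(fd[1]);                      /* parent must close its own write end */
        sleep(1);                          /* the child is long gone by now */

        ssize_t n;
        size_t total = 0;
        while ((n = read(fd[0], buf, sizeof buf)) > 0) {
            fwrite(buf, 1, (size_t)n, stdout);
            total += (size_t)n;
        }
        fprintf(stderr, "read %zu bytes, then EOF\n", total);
        wait(NULL);
        return 0;
    }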

Pipe is not receiving all output from child process

I wanted to open up a pipe to a program and read its output. My initial inclination was to use popen(), but the program takes a number of options, and rather than fighting with shell quoting/escaping, I decided to use a combination of pipe(), fork(), and dup() to tie the ends of the pipe to stdin/stdout in the parent/child, and execv() to replace the child with an invocation of the program, passing all of the options it expects as an array.
The program outputs many lines of data (and flushes stdout after each line). The parent code sets its stdin to non-blocking and reads from it in a loop using fgets(). The loop runs while fgets() returns non-NULL, or while stdin has an error condition of EAGAIN or EWOULDBLOCK.
It receives most of the lines successfully, but towards the end it seems to drop off, with the last fgets() failing with an odd error of "No such file or directory."
Does anyone know what I might have done wrong here?
I found the problem. I stupidly was not resetting errno to zero each iteration. I guess I just assumed fgets() would take care of it or something... My stupid mistake. Now it is working fine. Always reset errno!
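For reference, a hedged reconstruction of the corrected loop (the original code isn't shown, so the names here are made up): errno is cleared before each fgets() so that a stale value from an earlier call can't be mistaken for a fresh error.

    #include <errno.h>
    #include <stdio.h>

    /* 'in' is the stream wrapping the non-blocking read end of the pipe. */
    void drain(FILE *in)
    {
        char line[4096];

        for (;;) {
            errno = 0;                             /* the missing reset */
            if (fgets(line, sizeof line, in) != NULL) {
                fputs(line, stdout);
                continue;
            }
            if (errno == EAGAIN || errno == EWOULDBLOCK) {
                clearerr(in);                      /* clear the error flag and retry */
                continue;                          /* real code would poll() here, not spin */
            }
            break;                                 /* genuine EOF or a genuine error */
        }
    }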
Thanks for the help anyway.
Not sure, but there is a useful function on Linux called posix_spawn (example here: http://www.opengroup.org/onlinepubs/000095399/xrat/xsh_chap03.html#tag_03_03_01_02) that sometimes makes it easier to set up pipes... but this sounds like a possible blocking or pipe issue.
Make sure you also open a pipe to STDERR. Most programs write error data there instead of STDOUT.

Find out if pipe's read end is currently blocking

I'm trying to find out if a child process is waiting for user input (without parsing its output). Is it possible, in C on Unix, to determine if a pipe's read end currently has a read() call blocking?
The thing is, I have no control over the programs exec'd in the child processes. They print all kinds of verbose garbage which I would usually want to redirect to /dev/null. Occasionally though one will prompt the user for something. (With the prompt having no reliable format.) So my idea was:
In a loop:
Drain child's stdout, append it to a temporary buffer.
Check (no idea how) if the child is asking for user input, in which case the buffer is printed to stdout.
When the child exits, throw away the buffer.
You have these options:
if you know that the child will need certain input (such as shell that will read a command), just write to a pipe
if you assume the child usually won't read anything, but may do so sometimes, you probably need something like job control in the shell: use a terminal for communication with the child, use process groups and the TIOCSPGRP ioctl on the terminal to put the child in the background; the child will then get SIGTTIN when it tries to read from the terminal, and you can wait() for that. This is how bash handles things like "(sleep 10; read a;)&". A rough sketch of this approach follows the list.
if you don't know what to write, or you have more possibilities, you will have to parse the output
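A rough sketch of the job-control idea from the second option (heavily simplified; it assumes the supervising process is itself the foreground job on a controlling terminal, the child's stdin stays connected to that terminal, and the command name is a placeholder):

    #include <signal.h>
    #include <stdio.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void)
    {
        signal(SIGTTOU, SIG_IGN);              /* so we can take the terminal back later */

        pid_t child = fork();
        if (child == 0) {
            setpgid(0, 0);                     /* child goes into its own (background) group */
            execlp("some_tool", "some_tool", (char *)0);   /* placeholder command */
            _exit(127);
        }
        setpgid(child, child);                 /* set it from the parent too, to avoid a race */

        int status;
        while (waitpid(child, &status, WUNTRACED) == child) {
            if (WIFSTOPPED(status) && WSTOPSIG(status) == SIGTTIN) {
                fprintf(stderr, "child is waiting for terminal input\n");
                tcsetpgrp(STDIN_FILENO, child);    /* hand it the terminal ... */
                kill(child, SIGCONT);              /* ... and let it do the read */
            } else if (WIFSTOPPED(status)) {
                kill(child, SIGCONT);              /* stopped for some other reason */
            } else {
                break;                             /* exited or killed */
            }
        }
        tcsetpgrp(STDIN_FILENO, getpgrp());    /* take the terminal back */
        return 0;
    }

This is essentially what shells do for background jobs: the parent learns that the child wants input without parsing any of its output.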
That sounds as if you were trying to supervise dpkg, where occasionally some postinst script asks the admin whether it may overwrite some config file.
Anyway, you may want to look at how strace works:
strace -f -etrace=read your.program
Of course you need to keep track of which fds are the pipes you write about, but you probably need only stdin, anyway.
I don't think that's true: For example, right before calling read() on the reader side, the pipe would have a reader that isn't actually reading.
You would typically just write to the pipe, or use select() or poll(). If you need a handshake mechanism, you can do that out of band in various ways, or come up with an in-band protocol.
I don't know if there is a built-in way to know if a reader on the other end is blocking. Why do you need to know this?
If I recall correctly, you cannot have a pipe with no reader, which means that you have either a read(2) or a select(2) syscall pending at all times.

Resources