I want to pipe the output of a child process to the parent's stdout. I know there are other ways of doing this, but why can't a pipe's read end be duplicated to stdout? Why doesn't the program print what is written to the pipe's write end?
Here I have a minimal example (without any subprocesses) of what I'm trying to do. I'm expecting to see test in the output when running, but the program outputs nothing.
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
int main(void) {
    int fds[2];
    if (pipe(fds) == -1) {
        perror("pipe");
        exit(EXIT_FAILURE);
    }
    if (write(fds[1], "test", 5) == -1) {
        perror("write");
        exit(EXIT_FAILURE);
    }
    if (dup2(fds[0], STDOUT_FILENO) == -1) {
        perror("dup2");
        exit(EXIT_FAILURE);
    }
    return 0;
}
A pipe is two “files” that share a buffer and some locking or control semantics. When you write into the pipe, the data is put into the buffer. When you read from a pipe, the data is taken from a buffer.
There is nothing in the pipe that moves data to some output device.
If you use dup2 to duplicate the read side of the pipe into the standard output file descriptor (number 1), then all you have is the read side of the pipe on file descriptor 1. That means you can issue read operations to file descriptor 1, and the system will give your program data from the pipe.
There is nothing “special” about file descriptor 1 in this regard. Putting any file on file descriptor 1 does not cause that file to be automatically sent anywhere. The way standard output works normally is that you open a terminal or some chosen output file or other device on file descriptor 1, and then you send things to that device or file by writing to file descriptor 1. The operating system does not automatically write things to file descriptor 1; you have to issue write operations.
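To make that concrete, here is a minimal sketch (my own illustration, not part of the original example): after the dup2(), the "test" data is still sitting in the pipe's buffer, and it only shows up anywhere once you read() it from file descriptor 1 yourself and write it somewhere else (here, to stderr).
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(void) {
    int fds[2];
    if (pipe(fds) == -1) { perror("pipe"); exit(EXIT_FAILURE); }
    if (write(fds[1], "test\n", 5) == -1) { perror("write"); exit(EXIT_FAILURE); }
    if (dup2(fds[0], STDOUT_FILENO) == -1) { perror("dup2"); exit(EXIT_FAILURE); }

    /* fd 1 is now the read end of the pipe; reading from it works ... */
    char buf[16];
    ssize_t n = read(STDOUT_FILENO, buf, sizeof(buf));
    /* ... but nothing reaches the screen unless we write it somewhere ourselves */
    if (n > 0)
        write(STDERR_FILENO, buf, (size_t)n);
    return 0;
}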
#include <fcntl.h>
#include <stdlib.h>
#include <string.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void) {
    int rc = fork();
    if (rc == 0) {
        close(1);  // close stdout BEFORE opening my file
        open("./c.txt", O_CREAT|O_WRONLY|O_TRUNC, S_IRWXU);
        // exec "wc"
        char *cmd[3];
        cmd[0] = strdup("wc");   // name of the executable
        cmd[1] = strdup("c.c");  // first arg to command 'wc' -> c.c
        cmd[2] = NULL;
        execvp(cmd[0], cmd);
    }
    return 0;
}
If I close() stdout, then the output of execve("wc") ends up in the file c.txt, but ONLY if I close stdout BEFORE open()ing. If I call it AFTER,
open("./c.txt", O_CREAT|O_WRONLY|O_TRUNC, S_IRWXU);
close(1);
then I get -> wc: write error: Bad file descriptor.
I have read that open() uses the lowest available file descriptor, searching from 0, and that fd 1 is normally stdout, which printf() uses to write to the screen. So I need to close() it in order for the descriptor returned by open("./c.txt") to be used for wc's output. But if that is correct (I don't know whether I have understood it correctly), then it shouldn't matter whether I close stdout before or after the open() call, should it? Once it is closed, the OS has no other fd to use as output. Maybe I don't understand this clearly.
Question: why must fd 1 be closed first in order to redirect the output to c.txt?
A few concepts to establish first.
stdout is one of the streams automatically opened as part of program startup. The program startup code uses file descriptor 1 for stdout.
execve replaces the program running in the process but keeps the same open file descriptors (there are exceptions and nuances, which can be read about in the execve man page).
open will look for the lowest available file descriptor to use.
Ok, so now to your code.
Case 1 - close, open, execve
In this case the following sequence of events happens:
Program starts with stdout=>fd 1.
close(1) makes fd 1 available.
open("c.txt") returns 1 which effectively redirects stdout to the file.
execve runs wc in the process, which still has fd 1 open and redirected to the file.
wc writes to fd 1 which now ends up in the file.
Case 2 - open,close,execve
In this case the following sequence of events happens:
Program starts with stdout=>fd 1.
open("c.txt")is called but fd 1 is not available so it returns 2.
close(1) means there is now effectively no stdout.
execve runs wc in the process, which has nothing open on fd 1 (i.e. no stdout).
wc tries to write to fd 1 and gets a bad file descriptor error since fd 1 is not open.
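If you want the redirection not to depend on that close()/open() ordering at all, a common pattern is to open the file on whatever descriptor is free and then dup2() it onto fd 1 explicitly. A minimal sketch (my own, reusing the same wc example; error handling kept short):
#include <fcntl.h>
#include <stdio.h>
#include <sys/stat.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    if (fork() == 0) {
        int fd = open("./c.txt", O_CREAT | O_WRONLY | O_TRUNC, S_IRWXU);
        if (fd == -1) { perror("open"); _exit(1); }
        dup2(fd, STDOUT_FILENO);  /* fd 1 now refers to c.txt, whatever number open() returned */
        close(fd);                /* the original descriptor is no longer needed */
        char *cmd[] = { "wc", "c.c", NULL };
        execvp(cmd[0], cmd);
        perror("execvp");         /* only reached if exec fails */
        _exit(1);
    }
    wait(NULL);
    return 0;
}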
I've boiled down my entire program to a short main that replicates the issue, so forgive me for it not making any sense.
input.txt is a text file that has a couple of lines of text in it. This boiled-down program should print those lines. However, if fork is called, the program enters an infinite loop where it prints the contents of the file over and over again.
As far as I understand fork, the way I use it in this snippet is essentially a no-op. It forks, the parent waits for the child before continuing, and the child exits immediately.
#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>
enum { MAX = 100 };
int main(){
    freopen("input.txt", "r", stdin);
    char s[MAX];
    int i = 0;
    char* ret = fgets(s, MAX, stdin);
    while (ret != NULL) {
        //Commenting out this region fixes the issue
        int status;
        pid_t pid = fork();
        if (pid == 0) {
            exit(0);
        } else {
            waitpid(pid, &status, 0);
        }
        //End region
        printf("%s", s);
        ret = fgets(s, MAX, stdin);
    }
}
Edit: Further investigation has only made my issue stranger. If the file contains <4 blank lines or <3 lines of text, it does not break. However, if there are more than that, it loops infinitely.
Edit 2: If the file contains 3 lines of numbers it will loop infinitely, but if it contains 3 lines of words it will not.
I am surprised that there is a problem, but it does seem to be a problem on Linux (I tested on Ubuntu 16.04 LTS running in a VMWare Fusion VM on my Mac) — but it was not a problem on my Mac running macOS 10.13.4 (High Sierra), and I wouldn't expect it to be a problem on other variants of Unix either.
As I noted in a comment:
There's an open file description and an open file descriptor behind each stream. When the process forks, the child has its own set of open file descriptors (and file streams), but each file descriptor in the child shares the open file description with the parent. IF (and that's a big 'if') the child process closing the file descriptors first did the equivalent of lseek(fd, 0, SEEK_SET), then that would also position the file descriptor for the parent process, and that could lead to an infinite loop. However, I've never heard of a library that does that seek; there's no reason to do it.
See POSIX open() and fork() for more information about open file descriptors and open file descriptions.
The open file descriptors are private to a process; the open file descriptions are shared by all copies of the file descriptor created by an initial 'open file' operation. One of the key properties of the open file description is the current seek position. That means that a child process can change the current seek position for a parent — because it is in the shared open file description.
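A minimal sketch (mine, not part of the original answer) that shows the sharing directly; it assumes input.txt exists and has at least a handful of bytes:
#include <fcntl.h>
#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    int fd = open("input.txt", O_RDONLY);
    if (fd == -1) { perror("open"); return 1; }
    if (fork() == 0)
    {
        lseek(fd, 5, SEEK_SET);  /* the child moves the shared file offset */
        _exit(0);
    }
    wait(NULL);
    char buf[32];
    ssize_t n = read(fd, buf, sizeof(buf) - 1);  /* the parent now reads from offset 5, not 0 */
    if (n >= 0) { buf[n] = '\0'; printf("parent read: \"%s\"\n", buf); }
    return 0;
}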
neof97.c
I used the following code — a mildly adapted version of the original that compiles cleanly with rigorous compilation options:
#include "posixver.h"
#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>
enum { MAX = 100 };
int main(void)
{
    if (freopen("input.txt", "r", stdin) == 0)
        return 1;
    char s[MAX];
    for (int i = 0; i < 30 && fgets(s, MAX, stdin) != NULL; i++)
    {
        // Commenting out this region fixes the issue
        int status;
        pid_t pid = fork();
        if (pid == 0)
        {
            exit(0);
        }
        else
        {
            waitpid(pid, &status, 0);
        }
        // End region
        printf("%s", s);
    }
    return 0;
}
One of the modifications limits the number of cycles (children) to just 30.
I used a data file with 4 lines of 20 random letters plus a newline (84 bytes total):
ywYaGKiRtAwzaBbuzvNb
eRsjPoBaIdxZZtJWfSty
uGnxGhSluywhlAEBIXNP
plRXLszVvPgZhAdTLlYe
I ran the command under strace on Ubuntu:
$ strace -ff -o st-out -- neof97
ywYaGKiRtAwzaBbuzvNb
eRsjPoBaIdxZZtJWfSty
uGnxGhSluywhlAEBIXNP
plRXLszVvPgZhAdTLlYe
…
uGnxGhSluywhlAEBIXNP
plRXLszVvPgZhAdTLlYe
ywYaGKiRtAwzaBbuzvNb
eRsjPoBaIdxZZtJWfSty
$
There were 31 files with names of the form st-out.808## where the hashes were 2-digit numbers. The main process file was quite large; the others were small, with one of the sizes 66, 110, 111, or 137:
$ cat st-out.80833
lseek(0, -63, SEEK_CUR) = 21
exit_group(0) = ?
+++ exited with 0 +++
$ cat st-out.80834
lseek(0, -42, SEEK_CUR) = -1 EINVAL (Invalid argument)
exit_group(0) = ?
+++ exited with 0 +++
$ cat st-out.80835
lseek(0, -21, SEEK_CUR) = 0
exit_group(0) = ?
+++ exited with 0 +++
$ cat st-out.80836
exit_group(0) = ?
+++ exited with 0 +++
$
It just so happened that the first 4 children each exhibited one of the four behaviours — and each further set of 4 children exhibited the same pattern.
This shows that three out of four of the children were indeed doing an lseek() on standard input before exiting. Obviously, I have now seen a library do it. I have no idea why it is thought to be a good idea, but empirically, that is what is happening.
neof67.c
This version of the code, using a separate file stream (and file descriptor) and fopen() instead of freopen() also runs into the problem.
#include "posixver.h"
#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>
enum { MAX = 100 };
int main(void)
{
    FILE *fp = fopen("input.txt", "r");
    if (fp == 0)
        return 1;
    char s[MAX];
    for (int i = 0; i < 30 && fgets(s, MAX, fp) != NULL; i++)
    {
        // Commenting out this region fixes the issue
        int status;
        pid_t pid = fork();
        if (pid == 0)
        {
            exit(0);
        }
        else
        {
            waitpid(pid, &status, 0);
        }
        // End region
        printf("%s", s);
    }
    return 0;
}
This also exhibits the same behaviour, except that the file descriptor on which the seek occurs is 3 instead of 0. So two of my hypotheses are disproven: that the problem was specific to freopen(), and that it was specific to stdin. Both are shown to be incorrect by the second test program.
Preliminary diagnosis
IMO, this is a bug. You should not be able to run into this problem.
It is most likely a bug in the Linux (GNU C) library rather than the kernel. It is caused by the lseek() in the child processes. It is not clear (because I've not gone to look at the source code) what the library is doing or why.
GLIBC Bug 23151
GLIBC Bug 23151 - A forked process with unclosed file does lseek before exit and can cause infinite loop in parent I/O.
The bug was created 2018-05-08 US/Pacific, and was closed as INVALID by 2018-05-09. The reason given was:
Please read
http://pubs.opengroup.org/onlinepubs/9699919799/functions/V2_chap02.html#tag_15_05_01,
especially this paragraph:
Note that after a fork(), two handles exist where one existed before. […]
POSIX
The complete section of POSIX referred to (apart from verbiage noting that this is not covered by the C standard) is this:
2.5.1 Interaction of File Descriptors and Standard I/O Streams
An open file description may be accessed through a file descriptor, which is created using functions such as open() or pipe(), or through a stream, which is created using functions such as fopen() or popen(). Either a file descriptor or a stream is called a "handle" on the open file description to which it refers; an open file description may have several handles.
Handles can be created or destroyed by explicit user action, without affecting the underlying open file description. Some of the ways to create them include fcntl(), dup(), fdopen(), fileno(), and fork(). They can be destroyed by at least fclose(), close(), and the exec functions.
A file descriptor that is never used in an operation that could affect the file offset (for example, read(), write(), or lseek()) is not considered a handle for this discussion, but could give rise to one (for example, as a consequence of fdopen(), dup(), or fork()). This exception does not include the file descriptor underlying a stream, whether created with fopen() or fdopen(), so long as it is not used directly by the application to affect the file offset. The read() and write() functions implicitly affect the file offset; lseek() explicitly affects it.
The result of function calls involving any one handle (the "active handle") is defined elsewhere in this volume of POSIX.1-2017, but if two or more handles are used, and any one of them is a stream, the application shall ensure that their actions are coordinated as described below. If this is not done, the result is undefined.
A handle which is a stream is considered to be closed when either an fclose(), or freopen() with non-full(1) filename, is executed on it (for freopen() with a null filename, it is implementation-defined whether a new handle is created or the existing one reused), or when the process owning that stream terminates with exit(), abort(), or due to a signal. A file descriptor is closed by close(), _exit(), or the exec() functions when FD_CLOEXEC is set on that file descriptor.
(1) [sic] Using 'non-full' is probably a typo for 'non-null'.
For a handle to become the active handle, the application shall ensure that the actions below are performed between the last use of the handle (the current active handle) and the first use of the second handle (the future active handle). The second handle then becomes the active handle. All activity by the application affecting the file offset on the first handle shall be suspended until it again becomes the active file handle. (If a stream function has as an underlying function one that affects the file offset, the stream function shall be considered to affect the file offset.)
The handles need not be in the same process for these rules to apply.
Note that after a fork(), two handles exist where one existed before. The application shall ensure that, if both handles can ever be accessed, they are both in a state where the other could become the active handle first. The application shall prepare for a fork() exactly as if it were a change of active handle. (If the only action performed by one of the processes is one of the exec() functions or _exit() (not exit()), the handle is never accessed in that process.)
For the first handle, the first applicable condition below applies. After the actions required below are taken, if the handle is still open, the application can close it.
If it is a file descriptor, no action is required.
If the only further action to be performed on any handle to this open file descriptor is to close it, no action need be taken.
If it is a stream which is unbuffered, no action need be taken.
If it is a stream which is line buffered, and the last byte written to the stream was a <newline> (that is, as if a
putc('\n')
was the most recent operation on that stream), no action need be taken.
If it is a stream which is open for writing or appending (but not also open for reading), the application shall either perform an fflush(), or the stream shall be closed.
If the stream is open for reading and it is at the end of the file (feof() is true), no action need be taken.
If the stream is open with a mode that allows reading and the underlying open file description refers to a device that is capable of seeking, the application shall either perform an fflush(), or the stream shall be closed.
For the second handle:
If any previous active handle has been used by a function that explicitly changed the file offset, except as required above for the first handle, the application shall perform an lseek() or fseek() (as appropriate to the type of handle) to an appropriate location.
If the active handle ceases to be accessible before the requirements on the first handle, above, have been met, the state of the open file description becomes undefined. This might occur during functions such as a fork() or _exit().
The exec() functions make inaccessible all streams that are open at the time they are called, independent of which streams or file descriptors may be available to the new process image.
When these rules are followed, regardless of the sequence of handles used, implementations shall ensure that an application, even one consisting of several processes, shall yield correct results: no data shall be lost or duplicated when writing, and all data shall be written in order, except as requested by seeks. It is implementation-defined whether, and under what conditions, all input is seen exactly once.
Each function that operates on a stream is said to have zero or more "underlying functions". This means that the stream function shares certain traits with the underlying functions, but does not require that there be any relation between the implementations of the stream function and its underlying functions.
Exegesis
That is hard reading! If you're not clear on the distinction between open file descriptor and open file description, read the specification of open() and fork() (and dup() or dup2()). The definitions for file descriptor and open file description are also relevant, if terse.
In the context of the code in this question (and also for Unwanted child processes being created while file reading), we have a file stream handle open for reading only which has not yet encountered EOF (so feof() would not return true, even though the read position is at the end of the file).
One of the crucial parts of the specification is: The application shall prepare for a fork() exactly as if it were a change of active handle.
This means that the steps outlined for 'first file handle' are relevant, and stepping through them, the first applicable condition is the last:
If the stream is open with a mode that allows reading and the underlying open file description refers to a device that is capable of seeking, the application shall either perform an fflush(), or the stream shall be closed.
If you look at the definition for fflush(), you find:
If stream points to an output stream or an update stream in which the most recent operation was not input, fflush() shall cause any unwritten data for that stream to be written to the file, [CX] ⌦ and the last data modification and last file status change timestamps of the underlying file shall be marked for update.
For a stream open for reading with an underlying file description, if the file is not already at EOF, and the file is one capable of seeking, the file offset of the underlying open file description shall be set to the file position of the stream, and any characters pushed back onto the stream by ungetc() or ungetwc() that have not subsequently been read from the stream shall be discarded (without further changing the file offset). ⌫
It isn't exactly clear what happens if you apply fflush() to an input stream associated with a non-seekable file, but that isn't our immediate concern. However, if you're writing generic library code, then you might need to know whether the underlying file descriptor is seekable before doing a fflush() on the stream. Alternatively, use fflush(NULL) to have the system do whatever is necessary for all I/O streams, noting that this will lose any pushed-back characters (via ungetc() etc).
The lseek() operations shown in the strace output seem to be implementing the fflush() semantics associating the file offset of the open file description with the file position of the stream.
So, for the code in this question, it seems that fflush(stdin) is necessary before the fork() to ensure consistency. Not doing that leads to undefined behaviour ('if this is not done, the result is undefined') — such as looping indefinitely.
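Here is a sketch of the original loop with that fix applied (my adaptation, not part of the original answer). The fflush(stdin) before fork() brings the open file description's offset in line with the stream position, so the child's exit-time cleanup has nothing left to "correct":
#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

enum { MAX = 100 };

int main(void)
{
    if (freopen("input.txt", "r", stdin) == 0)
        return 1;
    char s[MAX];
    while (fgets(s, MAX, stdin) != NULL)
    {
        fflush(stdin);   /* sync the file offset with the stream position (POSIX semantics quoted above) */
        int status;
        pid_t pid = fork();
        if (pid == 0)
            exit(0);     /* using _exit(0) here instead is another way out, as a later answer notes */
        else
            waitpid(pid, &status, 0);
        printf("%s", s);
    }
    return 0;
}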
The exit() call closes all open stdio streams. After the fork, the child and parent have identical copies of the stdio state, including the FILE handle for stdin. When the child exits, it closes that stream, and (as the strace output above shows) that adjusts the shared file offset.
int main(){
    freopen("input.txt", "r", stdin);
    char s[MAX];
    int i = 0;
    char* ret = fgets(s, MAX, stdin);
    while (ret != NULL) {
        //Commenting out this region fixes the issue
        int status;
        pid_t pid = fork(); // At this point both processes have a copy of the file handle
        if (pid == 0) {
            exit(0); // At this point the child closes the file handle
        } else {
            waitpid(pid, &status, 0);
        }
        //End region
        printf("%s", s);
        ret = fgets(s, MAX, stdin);
    }
}
As /u/visibleman pointed out, the child process is closing the file and messing things up in main.
I was able to work around it by checking whether the program is in terminal mode with
!isatty(fileno(stdin))
If stdin has been redirected, it reads all of the input into a linked list before doing any processing or forking.
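A rough sketch of that workaround (mine; it uses a fixed-size array of strdup()'d lines rather than a linked list, purely to keep the example short):
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

enum { MAX = 100, MAX_LINES = 1000 };

int main(void)
{
    char *lines[MAX_LINES];
    size_t nlines = 0;
    if (!isatty(fileno(stdin)))          /* stdin was redirected */
    {
        char s[MAX];
        while (nlines < MAX_LINES && fgets(s, MAX, stdin) != NULL)
            lines[nlines++] = strdup(s); /* slurp everything before any fork() */
    }
    /* ... any forking/processing happens here, with the input already in memory ... */
    for (size_t i = 0; i < nlines; i++)
    {
        printf("%s", lines[i]);
        free(lines[i]);
    }
    return 0;
}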
Replace exit(0) with _exit(0), and all is fine. This is an old Unix tradition: if you are using stdio, your forked image must use _exit(), not exit().
I am working with a multi-threaded program.
First I redirect my stdout to a certain file. No problem there (I used dup2(fd, 1) where fd is the file descriptor for the file).
Afterwards, I need to redirect my stdout to the terminal again.
My first approach:
/* Declaration */
fpos_t stream_stdout;
/* Code */
if (fgetpos(stdout, &stream_stdout) == -1)
    perror("Error");
It says illegal seek.
No idea why this is happening.
But if I get this to work, then I only need to use fsetpos(stdout, &stream_stdout) and it should work.
My second idea was to copy stdout to position 4 of the file descriptor table using dup2(stdout, 4). But that isn't working either.
How can I switch the standard output back to its original destination (terminal, pipe, file, whatever)?
#include <unistd.h>
...
int saved_stdout;
...
/* Save current stdout for use later */
saved_stdout = dup(1);
dup2(my_temporary_stdout_fd, 1);
... do some work on your new stdout ...
/* Restore stdout */
dup2(saved_stdout, 1);
close(saved_stdout);
Before you do the dup2(fd, STDOUT_FILENO), you should save the current open file descriptor for standard output by doing int saved_stdout = dup(STDOUT_FILENO); (letting dup() choose an available file descriptor number for you). Then, after you've finished with the output redirected to a file, you can do dup2(saved_stdout, STDOUT_FILENO) to restore standard output to where it was before you started all this (and you should close saved_stdout too).
You do need to worry about flushing standard I/O streams (fflush(stdout)) at appropriate times as you mess around with this. That means 'before you switch stdout over'.
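Putting those pieces together, a minimal sketch (mine; "out.txt" is just a stand-in for wherever you are redirecting to):
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    int fd = open("out.txt", O_WRONLY | O_CREAT | O_TRUNC, 0644);
    if (fd == -1) { perror("open"); return 1; }

    fflush(stdout);                          /* flush before switching stdout over */
    int saved_stdout = dup(STDOUT_FILENO);   /* remember where stdout used to go */
    dup2(fd, STDOUT_FILENO);
    close(fd);

    printf("this goes to out.txt\n");

    fflush(stdout);                          /* flush again before switching back */
    dup2(saved_stdout, STDOUT_FILENO);
    close(saved_stdout);

    printf("this goes to the original stdout\n");
    return 0;
}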
If the program runs in a Linux environment, you can freopen("/dev/stdout", "a", stdout).
But if you know that stdout was the terminal, use freopen("/dev/tty", "a", stdout) or the equivalent for other OSs — even Windows.
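A small sketch of that approach (mine; "out.txt" is an arbitrary file name, and it only works when the process actually has a controlling terminal):
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    int fd = open("out.txt", O_WRONLY | O_CREAT | O_TRUNC, 0644);
    if (fd == -1) { perror("open"); return 1; }
    dup2(fd, STDOUT_FILENO);
    close(fd);
    printf("into the file\n");
    fflush(stdout);

    if (freopen("/dev/tty", "a", stdout) == NULL)   /* reattach stdout to the terminal */
    {
        perror("freopen /dev/tty");                 /* stderr is still the terminal */
        return 1;
    }
    printf("back on the terminal\n");
    return 0;
}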
I know that dup, dup2, and dup3 "create a copy of the file descriptor oldfd" (from the man pages). However, I can't quite digest it.
As far as I know, file descriptors are just numbers that keep track of open files and their direction (input/output). Wouldn't it be easier to just write
fd=fd2;
Whenever we want to duplicate a file descriptor?
And something else..
dup() uses the lowest-numbered unused descriptor for the new descriptor.
Does that mean that it can also take as value stdin, stdout or stderr if we assume that we have close()-ed one of those?
Just wanted to respond to myself on the second question after experimenting a bit.
The answer is YES. A file descriptor that you make can take a value 0, 1, 2 if stdin, stdout or stderr are closed.
Example:
close(1); //closing stdout
newfd=dup(1); //newfd takes value of least available fd number
Where this happens to file descriptors:
0 stdin .--------------. 0 stdin .--------------. 0 stdin
1 stdout =| close(1) :=> 2 stderr =| newfd=dup(1) :=> 1 newfd
2 stderr '--------------' '--------------' 2 stderr
A file descriptor is a bit more than a number. It also carries various semi-hidden state with it (whether it's open or not, to which file description it refers, and also some flags). dup duplicates this information, so you can e.g. close the two descriptors independently. fd=fd2 does not.
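A small sketch of that difference (mine; it assumes a readable file named file.txt exists):
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    int fd = open("file.txt", O_RDONLY);
    if (fd == -1) { perror("open"); return 1; }
    int copy  = dup(fd);   /* a genuinely new descriptor for the same open file description */
    int alias = fd;        /* just the same number stored in another variable */

    close(fd);
    char c;
    printf("read via copy:  %ld\n", (long)read(copy, &c, 1));   /* still works */
    printf("read via alias: %ld\n", (long)read(alias, &c, 1));  /* -1: that descriptor is already closed */
    close(copy);
    return 0;
}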
Let's say you're writing a shell program and you want to redirect stdin and stdout in a program you want to run. It could look something like this:
fdin = open(infile, O_RDONLY);
fdout = open(outfile, O_WRONLY | O_CREAT | O_TRUNC, 0666);
// Check for errors, send messages to stdout.
...
int pid = fork();
if (pid == 0) {
    close(0);
    dup(fdin);
    close(fdin);
    close(1);
    dup(fdout);
    close(fdout);
    execvp(program, argv);
}
// Parent process cleans up, maybe waits for child.
...
dup2() is a slightly more convenient way to do it; the close()/dup() pairs above can be replaced by:
dup2(fdin, 0);
dup2(fdout, 1);
The reason for doing it this way is that you still want to be able to report errors to stdout (or stderr), so you can't simply close them and open the new files blindly; the redirection belongs in the child process. Secondly, it would be a waste to fork at all if either open() call returned an error.
The single most important thing about dup() is that it returns the smallest integer available for a new file descriptor. That's the basis of redirection:
int fd_redirect_to = open("file", O_CREAT | O_WRONLY, 0644);
close(1); /* stdout */
int fd_to_redirect = dup(fd_redirect_to); /* magically returns 1: stdout */
close(fd_redirect_to); /* we don't need this */
After this, anything written to file descriptor 1 (stdout) magically goes into "file".
Example:
close(1); //closing stdout
newfd=dup(1); //newfd takes value of least available fd number
Where this happens to file descriptors:
0 stdin .--------------. 0 stdin .--------------. 0 stdin
1 stdout =| close(1) :=> 2 stderr =| newfd=dup(1) :=> 1 newfd
2 stderr '--------------' '--------------' 2 stderr
A question arose again: How can I dup() a file descriptor that I already closed?
I doubt that you conducted the above experiment with the shown result, because that would not be standard-conforming - cf. dup:
The dup() function shall fail if:
[EBADF]
The fildes argument is not a valid open file descriptor.
So, after the shown code sequence, newfd will not be 1, but rather -1, with errno set to EBADF.
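A quick way to check that claim (my sketch; the report goes to stderr because fd 1 has just been closed):
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
    close(1);            /* close stdout */
    errno = 0;
    int newfd = dup(1);  /* try to duplicate the now-closed descriptor */
    fprintf(stderr, "dup(1) = %d, errno = %s\n", newfd, strerror(errno));
    return 0;
}
On a conforming system this reports dup(1) = -1 with EBADF ("Bad file descriptor"), matching the quoted text.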
See this page: a duplicate of stdout can be obtained with dup(1)...
Just a tip about "duplicating standard output".
On some Unix systems (but not GNU/Linux),
fd = open("/dev/fd/1", O_WRONLY);
is equivalent to:
fd = dup(1);
dup() and dup2() system call
• The dup() system call duplicates an open file descriptor and returns the new file descriptor.
• The new file descriptor has the following properties in common with the original file descriptor:
1. It refers to the same open file or pipe.
2. It has the same file pointer -- that is, both file descriptors share one file pointer.
3. It has the same access mode, whether read, write, or read and write.
• dup() is guaranteed to return a file descriptor with the lowest integer value available. It is because of this feature of returning the lowest unused file descriptor that processes accomplish I/O redirection.
int dup(int file_descriptor);
int dup2(int file_descriptor1, int file_descriptor2);
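A small sketch of property 2 above (mine; it assumes file.txt exists and is at least one byte long): because both descriptors share one file pointer, reading through one moves the position seen by the other.
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    int fd = open("file.txt", O_RDONLY);
    if (fd == -1) { perror("open"); return 1; }
    int fd2 = dup(fd);

    char c;
    read(fd, &c, 1);  /* advances the shared file pointer to offset 1 */
    printf("offset seen via fd2: %ld\n", (long)lseek(fd2, 0, SEEK_CUR));  /* prints 1 */

    close(fd);
    close(fd2);
    return 0;
}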