I want to pipe the output of a child process to the parent's stdout. I know there are other ways of doing this, but why can't a pipe's read end be duplicated to stdout? Why doesn't the program print what is written to the pipe's write end?
Here I have a minimal example (without any subprocesses) of what I'm trying to do. I'm expecting to see "test" in the output when running it, but the program outputs nothing.
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
int main(void) {
    int fds[2];
    if (pipe(fds) == -1) {
        perror("pipe");
        exit(EXIT_FAILURE);
    }
    if (write(fds[1], "test", 5) == -1) {
        perror("write");
        exit(EXIT_FAILURE);
    }
    if (dup2(fds[0], STDOUT_FILENO) == -1) {
        perror("dup2");
        exit(EXIT_FAILURE);
    }
    return 0;
}
A pipe is two “files” that share a buffer and some locking or control semantics. When you write into the pipe, the data is put into the buffer. When you read from the pipe, the data is taken from that buffer.
There is nothing in the pipe that moves data to some output device.
If you use dup2 to duplicate the read side of the pipe into the standard output file descriptor (number 1), then all you have is the read side of the pipe on file descriptor 1. That means you can issue read operations to file descriptor 1, and the system will give your program data from the pipe.
There is nothing “special” about file descriptor 1 in this regard. Putting any file on file descriptor 1 does not cause that file to be automatically sent anywhere. The way standard output works normally is that you open a terminal or some chosen output file or other device on file descriptor 1, and then you send things to that device or file by writing to file descriptor 1. The operating system does not automatically write things to file descriptor 1; you have to issue write operations.
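There is, however, nothing stopping you from doing that copying yourself: read from the pipe and write what you get to standard output. A minimal sketch of that idea (buffer size chosen arbitrarily, error handling kept short):

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(void) {
    int fds[2];
    char buf[64];

    if (pipe(fds) == -1) {
        perror("pipe");
        exit(EXIT_FAILURE);
    }
    if (write(fds[1], "test", 5) == -1) {
        perror("write");
        exit(EXIT_FAILURE);
    }

    /* Read what was written into the pipe's buffer... */
    ssize_t n = read(fds[0], buf, sizeof buf);
    if (n == -1) {
        perror("read");
        exit(EXIT_FAILURE);
    }

    /* ...and explicitly write it to standard output. Nothing reaches
       the terminal unless some code performs this write. */
    if (write(STDOUT_FILENO, buf, (size_t)n) == -1) {
        perror("write stdout");
        exit(EXIT_FAILURE);
    }
    return 0;
}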
I'm trying to understand what is behind this behaviour in my parent process.
Basically, I create a child process and connect its stdout to my pipe. The parent process continuously reads from the pipe and does some stuff.
I noticed that when I insert the while loop in the parent, stdout seems to be lost: nothing appears on the terminal. I thought the parent's output might somehow be going to the pipe (maybe an issue with dup2), but that doesn't seem to be the problem. If I don't continuously fflush(stdout) in the parent process, whatever I'm trying to print to the terminal just won't show. Without the while loop in the parent it works fine, but I'm really not sure why this happens or whether the rest of my implementation is problematic somehow.
Nothing past the read system call seems to be reaching stdout in the parent process. Assuming the output of inotifywait in the pipe is small enough (fewer than 30 bytes), what exactly is wrong with this program?
What I expect to happen is for the stdout of inotifywait to go to the pipe, then for the parent to read the message, run strtok, and print the file name (which only appears on stdout when I fflush).
Running the program with inotify installed and creating any file in the current directory of the program should be enough. Removing the while loop does print the created file's name (as expected).
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <signal.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/stat.h>
#include <sys/wait.h>
#include <fcntl.h>
#include <errno.h>
int main(void) {
    char b[100];
    int pipefd;

    if (mkfifo("fifo", 0666) == -1) {
        if (errno != EEXIST) {
            perror("mkfifo");
            exit(EXIT_FAILURE);
        }
    }

    pid_t pid = fork();
    if (pid < 0) {
        perror("fork");
        exit(1);
    }

    if ((pipefd = open("fifo", O_RDWR)) < 0) {
        perror("open pipe");
        exit(EXIT_FAILURE);
    }

    if (pid == 0) {
        dup2(pipefd, 1);
        const char* dir = ".";
        const char* args[] = {"inotifywait", dir, "-m", "-e",
                              "create", "-e", "moved_to", NULL};
        execvp("inotifywait", (char**)args);
        perror("inotifywait");
    } else {
        while (1) {
            fflush(stdout); // the output only appears in stdout with this here
            if (read(pipefd, b, 30) < 0) {
                perror("problem # read");
                exit(1);
            }
            char filename[30];
            printf("anything");
            sscanf(b, "./ CREATE %s", filename);
            printf("%s", filename);
        }
    }
}
The streams used by the C standard library are designed in such a way that they are normally buffered (except for the standard error stream stderr).
The standard output stream is normally line buffered, unless the output device is not an interactive device, in which case it is normally fully buffered. Therefore, in your case, it is probably line buffered.
This means that the buffer will only be flushed
when it is full,
when an \n character is encountered,
when the stream is closed (e.g. during normal program termination),
when reading input from an unbuffered or line-buffered stream (in certain situations), or
when you explicitly call fflush.
This explains why you are not seeing the output, because none of the above are happening in your infinite loop (when you don't call fflush). Although you are reading input, you are not doing this from a C standard library FILE * stream. Instead, you are bypassing the C runtime library (e.g. glibc) by using the read system call directly (i.e. you are using a file descriptor instead of a stream).
The simplest solution to your problem would probably be to replace the line
printf("%s", filename);
with:
printf("%s\n", filename);
If stdout is line-buffered (which should be the case if it is connected to a terminal), then the output should automatically be flushed after every line, and an explicit call to fflush should no longer be necessary.
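Alternatively, if you would rather not rely on a trailing newline, you could change the buffering mode of stdout yourself. A minimal sketch of that idea (not the only possible fix):

#include <stdio.h>
#include <unistd.h>

int main(void) {
    /* Make stdout unbuffered: every printf is handed to the kernel
       immediately, without waiting for '\n', a full buffer, or fflush(). */
    setvbuf(stdout, NULL, _IONBF, 0);

    printf("visible immediately, even without a newline");
    sleep(5);   /* the text is already on the terminal during this pause */
    return 0;
}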
Ok guys, there are a billion demos relating to dup, dup2, fcntl, pipe and all kinds of stuff that are wonderful when multiple processes exist. However, I have yet to see one very basic thing that I think will help explain the behavior of pipe and its relationship to standard out and in.
My goal is to simply (in the same process) reroute standard output through a pipe back to standard output directly. I have already accomplished this
with intermediate stages which redirect the pipe output to a file or write into a buffer... and then put standard output back to where it started. At that point, of course I can write the buffer back to stdout, but I don't want to do this.
Since I moved standard output to another location in the file table, I'd like to direct the output of the pipe to feed directly into the new standard output position and have it print like it normally would.
I feel like there is some kind of layer surrounding the file table that I am not understanding.
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <fcntl.h>
int main() {
    int pipeEnds_arr1[2];
    char str1[] = "STRING TO FEED INTO PIPE \n"; // make a string array

    pipe(pipeEnds_arr1);
    printf("File Descriptor for pipe ends from array\nPOSITION out 0 : %d\nPOSITION in 1 : %d\n", pipeEnds_arr1[0], pipeEnds_arr1[1]);

    /* now my goal is to shift the input of the pipe into the position of
     * standard output, so that the print command feeds the pipe, then I
     * would like to redirect the other end of the pipe to standard out.
     */
    int someInt = dup(1); // duplicates stdout to next available file table position
    printf("Some Int FD: %d\n", someInt); // print out the fd for someInt just for knowing where it is

    /* This is the problem area. The out end of the pipe never
     * makes it back to std out, and I see no way to do so.
     * Stdout should be in the file table position 5, but when
     * I dup2 the output end of the pipe into this position,
     * I believe I am actually overwriting std out completely.
     * But I don't want to overwrite it, I want to feed the output
     * of the pipe into std out. I think I am fundamentally
     * misunderstanding this issue.
     */
    dup2(pipeEnds_arr1[1], 1); // put input end of pipe into std out position
    //dup2(pipeEnds_arr1[0], 5); // this will not work
    // and other tests I have conducted do not work

    printf("File Descriptor for pipe ends from array\nPOSITION out 0 : %d\nPOSITION in 1 : %d\n", pipeEnds_arr1[0], pipeEnds_arr1[1]);
    fflush(stdout);

    close(pipeEnds_arr1[0]);
    close(pipeEnds_arr1[1]);
    return 0;
}
EDIT:
OK, what I know is that somehow stdout takes information from commands like printf and then routes it into a buffer that is then flushed to the shell.
What I believe is that there must be a way to route the "read" or output end of the pipe to that same buffer that then gets to the shell. I have figured out how to route the pipe output into a string, and then I can do as I please. In the example code I post below, I will first route the pipe output into a string and then open a file and write the string to the open file descriptor of that file...
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <fcntl.h>
int main() {
    /* Each pipe end array has to have 2 positions in it. The array
     * position represents the two pipe ends, with the 0 index
     * position representing the output of the pipe (the place you want
     * to read your data from), and the 1 index position representing the
     * input file descriptor of the pipe (the place you want to write
     * your data).
     */
    int pipeEnds_arr1[2];
    char str1[] = "Hello, we are feeding this into the pipe that we are through stdout into a pipe and then reading from the pipe and then feeding that output into a file \n"; // make a string array

    /* Here we want to actually do the pipe command. We feed it the array
     * with the 2 positions in it which will now hold file descriptors
     * attached to the current process which allow for input and output
     * through the new pipe. At this point, we don't know what the
     * exact file descriptors are, but we can look at them by printing.
     */
    pipe(pipeEnds_arr1);
    printf("File Descriptor for pipe ends from array\nPOSITION out 0 : %d\nPOSITION in 1 : %d\n", pipeEnds_arr1[0], pipeEnds_arr1[1]);

    /* now my goal is to shift the input of the pipe into the position of
     * standard output, so that the print command feeds the pipe, then we
     * will try to read from the pipe and redirect the output to the std
     * or in this test case out to a file.
     */
    int someInt = dup(1); // we moved what was stdout into someInt

    /* put the write end of the pipe in the old stdout position by
     * using dup2 so we will print directly into the pipe
     */
    dup2(pipeEnds_arr1[1], 1);

    /* this is where I'd like to re-route the pipe back to stdout but
     * I'm obviously not understanding this correctly
     */
    //dup2(someInt, 3);

    /* since std out has now been replaced by the pipe write end, this
     * printf will print into the pipe
     */
    printf("%s", str1);

    /* now we read from the pipe into a new string we make */
    int n;
    char str2[strlen(str1)];
    n = read(pipeEnds_arr1[0], str2, sizeof(str2)-1);
    str2[n] = 0;

    /* open a file and then write into it from the output of the pipe
     * that we saved into str2
     */
    int fd = open("tmp.out", O_WRONLY | O_CREAT | O_TRUNC, 0644);
    write(fd, str2, strlen(str2));

    /* not sure about these last commands and their relevance */
    fflush(stdout);
    close(pipeEnds_arr1[0]);
    close(pipeEnds_arr1[1]);
    close(fd);
    return 0;
}
Pipes aren't between file descriptors. They are between processes. So it doesn't make any sense to "reroute standard out through a pipe".
What you can do is modify a process's file descriptor table so that its stdout (fd 1) is the write side of a pipe. And you can modify another process's file descriptor table so that some file descriptor, perhaps even stdin (fd 0) is the read side of the same pipe. That allows you to pass data through the pipe between the two processes. (You can set up a pipe between two fds in the same process, if you want to; it's occasionally useful but watch out for deadlocking.)
stdout is not some sort of magical entity. It's just entry 1 in the fd table, and it might refer to any "file", in the Unix sense of the word, which includes regular files, devices (including the console and the pseudoterminal your shell is communicating with), sockets, pipes, FIFOs, and whatever else the operating system feels worthy of allowing streaming access to.
Normally, when the shell starts running a command-line utility, it first clones fds 0, 1 and 2 (stdin, stdout and stderr) from its own fds 0, 1, and 2, which are normally all the same device: the console, or more commonly these days, the pseudoterminal provided by the graphical console application you are using. But you can change those assignments with, for example, shell redirection operators, shell pipe operators, and some shell-provided special files.
Finally, pipes do have small buffers in the kernel, but the key is the word "small" -- the buffer might hold as little as 4096 bytes. If it gets full, attempts to write to the pipe will hang until space becomes available, which only happens when data is read from the other side. That's why it is so easy to deadlock if the same process is using both sides of the pipe: if the process is hanging waiting for the pipe to be emptied, it will never be able to read the pipe.
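To make the two-process picture concrete, here is a rough sketch of the usual pattern: the child puts the pipe's write side on its fd 1 and execs a program, while the parent reads the other end and copies the data to its own stdout. The command being run (echo) is just an example, and error handling is abbreviated:

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void) {
    int fds[2];
    char buf[256];

    if (pipe(fds) == -1) { perror("pipe"); exit(EXIT_FAILURE); }

    pid_t pid = fork();
    if (pid == -1) { perror("fork"); exit(EXIT_FAILURE); }

    if (pid == 0) {                       /* child: writes into the pipe */
        dup2(fds[1], STDOUT_FILENO);      /* child's stdout is now the pipe's write end */
        close(fds[0]);
        close(fds[1]);
        execlp("echo", "echo", "hello from the child", (char *)NULL);
        perror("execlp");
        _exit(EXIT_FAILURE);
    }

    /* parent: reads from the pipe and forwards the data to its own stdout */
    close(fds[1]);                        /* important: otherwise read() never sees EOF */
    ssize_t n;
    while ((n = read(fds[0], buf, sizeof buf)) > 0)
        write(STDOUT_FILENO, buf, (size_t)n);
    close(fds[0]);
    wait(NULL);
    return 0;
}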
I am putting together a server-like process, which receives data from a named pipe and returns some output.
As everybody knows, when the pipe is opened for reading, it blocks the process until another process opens the pipe for writing (unless the nonblocking flag is set).
When another process opens the pipe and writes to it, we can get input like this:
...
opened_pipe = fopen(argv[1], "r");
while (1)
{
    if ( fgets(readbuf, FIFO_READLEN, opened_pipe) != NULL )
    { /* process the input from the writer */ }
    else
    {
        /* this is the branch where the writer closed his end of the pipe and the reader gets EOF */
        /* usually one exits here */
        /* but I would like to freeze the process and wait until another writer comes */
        /* (like a server-like application would do) */
    }
}
But when the writer exits, this while loop spins without doing anything useful.
It would be better if the reader returned to its initial state: the process blocked until the pipe is connected on the other end again. Is it possible to do so?
PS
I tried to create a dummy writer inside my program, which opens the same pipe for writing and keeps it open the whole time the loop sits at fgets. But it didn't work for me; maybe I made some mistake. Is it possible to pull off this trick?
One could also constantly close and reopen the pipe inside the while loop. But I want to use either the pipe or stdin as the input stream, and it would be better to treat them the same way in the program. So, can one reopen the stdin stream via fopen with some "stdin" filename?
Just open your FIFO in a server process twice - first for reading, then for writing. Doing this (opening it for writing) will ensure that your process will not see EOF if all clients abandon the FIFO.
Here's the short demonstration:
#include <stdio.h>
#include <stdlib.h>
#include <fcntl.h>
#include <sys/types.h>
#include <sys/stat.h>
#define FIFO_NAME "/tmp/fifo"
int main()
{
    if (mkfifo(FIFO_NAME, S_IRUSR | S_IWUSR | S_IWGRP) == -1)
    {
        perror("mkfifo");
        exit(EXIT_FAILURE);
    }

    FILE *readFd = fopen(FIFO_NAME, "r");
    FILE *writeFd = fopen(FIFO_NAME, "w");
    char c;
    for (;;)
    {
        c = fgetc(readFd);
        fprintf(stdout, "Read char: %c\n", c);
    }
}
Not sure I understand your question entirely, but in general, when reading from a pipe or FIFO (aka named pipe) that no longer has its writing end open, you will read EOF. When fgets() hits EOF it returns NULL. You could just check for that and, in that case, close the FIFO and reopen it (which blocks again until a new writer appears), re-entering your loop.
Something like (sticking with your pseudo-snippet):
while (1)
{
    opened_pipe = fopen(argv[1], "r");   /* blocks until a writer opens the FIFO */
    while (1)
    {
        if ( fgets(readbuf, FIFO_READLEN, opened_pipe) != NULL )
        { /* process the input from the writer */ }
        else
        {
            /* EOF: the writer closed its end, so close the FIFO and reopen it */
            fclose(opened_pipe);
            break;
        }
    }
}
edit: given your comment here, I get the impression you might want to use a Unix domain socket instead of a FIFO. Thus, you could accept() connections and handle them separately while still waiting for new connections.
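For reference, going back to the FIFO approach, a self-contained sketch of the close-and-reopen loop might look like this (the FIFO path comes from argv and the buffer size is arbitrary):

#include <stdio.h>
#include <stdlib.h>

#define FIFO_READLEN 256

int main(int argc, char *argv[]) {
    char readbuf[FIFO_READLEN];

    if (argc < 2) {
        fprintf(stderr, "usage: %s <fifo>\n", argv[0]);
        return EXIT_FAILURE;
    }

    for (;;) {
        /* Blocks here until some process opens the FIFO for writing. */
        FILE *opened_pipe = fopen(argv[1], "r");
        if (opened_pipe == NULL) {
            perror("fopen");
            return EXIT_FAILURE;
        }

        while (fgets(readbuf, FIFO_READLEN, opened_pipe) != NULL) {
            /* process the input from the writer */
            printf("got: %s", readbuf);
        }

        /* EOF: the writer closed its end; close and go back to waiting. */
        fclose(opened_pipe);
    }
}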
I created and wrote to a named pipe in C under Linux. For how long is the text that is written into it kept in the named pipe?
From what I have done, and from the size of the pipe file after my program has run, I suppose that the text is not preserved in the pipe after the program ends. In the mkfifo manual there is no info about this. I know that ordinary pipes are destroyed after the process that created them exits. But what about named pipes, which are still in your file system after the program has finished?
This is the code I use to create a named pipe and to write/read from it.
#include <stdio.h>
#include <unistd.h>
#include <sys/types.h>
#include <string.h>
#include <sys/stat.h>
#include <fcntl.h>
int main(int argc, char *argv[]) {
    int FIFOFileDescriptorID;
    FIFOFileDescriptorID = mkfifo(argv[1], 0660);

    int ProccesID = fork();
    if (ProccesID == 0) {
        int TempFileDescriptor = 0;
        char buffer[512] = "Some random text goes here...";
        TempFileDescriptor = open(argv[1], O_WRONLY);
        write(TempFileDescriptor, &buffer, sizeof(buffer));
        close(TempFileDescriptor);
    } else {
        int TempFileDescriptor = 0;
        char buffer[512];
        TempFileDescriptor = open(argv[1], O_RDONLY);
        read(TempFileDescriptor, &buffer, sizeof(buffer));
        close(TempFileDescriptor);
        printf("Received string: %s\n", buffer);
    }
    return 0;
}
After I ran this program, which created and used the pipe for writing/reading, I ran another one just to read the text from the given pipe. Indeed, there was no text there.
I will examine this more carefully, because there is a good chance the program deletes/creates the pipe again when it starts.
It will not save anything. When you read/write something from/to the named pipe, the process will be blocked unless some other process writes/reads on the same named pipe.
The file stays in the file-system. But the content goes away when reading/writing finishes.
From the Linux manual:
Once you have created a FIFO special file in this way, any process
can open it for reading or writing, in the same way as an ordinary file.
However, it has to be open at both ends simultaneously before you can
proceed to do any input or output operations on it. Opening a FIFO for
reading normally blocks until some other process opens the same FIFO for
writing, and vice versa.
Here is some code I wrote up to test named pipes; I made sure to handle all errors and to clean up on SIGPIPE.
Look at Wikipedia: http://en.wikipedia.org/wiki/Named_pipe - named pipes persist beyond the lifetime of the process that created or used them, until they are explicitly deleted.
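A small sketch illustrates that: the FIFO node stays in the filesystem (holding no data) until something unlinks it. The name used here is made up:

#include <stdio.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void) {
    /* The FIFO is a filesystem object: it stays in the directory after
       this program exits, but it never stores data between uses. */
    if (mkfifo("demo_fifo", 0660) == -1)
        perror("mkfifo");

    /* ... open, write, read as usual ... */

    if (unlink("demo_fifo") == -1)   /* explicitly delete the FIFO node */
        perror("unlink");
    return 0;
}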
I know that dup, dup2, dup3 "create a copy of the file descriptor oldfd"(from man pages). However I can't digest it.
As far as I know, file descriptors are just numbers to keep track of file locations and their direction (input/output). Wouldn't it be easier to just use
fd=fd2;
Whenever we want to duplicate a file descriptor?
And something else..
dup() uses the lowest-numbered unused descriptor for the new descriptor.
Does that mean that it can also take as value stdin, stdout or stderr if we assume that we have close()-ed one of those?
Just wanted to respond to myself on the second question after experimenting a bit.
The answer is YES. A file descriptor that you make can take a value 0, 1, 2 if stdin, stdout or stderr are closed.
Example:
close(1); //closing stdout
newfd=dup(1); //newfd takes value of least available fd number
Where this happens to file descriptors:
0 stdin .--------------. 0 stdin .--------------. 0 stdin
1 stdout =| close(1) :=> 2 stderr =| newfd=dup(1) :=> 1 newfd
2 stderr '--------------' '--------------' 2 stderr
A file descriptor is a bit more than a number. It also carries various semi-hidden state with it (whether it's open or not, to which file description it refers, and also some flags). dup duplicates this information, so you can e.g. close the two descriptors independently. fd=fd2 does not.
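A tiny sketch of that difference (just an illustration, writing straight to the terminal):

#include <unistd.h>

int main(void) {
    int copy_by_assignment = STDOUT_FILENO;      /* just another variable holding the number 1 */
    int copy_by_dup        = dup(STDOUT_FILENO); /* a genuinely new entry in the fd table */

    close(copy_by_assignment);                   /* this closes fd 1 itself: stdout is gone */

    /* The dup'ed descriptor still refers to the same open file description
       (the terminal), so it can still be written to independently. */
    const char msg[] = "still reachable through the dup'ed descriptor\n";
    write(copy_by_dup, msg, sizeof msg - 1);
    close(copy_by_dup);
    return 0;
}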
Let's say you're writing a shell program and you want to redirect stdin and stdout in a program you want to run. It could look something like this:
int fdin = open(infile, O_RDONLY);
int fdout = open(outfile, O_WRONLY | O_CREAT | O_TRUNC, 0644);
// Check for errors, send messages to stdout.
...
int pid = fork();
if (pid == 0) {
    close(0);
    dup(fdin);    // lowest free fd is 0, so fdin becomes the child's stdin
    close(fdin);
    close(1);
    dup(fdout);   // lowest free fd is 1, so fdout becomes the child's stdout
    close(fdout);
    execvp(program, argv);
}
// Parent process cleans up, maybe waits for child.
...
dup2() is a little more convenient way to do it; the close()/dup() pairs can be replaced by:
dup2(fdin, 0);
dup2(fdout, 1);
The reason why you want to do this is that you want to report errors to stdout (or stderr) so you can't just close them and open a new file in the child process. Secondly, it would be a waste to do the fork if either open() call returned an error.
The single most important thing about dup() is that it returns the smallest available integer as the new file descriptor. That's the basis of redirection:
int fd_redirect_to = open("file", O_WRONLY | O_CREAT, 0644);
close(1); /* stdout */
int fd_to_redirect = dup(fd_redirect_to); /* magically returns 1: stdout */
close(fd_redirect_to); /* we don't need this */
After this anything written to file descriptor 1 (stdout), magically goes into "file".
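Putting it together, a small self-contained sketch (file name and messages are arbitrary) that redirects stdout into a file with close()/dup() and then restores it might look like this:

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void) {
    int saved_stdout = dup(1);                 /* keep a copy of the real stdout */

    int fd = open("file", O_WRONLY | O_CREAT | O_TRUNC, 0644);
    close(1);                                  /* free fd 1 */
    dup(fd);                                   /* lowest free fd is 1, so "file" is now stdout */
    close(fd);                                 /* the original descriptor is no longer needed */

    printf("this line ends up in \"file\"\n");
    fflush(stdout);                            /* push it out before switching back */

    dup2(saved_stdout, 1);                     /* restore the original stdout */
    close(saved_stdout);
    printf("this line goes to the terminal again\n");
    return 0;
}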
Example:
close(1); //closing stdout
newfd=dup(1); //newfd takes value of least available fd number
Where this happens to file descriptors:
0 stdin .--------------. 0 stdin .--------------. 0 stdin
1 stdout =| close(1) :=> 2 stderr =| newfd=dup(1) :=> 1 newfd
2 stderr '--------------' '--------------' 2 stderr
A question arose again: How can I dup() a file descriptor that I already closed?
I doubt that you conducted the above experiment with the shown result, because that would not be standard-conforming - cf. dup:
The dup() function shall fail if:
[EBADF]
The fildes argument is not a valid open file descriptor.
So, after the shown code sequence, newfd must not be 1, but rather -1, with errno set to EBADF.
see this page, stdout can be aliased as dup(1)...
Just a tip about "duplicating standard output".
On some Unix systems (but not GNU/Linux),
fd = open("/dev/fd/1", O_WRONLY);
is equivalent to:
fd = dup(1);
dup() and dup2() system calls

• The dup() system call duplicates an open file descriptor and returns the new file descriptor.
• The new file descriptor has the following properties in common with the original file descriptor:
1. It refers to the same open file or pipe.
2. It has the same file pointer -- that is, both file descriptors share one file pointer.
3. It has the same access mode, whether read, write, or read and write.
• dup() is guaranteed to return a file descriptor with the lowest integer value available. It is because of this feature of returning the lowest unused file descriptor available that processes accomplish I/O redirection.

int dup(int file_descriptor);
int dup2(int file_descriptor1, int file_descriptor2);
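Property 2 above (the shared file pointer) is easy to observe with a small sketch; the file name is just a placeholder:

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void) {
    int fd  = open("shared_offset.txt", O_WRONLY | O_CREAT | O_TRUNC, 0644);
    int fd2 = dup(fd);                 /* both descriptors share one file pointer */

    write(fd,  "AAA", 3);              /* offset is now 3 for BOTH descriptors */
    write(fd2, "BBB", 3);              /* continues at offset 3, not at 0      */

    /* The file now contains "AAABBB", and both descriptors report offset 6. */
    printf("offset seen through fd : %ld\n", (long)lseek(fd,  0, SEEK_CUR));
    printf("offset seen through fd2: %ld\n", (long)lseek(fd2, 0, SEEK_CUR));

    close(fd);
    close(fd2);
    return 0;
}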