I am confused about how to properly use close() to close pipe file descriptors in C. I am fairly new to C, so I apologize if this is too elementary, but I cannot find any explanation elsewhere.
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/wait.h>

int main()
{
    int fd[2];
    pipe(fd);
    if (fork() == 0) {
        close(0);
        dup(fd[0]);
        close(fd[0]);
        close(fd[1]);
    } else {
        close(fd[0]);
        write(fd[1], "hi", 2);
        close(fd[1]);
    }
    wait((int *) 0);
    exit(0);
}
My first question is: In the above code, the child process will close the write side of fd. If we first reach close(fd[1]) in the child, and then the parent process reaches write(fd[1], "hi", 2), wouldn't fd[1] already have been closed?
int main()
{
    char *receive;
    int fd[2];
    pipe(fd);
    if (fork() == 0) {
        while (read(fd[0], receive, 2) != 0) {
            printf("got u!\n");
        }
    } else {
        for (int i = 0; i < 2; i++) {
            write(fd[1], "hi", 2);
        }
        close(fd[1]);
    }
    wait((int *) 0);
    exit(0);
}
The second question is: In the above code, would it be possible for us to reach close(fd[1]) in the parent process before the child process finishes receiving all the contents? If yes, then what is the correct way to communicate between parent and child? My understanding here is that if we do not close fd[1] in the parent, then read will keep being blocked, and the program won't exit either.
First of all, note that after fork() the file descriptors in fd are also copied over to the child process. So basically, a pipe acts like a file, with each process holding its own references to the read end and the write end of the pipe. Essentially there are two read and two write file descriptors, one of each per process.
My first question is: In the above code, the child process will close the write side of fd. If we first reach close(fd[1]) in the child, and then the parent process reaches write(fd[1], "hi", 2), wouldn't fd[1] already have been closed?
Answer: No. The fd[1] in the parent process is the parent's own copy of the write end. By closing its fd[1], the child has only given up its own right to write to the pipe; that does not stop the parent from writing into it.
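To see this concretely, here is a minimal standalone sketch (my own example, not from the question) in which the child closes its copy of fd[1] right away and the parent still writes successfully:

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void)
{
    int fd[2];
    char buf[16];

    if (pipe(fd) == -1) { perror("pipe"); exit(1); }

    if (fork() == 0) {
        close(fd[1]);                 /* child gives up ITS write end only */
        ssize_t n = read(fd[0], buf, sizeof(buf) - 1);  /* still gets data */
        buf[n > 0 ? n : 0] = '\0';
        printf("child read: %s\n", buf);
        close(fd[0]);
        exit(0);
    }

    close(fd[0]);                     /* parent gives up its read end */
    if (write(fd[1], "hi", 2) == 2)   /* succeeds: parent's fd[1] is still open */
        printf("parent write succeeded\n");
    close(fd[1]);
    wait(NULL);
    return 0;
}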
Before answering the second question, I fixed your code to actually run it and produce some results.
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/wait.h>

int main()
{
    char receive[10];
    int fd[2];
    pipe(fd);
    if (fork() == 0) {
        close(fd[1]);                          // Close UNUSED write end
        while (read(fd[0], receive, 2) != 0) {
            printf("got u!\n");
            receive[2] = '\0';
            printf("%s\n", receive);
        }
        close(fd[0]);                          // Close read end after reading
    } else {
        close(fd[0]);                          // Close UNUSED read end
        for (int i = 0; i < 2; i++) {
            write(fd[1], "hi", 2);
        }
        close(fd[1]);                          // Close write end after writing
        wait((int *) 0);
    }
    exit(0);
}
Result:
got u!
hi
got u!
hi
Note: We (seemingly) lost one hi because we are reading into the same array receive, which essentially overwrites the first hi. You can use a 2D char array to retain both messages.
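For completeness, here is a sketch of that 2D-array idea (my own variant of the fixed program above; the array sizes are arbitrary):

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void)
{
    char receive[10][3];              /* one row per 2-byte message */
    int fd[2];

    if (pipe(fd) == -1) { perror("pipe"); exit(1); }

    if (fork() == 0) {
        int n = 0;
        ssize_t r;
        close(fd[1]);                                 /* unused write end */
        while (n < 10 && (r = read(fd[0], receive[n], 2)) > 0) {
            receive[n][r] = '\0';                     /* each message keeps its own row */
            n++;
        }
        close(fd[0]);
        for (int i = 0; i < n; i++)
            printf("message %d: %s\n", i, receive[i]);
    } else {
        close(fd[0]);                                 /* unused read end */
        for (int i = 0; i < 2; i++)
            write(fd[1], "hi", 2);
        close(fd[1]);                                 /* reader sees EOF */
        wait(NULL);
    }
    exit(0);
}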
The second question is: In the above code, would it be possible for us to reach close(fd[1]) in the parent process before the child process finishes receiving all the contents?
Answer: Yes. Writing to a pipe does not block (unless otherwise specified) until the pipe buffer is full, so the parent can complete both writes and reach close(fd[1]) long before the child has read anything.
If yes, then what is the correct way to communicate between parent and child? My understanding here is that if we do not close fd[1] in the parent, then read will keep being blocked, and the program won't exit either.
If we close fd[1] in the parent, it signals that the parent has closed its write end. However, if the child did not close its own fd[1] earlier, it will block in read(), because the pipe will not deliver EOF until all the write ends are closed. The child would then be waiting for data that only it could still write, while simultaneously trying to read from the pipe!
Now what happens if the parent does not close its unused read end? If the pipe had only one read descriptor open (say the child's), then once the child closes it, the parent would get SIGPIPE (or, if that signal is ignored, an EPIPE error from write()) on any further write to the pipe, because there are no readers left.
In this situation, however, the parent also holds a read descriptor open, so its writes will keep succeeding until the pipe buffer fills up, which may cause problems for a later write call, if any.
This probably won't make much sense now, but if you write a program that needs to pass values through a pipe again and again, not closing unused ends will often lead to frustrating bugs.
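As an illustration of the "no readers left" case described above, here is a small sketch (not part of the original answer): with SIGPIPE ignored, the failed write() reports EPIPE instead of killing the process.

#include <errno.h>
#include <signal.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(void)
{
    int fd[2];

    if (pipe(fd) == -1) { perror("pipe"); exit(1); }

    signal(SIGPIPE, SIG_IGN);   /* so write() returns -1/EPIPE instead of killing us */
    close(fd[0]);               /* no process holds the read end any more */

    if (write(fd[1], "hi", 2) == -1 && errno == EPIPE)
        printf("write failed with EPIPE: no readers left\n");

    close(fd[1]);
    return 0;
}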
what is the correct way to communicate between parent and child[?]
The parent creates the pipe before forking. After the fork, parent and child each close the pipe end they are not using (pipes should be considered unidirectional; create two if you want bidirectional communication). The processes each have their own copy of each pipe-end file descriptor, so these closures do not affect the other process's ability to use the pipe. Each process then uses the end it holds open appropriately for its directionality -- writing to the write end or reading from the read end.
When the writer finishes writing everything it intends ever to write to the pipe, it closes its end. This is important, and sometimes essential, because the reader will not perceive end-of-file on the read end of the pipe as long as any process has the write end open. This is also one reason why it is important for each process to close the end it is not using, because if the reader also has the write end open then it can block indefinitely trying to read from the pipe, regardless of what any other process does.
Of course, the reader should also close the read end when it is done with it (or terminate, letting the system handle that). Failing to do so constitutes excess resource consumption, but whether that is a serious problem depends on the circumstances.
Related
I've been working on this school assignment forever now, and I'm super close to finishing.
The assignment is to create a bash shell in C, which sounds basic enough, but it has to support piping, IO redirect, and flags within the piped commands. I have it all working except for one thing; the | piping child isn't getting any of the data written to the pipe by the user command process child. If I were to remove the child fork for pipechild, and have everything from if(pipe_cmd[0] != '\0') run as the parent, it would work just fine (minus ending the program because of execlp). If I were to use printf() inside the pipe section, the output would be in the right file or terminal, which just leaves the input from the user command process child not getting to where it needs to be as a culprit.
Does anyone see an issue on how I'm using the pipe? It all felt 100% normal to me, given the definition of a pipe.
int a[2];
pipe(a);

// assume file_name is something like file.txt
strcat(file_name, "file.txt");
strcat(pipe_cmd, "wc");

if (!fork())
{
    if (pipe_cmd[0] != '\0') // if there's a pipe
    {
        close(1);       // close normal stdout
        dup(a[1]);      // making stdout same as a[1]
        close(a[0]);    // closing other end of pipe
        execlp("ls", "ls", NULL);
    }
    else if (file_name[0] != '\0') // if just a bare command with a file redirect
    {
        int rootcmd_file = open(file_name, O_APPEND|O_WRONLY|O_CREAT, 0644);
        dup2(rootcmd_file, STDOUT_FILENO);
        execlp("ls", "ls", NULL); // writes ls to the filename
    }
    // if no pipe or file name write...
    else if (rootcmd_flags[0] != '\0') execlp("ls", "ls", NULL);
    else execlp("ls", "ls", NULL);
} else wait(0);

if (pipe_cmd[0] != '\0') // parent goes here, if pipe.
{
    pipechild = fork();
    if (pipechild != 0) // *PROBLEM ARISES HERE - IF THIS IS FORKED, IT WILL HAVE NO INFO TAKEN IN.
    {
        close(0);       // closing normal stdin
        dup(a[0]);      // making our input come from the child above
        close(a[1]);    // close other end of pipe
        if (file_name[0] != '\0') // if a filename does exist, we must reroute the output to the pipe
        {
            close(1);   // close normal stdout
            int fileredir_pipe = open(file_name, O_APPEND|O_WRONLY|O_CREAT, 0644);
            dup2(fileredir_pipe, STDOUT_FILENO); // redirects STDOUT to file
            execlp("wc", "wc", NULL); // this outputs nothing
        }
        else
        {
            // else there is no file.
            // executing the pipe in stdout using execlp.
            execlp("wc", "wc", NULL); // this outputs nothing
        }
    }
    else wait(0);
}
Thanks in advance. I apologize for some of the code being withheld. This is still an active assignment and I don't want any cases of academic dishonesty. This post was risky enough.
} else wait(0);
The shown code forks the first child process and then waits for it to terminate, at this point.
The first child process gets set up with a pipe on its standard output. The pipe will be connected to the second child process's standard input. The fatal flaw in this scheme is that the second child process isn't even started yet, and won't get started until the first process terminates.
Pipes have limited internal buffering. If the first process generates very little output, chances are that its output will fit inside the tiny pipe buffer; it'll write its output and then quietly terminate, none the wiser.
But if the pipe buffer becomes full, the process will block and wait until something reads from the pipe and clears it. It will wait as long as it takes for that to happen. And wait, and wait, and wait. And since the second child process hasn't been started yet, and the parent process is waiting for the first process to terminate, it will wait, in vain, forever.
This overall logic is fatally flawed for this reason. The correct logic is to completely fork and execute all child processes, close the pipe descriptors in the parent (this is also important), and then wait for all child processes to terminate. wait must be the very last thing that happens here, otherwise things will break in various amazing and mysterious ways.
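A skeleton of that ordering might look like the following (an illustrative sketch only, with the redirection, flag handling and error checking of the assignment stripped out; it simply runs ls | wc):

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void)
{
    int a[2];

    if (pipe(a) == -1) { perror("pipe"); exit(1); }

    if (fork() == 0) {              /* first child: writer, e.g. "ls" */
        dup2(a[1], STDOUT_FILENO);
        close(a[0]);
        close(a[1]);
        execlp("ls", "ls", (char *)NULL);
        _exit(127);
    }

    if (fork() == 0) {              /* second child: reader, e.g. "wc" */
        dup2(a[0], STDIN_FILENO);
        close(a[0]);
        close(a[1]);
        execlp("wc", "wc", (char *)NULL);
        _exit(127);
    }

    close(a[0]);                    /* parent closes BOTH pipe ends ... */
    close(a[1]);
    wait(NULL);                     /* ... and only then waits          */
    wait(NULL);
    return 0;
}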
I want to communicate with a child process like the following:
#include <fcntl.h>
#include <unistd.h>
#include <iostream>
#include <vector>
#include <opencv2/opencv.hpp>

using namespace cv;
using namespace std;

int main(int argc, char *argv[])
{
    int bak, temp;
    int fd[2];
    if (pipe(fd) < 0)
    {
        // pipe error
        exit(1);
    }
    close(fd[0]);
    dup2(STDOUT_FILENO, fd[1]);

    fflush(stdout);
    bak = dup(1);
    temp = open("/dev/null", O_WRONLY);
    dup2(temp, 1);
    close(temp);

    Mat frame;
    std::vector<uchar> buf;
    namedWindow("Camera", WINDOW_AUTOSIZE);
    VideoCapture cam(0 + CAP_V4L);
    sleep(1);
    if (!cam.isOpened())
    {
        cout << "\nCould not open reference " << 0 << endl;
        return -1;
    }
    for (int i = 0; i < 30; i++)
    {
        cam >> frame;
    }
    //cout<<"\nCamera initialized\n";

    /* Set the normal STDOUT back */
    fflush(stdout);
    dup2(bak, 1);
    close(bak);

    imencode(".png", frame, buf);
    cout << buf.size() << endl;

    ssize_t written = 0;
    size_t s = 128;
    while (written < (ssize_t)buf.size())
    {
        written += write(fd[1], buf.data() + written, s);
    }
    cout << '\0';
    return 0;
}
The executable built from the source code above is invoked from the parent with popen().
Note that I am writing to the stdout that has been duplicated with a pipe.
The parent will read the data and resend them to a UDP socket.
If I do something like this:
#define BUFLEN 128

FILE *fp;
char buf[BUFLEN];

if ((fp = popen("path/to/exec", "r")) != NULL)
{
    while (fgets(buf, BUFLEN, fp) != NULL)
    {
        sendto(sockfd, buf, strlen(buf), 0, addr, alen);
    }
}
the program works, i.e. the receiver of sendto() receives the data.
I tried to use a pipe as done in the child process:
int fd[2];
if (pipe(fd) < 0)
{
    // pipe error
    exit(1);
}
close(fd[1]);
dup2(STDIN_FILENO, fd[0]);

if ((fp = popen("path/to/exec", "r")) != NULL)
{
    while (read(fd[0], buf, BUFLEN) > 0)
    {
        sendto(sockfd, buf, strlen(buf), 0, addr, alen);
    }
}
but with this, the data are not sent.
So how do I use a pipe in this case to achieve the same behaviour as in the first case? Should I do dup2(STDIN_FILENO, fd[0]); or dup2(STDOUT_FILENO, fd[0]);?
I am using the standard streams since the file descriptors are inherited by the child process, so it should not require any other effort. That is why I thought I could use a pipe, but is that so?
In the parent:
if (pipe(fd) < 0)
{
    // pipe error
    exit(1);
}
close(fd[0]);
you get a pipe, and then immediately close one end of it. This pipe is now useless, because no-one will ever be able to recover the closed end, and so no data can flow through it. You have converted a pipe into a hollow cylinder sealed at one end.
Then in the child:
if (pipe(fd) < 0)
{
    // pipe error
    exit(1);
}
close(fd[1]);
you create another unrelated pipe, and seal this one at the other end. The two pipes are not connected, and now you have two separate hollow cylinders, each sealed at one end. Nothing can flow through either of them.
If putting something in the first cylinder made it appear in the other, that'd be a pretty good magic trick. Without sleight of hand or cleverly arranged mirrors, the solution is to create one pipe, keep both ends open and push data through it.
The usual way to manually set up a pipe from which a parent process can read a child process's standard output has these general steps:
parent creates a pipe by calling pipe()
parent fork()s
parent closes (clarification: its copy of) the write end of the pipe
child dupes the write end of the pipe onto its standard output via dup2()
child closes the original file descriptor for the write end of the pipe
(optional) child closes (clarification: its copy of) the read end of the pipe
child execs the desired command, or else performs the wanted work directly
The parent can then read the child's output from the read end of the pipe.
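Put together, those steps look roughly like this (a sketch under the assumption that the child runs ls; it is not the actual popen() implementation):

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void)
{
    int fd[2];
    char buf[256];
    ssize_t n;

    if (pipe(fd) == -1) { perror("pipe"); exit(1); }

    if (fork() == 0) {                       /* child */
        dup2(fd[1], STDOUT_FILENO);          /* standard output -> write end */
        close(fd[1]);                        /* original write descriptor    */
        close(fd[0]);                        /* unused read end              */
        execlp("ls", "ls", (char *)NULL);    /* or perform the work directly */
        _exit(127);
    }

    close(fd[1]);                            /* parent: write end unused */
    while ((n = read(fd[0], buf, sizeof(buf))) > 0)
        fwrite(buf, 1, (size_t)n, stdout);   /* parent reads child's output */
    close(fd[0]);
    wait(NULL);
    return 0;
}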
The popen() function does all of that for you, plus wraps the parent's pipe end in a FILE. Of course, it can and will set up a pipe going in the opposite direction instead if that's what the caller requests.
You need to understand and appreciate that in the procedural scheme presented above, it is important which actions are performed by which process, and in what order relative to other actions in the same process. In particular, the parent must not close the write end of the pipe before the child is launched, because that renders the pipe useless. The child inherits the one-end-closed pipe, through which no data can be conveyed.
With respect to your latter example, note also that redirecting the standard input to the read end of the pipe is not part of the process for either parent or child. The fact that your pipe is half-closed, so that nothing can ever be read from it anyway, is just icing on the cake. Moreover, the parent clobbers its own standard input this way. That's not necessarily wrong, but the parent does not even rely on it.
Overall, however, there is a bigger picture that you seem not to appreciate. Even if you performed the redirection you seem to want in the parent, so that it could be inherited by the child, popen() performs its own redirection to a pipe of its own creation. The FILE * it returns is the means by which you can read the child's output. No previous output redirection you may have performed is relevant (clarification: of the child's standard output).
In principle, an approach similar to yours could be used to create a second redirection going the other way, but at that point the convenience factor of popen() is totally lost. It would be better to take the direct pipe / fork / dup2 / exec route all the way through if you want to redirect the child's input and output.
Applying all that to your first example, you have to appreciate that although a process can redirect its own standard streams, it cannot establish a pipe to its parent process that way. The parent needs to provide the pipe, else it has no knowledge of it. And when a process dupes one file descriptor onto another, that replaces the original with the new, closing the original if it is open. It does not redefine the original. And of course, in this case, too, a pipe is useless once either end is no longer open anywhere.
I'm trying to understand how pipes work. From my understanding, a kernel has a file descriptor table where each element points to things like files and pipes etc. So a process can write to or read from a pipe when the correct file descriptor is specified.
In the example I've found below, a file descriptor array is created and a pipe is set up using it. The program then forks so that there's a child copy. This is where I get confused: the child closes fd[0] so that it cannot receive information from the parent? It writes some data to fd[1]. The parent then closes fd[1] and reads from fd[0]. This seems wrong to me; the parent is reading from the wrong place?
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
    int fd[2], nbytes;
    pid_t childpid;
    char string[] = "Hello, world!\n";
    char readbuffer[80];

    pipe(fd);

    if ((childpid = fork()) == -1)
    {
        perror("fork");
        exit(1);
    }

    if (childpid == 0)
    {
        /* Child process closes up input side of pipe */
        close(fd[0]);

        /* Send "string" through the output side of pipe */
        write(fd[1], string, (strlen(string)+1));
        exit(0);
    }
    else
    {
        /* Parent process closes up output side of pipe */
        close(fd[1]);

        /* Read in a string from the pipe */
        nbytes = read(fd[0], readbuffer, sizeof(readbuffer));
        printf("Received string: %s", readbuffer);
    }

    return(0);
}
Am I wrong, and do both fd elements actually reference the same entry in the kernel's table? Intuitively I thought it would be creating two pipes. If they are the same position in the table, what is the structure of a pipe such that it can tell these different reads and writes apart?
Apologies if this is being too vague, I'm having real trouble wrapping my head around it. Any help would be appreciated. Thanks in advance!
When you fork a new process, the child has an exact copy of the open file descriptors. How this is implemented can be considered "magic" or whatever, as we don't really need to know how, only that it does work. They share them, and if both tried reading from stdin (for example) you'd get unpredictable results, because they're both reading from the same place. It's only when all processes close a file descriptor that it truly gets closed.
So in the case of your pipe, the child and parent can close the end of the pipe they're not going to use without worrying about the end they do care about from closing unexpectedly. If one of them opens another file, it may re-use the same file descriptor id of the recently closed one.
Assuming I have a parent process that forks a child process, writes to the child, and then waits to read something from the child, can I implement this with one pipe? It would look something like:
#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <unistd.h>

int main(){
    pid_t pid1;
    int pipefd[2];
    char data[] = "some data";
    char rec[20];

    if (pipe(pipefd) == -1){
        printf("Failed to pipe\n");
        exit(0);
    }

    pid1 = fork();
    if (pid1 < 0){
        printf("Fork failed\n");
        exit(0);
    } else if (pid1 == 0){
        close(pipefd[1]);
        read(pipefd[0], rec, sizeof(rec));
        close(pipefd[0]);
        // do some work and then write back to pipe
        write(pipefd[1], data, sizeof(data));
    } else {
        close(pipefd[0]);
        write(pipefd[1], data, sizeof(data));
        close(pipefd[1]);
        // ignoring using select() for the moment.
        read(pipefd[0], rec, sizeof(rec));
    }
    return 0;
}
When trying to learn more about this, the man pages state that pipes are unidirectional. Does this mean that when you create a pipe to communicate between a parent and child, the process that writes to the pipe can no longer read from it, and the process that reads from the pipe can no longer write to it? Does this mean you need two pipes to allow back and forth communication? Something like:
Pipe1:
P----read----->C
P<---write-----C
Pipe2:
P----write---->C
P<---read------C
No. Pipes by definition are one-way. The problem is that, without any synchronization, you will have both processes reading from the same file descriptor. If you use semaphores, however, you could do something like this:
S := semaphore initiated to 0.
P writes to pipe
P tries down on S (it blocks)
P reads from pipe
C reads from pipe
C writes to pipe
C does up on S (P wakes up and continues)
The other way is to use two pipes, which is easier.
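A minimal sketch of the two-pipe approach (the pipe names to_child/to_parent are mine, and error handling is mostly omitted):

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void)
{
    int to_child[2], to_parent[2];   /* one pipe per direction */
    char buf[32];
    ssize_t n;

    if (pipe(to_child) == -1 || pipe(to_parent) == -1) { perror("pipe"); exit(1); }

    if (fork() == 0) {                                   /* child */
        close(to_child[1]);                              /* only reads this pipe  */
        close(to_parent[0]);                             /* only writes this pipe */
        n = read(to_child[0], buf, sizeof(buf));
        write(to_parent[1], "pong", 4);
        close(to_child[0]);
        close(to_parent[1]);
        _exit(0);
    }

    close(to_child[0]);                                  /* parent: mirror image */
    close(to_parent[1]);
    write(to_child[1], "ping", 4);
    close(to_child[1]);
    n = read(to_parent[0], buf, sizeof(buf));
    if (n > 0) printf("parent got: %.*s\n", (int)n, buf);
    close(to_parent[0]);
    wait(NULL);
    return 0;
}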
It is unspecified whether fildes[0] is also open for writing and whether fildes[1] is also open for reading.
That being said, the easiest way would be to use two pipes.
Another way would be to pass a file descriptor/name/path to the child process through the pipe. In the child process, instead of writing back to fildes[1], you can write to the file descriptor/name/path that was received through the pipe.
Given the following code:
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <sys/wait.h>

int main(int argc, char *argv[])
{
    int pipefd[2];
    pid_t cpid;
    char buf;

    if (argc != 2) {
        fprintf(stderr, "Usage: %s <string>\n", argv[0]);
        exit(EXIT_FAILURE);
    }

    if (pipe(pipefd) == -1) {
        perror("pipe");
        exit(EXIT_FAILURE);
    }

    cpid = fork();
    if (cpid == -1) {
        perror("fork");
        exit(EXIT_FAILURE);
    }

    if (cpid == 0) {            /* Child reads from pipe */
        close(pipefd[1]);       /* Close unused write end */

        while (read(pipefd[0], &buf, 1) > 0)
            write(STDOUT_FILENO, &buf, 1);

        write(STDOUT_FILENO, "\n", 1);
        close(pipefd[0]);
        _exit(EXIT_SUCCESS);
    } else {                    /* Parent writes argv[1] to pipe */
        close(pipefd[0]);       /* Close unused read end */
        write(pipefd[1], argv[1], strlen(argv[1]));
        close(pipefd[1]);       /* Reader will see EOF */
        wait(NULL);             /* Wait for child */
        exit(EXIT_SUCCESS);
    }

    return 0;
}
Whenever the child process wants to read from the pipe, it must first close the pipe's write side. When I remove that line close(pipefd[1]); from the child process's if block, am I basically saying "okay, the child can read from the pipe, but I'm allowing the parent to write to the pipe at the same time"?
If so, what would happen when the pipe is open for both reading & writing? No mutual exclusion?
Whenever the child process wants to read from the pipe, it must first close the pipe's write side.
If the process — parent or child — is not going to use the write end of a pipe, it should close that file descriptor. Similarly for the read end of a pipe. The system will assume that a write could occur while any process has the write end open, even if the only such process is the one that is currently trying to read from the pipe, and the system will not report EOF, therefore. Further, if you overfill a pipe and there is still a process with the read end open (even if that process is the one trying to write), then the write will hang, waiting for the reader to make space for the write to complete.
When I remove that line close(pipefd[1]); from the child process's if block, I'm basically saying "okay, the child can read from the pipe, but I'm allowing the parent to write to the pipe at the same time"?
No; you're saying that the child can write to the pipe as well as the parent. Any process with the write file descriptor for the pipe can write to the pipe.
If so, what would happen when the pipe is open for both reading and writing — no mutual exclusion?
There isn't any mutual exclusion ever. Any process with the pipe write descriptor open can write to the pipe at any time; the kernel ensures that two concurrent write operations are in fact serialized. Any process with the pipe read descriptor open can read from the pipe at any time; the kernel ensures that two concurrent read operations get different data bytes.
You make sure a pipe is used unidirectionally by ensuring that only one process has it open for writing and only one process has it open for reading. However, that is a programming decision. You could have N processes with the write end open and M processes with the read end open (and, perish the thought, there could be processes in common between the set of N and set of M processes), and they'd all be able to work surprisingly sanely. But you'd not readily be able to predict where a packet of data would be read after it was written.
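For example, here is a sketch (mine, not from the answer) with two writer processes and one reader sharing a single pipe; each small write is atomic (up to PIPE_BUF bytes), but the order in which the messages arrive is not predictable:

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void)
{
    int fd[2];
    char buf[64];
    ssize_t n;

    if (pipe(fd) == -1) { perror("pipe"); exit(1); }

    for (int i = 0; i < 2; i++) {            /* two independent writer children */
        if (fork() == 0) {
            close(fd[0]);
            char msg[16];
            int len = snprintf(msg, sizeof(msg), "writer %d\n", i);
            write(fd[1], msg, (size_t)len);  /* short writes are atomic */
            close(fd[1]);
            _exit(0);
        }
    }

    close(fd[1]);                            /* parent only reads */
    while ((n = read(fd[0], buf, sizeof(buf))) > 0)
        fwrite(buf, 1, (size_t)n, stdout);   /* arrival order is unpredictable */
    close(fd[0]);
    wait(NULL);
    wait(NULL);
    return 0;
}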
fork() duplicates the file handles, so you will have two handles for each end of the pipe.
Now, consider this. If the parent doesn't close the unused end of the pipe, there will still be two handles for it. If the child dies, the handle on the child side goes away, but there's still the open handle held by the parent -- thus, there will never be a "broken pipe" or "EOF" arriving because the pipe is still perfectly valid. There's just nobody putting data into it anymore.
Same for the other direction, of course.
Yes, the parent/child could still use the handle to write into their own pipe; I don't remember a use-case for this, though, and it still gives you synchronization problems.
When the pipe is created, it has two ends: the read end and the write end. These are entries in the user file descriptor table.
Similarly, there will be two entries in the file table, with a reference count of 1 for each of the read end and the write end.
Now when you fork, a child is created, i.e. the file descriptors are duplicated, and thus the reference count of both ends in the file table becomes 2.
Now "When I remove that line close(pipefd[1])" -> in this case, even if the parent has completed writing, your while loop below that line will block forever waiting for read to return 0 (i.e. EOF). This happens because, even though the parent has completed writing and closed its write end of the pipe, the reference count of the write end in the file table is still 1 (initially it was 2), and so read is still waiting for data that will never arrive.
Now, if you had not written "close(pipefd[0]);" in the parent, this particular code might not show any problem, since you write only once in the parent.
But if you write more than once, then ideally you would want to get an error (if the child is no longer reading); since the read end in the parent is not closed, you will not get that error (even if the child is no longer there to read).
So the problem of not closing the unused ends becomes evident when we are continuously reading/writing data. It may not be evident if we are just reading/writing data once.
For instance, if instead of the read loop in the child you read only once using the line below, getting all the data in one go and not caring to check for EOF, your program will work even if you do not write "close(pipefd[1]);" in the child.
read(pipefd[0], buf, sizeof(buf)); // buf is a character array that is sufficiently large
The man page for pipe() on SunOS says:
Read calls on an empty pipe (no buffered data) with only one end (all write file descriptors closed) return an EOF (end of file).
A SIGPIPE signal is generated if a write on a pipe with only one end is attempted.