I am trying to implement a simple two-stage pipe in a shell.
When I don't do the second fork and just do the rest of the pipe implementation in the parent, it works fine, but then I exit the shell. That's why I want the second fork, so that I don't exit the shell. But for some reason nothing happens with the above code. Can you help me figure out what may be going wrong? I have a feeling it doesn't wait for both my processes to finish before exiting, but I could be wrong.
Solution: close fd[0] and fd[1] in the parent.
In the twin fork model, which you want, your parent process (the shell) is keeping its copy of fd[1] open. With this open, the child pid2 will never see EOF on its standard input fd.
Comments:
Both children should close their pipe fds after dup2'ing.
The code after execvp, both above and in your pastie, suggests that you think execvp will return control under ordinary circumstances. It does not. For this code, at most you probably want to follow the execvp with a perror and an exit, as in the sketch below.
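For reference, here is a minimal standalone sketch of the twin fork approach with both fixes applied. The ls -l | wc pipeline is just a placeholder, error checking of fork is omitted, and a real shell would keep looping rather than returning:

#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    int fd[2];
    if (pipe(fd) < 0) { perror("pipe"); return 1; }

    pid_t pid1 = fork();
    if (pid1 == 0)                       /* first child: writes into the pipe */
    {
        dup2(fd[1], STDOUT_FILENO);
        close(fd[0]);                    /* children close both pipe fds after dup2 */
        close(fd[1]);
        execlp("ls", "ls", "-l", (char *)NULL);
        perror("execlp ls");             /* only reached if exec fails */
        _exit(1);
    }

    pid_t pid2 = fork();
    if (pid2 == 0)                       /* second child: reads from the pipe */
    {
        dup2(fd[0], STDIN_FILENO);
        close(fd[0]);
        close(fd[1]);
        execlp("wc", "wc", (char *)NULL);
        perror("execlp wc");
        _exit(1);
    }

    close(fd[0]);                        /* parent (the shell) must close BOTH ends, */
    close(fd[1]);                        /* or pid2 never sees EOF on its stdin      */
    waitpid(pid1, NULL, 0);
    waitpid(pid2, NULL, 0);
    return 0;
}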
Related
I do not understand why the parent process needs to close both file descriptors of a pipe before calling wait().
I have a C program which does:
The parent creates child_a, which executes ls -l using execvp and writes to the pipe (after closing the read end of the pipe).
The parent creates another child, child_b (without closing any pipe file descriptor in the parent), which executes wc by reading from the pipe (after closing the write end of the pipe).
Parent waits for both children to complete by calling wait() twice.
I noticed that the program blocks if the parent does not close both file descriptors of the pipe before calling the wait() syscall. Also, after reading a few questions already posted online, it looks like this is the general rule and needs to be done. But I could not find the reason why this has to be done.
Why does wait() not return if the parent does not close the file descriptors of the pipe?
I was thinking that, in the worst case, if the parent does not close the file descriptors of the pipe, then the only consequence would be that the pipe would keep existing (which is a waste of resources). But I never thought this would block the execution of the child process (as can be seen because wait() does not return).
Also remember, the parent is not using the pipe at all. It is child_a writing to the pipe and child_b reading from the pipe.
If the parent process doesn't close the write ends of the pipes, the child processes never get EOF (zero bytes read) because there's a process that might (but won't) write to the pipe. The child process must also close the write end of the pipe for the same reason — if it doesn't, there's a process (itself) that might (but won't) write to the pipe, so the read won't return EOF.
If you duplicate one end of a pipe to standard output or standard error, you should close both ends of that pipe. It is a common mistake not to have enough calls to close() in multiprocess code using pipes. Occasionally, you get away with being sloppy, but the details vary by case and usually you don't.
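In skeleton form (just the relevant fragment; it needs <unistd.h> and <sys/wait.h>, and error checking is omitted), the critical part is the pair of closes in the parent:

int fd[2];
pipe(fd);

if (fork() == 0)                    /* child_a: ls -l writes into the pipe */
{
    close(fd[0]);                   /* not reading */
    dup2(fd[1], STDOUT_FILENO);
    close(fd[1]);
    execlp("ls", "ls", "-l", (char *)NULL);
    _exit(1);
}
if (fork() == 0)                    /* child_b: wc reads from the pipe */
{
    close(fd[1]);                   /* not writing */
    dup2(fd[0], STDIN_FILENO);
    close(fd[0]);
    execlp("wc", "wc", (char *)NULL);
    _exit(1);
}

close(fd[0]);   /* the parent is not using the pipe itself */
close(fd[1]);   /* keep this one open and wc never sees EOF, so the waits below block forever */
wait(NULL);
wait(NULL);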
Quick question, hoping someone can verify. After a fork, if you call close(2) in the parent, stderr in the child is unaffected. However, if you call close(2) in the child, stderr in the parent is closed. Does that seem right? I tested this in FreeBSD and it seems to be the case, but I'm not sure why. I would expect that either they both don't affect each other or they do, but not this.
Any insight?
After a fork, every open file descriptor in the parent gets dup'ed into the child, so a close() in one process after the fork won't affect the other process's copy.
Unless you're doing it improperly (i.e. not checking the return value of the fork() system call).
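A quick sketch that demonstrates this (both processes try to write to stderr after the child closes its copy of fd 2):

#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    pid_t pid = fork();
    if (pid < 0)
        return 1;

    if (pid == 0)                       /* child */
    {
        close(2);                       /* closes only the child's copy of stderr */
        fprintf(stderr, "child: this write fails silently\n");
        _exit(0);
    }
    wait(NULL);
    fprintf(stderr, "parent: stderr still works after the child closed fd 2\n");
    return 0;
}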
I have some code where several processes are created by forking. Every process calls popen() to execute some shell command. The problem is that all of these processes use the same input/output streams. This causes collisions when the processes write to one stream simultaneously.
Is there any way to resolve this, so that every forked process uses its own stream?
I am not allowed to change anything about the forking in my case.
You'll have to close and reopen your stdin and stdout before or, if possible, right after the fork, in the child process.
When you call fork(), you inherit the file descriptors (stdin, stdout, etc.) from the parent process. When you call popen(), it's going to take the shared stdin/stdout and pipe it into the popen'ed process. It sounds like you want to close any open file descriptors after forking, and reopen them.
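A sketch of that idea, giving each worker its own streams right after the fork (the file paths and the echo command are placeholders):

#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

/* Reopen the standard streams for one forked worker (placeholder paths). */
static void reopen_streams(int worker_id)
{
    char path[64];

    /* stdin from /dev/null, so this worker no longer competes for shared input */
    if (freopen("/dev/null", "r", stdin) == NULL)
        _exit(1);

    /* stdout into a per-worker file instead of the shared terminal */
    snprintf(path, sizeof(path), "/tmp/worker.%d.out", worker_id);
    if (freopen(path, "w", stdout) == NULL)
        _exit(1);
}

int main(void)
{
    for (int i = 0; i < 3; i++)
    {
        if (fork() == 0)
        {
            reopen_streams(i);
            /* The popen'ed command inherits the reopened stdin; its output is
               read here and relayed to this worker's own stdout file. */
            FILE *p = popen("echo hello", "r");
            char buf[128];
            while (p && fgets(buf, sizeof(buf), p))
                fputs(buf, stdout);
            if (p)
                pclose(p);
            _exit(0);
        }
    }
    while (wait(NULL) > 0)               /* reap all workers */
        ;
    return 0;
}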
Here's a breakdown of my code.
I have a program that forks a child (and registers the child's pid in a file) and then does its own thing. The child becomes whatever program the programmer has specified via argv. When the child is finished executing, it sends a signal (using SIGUSR1) back to the parent process so the parent knows to remove the child from the file. The parent should stop for a second, acknowledge the deleted entry by updating its table, and continue where it left off.
pid = fork();
switch (pid) {
    case -1: {
        exit(1);
    }
    case 0: {
        (*table[numP-1]).pid = getpid(); // Global that stores pids
        add();                           // saves table into a text file
        freeT(table);                    // Frees table
        execv(argv[3], &argv[4]);        // Executes new program with argv
        printf("finished execution\n");
        del(getpid());                   // Erases pid from file
        refreshReq();                    // Sends SIGUSR1 to parent
        return 0;
    }
    default: {
        ...                              // Does its own thing
    }
}
The problem is that after execv successfully starts and finishes (a printf statement before the return 0 lets me know), I do not see the rest of the commands in the switch statement being executed. I am wondering if execv has something like a ^C in it that kills the child when it finishes, so the rest of the commands never run. I looked into the man pages but did not find anything useful on the subject.
Thanks!
execv replaces the currently executing program with a different one. It doesn't restore the old program once that new program is done, hence it's documented "on success, execv does not return".
So, you should see your message "finished execution" if and only if execv fails.
execv replaces the current process with a new one. In order to spawn a new process, you can use, e.g., system(), popen(), or a combination of fork() and exec().
Other people have already explained what execv and similar functions do, and why the next line of code is never executed. The logical next question is, so how should the parent detect that the child is done?
In the simple cases where the parent should do absolutely nothing while the child is running, just use system instead of fork and exec.
Or if the parent will do something else before the child exits, these are the key points:
When the child exits, the parent will get SIGCHLD. The default handler for SIGCHLD is ignore. If you want to catch that signal, install a handler before calling fork.
After a child has exited, the parent should call waitpid to clean up the child and find out what its exit status was.
The parent can also call wait or waitpid in a blocking mode to wait until a child exits.
The parent can also call waitpid in a non-blocking mode to find out whether the child has exited yet.
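A sketch that puts those points together (the sleep command is just a stand-in for the real child):

#include <signal.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/wait.h>
#include <unistd.h>

static volatile sig_atomic_t child_done = 0;

static void on_sigchld(int sig)          /* async-signal-safe: just set a flag */
{
    (void)sig;
    child_done = 1;
}

int main(void)
{
    struct sigaction sa;
    memset(&sa, 0, sizeof(sa));
    sa.sa_handler = on_sigchld;
    sigaction(SIGCHLD, &sa, NULL);       /* install the handler before fork */

    pid_t pid = fork();
    if (pid == 0)
    {
        execlp("sleep", "sleep", "1", (char *)NULL);   /* placeholder child */
        _exit(127);                      /* only reached if exec fails */
    }

    while (!child_done)
    {
        /* ... the parent does its own work here; a non-blocking check would be
           waitpid(pid, NULL, WNOHANG), which returns 0 while the child runs */
        sleep(1);
    }

    int status;
    if (waitpid(pid, &status, 0) == pid && WIFEXITED(status))
        printf("child exited with status %d\n", WEXITSTATUS(status));
    return 0;
}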
What did you expect to happen? This is what execv does. Please read the documentation which says:
The exec() family of functions replaces the current process image with a new process image.
Perhaps you were after system or something similar, to ask the environment to spawn a new process in addition to the current one. Or... isn't that what you already achieved through fork? It's hard to see what you want to accomplish here.
I've got a little C server that needs to accept a connection and fork a child process. I need the stderr of the child process to go to an already existing named pipe, the stdout of the child to go to the stdout of the parent, and the stdin of the child to come from the same place as the stdin of the parent.
My initial attempts involved popen() but I could never seem to get quite what I wanted.
Finally, this particular solution only needs to work in Solaris. Thanks.
EDIT: Updated the question in hopes of more accurately portraying what I'm trying to accomplish. Thanks for being patient with me.
EDIT2: I also need the parent to get the return value of the child process and then do something with it if that makes any difference.
You might be using the wrong function - popen() is used when you want the invoking program either to write to the forked process's standard input or read from its standard output. It seems you want neither. It also takes two arguments.
Your requirements are also somewhat contradictory:
I want it to (ideally) inherit stdin and stdout from the parent
any input to the parent goes to the child and any output from the child goes back to the parent
but at a minimum, I'd like it to inherit stdin and write stdout to a named pipe
The first option is easy - it requires no special coding. Any data supplied to the stdin of the parent will also be available on the stdin of the child (but only one of the two processes will get to read it). The child's stdout will normally go to the same place as the parent's stdout. If you want the parent to read the child's stdout, then you do need a pipe - and popen() is then appropriate, but the 'at minimum' stuff is confusing.
So, let's define what you really want?
Option 1
The standard error of the child should go to a named pipe.
The standard output of the child should be read by the invoking process.
The standard input of the child should come from the same place as the standard input of the parent.
The named pipe already exists.
Hence:
FILE *fp = popen("/run/my/command -with arguments 2>/my/other/pipe", "r");
Note that the child will be hung until a process opens '/my/other/pipe' for reading; that in turn means that if the parent process reads from fp, it too will be hung until some other process opens '/my/other/pipe' for reading.
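Since EDIT2 asks for the child's return value: with this popen() approach you read the child's output from fp and then take the exit status from pclose(). A continuation of the snippet above (it needs <sys/wait.h> for the status macros):

char line[256];
while (fgets(line, sizeof(line), fp) != NULL)
    fputs(line, stdout);                 /* relay the child's stdout */

int status = pclose(fp);                 /* waits for the child to finish */
if (status != -1 && WIFEXITED(status))
    printf("child exited with status %d\n", WEXITSTATUS(status));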
Option 2
The standard error of the child should go to a named pipe.
The standard output of the child should go to the standard output of the parent.
The standard input of the child should come from the same place as the standard input of the parent.
The named pipe already exists.
Now popen() is not appropriate, and we get into naked `fork & exec' code. What follows is more pseudo-code than operational C.
/* Needs <stdio.h>, <stdlib.h>, <fcntl.h>, <unistd.h>. */
if ((pid = fork()) < 0)
{
    perror("fork");                      /* error */
}
else if (pid > 0)
{
    /* Parent - might wait for child to complete */
}
else
{
    int fd = open("/my/other/pipe", O_WRONLY|O_NONBLOCK);
    if (fd < 0)
    {
        perror("open /my/other/pipe");   /* error */
        exit(1);
    }
    dup2(fd, STDERR_FILENO);             /* STDERR_FILENO is the symbolic name for 2 */
    close(fd);                           /* Do not want this open any more */
    char *cmd[4] = { "/bin/sh", "-c", "/run/my/command -with arguments", 0 };
    execv(cmd[0], cmd);
    perror("execv");                     /* If execv returns, it failed! */
    exit(1);
}
If you're totally confident no-one has pulled any stunts on you like closing stdout, you can avoid using dup2() by closing stderr (fd = 2) before calling open(). However, if you do that, you can't report any errors any more - because you closed stderr. So, I would do it as shown.
If you have a different requirement, state what you want to achieve.
As noted by p2vb, if you want the parent to wait for the child to finish, then simply using system() may be sufficient. If the parent should continue while the child is running, you might try system() where the command string ends with an ampersand (&) to put the child into the background, or you might use the code outlined in Option 2 above.
Using system(), the parent will have little chance to read the /my/other/pipe which gets the standard error from the child. You could easily deadlock if the child produces a lot.
Also, be careful with your FD_CLOEXEC flag - set it on files that you don't want the child modifying. On Linux, you can use the O_CLOEXEC flag on the open() call; with Solaris, you have to set it via fcntl() - carefully.
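On Solaris (or anywhere without O_CLOEXEC), the fcntl() sequence looks roughly like this (a sketch; note the read-modify-write so other descriptor flags are preserved):

#include <fcntl.h>

/* Mark fd close-on-exec without clobbering any other descriptor flags. */
int set_cloexec(int fd)
{
    int flags = fcntl(fd, F_GETFD);
    if (flags == -1)
        return -1;
    return fcntl(fd, F_SETFD, flags | FD_CLOEXEC);
}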