I'm trying to implement Unix piping in C (i.e. execute ls | wc). I have found a related solution to my problem (C Unix Pipes Example); however, I am not sure why a specific portion of the solved code snippet works.
Here's the code:
/* Run WC. */
int filedes[2];
pipe(filedes);

/* Run LS. */
pid_t pid = fork();
if (pid == 0) {
    /* Set stdout to the input side of the pipe, and run 'ls'. */
    dup2(filedes[1], 1);
    char *argv[] = {"ls", NULL};
    execv("/bin/ls", argv);
} else {
    /* Close the input side of the pipe, to prevent it staying open. */
    close(filedes[1]);
}

/* Run WC. */
pid = fork();
if (pid == 0) {
    dup2(filedes[0], 0);
    char *argv[] = {"wc", NULL};
    execv("/usr/bin/wc", argv);
}
In the child process that executes the wc command, although it attaches stdin to a file descriptor, it seems that we are not explicitly reading the output produced by ls in the first child process. Thus, to me it seems that ls and wc each run independently, as we are not explicitly using the output of ls when executing wc. How then does this code work (i.e. how does it execute ls | wc)?
The code shown just about works (it cuts a number of corners, but it works) because the forked children ensure that the file descriptor that the executed process will write to (in the case of ls) or read from (in the case of wc) is the appropriate end of the pipe. You don't have to do any more; standard input is file descriptor 0, so wc with no (filename) arguments reads from standard input. ls always writes to standard output, file descriptor 1, unless it is writing an error message.
There are three processes in the code snippet; the parent process and two children, one from each fork().
The parent process should be closing both its ends of the pipe too; it only closes one.
In general, after you do a dup() or dup2() call on a pipe file descriptor, you should close both ends of the pipe. You get away with it here because ls generates data and terminates; you wouldn't in all circumstances.
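In other words, the writer child's side of the idiom is roughly this (a minimal sketch; the SSCCE below shows it in full context):

dup2(filedes[1], 1);    /* stdout now refers to the write end of the pipe */
close(filedes[1]);      /* so the original descriptor is no longer needed */
close(filedes[0]);      /* nor is the read end, which this child never uses */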
The comment:
/* Set stdout to the input side of the pipe, and run 'ls'. */
is inaccurate; you're setting stdout to the output side of the pipe, not the input side.
You should have an error exit after the execv() calls; if they fail, they return, and the process can wreak havoc (for example, if the ls fails, you end up with two copies of wc running).
An SSCCE
Note the careful closing of both ends of the pipe in each of the processes. The parent process has no use for the pipe once it has launched both children. I left the code that closes filedes[1] early in place (but removed the explicit else block around it, since the code that follows runs only in the parent anyway). I might well have kept pairs of close() calls in each of the three code paths where files need to be closed.
#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    int filedes[2];
    int corpse;
    int status;

    pipe(filedes);

    /* Run LS. */
    pid_t pid = fork();
    if (pid == 0)
    {
        /* Set stdout to the output side of the pipe, and run 'ls'. */
        dup2(filedes[1], 1);
        close(filedes[1]);
        close(filedes[0]);
        char *argv[] = {"ls", NULL};
        execv("/bin/ls", argv);
        fprintf(stderr, "Failed to execute /bin/ls\n");
        exit(1);
    }

    /* Close the input side of the pipe, to prevent it staying open. */
    close(filedes[1]);

    /* Run WC. */
    pid = fork();
    if (pid == 0)
    {
        /* Set stdin to the input side of the pipe, and run 'wc'. */
        dup2(filedes[0], 0);
        close(filedes[0]);
        char *argv[] = {"wc", NULL};
        execv("/usr/bin/wc", argv);
        fprintf(stderr, "Failed to execute /usr/bin/wc\n");
        exit(1);
    }
    close(filedes[0]);

    while ((corpse = waitpid(-1, &status, 0)) > 0)
        printf("PID %d died 0x%.4X\n", corpse, status);

    return(0);
}
Example output:
$ ./pipes-14312939
32 32 389
PID 75954 died 0x0000
PID 75955 died 0x0000
$
Related
I want to execve a bash as a child process in a C program. The bash should essentially be controlled by the parent process: the parent process reads from stdin, stores the input in a buffer, and writes the content of the buffer to the bash through a pipe. The output of the bash is supposed to be passed through another pipe back to the parent process's stdout. For instance: the parent process reads "ls", gives it to the bash through a pipe, and receives the output of the bash through another pipe. I know this program doesn't make sense, because there are better ways to execute ls (or some other program) on behalf of the parent process. I'm actually just trying to understand how piping works, and this is the first program that came to mind. And I can't make this program work. That's what I have so far:
#include <stdio.h>
#include <unistd.h>
#include <stdlib.h>
#include <errno.h>
#include <sys/wait.h>

int main() {
    int pc[2];                  // "parent to child" pipe
    int cp[2];                  // "child to parent" pipe
    int status;
    char buffer[256];
    char eof = EOF;

    if (pipe(pc) < 0 || pipe(cp) < 0) {
        printf("ERROR: Pipes could not be created\n");
        return -1;
    }

    pid_t child_pid = fork();
    if (child_pid == 0) {       // child has pid 0, child enters here
        close(pc[1]);           // close write end of pc
        close(cp[0]);           // close read end of cp
        // redirecting file descriptors to stdin/stdout
        dup2(cp[1], STDOUT_FILENO);
        dup2(pc[0], STDIN_FILENO);
        execve("/bin/bash", NULL, NULL);
    } else {                    // parent enters here
        close(cp[1]);           // close write end of cp
        close(pc[0]);           // close read end of pc
        // redirecting file descriptors to stdin/stdout
        dup2(cp[0], STDOUT_FILENO);
        while (1) {
            read(STDIN_FILENO, buffer, 3);
            write(pc[1], buffer, 3);
        }
        waitpid(child_pid, &status, 0);
    }
    return 0;
}
On execution: I type in ls, hit enter, nothing happens, hit enter again, output.
$ ./pipe
ls
bash: line 3: s: command not found
Why is only the character 's' delivered to the bash?
I have a very specific problem for which I am unable to find the answer after numerous searches. I have a Linux program. Its job is to launch another, secondary executable (via fork() and exec()) when it receives a specific message over the network. I do not have access to modify the secondary executable.
My program prints all its TTY output to stdout, and I typically launch it via ./program > output.tty. The problem I have is that this secondary executable is very verbose: it prints to stdout while also putting the same TTY output in a log file. So my output.tty file ends up containing both output streams.
How can I set things up such that the secondary executable's TTY gets redirected to /dev/null? I can't use system() because I can't afford to wait for the child process. I need to be able to fire and forget.
Thanks.
In the child process, use dup2() to redirect standard output to a file; in this case, /dev/null.
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(int argc, const char * argv[]) {
    pid_t ch;
    int fd;

    ch = fork();
    if (ch == 0)
    {
        // child process
        fd = open("/dev/null", O_WRONLY);       // open /dev/null for writing
        dup2(fd, 1);                            // replace standard output with /dev/null
        close(fd);                              // the duplicate on fd 1 is all we need
        execlp("ls", "ls", ".", (char *)NULL);  // execute the command
        perror("execlp");                       // only reached if exec fails
        _exit(1);
    }
    // parent process: fire and forget
    return 0;
}
In the child process, before calling exec, you need to close the standard output stream.
pid_t pid = fork();
if (pid == 0) {
    close(1);
    // call exec
} else if (pid > 0) {
    // parent
}
I just finished my shell interpreter, but I think my pipe implementation is wrong.
It works for basic things like ls | cat -e, but I am afraid of possible segmentation faults when the data going through the pipe is larger than about 60 KB.
I also found an infinite loop when I cat a file that is longer than 60 KB. For example, if I do cat foo | cat -e where foo is a long file, an infinite loop happens.
Another example: when I do cat /dev/urandom | cat -e, it doesn't display anything, so it seems to execute cat /dev/urandom first and then cat -e.
This is my code:
int son(int *fd_in, int p[2], t_list *cmd, char **env)
{
    (void)env;
    dup2(*fd_in, 0);
    if (cmd->act != ENDACT && cmd->act != LEFT && cmd->act != DLEFT)
        dup2(p[1], 1);
    close(p[0]);
    execve(cmd->av[0], cmd->av, NULL);
    return (-1);
}

t_list *execute_pipe(t_list *cmd, int *fd_in)
{
    int   p[2];
    pid_t pid;

    *fd_in = 0;
    while (cmd->act != -1)
    {
        pipe(p);
        if ((pid = fork()) == -1)
            return (NULL);
        else if (pid == 0)
            son(fd_in, p, cmd, NULL);
        else
        {
            wait(NULL);
            close(p[1]);
            *fd_in = p[0];
            if (cmd->act != PIPE)
                return (cmd);
            cmd = cmd->next;
        }
    }
    return (cmd);
}
Part of the idea of a shell pipeline is that the processes involved run concurrently (or may do). The code you presented actively prevents that from happening by wait()ing on each child process before launching the next one. Among other things, that runs the risk of filling the (OS-level) pipe's buffer before there's anything ready to drain it. That will deadlock or, if you're lucky, generate an error. (It is also why your trouble starts at around 60 KB: a pipe's buffer is typically 64 KB on Linux, so a writer with more output than that blocks until something reads from the pipe.)
At a high level, the procedure should look like this:
1. [shell] Let C initially be the command for the first segment of the pipe, and set fd0 to be STDIN_FILENO.
2. [shell] Prepare an output file descriptor:
   - if there are any subsequent commands, create a pipe(), and set fd1 to be the write end of that pipe;
   - otherwise, set fd1 to be STDOUT_FILENO.
3. [shell] fork() a child in which to run command C. In it:
   - [child] if fd0 is different from STDIN_FILENO, then dup2() fd0 onto STDIN_FILENO and close fd0;
   - [child] if fd1 is different from STDOUT_FILENO, then dup2() fd1 onto STDOUT_FILENO and close fd1;
   - [child] exec command C.
4. [shell] If fd0 is different from STDIN_FILENO, then close fd0.
5. [shell] If fd1 is different from STDOUT_FILENO, then close fd1.
6. [shell] If there are any more commands in the pipe, then:
   - set C to be the next command;
   - set fd0 to be the read end of the pipe from step 2, above;
   - go to step 2 (Prepare an output file descriptor).
7. [shell] (At this point, all processes in the pipeline have been started.) wait() or waitpid() for all the child processes.
Note that this procedure works equally well for a pipeline containing any positive number of commands, including 1; a C sketch of the loop follows below.
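To make that concrete, here is a rough sketch of the loop in C. The run_pipeline() name and the cmds/ncmds parameters are placeholders of mine (your t_list would take their place), and error checking is omitted, so treat it as an outline rather than a drop-in implementation:

#include <sys/wait.h>
#include <unistd.h>

/* Launch every command in the pipeline before waiting for any of them.
 * cmds[i] is a NULL-terminated argv array for command i. */
static void run_pipeline(char **cmds[], int ncmds)
{
    int fd0 = STDIN_FILENO;               /* input for the current command */

    for (int i = 0; i < ncmds; i++)
    {
        int p[2] = { -1, -1 };
        int fd1 = STDOUT_FILENO;          /* output for the current command */

        if (i < ncmds - 1)                /* more commands follow: make a pipe */
        {
            pipe(p);
            fd1 = p[1];
        }
        if (fork() == 0)
        {
            if (fd0 != STDIN_FILENO)  { dup2(fd0, STDIN_FILENO);  close(fd0); }
            if (fd1 != STDOUT_FILENO) { dup2(fd1, STDOUT_FILENO); close(fd1); }
            if (p[0] != -1)
                close(p[0]);              /* this child never reads the new pipe */
            execvp(cmds[i][0], cmds[i]);
            _exit(127);                   /* only reached if exec fails */
        }
        if (fd0 != STDIN_FILENO)
            close(fd0);
        if (fd1 != STDOUT_FILENO)
            close(fd1);
        fd0 = p[0];                       /* next command reads from this pipe */
    }
    while (wait(NULL) > 0)                /* only now wait for all the children */
        ;
}

Called as run_pipeline((char **[]){ (char *[]){ "ls", NULL }, (char *[]){ "cat", "-e", NULL } }, 2), it behaves like ls | cat -e, with both children running concurrently and the shell waiting only once everything has been started.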
I need to create two child processes. One child needs to run the command "ls -al" and redirect its output to the input of the next child process, which in turn will run the command "sort -r -n -k 5" on its input data. Finally, the parent process needs to read that (data already sorted) and display it in the terminal. The final result in the terminal (when executing the program) should be the same as if I entered the following command directly in the shell: "ls -al | sort -r -n -k 5". For this I need to use the following methods: pipe(), fork(), execlp().
My program compiles, but I don't get the desired output to the terminal. I don't know what is wrong. Here is the code:
#include <sys/types.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main()
{
    int fd[2];
    pid_t ls_pid, sort_pid;
    char buff[1000];

    /* create the pipe */
    if (pipe(fd) == -1) {
        fprintf(stderr, "Pipe failed");
        return 1;
    }

    /* create child 2 first */
    sort_pid = fork();
    if (sort_pid < 0) {                  // error creating Child 2 process
        fprintf(stderr, "\nChild 2 Fork failed");
        return 1;
    }
    else if (sort_pid > 0) {             // parent process
        wait(NULL);                      // wait for children termination
        /* create child 1 */
        ls_pid = fork();
        if (ls_pid < 0) {                // error creating Child 1 process
            fprintf(stderr, "\nChild 1 Fork failed");
            return 1;
        }
        else if (ls_pid == 0) {          // child 1 process
            close(1);                    // close stdout
            dup2(fd[1], 1);              // make stdout same as fd[1]
            close(fd[0]);                // we don't need this end of pipe
            execlp("bin/ls", "ls", "-al", NULL);  // executes ls command
        }
        wait(NULL);
        read(fd[0], buff, 1000);         // parent reads data
        printf(buff);                    // parent prints data to terminal
    }
    else if (sort_pid == 0) {            // child 2 process
        close(0);                        // close stdin
        dup2(fd[0], 0);                  // make stdin same as fd[0]
        close(fd[1]);                    // we don't need this end of pipe
        execlp("bin/sort", "sort", "-r", "-n", "-k", "5", NULL);  // executes sort operation
    }
    return 0;
}
Your parent process waits for the sort process to finish before creating the ls process.
The sort process needs to read its input before it can finish. And its input is coming from the ls that won't be started until after the wait. Deadlock.
You need to create both processes, then wait for both of them.
Also, your file descriptor manipulations aren't quite right. In this pair of calls:
close(0);
dup2(fd[0], 0);
the close is redundant, since dup2 will automatically close the existing fd 0 if there is one. You should do a close(fd[0]) after the dup2, so you only have one file descriptor tied to that end of the pipe. And if you want to be really robust, you should test whether fd[0] == 0 already, and in that case skip the dup2 and close.
Apply all of that to the other dup2 also.
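That check is easy to wrap in a small helper, sketched below; the name redirect_fd() is mine, not anything standard:

#include <unistd.h>

/* Make 'fd' available as descriptor 'target' (0 for stdin, 1 for stdout)
 * and drop the original, unless it already is the target. */
static void redirect_fd(int fd, int target)
{
    if (fd != target)
    {
        dup2(fd, target);
        close(fd);
    }
}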
Then there's the issue of the parent process holding the pipe open. I'd say you should close both ends of the pipe in the parent after you've passed them on to the children, but you have that weird read from fd[0] after the last wait... I'm not sure why that's there. If the ls|sort pipeline has run correctly, the pipe will be empty afterward, so there will be nothing to read. In any case, you definitely need to close fd[1] in the parent, otherwise the sort process won't finish because the pipe won't indicate EOF until all writers are closed.
After the weird read is a printf that will probably crash, since the read buffer won't be '\0'-terminated.
And the point of using execlp is that it does the $PATH lookup for you so you don't have to specify /bin/. My first test run failed because my sort is in /usr/bin/. Why hardcode paths when you don't have to?
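Putting all of that together, the overall shape might look something like the sketch below. It uses the hypothetical redirect_fd() helper from above, lets sort write straight to the inherited standard output instead of routing the data back through the parent, and leaves out error checks:

int fd[2];
pipe(fd);

pid_t ls_pid = fork();
if (ls_pid == 0) {              /* child 1: ls writes into the pipe */
    redirect_fd(fd[1], 1);
    close(fd[0]);
    execlp("ls", "ls", "-al", (char *)NULL);
    _exit(127);                 /* only reached if exec fails */
}

pid_t sort_pid = fork();
if (sort_pid == 0) {            /* child 2: sort reads from the pipe */
    redirect_fd(fd[0], 0);
    close(fd[1]);
    execlp("sort", "sort", "-r", "-n", "-k", "5", (char *)NULL);
    _exit(127);
}

close(fd[0]);                   /* the parent keeps neither end open, */
close(fd[1]);                   /* so sort sees EOF once ls exits */
wait(NULL);                     /* wait for both children only now */
wait(NULL);

If you really do want the parent to print the sorted output itself, you need a second pipe from sort back to the parent rather than reading from the same pipe that feeds sort.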
I am working on an assignment for my Operating System class (Posix & C), building a mini-shell, and I don't know how to solve the following problem:
My mini-shell has to accept two commands, for example ls | grep a. For that I create a pipe of size two and a child. The child closes all that it has to close and opens all that it has to open (standard/pipe's in & out). It then executes "ls," using execvp. I am sure this works well. After that, the parent shuts and opens inputs and outputs (I am sure I do it well), and then executes grep a.
Now, the problem is that the grep a process never finishes. The same goes for tail -1, for example. Yet it does work for head -1. I think that happens because grep and tail, which are executed by the parent, wait for more input, even though the child has finished its operation. ls | grep a produces the right output, displayed on the console (the pipe's output is set as the default output), but, as I've said, grep a does not finish.
So, my question is: how can I inform the parent that the pipe has finished writing, so it can finish the execution of grep a for example?
Thank you.
Here's the code:
[fd is the pipe; it is initialized earlier in the code. If you see anything incongruous, please let me know; I've cleaned the code up a bit, and this is only the problematic part.]
int fd[2];
pipe(fd);
if ((pid = fork()) != -1) {
    if (pid == 0) {     /* Child executing */
        close(fd[0]);
        close(1);
        dup(fd[1]);
        close(fd[1]);
        execvp(argvv[0][0], argvv[0]);  /* Here's stored the first instruction */
    } else {            /* Parent executing */
        wait(&status);
        close(fd[1]);
        close(0);
        dup(fd[0]);
        close(fd[0]);
        execvp(argvv[1][0], argvv[1]);  /* Here's stored the second instruction */
    }
}
If the grep continues to run after the ls has exited, that indicates that you have not closed all the pipes that you need to close.
In particular, the write end of the pipe whose read end is attached to the grep process is still open in another process. You will need to show your code to know more.
The code you have pasted works correctly (when expanded to a full program, as per the below). Both subprocesses exit just fine.
This means that you've eliminated the code that has the problem when you created your cut-down version here - perhaps you have another fork() between the pipe() call and this fork()?
#include <unistd.h>
#include <sys/wait.h>

int main()
{
    pid_t pid;
    char *argvv[2][3] = { { "ls", 0, 0 }, { "grep", "a", 0 } };
    int status;
    int fd[2];

    pipe(fd);
    if ((pid = fork()) != -1) {
        if (pid == 0) {     /* Child executing */
            close(fd[0]);
            close(1);
            dup(fd[1]);
            close(fd[1]);
            execvp(argvv[0][0], argvv[0]);  /* Here's stored the first instruction */
        } else {            /* Parent executing */
            wait(&status);
            close(fd[1]);
            close(0);
            dup(fd[0]);
            close(fd[0]);
            execvp(argvv[1][0], argvv[1]);  /* Here's stored the second instruction */
        }
    }
    return 0;
}