Program hangs after using pipe, fork and exec - c

I am using pipe, fork and exec to implement a generic pipe between any two shell programs, and I am specifically using ls | grep to test it. It works: the data gets copied over to grep, grep searches for matches and then outputs them to stdout. However, after that the program just hangs.
This is the code that is executed when a pipe is detected. I fork, and then fork again, because I want the parent process of the first fork to continue running after the exec calls. Based on my debug output, I believe that nothing happens after the exec() call that runs grep.
if(pipeFlag == 1){
    pipe(fd);
    PID = fork();
    if (PID == 0){ //child process
        fPID = fork();
        if(fPID == 0){ //child of child
            printf("in child of child\n");
            dup2(fd[1], 1);
            execvp(command, argv); //needs error checking
            printf("mysh: %s: command not found\n", argv[0]);
            exit(EXIT_FAILURE);
        }
        if(fPID > 0){ //parent of 2nd child
            printf("in parent of 2nd child\n");
            dup2(fd[0], 0);
            execvp(command1, argv1); //needs error checking
            printf("mysh: %s: command not found\n", argv1[0]);
            exit(EXIT_FAILURE);
        }
        if(fPID == -1){
            printf("ERROR:\n");
            switch (errno){
                case EAGAIN:
                    printf("Cannot fork process: System Process Limit Reached\n");
                    break;
                case ENOMEM:
                    printf("Cannot fork process: Out of memory\n");
                    break;
            }
            return 1;
        }
    }
    if(PID > 0){ //parent
        waitpid(PID, NULL, 0);
        printf("in outer parent\n");
    }
    if(PID == -1){
        printf("ERROR:\n");
        switch (errno){
            case EAGAIN:
                printf("Cannot fork process: System Process Limit Reached\n");
                break;
            case ENOMEM:
                printf("Cannot fork process: Out of memory\n");
                break;
        }
        return 1;
    }
}

Below is my solution to the problem. I'm not sure it's a permanent solution, and I'm not even 100% sure my reasoning for why this works, and why the previous code did not, is correct. All I did was switch the command that waits for input from the pipe (grep) to the parent process, and the command that writes output to the pipe (ls) to the child process.
My reasoning for why this works is this: testing with ls | grep, ls finished writing to the pipe before grep's process ever got set up, therefore the pipe was never closed and grep never received EOF. With the positions swapped, grep is already waiting by the time the process running ls is set up. I believe this is a very imperfect fix, so for anyone who reads this in the future, I hope you can provide a better answer. If my reasoning for why it works is correct, there are a number of conditions I can think of under which this could still break.
if(pipeFlag == 1){
    pipe(fd);
    PID = fork();
    if (PID == 0){ //child process
        fPID = fork();
        if(fPID == 0){ //child of child
            printf("in child of child\n");
            dup2(fd[0], 0);
            execvp(command1, argv1); //needs error checking
            printf("mysh: %s: command not found\n", argv1[0]);
            exit(EXIT_FAILURE);
        }
        if(fPID > 0){ //parent of 2nd child
            printf("in parent of 2nd child\n");
            dup2(fd[1], 1);
            execvp(command, argv); //needs error checking
            printf("mysh: %s: command not found\n", argv[0]);
            exit(EXIT_FAILURE);
        }
        if(fPID == -1){
            printf("ERROR:\n");
            switch (errno){
                case EAGAIN:
                    printf("Cannot fork process: System Process Limit Reached\n");
                    break;
                case ENOMEM:
                    printf("Cannot fork process: Out of memory\n");
                    break;
            }
            return 1;
        }
    }
    if(PID > 0){ //parent
        waitpid(PID, NULL, 0);
        printf("in outer parent\n");
    }
    if(PID == -1){
        printf("ERROR:\n");
        switch (errno){
            case EAGAIN:
                printf("Cannot fork process: System Process Limit Reached\n");
                break;
            case ENOMEM:
                printf("Cannot fork process: Out of memory\n");
                break;
        }
        return 1;
    }
}
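
As the answers to the related questions below explain, a pipe's reader only sees EOF once every copy of the write end has been closed. The more likely root cause of the hang is therefore that neither the reader process nor the outer parent ever closes the pipe descriptors it inherited, not the order of the two exec calls. A minimal, untested sketch of the first version with the missing close() calls added (it reuses the same fd, PID, fPID, command/argv and command1/argv1 variables, and the usual <unistd.h>, <stdio.h>, <stdlib.h> and <sys/wait.h> headers):

if(pipeFlag == 1){
    pipe(fd);
    PID = fork();
    if (PID == 0){                      //child that sets up the pipeline
        fPID = fork();
        if(fPID == 0){                  //grandchild: the writer (e.g. ls)
            dup2(fd[1], 1);
            close(fd[0]);               //close both pipe fds once stdout is redirected
            close(fd[1]);
            execvp(command, argv);
            printf("mysh: %s: command not found\n", argv[0]);
            exit(EXIT_FAILURE);
        }
        //child: the reader (e.g. grep)
        dup2(fd[0], 0);
        close(fd[0]);                   //the reader must also drop its copy of the
        close(fd[1]);                   //write end, or it will never receive EOF
        execvp(command1, argv1);
        printf("mysh: %s: command not found\n", argv1[0]);
        exit(EXIT_FAILURE);
    }
    if(PID > 0){                        //outer parent: it does not use the pipe at all,
        close(fd[0]);                   //so it closes both ends before waiting
        close(fd[1]);
        waitpid(PID, NULL, 0);
        printf("in outer parent\n");
    }
}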

Related

Child Process Executing Print Statement But Nothing Afterwards

I have the following code:
pid_t childProcessID;
childProcessID = fork();

if (childProcessID == -1) {
    printf("fork failed");
    exit(EXIT_FAILURE);
}

if (childProcessID == 0) {
    printf("reached child\n");
    close(STDOUT_FILENO);
    dup2(fd[1], STDOUT_FILENO);
    close(fd[1]);
    close(fd[0]);
    printf("about to execute child command\n");
    if (fork() == 0) {
        execve(argv1[0], argv1, NULL);
        printf("Command not found.");
        exit(1);
    }
    else {
        wait(NULL);
    }
}
else {
    printf("reached parent, but wait for child to finish\n");
    waitpid(childProcessID, &child_status, NULL);
    printf("child finished\n");
    close(STDIN_FILENO);
    dup2(fd[0], 0);
    close(fd[1]);
    printf("About to executed piped command\n");
    if (fork() == 0) {
        execve(argv2[0], argv2, NULL);
        printf("Command not found");
        exit(1);
    }
    else {
        wait(NULL);
    }
}
When I run my minishell and give it the input nl parse.c | wc -l, my minishell prints out:
reached parent, but wait for child to finish
reached child
And nothing underneath. The program is clearly still running, but nothing is being printed. Why is my child not executing the rest of the code? argv1[] and argv2[] are initialized and work fine.
When you do:
dup2(fd[1], STDOUT_FILENO)
you tell the process to redirect its standard output, i.e. printf() and everything else that would normally appear on the terminal, into the file descriptor fd[1], the write end of the pipe.
So your child process executes and prints everything it is supposed to, just not on the terminal. To see that output on the terminal, comment out the line:
//dup2(fd[1], STDOUT_FILENO)
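
If you want to keep the redirection and still see your trace messages, one option (not from the original answer, just a common pattern) is to write the diagnostics to stderr, which the dup2() does not touch:

/* stderr is still attached to the terminal, so these messages stay visible */
fprintf(stderr, "reached child\n");

dup2(fd[1], STDOUT_FILENO);   /* only stdout is redirected into the pipe */
close(fd[1]);
close(fd[0]);

fprintf(stderr, "about to execute child command\n");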

Execution of UNIX command is being outputted after I exit the program

For some unknown reason, when I'm executing piped commands in my shell program, they only produce output once I exit the program. Does anyone see why?
Code:
int execCmdsPiped(char **cmds, char **pipedCmds){
    // 0 is read end, 1 is write end
    int pipefd[2];
    pid_t pid1, pid2;

    if (pipe(pipefd) == -1) {
        fprintf(stderr, "Pipe failed");
        return 1;
    }
    pid1 = fork();
    if (pid1 < 0) {
        fprintf(stderr, "Fork Failure");
    }
    if (pid1 == 0) {
        // Child 1 executing..
        // It only needs to write at the write end
        close(pipefd[0]);
        dup2(pipefd[1], STDOUT_FILENO);
        close(pipefd[1]);
        if (execvp(pipedCmds[0], pipedCmds) < 0) {
            printf("\nCouldn't execute command 1: %s\n", *pipedCmds);
            exit(0);
        }
    } else {
        // Parent executing
        pid2 = fork();
        if (pid2 < 0) {
            fprintf(stderr, "Fork Failure");
            exit(0);
        }
        // Child 2 executing..
        // It only needs to read at the read end
        if (pid2 == 0) {
            close(pipefd[1]);
            dup2(pipefd[0], STDIN_FILENO);
            close(pipefd[0]);
            if (execvp(cmds[0], cmds) < 0) {
                //printf("\nCouldn't execute command 2...");
                printf("\nCouldn't execute command 2: %s\n", *cmds);
                exit(0);
            }
        } else {
            // parent executing, waiting for two children
            wait(NULL);
        }
    }
}
Output:
In this example of the output I have used "ls | sort -r" as the example. Another important note: my program is designed to handle only one pipe; I'm not supporting multi-piped commands. With all that in mind, where am I going wrong, and what should I do to fix it so that the output appears within the shell, not outside it? Many thanks in advance for any and all advice and help given.
The reason is that your parent process's pipe file descriptors are not closed yet. When you wait for the second command to terminate, it hangs because the write end is not closed, so the read blocks until either the write end is closed or new data becomes available.
Try closing both pipefd[0] and pipefd[1] in the parent before waiting for the processes to terminate.
Also note that wait(NULL); returns as soon as one child has terminated; you need a second wait so as not to leave zombies if your shell keeps running after that.
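A minimal, untested sketch of what the final parent branch of execCmdsPiped() might look like with both changes applied (pid1 and pid2 are still in scope there, and waitpid() comes from <sys/wait.h> just like wait()):

} else {
    // parent executing: it does not use the pipe itself, so close both ends;
    // an open write end here is what keeps command 2 from ever seeing EOF
    close(pipefd[0]);
    close(pipefd[1]);

    // reap both children so neither is left as a zombie
    waitpid(pid1, NULL, 0);
    waitpid(pid2, NULL, 0);
}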

Learning pipes, exec, fork, and trying to chain three processes together

I'm learning to use pipes and following along with this code on pipes. The program makes two child processes using fork. The first child runs the 'ls' command and writes its output to pipe1. The second reads from pipe1, runs 'wc' and writes to stdout.
I'm just trying to add a third process in the middle that reads from pipe1 and outputs to pipe2. Basically, what I'm trying to do is
ls | cat | wc -l
that is:
(ls) stdout -> pipe1 -> stdin (cat) stdout -> pipe2 -> stdin (wc -l) -> stdout
Nothing ever prints to stdout and the program never exits.
Here's my code with the changes for process #3
int
main(int argc, char *argv[])
{
    int pfd[2];                     /* Pipe file descriptors */
    int pfd2[2];

    if (pipe(pfd) == -1)            /* Create pipe */
        perror("pipe");
    if (pipe(pfd2) == -1)           /* Create pipe */
        perror("pipe");

    /*
     * Fork process 1 and exec ls command
     * write to pfd[1], close pfd[0]
     */
    switch (fork()) {
    case -1:
        perror("fork");
    case 0:
        if (close(pfd[0]) == -1)
            perror("close 1");
        // dup stdout on pfd[1]
        if (pfd[1] != STDOUT_FILENO) {
            if (dup2(pfd[1], STDOUT_FILENO) == -1)
                perror("dup2 2");
            if (close(pfd[1]) == -1)
                perror("close 4");
        }
        execlp("ls", "ls", (char *) NULL);
        perror("execlp ls");
    default:
        break;
    }
    /*
     * Fork process 2 and exec cat command
     * read from pfd[0], close pfd[1]
     * write to pfd2[1], close pfd2[0]
     */
    switch (fork()) {
    case -1:
        perror("fork");
    case 0:
        // read from pfd[0]
        if (close(pfd[1]) == -1)
            perror("close 3");
        if (pfd[0] != STDIN_FILENO) {
            if (dup2(pfd[0], STDIN_FILENO) == -1)
                perror("dup2 2");
            if (close(pfd[0]) == -1)
                perror("close 4");
        }
        if (pfd2[1] != STDOUT_FILENO) {
            if (dup2(pfd2[1], STDOUT_FILENO) == -1)
                perror("dup2 2");
            if (close(pfd2[1]) == -1)
                perror("close 4");
        }
        execlp("cat", "cat", (char *) NULL);
        perror("execlp cat");
    default:
        break;
    }

    /*
     * Fork process 3
     */
    switch (fork()) {
    case -1:
        perror("fork");
    case 0:
        if (close(pfd2[1]) == -1)
            perror("close 3");
        if (pfd2[0] != STDIN_FILENO) {
            if (dup2(pfd2[0], STDIN_FILENO) == -1)
                perror("dup2 2");
            if (close(pfd2[0]) == -1)
                perror("close 4");
        }
        execlp("wc", "wc", "-l", (char *) NULL);
        perror("execlp wc");
    default:
        break;
    }

    /* Parent closes unused file descriptors for pipe, and waits for children */
    if (close(pfd[0]) == -1)
        perror("close 5");
    if (close(pfd[1]) == -1)
        perror("close 6");
    if (close(pfd2[0]) == -1)
        perror("close 5");
    if (close(pfd2[1]) == -1)
        perror("close 6");

    if (wait(NULL) == -1)
        perror("wait 1");
    if (wait(NULL) == -1)
        perror("wait 2");
    if (wait(NULL) == -1)
        perror("wait 3");

    exit(EXIT_SUCCESS);
}
The problem is that you did not close pfd[1] in process 3; adding close(pfd[1]); right after case 0 in process 3 will fix it.
cat (in process 2) reads from pfd[0], but there are four copies of pfd[1], one in each process:
process 0
this is the main process; its pfd[1] is closed by the close() calls before wait().
process 1
after ls finishes, its pfd[1] is closed automatically by the operating system.
process 2
pfd[1] has been closed before executing cat.
process 3
pfd[1] is still open in this process while wc is running, and this is what happens at that moment:
in process 2, cat tries to read pfd[0], waiting for data written to pfd[1]
in process 3, wc tries to read pfd2[0], waiting for data written to pfd2[1]
because pfd[1] is still open in process 3 and nothing will ever be written to it, the read from pfd[0] in process 2 (cat) waits forever
because cat in process 2 stays alive and holds the write end pfd2[1] as its stdout, the read from pfd2[0] in process 3 (wc) also waits forever
As you can see, you have a deadlock between process 2 (cat) and process 3 (wc) because of a file descriptor leak. To break the deadlock, you just need to close pfd[1] in process 3 before you run wc. After that:
cat in process 2 will exit after ls in process 1 exits, because there is nothing left for it (cat) to read
after cat in process 2 exits, wc in process 3 will also exit, because there is nothing left for it (wc) to read
after that, the main (parent) process will exit, and the program will finish.
A read end of a pipe can have more than one open write end; unless all of those write ends are closed, end-of-file is not delivered to the read end, and the reader just waits for more data to come. If there is nothing more to come, the reader waits forever.
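For concreteness, the third fork's case 0 with that fix applied might look like this (untested sketch; only the added close() calls on pfd differ from the original, and closing pfd[0] as well is just the same close-what-you-don't-use hygiene):

    case 0:
        /* process 3 never uses the first pipe, so drop both ends of it; */
        /* the unclosed pfd[1] is what kept cat, and then wc, blocked    */
        if (close(pfd[1]) == -1)
            perror("close pfd[1]");
        if (close(pfd[0]) == -1)
            perror("close pfd[0]");
        if (close(pfd2[1]) == -1)
            perror("close 3");
        if (pfd2[0] != STDIN_FILENO) {
            if (dup2(pfd2[0], STDIN_FILENO) == -1)
                perror("dup2 2");
            if (close(pfd2[0]) == -1)
                perror("close 4");
        }
        execlp("wc", "wc", "-l", (char *) NULL);
        perror("execlp wc");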

Third process "wc" won't work

I'm currently having a problem with the third process: it doesn't work every time I run the program. I'd also like suggestions about the exit() part, because "Child process completed!" is being printed multiple times. Any suggestions?
I would really APPRECIATE it a lot!
main(){
    pid_t son;
    int i;
    for (i = 0; i < 3; i++){
        switch (i){
            case 0:
                son = fork();
                if (son < 0){
                    fprintf(stderr, "Fork failed!");
                    //exit(-1);
                }else if (son == 0){
                    execlp("/bin/cat", "cat", "wctrial.txt", NULL);
                }else{
                    wait(NULL);
                    printf("Child process completed!");
                    //exit(0);
                }
            case 1:
                son = fork();
                if (son < 0){
                    fprintf(stderr, "Fork failed!");
                    //exit(-1);
                }else if (son == 0){
                    execlp("/bin/mkdir", "mkdir", "mydirectory", NULL);
                }else{
                    wait(NULL);
                    printf("Child process completed!");
                    //exit(0);
                }
            case 2:
                son = fork();
                if (son < 0){
                    fprintf(stderr, "Fork failed!");
                    //exit(-1);
                }else if (son == 0){
                    execlp("/bin/wc", "wc", "wctrial.txt", NULL);
                }else{
                    wait(NULL);
                    printf("Child process completed!");
                    //exit(0);
                }
        }
    }
}
For one thing, I don't see a break at the end of each case. When i is 0, the program runs through all of your cases: because of the missing break, once one case executes, the cases after it execute as well (but that is not the reason wc is not working).
Why is wc not working?
Because of the path of the wc command. On your system, the path of wc may not be "/bin/wc". Find the path of the wc command on your system, for example:
:~$ whereis wc
wc: /usr/bin/wc
and change
execlp("/bin/wc", "wc", "wctrial.txt", NULL);
to
execlp("/usr/bin/wc", "wc", "wctrial.txt", NULL);
// actually not exactly this, but whichever path appears on your system.
Give it a try!
Below are my suggestions:
1st) Clean up the child process once it is done, i.e. exit explicitly if execlp() returns, as below:
}else if (son == 0){
    execlp("/bin/mkdir", "mkdir", "mydirectory", NULL);
    _exit(0);
}
2nd) Add a break at the end of each case of the switch statement.
3rd) Also validate the path of the executable by using the "whereis" command before feeding it into the execlp routine.
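Putting those suggestions together, each case might look something like the sketch below (untested; the mkdir case is shown, a nonzero _exit status is used to signal that the exec failed, and since execlp() searches PATH you can generally pass just the command name instead of a hard-coded path):

            case 1:
                son = fork();
                if (son < 0){
                    fprintf(stderr, "Fork failed!");
                }else if (son == 0){
                    execlp("mkdir", "mkdir", "mydirectory", NULL);
                    _exit(1);          /* only reached if execlp() failed */
                }else{
                    wait(NULL);
                    printf("Child process completed!\n");
                }
                break;                 /* without this, case 2 would run as well */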

two forks and the use of wait

Currently I am doing two forks to pipeline two processes, but I think I am doing my wait(&status) wrong, because after the command my shell just hangs and does not return to my prompt. I know my pipe is working because I can see the result if I remove the wait.
Any tips?
pipe(mypipe);
pid1 = fork();
if (pid1 == 0)
{
    pid2 = fork();
    if (pid2 == 0)
    {
        close(0);
        dup(mypipe[0]);
        close(mypipe[1]);
        execv(foundnode2->path_dir, arv2);
        exit(0);
    }
    close(1);
    dup(mypipe[1]);
    close(mypipe[0]);
    pid2 = wait(&status2);
    execv(foundnode1->path_dir, arv1);
    exit(0);
}
pid1 = wait(&status2);
Rule of Thumb: if you use dup() or dup2() to map one end of a pipe to standard input or standard output, you should close() both ends of the pipe itself. You're not doing that; your waits are waiting for the programs to finish, but the programs will not finish because there is still a process with the pipe open that could write to it. Also, the process which created the pipe needs to close both ends of the pipe, since it is not, itself, using the pipe (the child processes are using it). See also C MiniShell — Adding Pipelines.
Also, you should not be waiting for the first child to finish before launching the second (so the pid2 = wait(&status2); line is a bad idea). Pipes have a fairly small capacity; if the total data to be transferred is too large, the writing child may block waiting for the reading child to read, but the reading child hasn't started yet because it is waiting for the writing child to exit (and it takes a long time for this deadlock to resolve itself). You're seeing the output appear without the wait() calls because the second part of the pipeline executes and processes the data from the first part of the pipeline, but it is still waiting for more data to come from the shell.
Taking those tips into account, you might end up with:
pipe(mypipe);
pid1 = fork();
if (pid1 == 0)
{
    pid2 = fork();
    if (pid2 == 0)
    {
        close(0);
        dup(mypipe[0]);
        close(mypipe[1]);
        close(mypipe[0]);
        execv(foundnode2->path_dir, arv2);
        fprintf(stderr, "Failed to exec %s\n", foundnode2->path_dir);
        exit(1);
    }
    close(1);
    dup(mypipe[1]);
    close(mypipe[0]);
    close(mypipe[1]);
    execv(foundnode1->path_dir, arv1);
    fprintf(stderr, "Failed to exec %s\n", foundnode1->path_dir);
    exit(1);
}
close(mypipe[0]);
close(mypipe[1]);
pid1 = wait(&status1);
Notice the error reporting to standard error when the commands fail to execv(). Also, the exit status of 0 should be reserved for success; 1 is a convenient error exit status, or you can use EXIT_FAILURE from <stdlib.h>.
There is a lot of error checking omitted still; the fork() operations could fail; the pipe() might fail. One consequence is that if the second fork() fails, you still launch the second child (identified by foundnode1->path_dir).
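For example (a sketch only, using the same names as the snippet above), checking the inner fork() inside the first child keeps a failed fork from falling through to the second exec:

int pid2 = fork();
if (pid2 == -1)
{
    /* we are inside the first child here: report the failure and bail out */
    /* instead of falling through and exec'ing foundnode1->path_dir anyway */
    perror("fork");
    exit(1);
}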
And I note that you could save yourself a little work by moving the pipe creation into the first child process (the parent then does not need to — indeed, cannot — close the pipe):
int pid1 = fork();
if (pid1 == 0)
{
    int mypipe[2];
    pipe(mypipe);
    int pid2 = fork();
    if (pid2 == 0)
    {
        close(0);
        dup(mypipe[0]);
        close(mypipe[1]);
        close(mypipe[0]);
        execv(foundnode2->path_dir, arv2);
        fprintf(stderr, "Failed to exec %s\n", foundnode2->path_dir);
        exit(1);
    }
    close(1);
    dup(mypipe[1]);
    close(mypipe[0]);
    close(mypipe[1]);
    execv(foundnode1->path_dir, arv1);
    fprintf(stderr, "Failed to exec %s\n", foundnode1->path_dir);
    exit(1);
}
pid1 = wait(&status1);
If it's just a pipe with two processes, I wouldn't wait at all. Just fork and do an exec in parent and child.
int fd[2];
pipe(fd);
int pid = fork();
if (pid == -1) {
    /* error handling */
} else if (pid == 0) {
    dup2(fd[0], 0);
    close(fd[1]);
    execv(foundnode2->path_dir, arv2);
    /* error handling for failed exec */
    exit(1);
} else {
    dup2(fd[1], 1);
    close(fd[0]);
    execv(foundnode1->path_dir, arv1);
    /* error handling for failed exec */
    exit(1);
}
