How does one use the wait() function when forking multiple processes? - c

Learning to use the fork() call and how to pipe data between a parent and its children. I am currently trying to write a simple program to test how the fork and pipe functions work. My problem seems to be the correct use/placement of the wait function. I want the parent to wait for both of its children to finish processing. Here is the code I have so far:
int main(void)
{
    int n, fd1[2], fd2[2];
    pid_t pid;
    char line[100];

    if (pipe(fd1) < 0 || pipe(fd2) < 0)
    {
        printf("Pipe error\n");
        return 1;
    }

    // create the first child
    pid = fork();
    if (pid < 0)
        printf("Fork Error\n");
    else if (pid == 0) // child segment
    {
        close(fd1[1]);          // close write end
        read(fd1[0], line, 17); // read from pipe
        printf("Child reads the message: %s", line);
        return 0;
    }
    else // parent segment
    {
        close(fd1[0]);                            // close read end
        write(fd1[1], "\nHello 1st World\n", 17); // write to pipe

        // fork a second child
        pid = fork();
        if (pid < 0)
            printf("Fork Error\n");
        else if (pid == 0) // child gets return value 0 and executes this block
        {
            // this code is processed by the child process only
            close(fd2[1]);          // close write end
            read(fd2[0], line, 17); // read from pipe
            printf("\nChild reads the message: %s", line);
        }
        else
        {
            close(fd2[0]);                            // close read end
            write(fd2[1], "\nHello 2nd World\n", 17); // write to pipe
            if (wait(0) != pid)
                printf("Wait error\n");
        }
        if (wait(0) != pid)
            printf("Wait error\n");
    }
    // code executed by both parent and child
    return 0;
} // end main
Currently my output looks something along the lines of:
./fork2
Child reads the message: Hello 1st World
Wait error
Child reads the message: Hello 2nd World
Wait error
Where is the appropriate place to make the parent wait?
Thanks,
Tomek

That seems mostly ok (I didn't run it, mind you). Your logic error is in assuming that the children will end in some particular order; don't check the results of wait(0) against a particular pid unless you're sure you know which one you're going to get back!
Edit:
I ran your program; you do have at least one bug: your second child process also calls wait(), which you probably didn't want to do. I recommend breaking some of your code out into functions, so you can more clearly see the order of operations you're performing without all the clutter.
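For illustration, here is a minimal standalone sketch (not your program, just the waiting pattern) that reaps each child explicitly with waitpid(), so the order in which the children exit no longer matters; fork() error checks are trimmed for brevity:
#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    pid_t pid1 = fork();
    if (pid1 == 0) {          // first child
        sleep(2);             // pretend to do some longer work
        return 0;
    }

    pid_t pid2 = fork();
    if (pid2 == 0) {          // second child, finishes first
        sleep(1);
        return 0;
    }

    // Parent: reap each child by its pid, so exit order does not matter.
    int status;
    if (waitpid(pid1, &status, 0) < 0)
        perror("waitpid(pid1)");
    if (waitpid(pid2, &status, 0) < 0)
        perror("waitpid(pid2)");

    printf("Both children reaped\n");
    return 0;
}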

I think it's better to use something like this, in order to wait for all the children:
int stat;
while (wait(&stat) > 0)
{}

Related

Execution of UNIX command is being outputted after I exit the program

For some unknown reason, when I execute piped commands in my shell program, their output only appears once I exit the program. Can anyone see why?
Code:
int execCmdsPiped(char **cmds, char **pipedCmds){
    // 0 is read end, 1 is write end
    int pipefd[2];
    pid_t pid1, pid2;

    if (pipe(pipefd) == -1) {
        fprintf(stderr, "Pipe failed");
        return 1;
    }
    pid1 = fork();
    if (pid1 < 0) {
        fprintf(stderr, "Fork Failure");
    }
    if (pid1 == 0) {
        // Child 1 executing..
        // It only needs to write at the write end
        close(pipefd[0]);
        dup2(pipefd[1], STDOUT_FILENO);
        close(pipefd[1]);
        if (execvp(pipedCmds[0], pipedCmds) < 0) {
            printf("\nCouldn't execute command 1: %s\n", *pipedCmds);
            exit(0);
        }
    } else {
        // Parent executing
        pid2 = fork();
        if (pid2 < 0) {
            fprintf(stderr, "Fork Failure");
            exit(0);
        }
        // Child 2 executing..
        // It only needs to read at the read end
        if (pid2 == 0) {
            close(pipefd[1]);
            dup2(pipefd[0], STDIN_FILENO);
            close(pipefd[0]);
            if (execvp(cmds[0], cmds) < 0) {
                //printf("\nCouldn't execute command 2...");
                printf("\nCouldn't execute command 2: %s\n", *cmds);
                exit(0);
            }
        } else {
            // parent executing, waiting for two children
            wait(NULL);
        }
    }
}
Output:
In this example of the output, I used "ls | sort -r". Another important note: my program is designed to handle only one pipe; I'm not supporting multi-piped commands. With all that in mind, where am I going wrong, and what should I do to fix it so that it outputs within the shell, not outside it? Many thanks in advance for any and all advice and help given.
The reason is that your parent process's pipe file descriptors are not closed yet. When you wait for the second command to terminate, it hangs because the write end is still open, so the read waits until either the write end is closed or new data is available to read.
Try closing both pipefd[0] and pipefd[1] before waiting for process to terminate.
Also note that wait(NULL); returns as soon as one process has terminated; you would need a second call so as not to leave a zombie if your process keeps running after that.
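Putting that together, a minimal standalone sketch of the same idea: the parent closes both pipe ends and then waits once per child. The hardcoded "ls" and "sort -r" stand in for whatever your cmds and pipedCmds arrays contain, and fork() error checks are trimmed for brevity:
#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    int pipefd[2];
    if (pipe(pipefd) == -1) {
        perror("pipe");
        return 1;
    }

    pid_t pid1 = fork();
    if (pid1 == 0) {                       // first command: ls
        dup2(pipefd[1], STDOUT_FILENO);    // stdout -> pipe write end
        close(pipefd[0]);
        close(pipefd[1]);
        execlp("ls", "ls", (char *)NULL);
        perror("execlp ls");
        _exit(1);
    }

    pid_t pid2 = fork();
    if (pid2 == 0) {                       // second command: sort -r
        dup2(pipefd[0], STDIN_FILENO);     // stdin <- pipe read end
        close(pipefd[0]);
        close(pipefd[1]);
        execlp("sort", "sort", "-r", (char *)NULL);
        perror("execlp sort");
        _exit(1);
    }

    // Parent: close BOTH pipe ends (otherwise sort never sees EOF),
    // then wait once per child so neither becomes a zombie.
    close(pipefd[0]);
    close(pipefd[1]);
    wait(NULL);
    wait(NULL);
    return 0;
}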

Issues Calling Execve In Forked Process

I'm trying to create a very basic telnet server to practice memory corruption exploits. When I try to issue a command, nothing happens on the first iteration. On the second iteration I get multiple bad file descriptor errors printed on my server side. On the client side, everything seems OK; I get all the required prompts. Here's my relevant code:
int piper[2];
pipe(piper);
...
while (1) {
    n = write(newsockfd, "Enter a command...\n", 21);
    if (n < 0) error("ERROR writing to socket");
    bzero(buffer, 4096);
    n = read(newsockfd, buffer, 4095);
    strcpy(command, buffer);

    pid_t childpid;
    childpid = fork();
    if (childpid == -1) {
        perror("Failed to fork");
        return 1;
    }

    if (childpid == 0) { // child
        printf("I am child %ld\n", (long)getpid());
        if (dup2(piper[1], 1) < 0) {
            perror("Failed to pipe in child process");
        }
        else {
            close(piper[0]);
            close(piper[1]);
            char *args[] = {command, NULL};
            execve(command, args, NULL);
        }
    }
    else { // parent
        if (dup2(piper[0], 0) < 0) {
            perror("Failed to pipe in parent process");
        }
        else {
            // read command output from child
            while (fgets(command_out, sizeof(command_out), stdin)) {
                printf("%s", command_out);
            }
        }
    }
}
If I enter /bin/ls into my client, I get the following output on my server:
I am child 26748
The second time I do it, I get the following output on my server:
Failed to pipe in parent process: Bad file descriptor
0I am child 26749
Failed to pipe in child process: Bad file descriptor
There's a possibility that closing the pipe in the child process affects it in the parent process as well. Consider moving your pipe(piper) call to the beginning of the while loop so each command gets a fresh pipe. And to be safe, close the pipe at the end of the loop, not forgetting to test the return value of close().
Also, read() leaves the newline character at the end of the input, so your command could be, for example, testprog, but what you actually receive is testprog\n. You have to get rid of that newline, or execve() will look for a program name with a newline in it.
#define STDIN 0
int n = read(STDIN, command, 4096);
command[n - 1] = '\0';            // get rid of newline
char *args[] = { command, NULL };
execve(command, args, NULL);
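A minimal standalone sketch of one loop iteration with those fixes applied: a fresh pipe per command, the newline stripped, the parent reading the child's output directly from the pipe until EOF (rather than dup2-ing it over its own stdin), and the child reaped afterwards. The hardcoded "/bin/ls" is only a stand-in for the command read from the socket:
#include <stdio.h>
#include <string.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    char command[] = "/bin/ls\n";           // stand-in for data read from the socket
    command[strcspn(command, "\n")] = '\0'; // strip the trailing newline

    int piper[2];
    if (pipe(piper) == -1) {                // fresh pipe for this command
        perror("pipe");
        return 1;
    }

    pid_t childpid = fork();
    if (childpid == -1) {
        perror("fork");
        return 1;
    }

    if (childpid == 0) {                    // child: command output goes into the pipe
        dup2(piper[1], STDOUT_FILENO);
        close(piper[0]);
        close(piper[1]);
        char *args[] = { command, NULL };
        execve(command, args, NULL);
        perror("execve");
        _exit(1);
    }

    // Parent: close the write end so read() sees EOF when the child exits,
    // then read the command output straight from the pipe.
    close(piper[1]);
    char out[4096];
    ssize_t n;
    while ((n = read(piper[0], out, sizeof(out))) > 0)
        fwrite(out, 1, (size_t)n, stdout);
    close(piper[0]);
    waitpid(childpid, NULL, 0);
    return 0;
}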

Why is this not printing to standard output (stdout)?

I'm currently creating my own command line shell, and I'm having problems when trying to take in pipes. My program starts with the parent process. It checks to see if the user puts in exit or history. If so it uses those commands, but this is not important. If the user puts in anything other than exit or history, then it creates a child who executes the current task.
However, if the user puts in a command that has a single pipe, then we start at the parent, and it creates a child process; call this child 1. Child 1 sees that there is a pipe and uses fork to create another child, call this child 2. Child 1 then creates another child, call it child 3 (note: child 2 and 3 use shared memory). Now, child 2 executes the first command, and child 3 executes the second command using child 2's output. But I'm not getting any output for some reason.
I'm not sure if this is the most efficient way of executing my task, but this is what my professor said to do.
If you want to see all of my code here it is: http://pastebin.com/YNTVf3XP
Otherwise here is my code starting at child 1:
//Otherwise if we have a single pipe, then do
else if (numPipes == 1) {
    char *command1[CMD_MAX]; //string holding the first command
    char *command2[CMD_MAX]; //string holding the second command
    int fds[2];              //create file descriptors for the parent and child
    pid_t new_pid;           //create new pid_t obj
    onePipeSplit(tokens, command1, command2, numTokens); //Getting each command for pipe
    if (pipe(fds) < 0) {          //use the pipe command. If < 0
        perror("Pipe failure."); //we have an error, so exit
        exit(EXIT_FAILURE);
    }
    //Creating child 2
    new_pid = fork();
    //if pid < 0 then we have an error. Exit.
    if (new_pid < 0) {
        perror("Fork failure.");
        exit(EXIT_FAILURE);
    }
    //Else we have the child (2) process
    else if (new_pid == 0) {
        close(fds[0]); //Close read
        //sending stdin to the shared file descriptor
        if (dup2(fds[1], STDOUT_FILENO) < 0) {
            perror("Can't dup");
            exit(EXIT_FAILURE);
        }
        execvp(command1[0], command1); //Execute the next command
        perror("Exec failure");
        exit(EXIT_FAILURE);
    }
    //Else if we have the parent process
    else {
        pid_t child = fork(); //Creating child 3
        if (new_pid < 0) {    //if pid < 0, error, exit
            perror("Fork failure.");
            exit(EXIT_FAILURE);
        }
        //else if pid > 0 we have the parent wait until children execute
        else if (new_pid > 0) {
            wait(NULL);
        }
        //else we have the child (3)
        else {
            close(fds[1]); //Close write
            //Sending stdout to the shared file descriptor
            if (dup2(fds[0], STDIN_FILENO) < 0) {
                perror("Can't dup");
                exit(EXIT_FAILURE);
            }
            execvp(command2[0], command2); //Execute the command
            perror("Exec failure");
            exit(EXIT_FAILURE);
        }
    }
}
Here is an image that my professor gave me to show how it should work.
The problem is here:
pid_t child = fork(); //Creating child 3
if (new_pid < 0) {
... you keep checking `new_pid` here on down,
... but you should be checking `child` here on down...
Also in onePipeSplit you need to put a NULL at the end of both command lists because execvp needs that. After the first loop add:
command1[i] = NULL;
and after the second:
command2[i] = NULL;
OK, a few more fixes:
after each dup2() you need to close the original fd. One example:
if (dup2(fds[1], STDOUT_FILENO) < 0) {
    perror("Can't dup");
    exit(EXIT_FAILURE);
}
close(fds[1]); /* ADD ME */
and in the parent process:
//else if pid > 0 we have the parent wait until children execute
else if (child > 0) {
    close(fds[0]); /* we don't use */
    close(fds[1]); /* the pipe */
    wait(NULL);
    exit(0);       /* when 2 & 3 are done, we are too */
}
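Putting those fixes together, a standalone sketch of the single-pipe branch; the hardcoded command1 = {"ls"} and command2 = {"wc", "-l"} only stand in for whatever onePipeSplit would produce:
#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    char *command1[] = { "ls", NULL };       // stand-ins for onePipeSplit output;
    char *command2[] = { "wc", "-l", NULL }; // note both lists are NULL-terminated

    int fds[2];
    if (pipe(fds) < 0) {
        perror("Pipe failure.");
        exit(EXIT_FAILURE);
    }

    pid_t new_pid = fork();                  // child 2: writes into the pipe
    if (new_pid < 0) {
        perror("Fork failure.");
        exit(EXIT_FAILURE);
    }
    if (new_pid == 0) {
        if (dup2(fds[1], STDOUT_FILENO) < 0) {
            perror("Can't dup");
            exit(EXIT_FAILURE);
        }
        close(fds[0]);
        close(fds[1]);
        execvp(command1[0], command1);
        perror("Exec failure");
        exit(EXIT_FAILURE);
    }

    pid_t child = fork();                    // child 3: reads from the pipe
    if (child < 0) {                         // check `child` here, not `new_pid`
        perror("Fork failure.");
        exit(EXIT_FAILURE);
    }
    if (child == 0) {
        if (dup2(fds[0], STDIN_FILENO) < 0) {
            perror("Can't dup");
            exit(EXIT_FAILURE);
        }
        close(fds[0]);
        close(fds[1]);
        execvp(command2[0], command2);
        perror("Exec failure");
        exit(EXIT_FAILURE);
    }

    // Parent: done with the pipe, close both ends and wait for both children.
    close(fds[0]);
    close(fds[1]);
    wait(NULL);
    wait(NULL);
    return 0;
}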

Process hangs after executing the second/last command in pipe in certain cases

I am using pipe, fork & exec to implement a user shell. The issue is that it does not work in certain cases. For example, it works for ls | head but not for ls | cat. It will show the output of cat but will simply hang after that without returning to the prompt.
Referring to the code, I have the input stored in c->args[0], for which I fork a child and execute it.
I understand that the second exec is still waiting for EOF, but closing file descriptors before that does not help.
Going through similar questions, I also tried closing file descriptors in the parent process before wait but after doing that even ls | head does not work.
I have posted the relevant function below.
void executeProcess(Cmd c, int noofcmds)
{
    // printf("Will be entering fork procedure \n");
    int cmdNo;
    pipe(fd);
    for (cmdNo = 0; cmdNo < noofcmds; cmdNo++)
    {
        int processid = fork();
        pid_t childpid;
        // printf("Process id %d\n", processid);
        if (processid == 0)
        {
            // if (noofcmds != 1)
            // {
            if (cmdNo == 0)
            {
                printf("Inside first child \n");
                close(fd[0]);
                dup2(fd[1], 1);
                // close(fd[0]);
            } else if (cmdNo == noofcmds - 1)
            {
                close(fd[1]);
                dup2(fd[0], 0);
                // close(fd[0]);
            }
            // close(fd[1]);
            // close(fd[0]);
            if (execvp(c->args[0], c->args) < 1)
            {
                printf("Error\n");
            }
        } else
        {
            // printf("Waiting in parent\n");
            // close(fd[0]);
            // close(fd[1]);
            int status;
            int returnedpid;
            wait(&status);
            printf("Returned after waiting\n");
            // close(fd[0]);
            // close(fd[1]);
        }
        c = c->next;
        // close(fd[0]);
        // close(fd[1]);
    } // end of for
}
Look at the sequence of events; with ls | cat, this is what happens right now:
1) pipe is created in parent.
2) ls child is spawned
3) parent waits for ls to finish
4) cat child is spawned
5) parent waits for cat to finish
As you noticed, in 5) parent still has the pipe open so cat never finishes.
When you close it in the parent part of the code it gets closed ... before 3). So by the time cat starts the pipe doesn't exist anymore -> no output from cat.
What you need is to close it after 4) with something like:
...
else // in parent
{
    // printf("Waiting in parent\n");
    if (cmdNo == 1) // second child is spawned, can close the pipe now.
    {
        close(fd[0]);
        close(fd[1]);
    }
    int status;
    wait(&status);
    printf("Returned after waiting\n");
}
Code will need more work to handle more than 2 commands in a pipe, but you get the idea...
Tip: find an editor that indents your code automatically; it'll make your life a lot easier!
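For reference, a hedged sketch of how that loop could be generalized to N commands: keep the read end of the previous pipe, create a new pipe for every command except the last, close the descriptors the parent no longer needs as it goes, and only wait after everything has been forked. The hardcoded three-command pipeline is just a stand-in for your Cmd list:
#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    // Stand-in for the parsed command list: ls | sort -r | head -3
    char *cmds[][4] = {
        { "ls", NULL },
        { "sort", "-r", NULL },
        { "head", "-3", NULL },
    };
    int noofcmds = 3;

    int prev_read = -1;                       // read end of the previous pipe
    for (int cmdNo = 0; cmdNo < noofcmds; cmdNo++) {
        int fd[2] = { -1, -1 };
        if (cmdNo < noofcmds - 1 && pipe(fd) < 0) {
            perror("pipe");
            exit(EXIT_FAILURE);
        }

        pid_t pid = fork();
        if (pid < 0) {
            perror("fork");
            exit(EXIT_FAILURE);
        }
        if (pid == 0) {                       // child: wire up stdin/stdout, then exec
            if (prev_read != -1) {
                dup2(prev_read, STDIN_FILENO);
                close(prev_read);
            }
            if (cmdNo < noofcmds - 1) {
                dup2(fd[1], STDOUT_FILENO);
                close(fd[0]);
                close(fd[1]);
            }
            execvp(cmds[cmdNo][0], cmds[cmdNo]);
            perror("execvp");
            _exit(EXIT_FAILURE);
        }

        // Parent: drop the descriptors it no longer needs so readers see EOF.
        if (prev_read != -1)
            close(prev_read);
        if (cmdNo < noofcmds - 1) {
            close(fd[1]);
            prev_read = fd[0];
        }
    }

    // Only now wait for all the children.
    while (wait(NULL) > 0)
        ;
    return 0;
}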

Reading and writing about PIPE in linux

A simple multi-process program in linux.
Input some numbers like ./findPrime 10 20 30.
The program will create 3 child processes to find out all primes between 2-10, 10-20, 20-30.
Once a child process find a prime, it will write "2 is prime" through a pipe and send to the parent. Parent will print it on the screen.
THE PROBLEM here is that I use a while loop to write messages into the pipe and another while loop on the parent side to receive them, but with the code below, it only displays the first message. So I am wondering what's going on; how can I keep reading from that pipe? Did I miss something? Thanks very much!
char readBuffer[100];
char outBuffer[15];
int pids[argc];
int fd[2];
pipe(fd);

for (i = 0; i < argc; i++)
{
    if (i == 0)
        bottom = 2;
    else
        bottom = args[i - 1];
    top = args[i];

    pids[i] = fork();
    if (pids[i] == 0)
    {
        printf("Child %d: bottom=%d, top=%d\n", getpid(), bottom, top);
        close(fd[0]);
        j = bottom;
        while (j <= top)
        {
            int res = findPrime(j);
            if (res == 1)
            {
                sprintf(outBuffer, "%d is prime", j);
                write(fd[1], outBuffer, (strlen(outBuffer) + 1));
            }
            j++;
        }
        exit(0x47);
    }
    else if (pids[i] < 0)
    {
        fprintf(stderr, "fork failed! errno = %i\n", errno);
        break;
    }
    else
    {
        close(fd[1]);
        while ((nbytes = read(fd[0], readBuffer, sizeof(readBuffer))) > 0)
            printf("%s\n", readBuffer);
        int status = 0;
        pid = waitpid(pids[i], &status, 0);
        if (pid >= 0)
            printf("Child %d exited cleanly\n", pid);
    }
}
And these child processes should run in the order they were created: when process 1 is done, process 2 will run, and process 3 will run after 2.
I also want the parent process to display each message immediately when it receives one.
Parent and children share copies of the file descriptors as they exist at the time of the fork. Your immediate problem is that you close fd[1] in the parent on the first iteration. When the first child ends, its copy of fd[1] is automatically closed too, and the children you fork afterwards inherit the parent's already-closed descriptor. So your pipe writes fail in all subsequent children.
So just don't close fd[1] in the parent.
But then you have other problems too. One that jumps out is that if one of your child processes doesn't find a prime it will never write to the pipe. The parent, however, will block forever waiting for something to read that is never going to arrive. By not closing fd[1] in the parent you won't see EOF - i.e. read() == 0 in the parent. So one solution is to pass a "done" message back via the pipe and have the parent parse that stuff out.
A better solution yet is to consider a redesign. Count the number of processes you are going to need by parsing the command line arguments right at the beginning of the program. Then dynamically allocate the space for the number of pipe descriptors you are going to need and give each process its own pair. That would avoid the problem altogether and is a more standard way of doing things.
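A sketch of that redesign: each child gets its own pipe, the parent reads that child's pipe until EOF (so no explicit "done" message is needed) and then reaps it before forking the next one, which also keeps the children running in creation order. The findPrime() here is just a placeholder for your real function:
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/wait.h>
#include <unistd.h>

// Placeholder for the real primality test.
static int findPrime(int n)
{
    if (n < 2) return 0;
    for (int d = 2; d * d <= n; d++)
        if (n % d == 0) return 0;
    return 1;
}

int main(int argc, char *argv[])
{
    int nchildren = argc - 1;
    int (*fd)[2] = malloc(sizeof(int[2]) * nchildren);   // one pipe per child

    int bottom = 2;
    for (int i = 0; i < nchildren; i++) {
        int top = atoi(argv[i + 1]);
        pipe(fd[i]);

        pid_t pid = fork();
        if (pid == 0) {                       // child i: write primes into its own pipe
            close(fd[i][0]);
            char out[32];
            for (int j = bottom; j <= top; j++) {
                if (findPrime(j)) {
                    int len = snprintf(out, sizeof(out), "%d is prime\n", j);
                    write(fd[i][1], out, (size_t)len);
                }
            }
            close(fd[i][1]);                  // parent's read() now sees EOF
            exit(0);
        }

        // Parent: close its copy of the write end, print messages as they arrive,
        // then reap this child before starting the next one.
        close(fd[i][1]);
        char msg[100];
        ssize_t n;
        while ((n = read(fd[i][0], msg, sizeof(msg))) > 0)
            fwrite(msg, 1, (size_t)n, stdout);
        close(fd[i][0]);
        waitpid(pid, NULL, 0);
        bottom = top;
    }
    free(fd);
    return 0;
}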
