Inside a while(1) I'm trying to:
spawn a child process with fork();
redirect the child process stdout so that the parent process can see it
print the result in the terminal from the parent process
repeat
Strangely, the output from the child process seems to be printed twice
// parentToChild and childToParent are the pipes I'm using
while(1) {
    int pid = fork();
    if(pid < 0) {
        // error, get out
        exit(0);
    } else if(pid != 0) {
        // parent process
        close(parentToChild[0]); // don't need read end of parentToChild
        close(childToParent[1]); // don't need write end of childToParent
        sleep(4);
        char respBuffer[400];
        int respBufferLength = read(childToParent[0], respBuffer, sizeof(respBuffer));
        printf("before\n");
        printf("parent tried to read something from its child and got: %s\n", respBuffer);
        printf("after\n");
    } else if (pid == 0) {
        if(dup2(childToParent[1], STDOUT_FILENO) < 0) {
            // printf("dup2 error");
        }
        close(childToParent[1]);
        close(childToParent[0]);
        close(parentToChild[1]); // write end of parentToChild not used
        printf("child message");
        // if we don't exit here, we run the risk of repeatedly creating more processes in a loop
        exit(0);
    }
}
I would expect the output of the loop at each iteration to be:
before
parent tried to read something from its child and got: child message
after
But instead, at each iteration I get:
before
parent tried to read something from its child and got: child message
after
child message
What's the reason behind the second print of "child message"?
Flushing the stdout buffers before calling fork() doesn't seem to solve the issue.
Interestingly, removing the while loop and keeping everything else intact seems to work fine.
In the first iteration of the loop, you close childToParent[1] in the parent, and you never recreate the pipes. In the second iteration you are therefore trying to reuse those closed descriptors: the child's dup2 call fails, so its printf goes straight to the terminal. Meanwhile, in the parent, the read call returns 0 without writing anything to the buffer, so you just print the old contents.
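The implied fix is to create the pipe afresh on every iteration (and to wait for the child instead of sleeping). A minimal sketch, assuming the child only ever sends one short message back; the bounded for loop and the waitpid call are additions for illustration, and respBuffer is kept from the question:

#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    for (int iter = 0; iter < 3; iter++) {        // bounded loop just for the sketch
        int childToParent[2];
        if (pipe(childToParent) < 0) {            // a fresh pipe on every iteration
            perror("pipe");
            exit(1);
        }
        pid_t pid = fork();
        if (pid < 0) {
            perror("fork");
            exit(1);
        } else if (pid == 0) {                    // child
            close(childToParent[0]);              // child only writes
            dup2(childToParent[1], STDOUT_FILENO);
            close(childToParent[1]);
            printf("child message");
            fflush(stdout);                       // stdout is now a pipe, so flush explicitly
            _exit(0);
        } else {                                  // parent
            close(childToParent[1]);              // parent only reads
            char respBuffer[400];
            int n = read(childToParent[0], respBuffer, sizeof(respBuffer) - 1);
            if (n < 0)
                n = 0;
            respBuffer[n] = '\0';                 // read() does not NUL-terminate
            printf("parent got: %s\n", respBuffer);
            close(childToParent[0]);
            waitpid(pid, NULL, 0);                // reap the child before looping again
        }
    }
    return 0;
}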
I have multiple processes in progs and I wish to pipe the output from one process to the next sequentially. I believe I have already linked the stdin of each process to the read end of the previous process's pipe, and linked its stdout to the write end of its own pipe. I still am not seeing any output. Am I missing something with the links here?
int pipeFds[numProg][2];
for (int i = 0; i < numProg; i++) {
    if (pipe(pipeFds[i]) != 0) { // create pipe for each process
        printf("Failed to create pipe!");
        exit(1);
    }
    child_pid = fork();
    if (child_pid == -1) {
        printf("Error while creating fork!");
        exit(1);
    }
    if (child_pid == 0) {
        if (i == 0) {
            close(pipeFds[i][READ_END]); // close STDIN first process since it does not read
        } else {
            // change stdin to read end of pipe for intermediary processes
            close(pipeFds[i]);
            dup2(pipeFds[i - 1][READ_END], STDIN_FILENO);
        }
        dup2(pipeFds[i][WRITE_END], STDOUT_FILENO); // change stdout to write end of pipe
        execvp(progs[i][0], (char *const *)progs[i]);
    } else {
        // parent process stuff
    }
}
// Close pipes except last pipe for EOF
for (int i = 0; i < numProg - 1; i++) {
    close(pipeFds[i][READ_END]);
    close(pipeFds[i][WRITE_END]);
}
Remember you need to close all pipes, in each process, for it to work.
Example:
If numProg=2 you create 2 pipes and 2 child processes. Adding the parent, there are 3 processes running, and in each of them you eventually need to close all pipes.
For the first child (i == 0) you close the [READ_END] but never the [WRITE_END].
For the second child (i == 1) you do a close(pipeFds[i]). You need to specify [READ_END] and [WRITE_END].
Then each child process exits via execvp which, however, may return control if it fails. From the man page:
The exec() family of functions replaces the current process image with a new process image. .. The exec() functions only return if an error has occurred.
So you may want to add a _exit(0); after execvp to ensure each child process exits properly even if execvp fails.
The parent process closes all pipes but the last. So in the example of numProg=2, [READ_END] and [WRITE_END] of pipeFds[1] are both never closed.
Lastly, the parent process should wait for all child processes to finish (using while(wait(NULL) != -1);), otherwise you may end up with zombie or orphan processes.
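Pulling those points together, here is a minimal sketch of the loop with every pipe end closed, an _exit after execvp, and a final wait loop. The two-command pipeline (equivalent to ls | wc -l) is only a stand-in for the question's progs/numProg, and in this sketch the last program writes to the original stdout rather than into a pipe:

#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

#define READ_END  0
#define WRITE_END 1

int main(void) {
    char *progs[][3] = { { "ls", NULL, NULL }, { "wc", "-l", NULL } }; // stand-in pipeline
    int numProg = 2;
    int pipeFds[numProg][2];

    for (int i = 0; i < numProg; i++) {
        // the last pipe mirrors the question's layout but ends up unused
        if (pipe(pipeFds[i]) != 0) { perror("pipe"); exit(1); }
        pid_t child_pid = fork();
        if (child_pid == -1) { perror("fork"); exit(1); }

        if (child_pid == 0) {
            if (i > 0)
                dup2(pipeFds[i - 1][READ_END], STDIN_FILENO);   // stdin <- previous pipe
            if (i < numProg - 1)
                dup2(pipeFds[i][WRITE_END], STDOUT_FILENO);     // stdout -> this pipe
            // close every inherited pipe fd; the dup2'd copies stay open
            if (i > 0) {
                close(pipeFds[i - 1][READ_END]);
                close(pipeFds[i - 1][WRITE_END]);
            }
            close(pipeFds[i][READ_END]);
            close(pipeFds[i][WRITE_END]);
            execvp(progs[i][0], progs[i]);
            _exit(127);                                         // reached only if execvp fails
        }

        // parent: once child i has inherited the previous pipe, close it here
        if (i > 0) {
            close(pipeFds[i - 1][READ_END]);
            close(pipeFds[i - 1][WRITE_END]);
        }
    }

    close(pipeFds[numProg - 1][READ_END]);                      // close the last pipe too
    close(pipeFds[numProg - 1][WRITE_END]);
    while (wait(NULL) != -1)                                    // reap every child
        ;
    return 0;
}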
Your code contains a stray close(pipeFds[i]);
You also have to close the pipes within the // parent process stuff. With your code, every child keeps the pipeFds of the previous children open. E.g.:
} else {
    // parent process stuff
    if (i > 0) {
        close(pipeFds[i - 1][READ_END]);
        close(pipeFds[i - 1][WRITE_END]);
    }
}
It might be more effective to have a single fds[2] pair instead of numProg of them.
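One way to read that suggestion (an assumption about what is meant, not the commenter's code): re-create a single fds[2] pair on each iteration and carry only the previous read end forward as the next child's stdin. A sketch, again with ls | wc -l standing in for progs:

#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    char *progs[][3] = { { "ls", NULL, NULL }, { "wc", "-l", NULL } }; // stand-in pipeline
    int numProg = 2;
    int prev_read = -1;                       // read end handed to the next child as stdin

    for (int i = 0; i < numProg; i++) {
        int fds[2] = { -1, -1 };
        if (i < numProg - 1 && pipe(fds) != 0) { perror("pipe"); exit(1); }

        pid_t pid = fork();
        if (pid == -1) { perror("fork"); exit(1); }

        if (pid == 0) {                       // child: wire up stdin/stdout, close the rest
            if (prev_read != -1) { dup2(prev_read, STDIN_FILENO); close(prev_read); }
            if (fds[1] != -1)    { dup2(fds[1], STDOUT_FILENO);   close(fds[1]); }
            if (fds[0] != -1)    close(fds[0]);
            execvp(progs[i][0], progs[i]);
            _exit(127);                       // only if execvp failed
        }

        // parent: keep only the new read end for the next iteration
        if (prev_read != -1) close(prev_read);
        if (fds[1] != -1)    close(fds[1]);
        prev_read = fds[0];
    }

    while (wait(NULL) != -1)                  // reap all children
        ;
    return 0;
}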
I'm making a shell in C for a school project that is capable of running processes in parallel if it is commanded to do so.
This is the loop of the shell application that waits for commands:
while (1) {
    action = parseShellArgs();
    if (action == 1) {
        printf("Exiting...\n");
        break;
    } else if (action == 0) {
        int pid = fork();
        if (pid < 0) {
            printf("Failed to fork\n");
        } else if (pid == 0) {
            (*NUM_PROCESSES_RUNNING)++;
            printf("There are %d processes running\n", *NUM_PROCESSES_RUNNING);
            char * solverArgs[] = {"a", shellArgs[1], NULL}; // first element is placeholder for argv[0]
            execv("CircuitRouter-SeqSolver", solverArgs);
            exit(0);
        } else if (pid > 0) {
            if (*NUM_PROCESSES_RUNNING >= MAXCHILDREN) {
                printf("All processes are busy\n");
                continue;
            }
            int status, childpid;
            wait(&status);
            childpid = WEXITSTATUS(status);
            (*NUM_PROCESSES_RUNNING)--;
            printf("There are %d processes running\n", *NUM_PROCESSES_RUNNING);
            (void)childpid; // suppress "unused variable" warning
        } else {
            printf("Wait what\n");
        }
    } else {
        printf("Oops, bad input\n");
    }
}
Please disregard the counters being incremented and decremented.
Now, this only works partially. Whenever I give it a command to create another process and run another program (condition action == 0, this has been tested and works), the fork happens and the program is correctly executed.
However, I cannot fork multiple times. What I mean by this is: the program forks and the child executes as instructed in the execv call. The problem is that instead of the parent process then going back to waiting for input so it can fork again, it waits for the child process to finish.
What I am trying to make this cycle do is for the parent to always be expecting input and forking as commanded, having multiple children if necessary. But as I explained above, the parent gets "stuck" waiting for the single child to finish and only then resumes activity.
Thank you in advance.
Edit: I have experimented with multiple combinations of not waiting for the child process, using extra forks to read input, etc.
From man 2 wait:
The wait() system call suspends execution of the calling process until
one of its children terminates.
Your program gets stuck because that's what wait does. Use waitpid instead with WNOHANG.
waitpid(pid_child, &status, WNOHANG);
doesn't suspend execution of the calling process. You can read the waitpid man page to find out the return values and how to know if a child terminated.
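A minimal sketch of what the parent branch (pid > 0) could do in place of the blocking wait(&status), assuming the question's NUM_PROCESSES_RUNNING counter and an #include <sys/wait.h>; it reaps every child that has already exited and then falls through to reading the next command:

// parent branch: reap finished children without blocking
int status;
pid_t done;
while ((done = waitpid(-1, &status, WNOHANG)) > 0) {
    (*NUM_PROCESSES_RUNNING)--;
    printf("Child %d finished, %d processes running\n",
           (int)done, *NUM_PROCESSES_RUNNING);
}
// done == 0  -> children are still running, none has exited yet
// done == -1 -> no children left to wait for (errno == ECHILD)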
Using a fairly standard fork process:
int pipe_to_child[2];
int pipe_from_child[2];
int child_exit_status = -1;
pipe(pipe_to_child);    // create both pipes before forking
pipe(pipe_from_child);
pid_t child_pid = fork();
if (child_pid == 0) {
    close(pipe_from_child[0]); // close their read end
    close(pipe_to_child[1]); // Close their write end
    dup2(pipe_to_child[0], STDIN_FILENO); // Tie the in pipe to stdin
    dup2(pipe_from_child[1], STDOUT_FILENO); // Tie stdout to the out pipe
    // Run the child process
    execve(file_to_run, argv_for_prog, env_for_prog);
}
else {
    close(pipe_from_child[1]); // close their write end
    close(pipe_to_child[0]); // Close their read end
    if (input_to_prog != NULL) write(pipe_to_child[1], input_to_prog, strlen(input_to_prog)); // Send the stdin stuff
    close(pipe_to_child[1]); // Done so send EOF
    // Wait for the child to end
    waitpid(child_pid, &child_exit_status, 0);
    // Do post end-of-child stuff
}
This generally works as expected.
However, this breaks down when the child process, a shell script, sets a further process off in the background: even though the child process then exits (and is no longer listed by ps), the waitpid doesn't return.
The script is this case is meant to start inadyn-mt (a DDNS updater) running in the background.
#!/bin/sh
inadyn-mt --background
(If I put an & after inadyn-mt it makes no difference)
It turns out that the issue is that the pipes don't get closed. Although the child process exits fine, because it has spawned a further process, that process (even though it doesn't want them) still holds the pipes tied to the child's stdin and stdout. The solution I used was not to set up the pipes when I was going to spin off a child from the child.
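For what it's worth, an alternative that is sometimes used instead of dropping the pipes entirely (not what was done here, just a hedged sketch) is to detach the background daemon from the inherited descriptors inside the script itself:

#!/bin/sh
# give the daemon /dev/null instead of the pipe fds inherited from the C parent
inadyn-mt --background </dev/null >/dev/null 2>&1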
I'm trying to better understand pipes between a parent and multiple child processes, so I made a simple program that spawns two child processes, gives them a value (i), has them change that value, and then prints it out.
However, it's not working: the parent prints i as if it were unaltered, while the altered i is printed inside the children. I'm obviously not sending the i variable through correctly, so how should I fix this?
int main ( int argc, char *argv[] ){
    int i=0;
    int pipefd[2];
    int pipefd1[2];
    pipe(pipefd);
    pipe(pipefd1);
    pid_t cpid;
    cpid=fork();
    cpid=fork();
    if (cpid ==0) //this is the child
    {
        close(pipefd[1]); // close write end of first pipe
        close(pipefd1[0]); // close read end of second pipe
        read(pipefd[0], &i, sizeof(i));
        i=i*2;
        printf("child process i= %d\n",i); //this prints i as 20 twice
        write(pipefd1[1],&i, sizeof(i));
        close(pipefd[0]); // close the read-end of the pipe
        close(pipefd1[1]);
        exit(EXIT_SUCCESS);
    }
    else
    {
        close(pipefd[0]); // close read end of first pipe
        close(pipefd1[1]); // close write end of second pipe
        i=10;
        write(pipefd[1],&i,sizeof(i));
        read (pipefd1[1], &i, sizeof (i));
        printf("%d\n",i); //this prints i as 10 twice
        close(pipefd[1]);
        close(pipefd1[0]);
        exit(EXIT_SUCCESS);
    }
}
The main problem is that you're not creating two child processes. You're creating three.
cpid=fork();
cpid=fork();
The first fork results in a child process being created. At that point, both the child and the parent execute the next statement, which is also a fork. So the parent creates a new child and the first child also creates a child. That's why everything is printing twice.
You need to check the return value of fork immediately before doing anything else.
If you were to remove one of the fork calls, you'd still end up with the wrong value for i in the parent. That's because it's reading from the wrong end of the pipe.
The child is writing to pipefd1[1], but the parent is then trying to read from pipefd1[1] as well. It should be reading from pipefd1[0].
EDIT:
Removed erroneous sample code which assumed pipes are bidirectional, which they are not.
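For reference, a minimal sketch of the corrected program: a single fork whose result is checked immediately, and the parent reading from pipefd1[0]. The wait call is an addition to reap the child:

#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    int i = 0;
    int pipefd[2];   // parent -> child
    int pipefd1[2];  // child -> parent
    pipe(pipefd);
    pipe(pipefd1);

    pid_t cpid = fork();                   // fork once, check the result right away
    if (cpid == 0) {                       // child
        close(pipefd[1]);                  // close write end of first pipe
        close(pipefd1[0]);                 // close read end of second pipe
        read(pipefd[0], &i, sizeof(i));
        i = i * 2;
        printf("child process i = %d\n", i);
        write(pipefd1[1], &i, sizeof(i));
        close(pipefd[0]);
        close(pipefd1[1]);
        exit(EXIT_SUCCESS);
    } else {                               // parent
        close(pipefd[0]);                  // close read end of first pipe
        close(pipefd1[1]);                 // close write end of second pipe
        i = 10;
        write(pipefd[1], &i, sizeof(i));
        read(pipefd1[0], &i, sizeof(i));   // read from the read end, pipefd1[0]
        printf("parent got i = %d\n", i);  // prints 20
        close(pipefd[1]);
        close(pipefd1[0]);
        wait(NULL);                        // reap the child
        exit(EXIT_SUCCESS);
    }
}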
I am writing a program to measure the time it takes to perform a process switch. In order to do this I am having a parent write one-byte messages to its child, and the child reads them.
My problem is in this loop:
for(i=0; i<2; i++)
{
    if(fork()==0) //child process, read
    {
        char c;
        close(pipefd[1]);
        read(pipefd[0], &c, 1);
        close(pipefd[0]);
    }
    else //parent process
    {
        char c = 'x';
        close(pipefd[0]);
        write(pipefd[1], &c, 1);
        close(pipefd[1]);
    }
}
In order to test how often the fork was hitting the parent and child, I put a printf statement in, and I got around 15 statements printed to the screen. How is this possible, considering the loop should only run twice?
This is because each child process is going to create other processes.
After the if block is executed, each child will return to the beginning of the loop and fork() again, until i == 2 in all child processes.
Edit:
In order to avoid that, I suggest using something like this:
if(fork() == 0)
{
    //Do stuff
    break;
}
Maybe this is not the most elegant way, but it should work.
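A minimal self-contained sketch of the question's loop with that break in place; the single pipe, the one-byte buffer, and the final wait loop are assumptions added just to make it runnable:

#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    int pipefd[2];
    if (pipe(pipefd) != 0) {
        perror("pipe");
        exit(1);
    }

    int i;
    for (i = 0; i < 2; i++) {
        if (fork() == 0) {              // child: read one byte, then leave the loop
            char buf;
            close(pipefd[1]);
            read(pipefd[0], &buf, 1);
            close(pipefd[0]);
            break;                      // without this the child would fork again
        } else {                        // parent: write one byte, keep looping
            char buf = 'x';
            write(pipefd[1], &buf, 1);
        }
    }

    if (i == 2) {                       // only the parent reaches i == 2
        close(pipefd[1]);
        close(pipefd[0]);
        while (wait(NULL) != -1)        // reap both children
            ;
    }
    return 0;
}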