I have multiple processes in progs and I want to pipe the output of each process to the next one sequentially. I believe I have already linked the stdin of each process to the read end of the previous pipe, and linked its stdout to the write end of its own pipe, but I'm still not seeing any output. Am I missing something with the links here?
int pipeFds[numProg][2];
for (int i = 0; i < numProg; i++) {
if (pipe(pipeFds[i]) != 0) { // create pipe for each process
printf("Failed to create pipe!");
exit(1);
}
child_pid = fork();
if (child_pid == -1) {
printf("Error while creating fork!");
exit(1);
}
if (child_pid == 0) {
if (i == 0) {
close(pipeFds[i][READ_END]); // close STDIN first process since it does not read
} else {
// change stdin to read end of pipe for intermediary processes
close(pipeFds[i]);
dup2(pipeFds[i - 1][READ_END], STDIN_FILENO);
}
dup2(pipeFds[i][WRITE_END], STDOUT_FILENO); // change stdout to write end of pipe
execvp(progs[i][0], (char *const * )progs[i]);
} else {
// parent process stuff
}
}
// Close pipes except last pipe for EOF
for (int i = 0; i < numProg - 1; i++) {
close(pipeFds[i][READ_END]);
close(pipeFds[i][WRITE_END]);
}
Remember you need to close all pipes, in each process, for it to work.
Example:
If numProg=2 you create 2 pipes and 2 child processes. Adding the parent, there are 3 processes running, and in each of them you eventually need to close all pipes.
For the first child (i == 0) you close the [READ_END] but never the [WRITE_END].
For the second child (i == 1) you do a close(pipeFds[1]), which passes the array itself rather than a descriptor. You need to specify [READ_END] and [WRITE_END] and close each end individually.
Then each child process is supposed to be replaced via execvp, which, however, returns control if it fails. From the man page:
The exec() family of functions replaces the current process image with a new process image. [...] The exec() functions only return if an error has occurred.
So you may want to add an _exit() call with a non-zero status (e.g. _exit(127);) after execvp to ensure each child process exits even if execvp fails.
The parent process closes all pipes but the last. So in the example of numProg=2, [READ_END] and [WRITE_END] of pipeFds[1] are both never closed.
Lastly, the parent process should wait for all child processes to finish (using while(wait(NULL) != -1);), otherwise you may end up with zombie processes.
Your code contains a stray close(pipeFds[i]);
You also have to close the pipes within the // parent process stuff branch. As written, every child keeps the pipe fds of all previous children open. E.g.:
} else {
// parent process stuff
if (i > 0)
close(pipeFds[i - 1][READ_END]);
close(pipeFds[i][WRITE_END]);
}
Finally, it might be simpler to reuse a single fds[2] pair (saving only the previous pipe's read end across iterations) instead of numProg pairs.
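Putting all of the above together, a minimal sketch (assuming progs, numProg, READ_END and WRITE_END as in the question; unlike the original, the last program writes to the inherited stdout instead of a final pipe):
int prevRead = -1;                            // read end of the previous pipe
for (int i = 0; i < numProg; i++) {
    int fds[2] = { -1, -1 };
    if (i < numProg - 1 && pipe(fds) != 0) {  // no pipe after the last program
        perror("pipe");
        exit(1);
    }
    pid_t pid = fork();
    if (pid == -1) {
        perror("fork");
        exit(1);
    }
    if (pid == 0) {                           // child
        if (prevRead != -1) {
            dup2(prevRead, STDIN_FILENO);     // read from the previous program
            close(prevRead);
        }
        if (i < numProg - 1) {
            dup2(fds[WRITE_END], STDOUT_FILENO); // write to the next program
            close(fds[READ_END]);
            close(fds[WRITE_END]);
        }
        execvp(progs[i][0], (char *const *)progs[i]);
        _exit(127);                           // reached only if execvp fails
    }
    // parent: close the ends it no longer needs
    if (prevRead != -1)
        close(prevRead);
    if (i < numProg - 1) {
        close(fds[WRITE_END]);
        prevRead = fds[READ_END];             // keep for the next child
    } else {
        prevRead = -1;
    }
}
while (wait(NULL) != -1);                     // reap all children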
Inside a while(1) I'm trying to:
spawn a child process with fork();
redirect the child process stdout so that the parent process can see it
print the result in the terminal from the parent process
repeat
Strangely, the output from the child process seems to be printed twice.
// parentToChild and childToParent are the pipes I'm using
while(1) {
int pid = fork();
if(pid < 0) {
// error, get out
exit(0);
} else if(pid != 0) {
// parent process
close(parentToChild[0]); // don't need read end of parentToChild
close(childToParent[1]); // don't need write end of childToParent
sleep(4);
char respBuffer[400];
int respBufferLength = read(childToParent[0], respBuffer, sizeof(respBuffer));
printf("before\n");
printf("parent tried to read something from its child and got: %s\n", respBuffer);
printf("after\n");
} else if (pid == 0) {
if(dup2(childToParent[1], STDOUT_FILENO) < 0) {
// printf("dup2 error");
};
close(childToParent[1]);
close(childToParent[0]);
close(parentToChild[1]); // write end of parentToChild not used
printf("child message");
// if we don't exit here, we run the risk of repeatedly creating more processes in a loop
exit(0);
}
}
I would expect the output of the loop at each iteration to be:
before
parent tried to read something from its child and got: child message
after
But instead, at each iteration I get:
before
parent tried to read something from its child and got: child message
after
child message
What's the reason behind the second print of "child message"?
Flushing the stdout buffers before calling fork() doesn't seem to solve the issue.
Interestingly, removing the while loop and keeping everything else intact seems to work fine.
In the first iteration of the loop, you close childToParent[1] in the parent, and you never recreate the pipes. So in the second iteration the child tries to reuse those closed descriptors: its dup2 call fails, and its printf goes straight to the terminal. Meanwhile, in the parent, the read call returns 0 without writing anything to the buffer, so you just print the buffer's old contents.
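A minimal sketch of that fix, assuming childToParent is the only pipe needed: create it fresh at the top of every iteration, and null-terminate the buffer using read()'s return value before printing:
while (1) {
    int childToParent[2];
    if (pipe(childToParent) == -1) {      // fresh pipe every iteration
        perror("pipe");
        exit(1);
    }
    pid_t pid = fork();
    if (pid < 0) {
        exit(1);
    } else if (pid == 0) {                // child: write end only
        close(childToParent[0]);
        dup2(childToParent[1], STDOUT_FILENO);
        close(childToParent[1]);
        printf("child message");
        exit(0);                          // exit() flushes stdio into the pipe
    }
    close(childToParent[1]);              // parent: read end only
    char respBuffer[400];
    int n = read(childToParent[0], respBuffer, sizeof(respBuffer) - 1);
    if (n > 0) {
        respBuffer[n] = '\0';             // read() does not terminate the string
        printf("parent got: %s\n", respBuffer);
    }
    close(childToParent[0]);
    wait(NULL);                           // reap the child before looping
}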
Using a fairly standard fork process:
int pipe_to_child[2];
int pipe_from_child[2];
int child_exit_status = -1;
pipe(pipe_to_child); // create both pipes before forking
pipe(pipe_from_child);
pid_t child_pid = fork();
if (child_pid == 0) {
close(pipe_from_child[0]); // close their read end
close(pipe_to_child[1]); // Close their write end
dup2(pipe_to_child[0], STDIN_FILENO); // Tie the in pipe to stdin
dup2(pipe_from_child[1], STDOUT_FILENO); // Tie stdout to the out pipe
// Run the child process
execve(file_to_run, argv_for_prog, env_for_prog);
_exit(127); // execve only returns on failure
}
else {
close(pipe_from_child[1]); // close their write end
close(pipe_to_child[0]); // Close their read end
if (input_to_prog != NULL) write(pipe_to_child[1], input_to_prog, strlen(input_to_prog)); // Send the stdin stuff
close(pipe_to_child[1]); // Done so send EOF
// Wait for the child to end
waitpid(child_pid, &child_exit_status, 0);
// Do post end-of-child stuff
}
This generally works as expected.
However, when the child process (a shell script) sets a further process off in the background, waitpid doesn't return, even though the child itself exits (and is no longer listed by ps).
The script in this case is meant to start inadyn-mt (a DDNS updater) running in the background.
#!/bin/sh
inadyn-mt --background
(If I put an & after inadyn-mt it makes no difference)
It turns out that the issue is that the pipes don't get closed. Although the child process exits fine, the further process it spawned inherits the descriptors tied to the child's stdin and stdout, so (even though it doesn't want them) it keeps the pipes open. The solution I used was to not set up the pipes when I was going to spin off a further process from the child.
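An alternative, if you want to keep the pipes: have the script itself detach the daemon from the inherited descriptors, so the grandchild no longer holds the pipe open. A sketch, assuming inadyn-mt doesn't need the script's stdin/stdout:
#!/bin/sh
# Redirect the daemon's stdio away from the inherited pipe fds
# so the parent sees EOF as soon as the script exits.
inadyn-mt --background < /dev/null > /dev/null 2>&1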
I'm having trouble with my program waiting for a child process (gzip) to finish and taking a very long time in doing so.
Before it starts waiting it closes the input stream to gzip so this should trigger it to terminate pretty quickly. I've checked the system and gzip isn't consuming any CPU or waiting on IO (to write to disk).
The very odd thing is the timing on when it stops waiting...
The program is using pthreads internally, running 4 threads side by side. Each thread processes many units of work, and for each unit of work it kicks off a new gzip process (using fork() and execve()) to write the result. Threads hang when gzip doesn't terminate, but it suddenly does terminate when other threads close their instances.
For clarity, I'm setting up a pipeline that goes: my program(pthread) --> gzip --> file.gz
I guess this could be explained in part by CPU load. But when processes are kicked off minutes apart and the whole system ends up using only 1 core of 4 because of this locking issue, that seems unlikely.
The code to kick off gzip is below. execPipeProcess is called such that the child writes directly to file, but reads from my program. That is:
execPipeProcess(&process, "gzip", -1, gzFileFd)
Any suggestions?
typedef struct {
int processID;
const char * command;
int stdin;
int stdout;
} ChildProcess;
void closeAndWait(ChildProcess * process) {
if (process->stdin >= 0) {
stdLog("Closing post process stdin");
if (close(process->stdin)) {
exitError(-1,errno, "Failed to close stdin for %s", process->command);
}
}
if (process->stdout >= 0) {
stdLog("Closing post process stdin");
if (close(process->stdout)) {
exitError(-1,errno, "Failed to close stdout for %s", process->command);
}
}
int status;
stdLog("waiting on post process %d", process->processID);
if (waitpid(process->processID, &status, 0) == -1) {
exitError(-1, errno, "Could not wait for %s", process->command);
}
stdLog("post process finished");
if (!WIFEXITED(status)) exitError(-1, 0, "Command did not exit properly %s", process->command);
if (WEXITSTATUS(status)) exitError(-1, 0, "Command %s returned %d not 0", process->command, WEXITSTATUS(status));
process->processID = 0;
}
void execPipeProcess(ChildProcess * process, const char* szCommand, int in, int out) {
// Expand any args
wordexp_t words;
if (wordexp (szCommand, &words, 0)) exitError(-1, 0, "Could not expand command %s\n", szCommand);
// Runs the command
char nChar;
int nResult;
if (in < 0) {
int aStdinPipe[2];
if (pipe(aStdinPipe) < 0) {
exitError(-1, errno, "allocating pipe for child input redirect failed");
}
process->stdin = aStdinPipe[PIPE_WRITE];
in = aStdinPipe[PIPE_READ];
}
else {
process->stdin = -1;
}
if (out < 0) {
int aStdoutPipe[2];
if (pipe(aStdoutPipe) < 0) {
exitError(-1, errno, "allocating pipe for child input redirect failed");
}
process->stdout = aStdoutPipe[PIPE_READ];
out = aStdoutPipe[PIPE_WRITE];
}
else {
process->stdout = -1;
}
process->processID = fork();
if (0 == process->processID) {
// child continues here
// these are for use by parent only
if (process->stdin >= 0) close(process->stdin);
if (process->stdout >= 0) close(process->stdout);
// redirect stdin
if (STDIN_FILENO != in) {
if (dup2(in, STDIN_FILENO) == -1) {
exitError(-1, errno, "redirecting stdin failed");
}
close(in);
}
// redirect stdout
if (STDOUT_FILENO != out) {
if (dup2(out, STDOUT_FILENO) == -1) {
exitError(-1, errno, "redirecting stdout failed");
}
close(out);
}
// we're done with these; they've been duplicated to STDIN and STDOUT
// run child process image
// replace this with any exec* function you find easier to use ("man exec")
nResult = execvp(words.we_wordv[0], words.we_wordv);
// if we get here at all, an error occurred, but we are in the child
// process, so just exit
exitError(-1, errno, "could not run %s", szCommand);
} else if (process->processID > 0) {
wordfree(&words);
// parent continues here
// close unused file descriptors, these are for child only
close(in);
close(out);
process->command = szCommand;
} else {
exitError(-1,errno, "Failed to fork");
}
}
A child process inherits all open file descriptors.
Every subsequent gzip child process inherits not only the pipe file descriptors intended for communication with that particular instance but also the file descriptors for pipes connected to previous child process instances.
It means that the stdin pipe is still open when the main process performs close(), since other file descriptors for the same pipe exist in a few child processes. Once those terminate, the pipe is finally closed.
A quick fix is to prevent child processes from inheriting the pipe file descriptors intended for the master process by setting the close-on-exec flag.
Since there are multiple threads involved, spawning of child processes should be serialized to prevent a child process from inheriting pipe fds intended for another child process.
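A minimal sketch of that fix, assuming Linux's pipe2() is available (elsewhere, fcntl() with FD_CLOEXEC achieves the same, though not atomically), applied to the stdin pipe in execPipeProcess:
// Both ends are marked close-on-exec, so a gzip exec'd concurrently
// by another thread cannot keep them open. dup2() onto STDIN/STDOUT
// in the intended child clears the flag on the duplicate, so the
// child's own redirection still works.
int aStdinPipe[2];
if (pipe2(aStdinPipe, O_CLOEXEC) < 0) {
    exitError(-1, errno, "allocating pipe for child input redirect failed");
}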
You have not given us enough information to be sure, as the answer depends on how you use the functions presented. However, your closeAndWait() function looks a bit suspicious. It may be reasonable to suppose that the child process in question will exit when it reaches the end of its stdin, but what is supposed to happen to the data it has written, or even may still write, to its stdout? It is possible that your child processes hang because their standard output is blocked, and it is slow for them to recognize it.
I think this reflects a design problem. If you are capturing the child processes' output, as you seem at least to support doing, then after you close the parent's end of a child's input stream you'll want the parent to continue reading the child's output to its end, and performing whatever processing it intends to do on it. Otherwise you may lose some of it (which for a child performing gzip would mean corrupted data). You cannot do that if you make closing both streams part of the process of terminating the child.
Instead, you should close the parent's end of the child's stdin first, continue processing its output until you reach its end, and only then try to collect the child. You can make closing the parent's end of the child's output stream part of the process of collecting that child if you like. Alternatively, if you really do want to discard any remaining output from the child, then you should drain its output stream between closing the input and closing the output.
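A sketch of that ordering for closeAndWait(), assuming any remaining output is simply discarded (names as in the question):
void closeAndWait(ChildProcess * process) {
    if (process->stdin >= 0)
        close(process->stdin);            // child now sees EOF on its stdin
    if (process->stdout >= 0) {
        char buf[4096];
        // Drain (here: discard) remaining output so the child cannot
        // block on a full pipe while trying to exit.
        while (read(process->stdout, buf, sizeof(buf)) > 0)
            ;
        close(process->stdout);
    }
    int status;
    if (waitpid(process->processID, &status, 0) == -1)
        exitError(-1, errno, "Could not wait for %s", process->command);
    process->processID = 0;
}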
I need to create two child processes. One child needs to run the command "ls -al" and redirect its output to the input of the next child process, which in turn will run the command "sort -r -n -k 5" on its input data. Finally, the parent process needs to read that (data already sorted) and display it in the terminal. The final result in the terminal (when executing the program) should be the same as if I entered the following command directly in the shell: "ls -al | sort -r -n -k 5". For this I need to use the following methods: pipe(), fork(), execlp().
My program compiles, but I don't get the desired output to the terminal. I don't know what is wrong. Here is the code:
#include <sys/types.h>
#include <sys/wait.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>
int main()
{
int fd[2];
pid_t ls_pid, sort_pid;
char buff[1000];
/* create the pipe */
if (pipe(fd) == -1) {
fprintf(stderr, "Pipe failed");
return 1;
}
/* create child 2 first */
sort_pid = fork();
if (sort_pid < 0) { // error creating Child 2 process
fprintf(stderr, "\nChild 2 Fork failed");
return 1;
}
else if(sort_pid > 0) { // parent process
wait(NULL); // wait for children termination
/* create child 1 */
ls_pid = fork();
if (ls_pid < 0) { // error creating Child 1 process
fprintf(stderr, "\nChild 1 Fork failed");
return 1;
}
else if (ls_pid == 0) { // child 1 process
close(1); // close stdout
dup2(fd[1], 1); // make stdout same as fd[1]
close(fd[0]); // we don't need this end of pipe
execlp("bin/ls", "ls", "-al", NULL);// executes ls command
}
wait(NULL);
read(fd[0], buff, 1000); // parent reads data
printf(buff); // parent prints data to terminal
}
else if (sort_pid == 0) { // child 2 process
close(0); // close stdin
dup2(fd[0], 0); // make stdin same as fd[0]
close(fd[1]); // we don't need this end of pipe
execlp("bin/sort", "sort", "-r", "-n", "-k", "5", NULL); // executes sort operation
}
return 0;
}
Your parent process waits for the sort process to finish before creating the ls process.
The sort process needs to read its input before it can finish. And its input is coming from the ls that won't be started until after the wait. Deadlock.
You need to create both processes, then wait for both of them.
Also, your file descriptor manipulations aren't quite right. In this pair of calls:
close(0);
dup2(fd[0], 0);
the close is redundant, since dup2 will automatically close the existing fd 0 if there is one. You should do a close(fd[0]) after the dup2, so you only have one file descriptor tied to that end of the pipe. And if you want to be really robust, you should test whether fd[0] is already 0, and in that case skip the dup2 and close.
Apply all of that to the other dup2 also.
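In code, the robust version of the stdin redirection described above might look like:
if (fd[0] != STDIN_FILENO) {       // skip if it is already fd 0
    dup2(fd[0], STDIN_FILENO);     // dup2 closes the old fd 0 for us
    close(fd[0]);                  // drop the extra reference to the pipe
}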
Then there's the issue of the parent process holding the pipe open. I'd say you should close both ends of the pipe in the parent after you've passed them on to the children, but you have that weird read from fd[0] after the last wait... I'm not sure why that's there. If the ls|sort pipeline has run correctly, the pipe will be empty afterward, so there will be nothing to read. In any case, you definitely need to close fd[1] in the parent, otherwise the sort process won't finish because the pipe won't indicate EOF until all writers are closed.
After the weird read is a printf that will probably crash, since the read buffer won't be '\0'-terminated.
And the point of using execlp is that it does the $PATH lookup for you so you don't have to specify /bin/. My first test run failed because my sort is in /usr/bin/. Why hardcode paths when you don't have to?
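Putting all of that together, a minimal sketch (sort inherits the terminal's stdout, so the parent doesn't need to read or print anything):
#include <stdio.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    int fd[2];
    if (pipe(fd) == -1) {
        fprintf(stderr, "Pipe failed\n");
        return 1;
    }
    pid_t ls_pid = fork();
    if (ls_pid == 0) {                    // child 1: ls writes into the pipe
        dup2(fd[1], STDOUT_FILENO);
        close(fd[0]);
        close(fd[1]);
        execlp("ls", "ls", "-al", (char *)NULL);  // $PATH lookup, no hardcoded path
        _exit(127);
    }
    pid_t sort_pid = fork();
    if (sort_pid == 0) {                  // child 2: sort reads from the pipe
        dup2(fd[0], STDIN_FILENO);
        close(fd[0]);
        close(fd[1]);
        execlp("sort", "sort", "-r", "-n", "-k", "5", (char *)NULL);
        _exit(127);
    }
    close(fd[0]);                         // parent holds no ends, so sort
    close(fd[1]);                         // sees EOF as soon as ls exits
    waitpid(ls_pid, NULL, 0);
    waitpid(sort_pid, NULL, 0);
    return 0;
}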
I need a little help with my plumbing.
I'm trying to create a program that begins with a single process, spawns child processes based on a user-defined number, and then flow back into another (single) child process. The data flow looks something like this:
/-->-█->-\
█-->--█->--█
\-->-█->-/
I've got the first part of the process creation finished. The fork works well - I run it through a loop limited to the number specified by the user. It's the piping that's throwing me off.
For simplicity, I'm focusing on the first part (from parent to multiple children). So I create each pipe before forking - that much is given. Then, in the child, I close the pipe's write end, close stdin, dup the pipe's read end so it lands on fd 0, then close the original read descriptor. That should take care of the plumbing on the child side, right?
So then back in the parent process, I need to plumb for the pipes going to the children. For whatever reason, this is the harder part for me. Would someone mind walking me through it a little?
Here's what I've got for this part of the code:
for (i = 0; i < numberOfChildren; ++i) {
(void) pipe(workFDs[i]); /* Creates a pipe before the fork */
if ((workPIDs[i] = fork()) < 0) {
perror("Error: failure when forking workPID #: \n");
exit(-1);
}
if (workPIDs[i] == 0) {
/* ************************* WORKER PROCESS *********************** */
close(workFDs[i][1]); /* Closes the write-end for worker proc */
close(0); /* Closes stdin */
dup(workFDs[i][0]); /* Redirects workFDs 0 to stdin */
close(workFDs[i][0]);
//Fgets to get from the pipe
//Exec sort stuff here
} else {
/* *********************** PARENT PROCESS *********************** */
assert(inputPID == getpid()); /* Just to be sure */
close(workFDs[i][0]);
//Fputs to put into the pipe
}
}
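For the parent side after the fork loop, a sketch of the follow-through, with a hypothetical payload per worker: write into each pipe, close the write end so the worker sees EOF, then reap the children:
for (i = 0; i < numberOfChildren; ++i) {
    dprintf(workFDs[i][1], "work item %d\n", i);  // hypothetical payload
    close(workFDs[i][1]);                         // worker's stdin now sees EOF
}
while (wait(NULL) > 0)                            // reap all the workers
    ;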