How to ignore empty pipes in CHILD process? - c

I am using a subroutine named read_from_pipe in my child process as below to read whatever is in the pipe and display it:
void read_from_pipe(int fileDescriptr)
{
    FILE *stream;
    int c;

    if (fileDesc_Is_Valid(fileDescriptr) == TRUE)
    {
        stream = fdopen(fileDescriptr, "r");
        while ((c = fgetc(stream)) != EOF)
            putchar(c);
        fclose(stream);
    }
    else
        perror("Reading from pipe failed -->");
}
fileDesc_Is_Valid is another subroutine which checks the existence of the file descriptor.
The problem is that because I have used waitpid(pid, &status, 0); in my parent to wait for the child to finish its tasks, the program gets stuck at the while loop on the very first run when the pipe is actually empty. How can I AND another condition into my while so that empty pipes are simply ignored?

It's actually very simple: ignore the SIGPIPE signal, which takes a single function call:
signal(SIGPIPE, SIG_IGN);
With SIGPIPE ignored, writing to a pipe with no reader fails with an EPIPE error instead of killing the process. On the read side, the rule is that once every write end of the pipe has been closed, reading from the empty pipe returns an end-of-file indication (the underlying read system call returns 0) instead of blocking; so make sure the parent closes its copy of the write end before it starts waiting.
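For illustration, here is a minimal sketch of the pattern (a hypothetical parent/child pair, not the asker's full program). The crucial detail is that the parent closes its copy of the write end before calling waitpid(), so the child's read loop sees EOF on an empty pipe instead of blocking:
#include <signal.h>
#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    int fd[2];

    signal(SIGPIPE, SIG_IGN);  /* a write to a reader-less pipe now fails with EPIPE */
    if (pipe(fd) == -1) {
        perror("pipe");
        return 1;
    }

    pid_t pid = fork();
    if (pid == -1) {
        perror("fork");
        return 1;
    }
    if (pid == 0) {            /* child: read until EOF */
        close(fd[1]);          /* child does not write */
        char buf[128];
        ssize_t n;
        while ((n = read(fd[0], buf, sizeof buf)) > 0)
            fwrite(buf, 1, (size_t) n, stdout);
        close(fd[0]);
        return 0;
    }

    close(fd[0]);              /* parent does not read */
    /* write(fd[1], "hello\n", 6);  ...optionally send some data... */
    close(fd[1]);              /* crucial: the child now sees EOF */
    waitpid(pid, NULL, 0);     /* safe: the child can terminate */
    return 0;
}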

Related

C language pipe between 2 child processes blocks indefinitely

I am trying to make a program that takes a command, including pipes, and then executes it. This is a simplified version of it where I'm trying to pipe the ls and wc commands:
#include <stdio.h>
#include <string.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <fcntl.h>
int main(){
    char* arglist1[] = {"ls", NULL}; // writing process
    char* arglist2[] = {"wc", NULL}; // reading process
    int pipefd[2];
    pid_t p1, p2;

    if (pipe(pipefd) < 0) {
        printf("\nPipe could not be initialized");
        return 0;
    }
    p1 = fork();
    if (p1 < 0) {
        printf("\nCould not fork");
        return 0;
    }
    if (p1 == 0) { // Child 1 executing; it needs to write at the write end
        close(pipefd[0]);
        dup2(pipefd[1], STDOUT_FILENO);
        close(pipefd[1]);
        if (execvp(arglist1[0], arglist1) < 0) {
            printf("\nCould not execute command 1..");
            exit(0);
        }
    } else { // Parent executing
        p2 = fork();
        if (p2 < 0) {
            printf("\nCould not fork");
            return 0;
        }
        if (p2 == 0) { // Child 2 executing; it needs to read at the read end
            close(pipefd[1]);
            dup2(pipefd[0], STDIN_FILENO);
            close(pipefd[0]);
            if (execvp(arglist2[0], arglist2) < 0) {
                printf("\nCould not execute command 2..");
                exit(0);
            }
        } else { // parent executing, waiting for two children
            wait(NULL);
            wait(NULL);
        }
    }
    printf("\n");
    return 0;
}
Although there is error handling in the program, it neither shows anything nor ends. Where is it blocking?
Your problem is that the parent doesn't close both of the pipe's file descriptors. The wc process won't die until it gets EOF on the pipe, and that won't happen until every process that has the write end of the pipe open has closed it. You need to close both ends of the pipe in the parent before waiting for the children to die.
Rule of thumb: If you dup2() one end of a pipe to standard input or standard output, close both of the original file descriptors returned by pipe() as soon as possible. In particular, you should close them before using any of the exec*() family of functions.
The rule also applies if you duplicate the descriptors with either dup() or fcntl() with F_DUPFD or F_DUPFD_CLOEXEC.
If the parent process will not communicate with any of its children via the pipe, it must ensure that it closes both ends of the pipe early enough (before waiting, for example) so that its children can receive EOF indications on read (or get SIGPIPE signals or write errors on write), rather than blocking indefinitely.
Even if the parent uses the pipe without using dup2(), it should normally close at least one end of the pipe; it is extremely rare for a program to read and write on both ends of a single pipe.
Note that the O_CLOEXEC option to open(), and the FD_CLOEXEC and F_DUPFD_CLOEXEC options to fcntl(), can also factor into this discussion.
If you use posix_spawn() and its extensive family of support functions (21 functions in total), you will need to review how to close file descriptors in the spawned process (posix_spawn_file_actions_addclose(), etc.).
Note that using dup2(a, b) is safer than using close(b); dup(a); for a variety of reasons. One is that if you want to force the file descriptor to a larger than usual number, dup2() is the only sensible way to do that. Another is that if a is the same as b (e.g. both 0), then dup2() handles it correctly (it doesn't close b before duplicating a) whereas the separate close() and dup() fails horribly. This is an unlikely, but not impossible, circumstance.
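A two-line illustration of that corner case (a fragment, not a complete program):
dup2(0, 0);   /* fine: returns 0 and fd 0 stays open */

close(0);     /* versus the two-step version ... */
dup(0);       /* ... which fails with EBADF because fd 0 was just closed */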
Side notes:
Error messages should be written to stderr, not stdout, and should end with a newline. They don't normally need to start with a newline.
You don't need to test the return value from the exec*() family of functions. If they succeed, they don't return; if they return, they failed. But it is important to have code after the exec*() call to trap the error.
The program should exit with a non-zero status (e.g. EXIT_FAILURE) if the exec*() function fails. Exiting with status zero reports success.
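Putting the rule of thumb and the side notes together, a sketch of a corrected version might look like this (the structural fix is the parent closing both pipe ends before waiting; the error handling shown follows the side notes and is one reasonable choice, not the only one):
#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void){
    char* arglist1[] = {"ls", NULL}; // writing process
    char* arglist2[] = {"wc", NULL}; // reading process
    int pipefd[2];
    pid_t p1, p2;

    if (pipe(pipefd) < 0) {
        perror("pipe");
        return EXIT_FAILURE;
    }
    p1 = fork();
    if (p1 < 0) {
        perror("fork");
        return EXIT_FAILURE;
    }
    if (p1 == 0) { // child 1: ls writes into the pipe
        dup2(pipefd[1], STDOUT_FILENO);
        close(pipefd[0]);
        close(pipefd[1]);
        execvp(arglist1[0], arglist1);
        perror("execvp ls");  // only reached if execvp failed
        exit(EXIT_FAILURE);
    }
    p2 = fork();
    if (p2 < 0) {
        perror("fork");
        return EXIT_FAILURE;
    }
    if (p2 == 0) { // child 2: wc reads from the pipe
        dup2(pipefd[0], STDIN_FILENO);
        close(pipefd[0]);
        close(pipefd[1]);
        execvp(arglist2[0], arglist2);
        perror("execvp wc");
        exit(EXIT_FAILURE);
    }
    close(pipefd[0]); // parent: close both ends ...
    close(pipefd[1]); // ... so that wc can see EOF
    wait(NULL);
    wait(NULL);
    return 0;
}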

Child not reading output from another child that put it in the pipe

I've been working on this school assignment forever now, and I'm super close to finishing.
The assignment is to create a bash shell in C, which sounds basic enough, but it has to support piping, IO redirect, and flags within the piped commands. I have it all working except for one thing; the | piping child isn't getting any of the data written to the pipe by the user command process child. If I were to remove the child fork for pipechild, and have everything from if(pipe_cmd[0] != '\0') run as the parent, it would work just fine (minus ending the program because of execlp). If I were to use printf() inside the pipe section, the output would be in the right file or terminal, which just leaves the input from the user command process child not getting to where it needs to be as a culprit.
Does anyone see an issue on how I'm using the pipe? It all felt 100% normal to me, given the definition of a pipe.
int a[2];
pipe(a);
//assume file_name is something like file.txt
strcat(file_name, "file.txt");
strcat(pipe_cmd, "wc");
if (!fork())
{
    if (pipe_cmd[0] != '\0') // if there's a pipe
    {
        close(1);    // close normal stdout
        dup(a[1]);   // making stdout same as a[1]
        close(a[0]); // closing other end of pipe
        execlp("ls", "ls", NULL);
    }
    else if (file_name[0] != '\0') // if just a bare command with a file redirect
    {
        int rootcmd_file = open(file_name, O_APPEND|O_WRONLY|O_CREAT, 0644);
        dup2(rootcmd_file, STDOUT_FILENO);
        execlp("ls", "ls", NULL); // writes ls to the filename
    }
    // if no pipe or file name write...
    else if (rootcmd_flags[0] != '\0') execlp("ls", "ls", NULL);
    else execlp("ls", "ls", NULL);
} else wait(0);
if (pipe_cmd[0] != '\0') // parent goes here, if pipe.
{
    pipechild = fork();
    if (pipechild != 0) // *PROBLEM ARISES HERE- IF THIS IS FORKED, IT WILL HAVE NO INFO TAKEN IN.
    {
        close(0);    // closing normal stdin
        dup(a[0]);   // making our input come from the child above
        close(a[1]); // close other end of pipe
        if (file_name[0] != '\0') // if a filename does exist, we must reroute the output to the pipe
        {
            close(1); // close normal stdout
            int fileredir_pipe = open(file_name, O_APPEND|O_WRONLY|O_CREAT, 0644);
            dup2(fileredir_pipe, STDOUT_FILENO); // redirects STDOUT to file
            execlp("wc", "wc", NULL); // this outputs nothing
        }
        else
        {
            // else there is no file.
            // executing the pipe in stdout using execlp.
            execlp("wc", "wc", NULL); // this outputs nothing
        }
    }
    else wait(0);
}
Thanks in advance. I apologize for some of the code being withheld. This is still an active assignment and I don't want any cases of academic dishonesty. This post was risky enough.
} else wait(0);
The shown code forks the first child process and then waits for it to terminate, at this point.
The first child process gets set up with a pipe on its standard output. The pipe will be connected to the second child process's standard input. The fatal flaw in this scheme is that the second child process isn't even started yet, and won't get started until the first process terminates.
Pipes have limited internal buffering. If the first process generates very little output chances are that its output will fit inside the tiny pipe buffer, it'll write its output and then quietly terminate, none the wiser.
But if the pipe buffer becomes full, the process will block and wait until something reads from the pipe and drains it. It will wait as long as it takes for that to happen. And since the second child process hasn't been started yet, and the parent process is waiting for the first process to terminate, it will wait, in vain, forever.
This overall logic is fatally flawed for this reason. The correct logic is to completely fork and execute all child processes, close the pipe descriptors in the parent (this is also important), and then wait for all child processes to terminate. wait must be the very last thing that happens here, otherwise things will break in various amazing and mysterious ways.
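In code, the corrected ordering might look like this (a fragment reusing the question's int a[2] pipe; the redirect-to-file branches are omitted). Both children are forked first, the parent closes both pipe ends, and only then does it wait:
pid_t writer = fork();
if (writer == 0) {             /* first child: ls writes into the pipe */
    dup2(a[1], STDOUT_FILENO);
    close(a[0]);
    close(a[1]);
    execlp("ls", "ls", (char *) NULL);
    _exit(127);                /* only reached if execlp failed */
}

pid_t reader = fork();         /* second child is forked BEFORE any wait */
if (reader == 0) {             /* second child: wc reads from the pipe */
    dup2(a[0], STDIN_FILENO);
    close(a[0]);
    close(a[1]);
    execlp("wc", "wc", (char *) NULL);
    _exit(127);
}

close(a[0]);                   /* parent closes both ends ... */
close(a[1]);                   /* ... otherwise wc never sees EOF */
waitpid(writer, NULL, 0);      /* waiting is the very last step */
waitpid(reader, NULL, 0);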

c execute external program multiple times

I'm trying to call an external program from my code with some arguments. As I'm trying to see how different parameters change its output, I have to run it multiple times (about 1000 times). Every time the external program runs I'm only interested in one line of its output, although it prints a lot of (for my purposes) useless stuff. The line I'm interested in is right above the special identifier ("some_signal") in the output. So I thought I'd wait until this line appears and read the line above it.
I tried the following:
pid_t pid = 0;
int pipefd[2];
FILE* output;
char line[256];      // pipe read buffer
char prev_line[256]; // prev. line pipe buffer
char signal[] = "some_signal\n";
int status = 0;
double obj_Val;

pipe(pipefd); // create pipe
pid = fork(); // spawn child process
if (pid == 0)
{
    // redirect child's output to pipe
    close(pipefd[0]);
    dup2(pipefd[1], STDOUT_FILENO);
    dup2(pipefd[1], STDERR_FILENO);
    execl("/some/path",
          "some/path",
          "some_argument", (char*) NULL);
}
else if (pid < (pid_t) 0)
{
    printf("fork failed \n");
    return EXIT_FAILURE;
}
else
{
    // get child output to pipe
    close(pipefd[1]);
    output = fdopen(pipefd[0], "r");
    while (fgets(line, sizeof(line), output), signal != NULL)
    {
        if (strcmp(line, signal) == 0)
        {
            obj_Val = atof(prev_line);
            kill(pid, SIGTERM);
            waitpid(pid, &status, WNOHANG);
            kill(pid, SIGKILL);
            waitpid(pid, &status, 0);
            break;
        } // end if
        strcpy(prev_line, line);
    } // end while
}
This works fine for 100 runs or so, and then one of two errors occurs. The first is a segmentation fault. The second is the calling program printing all the output of the called program (without the line containing the wanted signal) and going into an infinite loop (my guess is that, since the signal line is missing, the while loop never terminates).
Maybe someone can provide a hint or a link on where to look and what to look for, or preferably tell me what I'm doing wrong.
Your while condition is broken: signal != NULL is always true, as signal is an array; and because of the comma operator the entire condition is always true.
As a consequence, you never check the return value of fgets, so when nothing was read the buffer is left uninitialized (or holds stale data from the previous iteration).
Also, prev_line is not initialized before you call atof on it for the first time.
In any case, compile with -Wall -Werror and fix the remaining errors.
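A sketch of a fixed read loop (a fragment: output, obj_Val and the kill/waitpid logic are as in the question; the buffer is renamed signal_line here to avoid shadowing the standard signal() function):
char line[256];
char prev_line[256] = "";                   /* initialized before atof() can ever see it */
const char signal_line[] = "some_signal\n";

while (fgets(line, sizeof line, output) != NULL) {  /* test the return value of fgets itself */
    if (strcmp(line, signal_line) == 0) {
        obj_Val = atof(prev_line);
        break;                              /* then kill/waitpid as in the original code */
    }
    strcpy(prev_line, line);
}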

Why does closing a pipe take so long to terminate a child process?

I'm having trouble with my program waiting for a child process (gzip) to finish and taking a very long time in doing so.
Before it starts waiting it closes the input stream to gzip so this should trigger it to terminate pretty quickly. I've checked the system and gzip isn't consuming any CPU or waiting on IO (to write to disk).
The very odd thing is the timing on when it stops waiting...
The program is using pthreads internally: it runs 4 threads side by side. Each thread processes many units of work, and for each unit of work it kicks off a new gzip process (using fork() and execve()) to write the result. Threads hang when gzip doesn't terminate, but it suddenly does terminate when other threads close their instances.
For clarity, I'm setting up a pipeline that goes: my program(pthread) --> gzip --> file.gz
I guess this could be explained in part by CPU load. But when processes are kicked off minutes apart and the whole system ends up using only 1 core of 4 because of this locking issue, that seems unlikely.
The code to kick off gzip is below. execPipeProcess is called such that the child writes directly to a file but reads from my program. That is:
execPipeProcess(&process, "gzip", -1, gzFileFd)
Any suggestions?
typedef struct {
    int processID;
    const char * command;
    int stdin;
    int stdout;
} ChildProcess;

void closeAndWait(ChildProcess * process) {
    if (process->stdin >= 0) {
        stdLog("Closing post process stdin");
        if (close(process->stdin)) {
            exitError(-1, errno, "Failed to close stdin for %s", process->command);
        }
    }
    if (process->stdout >= 0) {
        stdLog("Closing post process stdout");
        if (close(process->stdout)) {
            exitError(-1, errno, "Failed to close stdout for %s", process->command);
        }
    }

    int status;
    stdLog("waiting on post process %d", process->processID);
    if (waitpid(process->processID, &status, 0) == -1) {
        exitError(-1, errno, "Could not wait for %s", process->command);
    }
    stdLog("post process finished");
    if (!WIFEXITED(status)) exitError(-1, 0, "Command did not exit properly %s", process->command);
    if (WEXITSTATUS(status)) exitError(-1, 0, "Command %s returned %d not 0", process->command, WEXITSTATUS(status));
    process->processID = 0;
}

void execPipeProcess(ChildProcess * process, const char* szCommand, int in, int out) {
    // Expand any args
    wordexp_t words;
    if (wordexp(szCommand, &words, 0)) exitError(-1, 0, "Could not expand command %s\n", szCommand);

    // Runs the command
    char nChar;
    int nResult;
    if (in < 0) {
        int aStdinPipe[2];
        if (pipe(aStdinPipe) < 0) {
            exitError(-1, errno, "allocating pipe for child input redirect failed");
        }
        process->stdin = aStdinPipe[PIPE_WRITE];
        in = aStdinPipe[PIPE_READ];
    }
    else {
        process->stdin = -1;
    }
    if (out < 0) {
        int aStdoutPipe[2];
        if (pipe(aStdoutPipe) < 0) {
            exitError(-1, errno, "allocating pipe for child output redirect failed");
        }
        process->stdout = aStdoutPipe[PIPE_READ];
        out = aStdoutPipe[PIPE_WRITE];
    }
    else {
        process->stdout = -1;
    }

    process->processID = fork();
    if (0 == process->processID) {
        // child continues here

        // these are for use by parent only
        if (process->stdin >= 0) close(process->stdin);
        if (process->stdout >= 0) close(process->stdout);

        // redirect stdin
        if (STDIN_FILENO != in) {
            if (dup2(in, STDIN_FILENO) == -1) {
                exitError(-1, errno, "redirecting stdin failed");
            }
            close(in);
        }

        // redirect stdout
        if (STDOUT_FILENO != out) {
            if (dup2(out, STDOUT_FILENO) == -1) {
                exitError(-1, errno, "redirecting stdout failed");
            }
            close(out);
        }
        // we're done with these; they've been duplicated to STDIN and STDOUT

        // run child process image
        // replace this with any exec* function you find easier to use ("man exec")
        nResult = execvp(words.we_wordv[0], words.we_wordv);

        // if we get here at all, an error occurred, but we are in the child
        // process, so just exit
        exitError(-1, errno, "could not run %s", szCommand);
    } else if (process->processID > 0) {
        wordfree(&words);

        // parent continues here
        // close unused file descriptors, these are for child only
        close(in);
        close(out);
        process->command = szCommand;
    } else {
        exitError(-1, errno, "Failed to fork");
    }
}
Child process inherits open file descriptors.
Every subsequent gzip child process inherits not only pipe file descriptors intended for communication with that particular instance but also file descriptors for pipes connected to previous child process instances.
It means that the stdin pipe is still open when the main process performs its close, since other file descriptors referring to the same pipe still exist in a few child processes. Only once those terminate is the pipe finally closed.
A quick fix is to prevent child processes from inheriting pipe file descriptors intended for the master process by setting the close-on-exec flag.
Since there are multiple threads involved, spawning of child processes should be serialized to prevent a child process from inheriting pipe fds intended for another child process.
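A sketch of that fix using fcntl() (portable across POSIX systems; set_cloexec is a hypothetical helper, and the field names follow the question's code):
/* Mark a descriptor close-on-exec so children spawned later via
   fork() + exec*() do not inherit it. */
static int set_cloexec(int fd)
{
    int flags = fcntl(fd, F_GETFD);
    if (flags == -1)
        return -1;
    return fcntl(fd, F_SETFD, flags | FD_CLOEXEC);
}

/* In execPipeProcess(), right after each pipe() call:
       process->stdin = aStdinPipe[PIPE_WRITE];
       set_cloexec(process->stdin);
       process->stdout = aStdoutPipe[PIPE_READ];
       set_cloexec(process->stdout);                 */
Note that there is still a window between pipe() and fcntl() in which another thread's fork() can inherit the descriptors, which is why spawning should be serialized as well.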
You have not given us enough information to be sure, as the answer depends on how you use the functions presented. However, your closeAndWait() function looks a bit suspicious. It may be reasonable to suppose that the child process in question will exit when it reaches the end of its stdin, but what is supposed to happen to data it has written, or even may still write, to its stdout? It is possible that your child processes hang because their standard output is blocked, and it is slow for them to recognize it.
I think this reflects a design problem. If you are capturing the child processes' output, as you seem at least to support doing, then after you close the parent's end of a child's input stream you'll want the parent to continue reading the child's output to its end, performing whatever processing it intends to do on it. Otherwise you may lose some of it (which for a child performing gzip would mean corrupted data). You cannot do that if you make closing both streams part of the process of terminating the child.
Instead, you should close the parent's end of the child's stdin first, continue processing its output until you reach its end, and only then try to collect the child. You can make closing the parent's end of the child's output stream part of the process of collecting that child if you like. Alternatively, if you really do want to discard any remaining output from the child, then you should drain its output stream between closing the input and closing the output.
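In code, the order described above would look roughly like this (a fragment using the names from closeAndWait() in the question):
close(process->stdin);                    /* child sees EOF on its stdin */

char buf[4096];
ssize_t n;
while ((n = read(process->stdout, buf, sizeof buf)) > 0) {
    /* consume (or deliberately discard) the child's remaining output */
}

close(process->stdout);                   /* close the read end only now */
waitpid(process->processID, &status, 0);  /* collect the child last */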

communicate with an execv()'ed program via pipe doesn't work

I am trying to write a socket server which loads programs and redirects socket I/O to them. That sounds much like inetd, but as far as I know, inetd loads the program when its port is requested; I want to have it loaded permanently.
So far so good. Writing a socket server is not that tricky, but I didn't get the rest working.
I basically want to open a pipe(), dup2() it to stdin and stdout, and execv() my program.
The problem is that my called program doesn't get any input. I'll try to show it with a test program. Can someone tell me what's wrong?
int create_program_fork(int *ios, char const *program) {
    // create pipes to program
    if (pipe(ios) != 0) {
        return -1;
    }
    // fork to new process
    int f = fork();
    if (f < 0) {
        // fork didn't work
        close(ios[0]);
        close(ios[1]);
        return(-1);
    }
    if (f > 0) {
        // master hasn't much to do here
        return f;
    }
    // *** Child Process
    // close std** file descriptors
    printf("executing program");
    close(STDIN_FILENO);
    close(STDOUT_FILENO);
    // duplicate pipes as std**
    dup2(ios[0], STDIN_FILENO);
    dup2(ios[1], STDOUT_FILENO);
    // close pipes
    close(ios[0]);
    close(ios[1]);
    // call program
    return execvp(program, NULL);
}

int main(int argc, char *argv[]) {
    int ios[2];
    // call program
    int pid = create_program_fork(ios, "/bin/bash");
    if (0 != pid) {
        exit(EXIT_FAILURE);
    }
    char const exit_order[] = "exit\0";
    char const order[] = ">/tmp/test.txt\0";
    // do something
    write(ios[1], order, strlen(order));
    // bash should stop then..
    write(ios[1], exit_order, strlen(exit_order));
    return 0;
}
I see two possible sources of trouble:
1) The write part of the pipe is redirected to the child's stdout, so the new process's output is sent back into its own input. I suggest duping only the pipe's read end on the child side. If you want to intercept the child's output, you need another channel (i.e. a new pipe, or simply let both parent and child share the same stdout).
2) The strings you send look like line-oriented commands. It's possible that the child process expects a newline at the end of each string; this is a very common source of problems. I suggest checking the way the child reads its input. A "\n" at the end of the strings should help. (By the way, it's not necessary to explicitly add a "\0" at the end of C string literals, since the compiler does that for you. In any case, strlen won't count the "\0".)
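A sketch applying both suggestions (hypothetical commands; assumes <string.h>, <sys/wait.h> and <unistd.h> are included): only the pipe's read end is duplicated onto the child's stdin, and every command ends with a newline:
int to_child[2];
if (pipe(to_child) != 0)
    return -1;

pid_t f = fork();
if (f == 0) {                               /* child */
    dup2(to_child[0], STDIN_FILENO);        /* read end only becomes stdin */
    close(to_child[0]);
    close(to_child[1]);                     /* child's stdout is left untouched */
    execlp("/bin/bash", "bash", (char *) NULL);
    _exit(127);                             /* only reached if execlp failed */
}

close(to_child[0]);                         /* parent keeps only the write end */
const char *order = "echo hi >/tmp/test.txt\n";   /* note the trailing \n */
write(to_child[1], order, strlen(order));
write(to_child[1], "exit\n", 5);
close(to_child[1]);                         /* bash then also sees EOF */
waitpid(f, NULL, 0);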
