I'm using fork(). However, before executing fork(), I open a file (say a.txt) using freopen for writing. Now the child process redirects the output of execlp to a.txt. After terminating the child process, the parent process closes a.txt. Now how can the parent process read a.txt and show some information in stdout?
If the parent process opened the file with freopen(3), then the rewind(3) library call can be used to re-wind the stream's pointer to the start of the file, for use with fread(3) or fgets(3) or whatever API you'd like to use.
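A minimal sketch of that approach, assuming the parent called freopen() with a read/write mode ("w+") so the same stream can be read back; the file name a.txt comes from the question, echo is just a stand-in command, and error handling is omitted:

```c
#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>

/* Sketch: parent redirects stdout into a.txt with freopen, the exec'd
 * child inherits it, then the parent rewinds the stream and reads the
 * captured output back. The file name and echo are illustrative only. */
int capture_and_count(void)
{
    int saved = dup(1);                       /* remember the real stdout */
    FILE *f = freopen("a.txt", "w+", stdout); /* "w+" so we can read it back */
    if (fork() == 0) {
        execlp("echo", "echo", "hello", (char *)NULL);
        _exit(1);                             /* only reached if exec fails */
    }
    wait(NULL);
    rewind(f);                                /* back to the start of a.txt */
    char line[128];
    int n = 0;
    while (fgets(line, sizeof line, f))
        n++;                                  /* count the captured lines */
    dup2(saved, 1);                           /* restore the original stdout */
    close(saved);
    return n;
}
```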
freopen does not belong in this code at all. Instead, you should do something like:
FILE *tmp = tmpfile();
pid_t pid;
int status;
if (!(pid = fork())) {
    dup2(fileno(tmp), 1);   /* child's stdout goes to the temp file */
    close(fileno(tmp));
    execlp(...);            /* placeholder: the command to run */
    _exit(1);               /* only reached if exec fails */
}
wait(&status);
rewind(tmp);                /* back to the start before reading */
/* read from tmp */
However it would actually be a lot better to use a pipe if possible.
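For illustration, here is a sketch of the pipe version; run_and_capture is a made-up helper name, and error handling is abbreviated:

```c
#include <stdio.h>
#include <string.h>
#include <sys/wait.h>
#include <unistd.h>

/* Run a command and collect its stdout through a pipe instead of a
 * temporary file. Returns the number of bytes captured, or -1. */
ssize_t run_and_capture(char *const argv[], char *buf, size_t len)
{
    int fd[2];
    if (pipe(fd) == -1)
        return -1;
    pid_t pid = fork();
    if (pid == 0) {
        close(fd[0]);               /* child keeps only the write end */
        dup2(fd[1], 1);             /* stdout -> pipe */
        close(fd[1]);
        execvp(argv[0], argv);
        _exit(1);                   /* reached only if exec fails */
    }
    close(fd[1]);                   /* parent keeps only the read end */
    ssize_t total = 0, n;
    while ((n = read(fd[0], buf + total, len - 1 - total)) > 0)
        total += n;
    buf[total] = '\0';
    close(fd[0]);
    waitpid(pid, NULL, 0);          /* reap the child */
    return total;
}
```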
How can I close my child file descriptors when killing the parent process?
I've created a program that does the following:
Fork 2 child processes.
Process 1 is a reader. It reads from STDIN_FILENO and writes to STDOUT_FILENO with scanf/printf. But I use dup2 to redirect to a named pipe (let's call it npipe).
Process 2 is a writer. I redirect STDIN_FILENO to npipe so that I can use scanf from that pipe. I redirect STDOUT_FILENO to another file called "output_file" so that I can use printf to write.
Reading / writing is done in a while(1) loop:
char word[100];
while (1) {
    scanf("%s", word);
    printf("%s", word);
}
If I press CTRL+C (which sends SIGINT to the foreground process group, including the parent), any data that was written to "output_file" is lost. Nothing is in there. I think it's because I kill the processes while the files are open?
I used a word that stops both child processes when read; in this case, I do have everything in my "output_file".
How can I close the files when using SIGTERM? / Can I somehow force my process to write into "output_file" so that no data is lost when suddenly closing?
I tried closing and reopening the file after each write, but I still lose all the written data. Is this because of the redirect?
void read_from_pipe(void)
{
    char command[100];
    int new_fd = open(pipe_name, O_RDONLY, 0644);
    int output_file = open("new_file", O_CREAT|O_WRONLY|O_APPEND, 0644);
    dup2(new_fd, 0);
    dup2(output_file, 1);
    while (1)
    {
        scanf("%s", command);
        if (strcmp(command, "stop") == 0)
        {
            close(output_file);
            close(new_fd);
            exit(0);
        }
        else
            printf("%s ", command);
        close(output_file);
        output_file = open("new_file", O_CREAT|O_WRONLY|O_APPEND, 0644);
    }
}
Managed to attach correct signal handlers, yet data is still not written even though I close my files! Why is that?
Your problem is (likely) not to do with closing the file descriptors; it's with flushing the FILE pointers. Note that FILE pointers and file descriptors are two different things.
You call printf, which does NOT write to STDOUT_FILENO (a file descriptor), it writes to stdout (a FILE pointer), which generally just buffers the data until it has a reason to write it to the underlying file descriptor (STDOUT_FILENO).
So you need to ensure that the data you write to stdout is actually written to STDOUT_FILENO. The easiest way to do that is to add an fflush(stdout); after the printf call to flush it manually, giving you fine-grained control over when data is written. Alternatively, you can call setbuf(stdout, NULL); to make stdout unbuffered, so every write is flushed immediately.
I am writing a C program in Linux and use fork to create a child process. When I run my program using ./test 1 > out.txt both the parent process and the child processes send information to stdout.
I would like the file out.txt to only contain output from the parent process and discard all output from the child process.
How can I do that?
I would redirect the parent's stdout to the file, then when you fork, reopen the stdout handle of the child to go to something else (like /dev/null if you just want to ditch it, or you could open the terminal again if you want it to go back to stdout).
The dup2 system call can do that: open the new file, then dup2 the new file descriptor onto the old number (1 for stdout). dup2 closes whatever was at the old number for you; afterwards you can close the leftover copy of the new descriptor.
This is the process the shell itself uses to redirect to a file btw.
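A sketch of that, assuming you just want to ditch the child's output in /dev/null (the function name is made up):

```c
#include <fcntl.h>
#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>

/* After fork, the child points its stdout at /dev/null, so only the
 * parent's output survives the shell's "> out.txt" redirection. */
void fork_quiet_child(void)
{
    pid_t pid = fork();
    if (pid == 0) {
        int devnull = open("/dev/null", O_WRONLY);
        dup2(devnull, STDOUT_FILENO);   /* replaces fd 1, closing the old one */
        close(devnull);
        printf("from the child\n");     /* discarded */
        fflush(stdout);
        _exit(0);
    }
    waitpid(pid, NULL, 0);
    printf("from the parent\n");        /* ends up in out.txt */
}
```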
I'm writing a C program where I fork(), read a file in the parent, and pass it to the child via a pipe; the child then redirects the data received from the pipe to the program I want to execv.
For example, to run /bin/less on doc.txt, I read doc.txt in the parent and pass it to the child, then execute less on the data received from the read end of the pipe.
Everything else is ok, except the execv() part.
I have read the man page for execv(), but it doesn't really help on doing this...
Any help?
Since a forked child shares the parent's file descriptors, you can simply redirect stdin to a file descriptor with dup2(), then just fork and exec away in your child process.
When the child process reads data from stdin, it'll read from the file descriptor you opened (which could be a descriptor to your doc.txt).
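A sketch of that idea; run_with_stdin is a made-up helper that redirects the child's stdin to a file before exec'ing, and returns the child's exit status:

```c
#include <fcntl.h>
#include <sys/wait.h>
#include <unistd.h>

/* Open `path`, make it the child's stdin, exec `prog` with `argv`,
 * and return the child's exit status (-1 on error). */
int run_with_stdin(const char *prog, char *const argv[], const char *path)
{
    int fd = open(path, O_RDONLY);
    if (fd == -1)
        return -1;
    pid_t pid = fork();
    if (pid == 0) {
        dup2(fd, STDIN_FILENO);   /* the file becomes the child's stdin */
        close(fd);
        execvp(prog, argv);
        _exit(127);               /* exec failed */
    }
    close(fd);
    int status;
    waitpid(pid, &status, 0);
    return WIFEXITED(status) ? WEXITSTATUS(status) : -1;
}
```

With the question's example this would be called with "/bin/less" and "doc.txt". Note that if you need to send data the parent already read into memory, you would dup2 a pipe's read end onto stdin instead of a file descriptor.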
I want to make a proxy process which launches the real one.
For example, I rename Linux's espeak to espeak_real and my app to espeak.
espeak then launches espeak_real and I get the output.
I want it to be able to:
Print its STDIN to the console
Print its STDIN to another process's STDIN
Print the second process's STDOUT
I'm trying to do it in C (I guess it's possible with raw bash too).
I don't exactly understand what you're doing, but it seems like a combination of fork, exec, pipe and dup2 should do it.
app can use pipe to get a pair of file descriptors, connected with a pipe (what's written into one is read from the other).
Then it can fork, and the child can exec app_real.
But between fork and exec, dup2 can be used to put any file descriptor you want at 0, 1, or 2 (dup2 itself closes whatever was previously there).
Short code example:
int pipe_fds[2];
pipe(pipe_fds);
if (fork() == 0) {
    // Child
    close(pipe_fds[1]);     // This side is for the parent only
    close(0);               // Close original stdin before dup2
    dup2(pipe_fds[0], 0);   // Now one side of the pipe is the child's stdin
    close(pipe_fds[0]);     // No need to have it open twice
    execlp(...);            // exec the real program (any exec* call works)
} else {
    // Parent
    close(pipe_fds[0]);     // This side is for the child only
    write(pipe_fds[1], data, len); // This data goes to the child
}
I want to execute a command using system() or execl and capture the output directly in a buffer in C. Is there any way to capture the output in a buffer using the dup() system call or pipe()? I don't want to use an intermediate file created with mkstemp or any other temporary file. Please help me with this. Thanks in advance.
I tried it with fork(), creating two processes and piping the output, and it works. However, I don't want to use the fork system call, since I am going to run the module indefinitely from a separate thread; it invokes a lot of fork() calls and the system sometimes runs out of resources after a while.
To be clear about what I am doing: I capture the output of a shell script in a buffer, process the output, and display it in a window I have designed using ncurses. Thank you.
Here is some code for capturing the output of a program; it uses exec() instead of system(), but that is straightforward to accommodate by invoking the shell directly:
How can I implement 'tee' programmatically in C?
void tee(const char* fname) {
    int pipe_fd[2];
    check(pipe(pipe_fd));
    const pid_t pid = fork();
    check(pid);
    if (!pid) { // our log child
        close(pipe_fd[1]); // Close unused write end
        FILE* logFile = fname ? fopen(fname, "a") : NULL;
        if (fname && !logFile)
            fprintf(stderr, "cannot open log file \"%s\": %d (%s)\n", fname, errno, strerror(errno));
        char ch;
        while (read(pipe_fd[0], &ch, 1) > 0) {
            //### any timestamp logic or whatever here
            putchar(ch);
            if (logFile)
                fputc(ch, logFile);
            if ('\n' == ch) {
                fflush(stdout);
                if (logFile)
                    fflush(logFile);
            }
        }
        putchar('\n');
        close(pipe_fd[0]);
        if (logFile)
            fclose(logFile);
        exit(EXIT_SUCCESS);
    } else {
        close(pipe_fd[0]); // Close unused read end
        // redirect stdout and stderr
        dup2(pipe_fd[1], STDOUT_FILENO);
        dup2(pipe_fd[1], STDERR_FILENO);
        close(pipe_fd[1]);
    }
}
A simple way is to use popen ( http://www.opengroup.org/onlinepubs/007908799/xsh/popen.html), which returns a FILE*.
You can try popen(), but your fundamental problem is running too many processes. You have to make sure your commands finish, otherwise you will end up with exactly the problems you're having. popen() internally calls fork() anyway (or the effect is as if it did).
So, in the end, you have to make sure that the program you want to run from your threads exits "soon enough".
You want to use a sequence like this:
Call pipe once per stream you want to create (eg. stdin, stdout, stderr)
Call fork
in the child
close the parent end of the handles
close any other handles you have open
set up stdin, stdout, stderr to be the appropriate child side of the pipe
exec your desired command
If that fails, die.
in the parent
close the child side of the handles
Read and write to the pipes as appropriate
When done, call waitpid() (or similar) to clean up the child process.
Beware of blocking and buffering. You don't want your parent process to block on a write while the child is blocked on a read; make sure you use non-blocking I/O or threads to deal with those issues.
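The sequence above might look like this for stdin and stdout only (run_filter is a made-up helper; it writes the whole input before reading, which is fine for small inputs but can deadlock on large ones, per the buffering warning above):

```c
#include <stdio.h>
#include <string.h>
#include <sys/wait.h>
#include <unistd.h>

/* Feed `input` to a child command's stdin and collect its stdout into
 * `out`. Returns the number of bytes read, or -1 on error. */
ssize_t run_filter(char *const argv[], const char *input,
                   char *out, size_t outlen)
{
    int in_pipe[2], out_pipe[2];
    if (pipe(in_pipe) == -1 || pipe(out_pipe) == -1)
        return -1;
    pid_t pid = fork();
    if (pid == 0) {
        close(in_pipe[1]);  close(out_pipe[0]);   /* parent's ends */
        dup2(in_pipe[0], STDIN_FILENO);           /* pipe -> child stdin */
        dup2(out_pipe[1], STDOUT_FILENO);         /* child stdout -> pipe */
        close(in_pipe[0]);  close(out_pipe[1]);
        execvp(argv[0], argv);
        _exit(127);                               /* exec failed: die */
    }
    close(in_pipe[0]);  close(out_pipe[1]);       /* child's ends */
    write(in_pipe[1], input, strlen(input));
    close(in_pipe[1]);                            /* EOF for the child */
    ssize_t total = 0, n;
    while ((n = read(out_pipe[0], out + total, outlen - 1 - total)) > 0)
        total += n;
    out[total] = '\0';
    close(out_pipe[0]);
    waitpid(pid, NULL, 0);                        /* reap the child */
    return total;
}
```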
If you have implemented a C program and you want to execute a script, you want to use fork(). Unless you are willing to consider embedding the script interpreter in your program, you have to use fork() (system() uses fork() internally).
If you are running out of resources, most likely you are not reaping your children. Until the parent process gets the exit code, the OS keeps the child around as a 'zombie' process. You need to issue a wait() call to get the OS to free up the final resources associated with the child.
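For example, a non-blocking reaping pass could be sketched like this (reap_children is a made-up name; calling it periodically, or from a SIGCHLD handler, keeps zombies from accumulating):

```c
#include <sys/wait.h>
#include <unistd.h>

/* Reap every child that has already exited, without blocking.
 * Returns how many children were cleaned up. */
int reap_children(void)
{
    int reaped = 0, status;
    while (waitpid(-1, &status, WNOHANG) > 0)
        reaped++;                 /* one zombie freed per iteration */
    return reaped;
}
```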