I am developing a simple shell program (a command line interpreter), and I want to read input from a file line by line, so I used the getline() function. At first the program works correctly; however, when it reaches the end of the file, instead of terminating it starts reading the file from the beginning again and runs forever.
Here is the code in my main function that relates to getline():
int main(int argc, char *argv[]) {
    int const IN_SIZE = 255;
    char *input = NULL;
    size_t len = IN_SIZE;

    // get file address
    fileAdr = argv[2];

    // open file
    srcFile = fopen(fileAdr, "r");
    if (srcFile == NULL) {
        printf("No such file!\n");
        exit(-1);
    }

    while (getline(&input, &len, srcFile) != -1) {
        strtok(input, "\n");
        printf("%s\n", input);
        // some code that parses input, firstArgs == input
        execSimpleCmd(firstArgs);
    }

    fclose(srcFile);
}
I am using fork() in my program, and most probably that is what causes this problem.
void execSimpleCmd(char **cmdAndArgs) {
    pid_t pid = fork();
    if (pid < 0) {
        // error
        fprintf(stderr, "Fork Failed");
        exit(-1);
    } else if (pid == 0) {
        // child process
        if (execvp(cmdAndArgs[0], cmdAndArgs) < 0) {
            printf("There is no such command!\n");
        }
        exit(0);
    } else {
        // parent process
        wait(NULL);
        return;
    }
}
In addition, the program sometimes reads and prints a combination of multiple lines. For example, given an input file like the one below:
ping
ww
ls
ls -l
pwd
it prints something like pwdg, pwdww, etc. How can I fix this?
It appears that closing a FILE can, in some cases, seek the underlying file descriptor back to the position up to which the application actually read, effectively undoing the effect of the read buffering. This matters here because the OS-level file descriptors of the parent and the child point to the same open file description, and in particular share the same file offset.
The POSIX description of fclose() has this phrase:
[CX] If the file is not already at EOF, and the file is one capable of seeking, the file offset of the underlying open file description shall be set to the file position of the stream if the stream is the active handle to the underlying file description.
(Where CX means an extension to the ISO C standard, and exit() of course runs fclose() on all streams.)
I can reproduce the odd behavior with this program (on Debian 9.8):
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/wait.h>
int main(int argc, char *argv[]) {
    FILE *f;
    if ((f = fopen("testfile", "r")) == NULL) {
        perror("fopen");
        exit(1);
    }
    int right = 0;
    if (argc > 1)
        right = 1;

    char *line = NULL;
    size_t len = 0;

    // first line
    getline(&line, &len, f);
    printf("%s", line);

    pid_t p = fork();
    if (p == -1) {
        perror("fork");
    } else if (p == 0) {
        if (right)
            _exit(0); // exit the child
        else
            exit(0);  // wrong way to exit
    } else {
        wait(NULL);   // parent
    }

    // rest of the lines
    while (getline(&line, &len, f) > 0) {
        printf("%s", line);
    }
    fclose(f);
}
Then:
$ printf 'a\nb\nc\n' > testfile
$ gcc -Wall -o getline getline.c
$ ./getline
a
b
c
b
c
Running it with strace -f ./getline clearly shows the child seeking the file descriptor back:
clone(child_stack=NULL, flags=CLONE_CHILD_CLEARTID|CLONE_CHILD_SETTID|SIGCHLD, child_tidptr=0x7f63794e0710) = 25117
strace: Process 25117 attached
[pid 25116] wait4(-1, <unfinished ...>
[pid 25117] lseek(3, -4, SEEK_CUR) = 2
[pid 25117] exit_group(1) = ?
(I didn't see the seek back with code that didn't involve forking, but I don't know why.)
So, what happens is that the C library in the main program reads a block of data from the file, and the application prints the first line. After the fork, the child exits and seeks the fd back to where the application-level file pointer is. Then the parent continues, processes the rest of the read buffer, and when that is exhausted, it continues reading from the file. Because the file descriptor was seeked back, the lines starting from the second one are available again.
In your case, the repeated fork() on every iteration seems to result in an infinite loop.
Using _exit() instead of exit() in the child fixes the problem in this case, since _exit() only exits the process; it doesn't do any housekeeping with the stdio buffers.
With _exit(), any output buffers are also not flushed, so you'll need to call fflush() manually on stdout and any other files you're writing to.
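Applied to the execSimpleCmd() from the question, the child branch would become something like this (a sketch, not a tested drop-in):
#include <unistd.h>   // for _exit()

} else if (pid == 0) {
    // child process: execvp() returns only if it failed
    execvp(cmdAndArgs[0], cmdAndArgs);
    printf("There is no such command!\n");
    fflush(stdout);  // flush manually, because _exit() won't do it for us
    _exit(1);        // no stdio cleanup, so the shared file offset is left alone
}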
However, if you did this the other way around, with the child reading and buffering more than it processes, then it would be useful for the child to seek the fd back so that the parent could continue from where the child actually left off.
Another solution would be not to mix stdio with fork().
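A related workaround, if you do keep stdio for input, is to make the stream unbuffered right after fopen(), so the stream position and the fd offset never diverge and the child's exit-time cleanup has nothing to seek back. A minimal sketch against the question's code (with the caveat that getline() will then read one byte per syscall):
srcFile = fopen(fileAdr, "r");
if (srcFile == NULL) {
    printf("No such file!\n");
    exit(-1);
}
setvbuf(srcFile, NULL, _IONBF, 0); // no read buffer: fd offset always matches the stream position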
Related
I am trying to write to a file and have another process display what I wrote. The code I came up with:
void readLine(int fd, char *str) {
    int n;
    do {
        n = read(fd, str, 1);
    } while (*str++ != '\0');
}

int main(int argc, char **argv) {
    int fd = open("sharedFile", O_CREAT|O_RDWR|O_TRUNC, 0600);
    if (fork() == 0) {
        char buf[1000];
        while (1) {
            readLine(fd, buf);
            printf("%s\n", buf);
        }
    } else {
        while (1) {
            sleep(1);
            write(fd, "abcd", strlen("abcd")+1);
        }
    }
}
The output I want (each line separated from the previous one by a period of one second):
abcd
abcd
abcd
....
Unfortunately this code doesn't work: it seems that the child process (the reader of the file "sharedFile") reads junk from the file, because somehow it reads values even when the file is empty.
When trying to debug the code, the readLine function never reads the written file correctly; it always reads 0 bytes.
Can someone help?
First of all, when a file descriptor becomes shared after forking, both the parent and child are pointing to the same open file description, which means in particular that they share the same file position. This is explained in the fork() man page.
So whenever the parent writes, the position is updated to the end of the file, and thus the child is always attempting to read at the end of the file, where there's no data. That's why read() returns 0, just as normal when you hit the end of a file.
(When this happens, you should not attempt to do anything with the data in the buffer. It's not that you're "reading junk", it's that you're not reading anything but are then pretending that whatever junk was in the buffer is what you just read. In particular your code utterly disregards the return value from read(), which is how you're supposed to tell what you actually read.)
If you want the child to have an independent file position, then the child needs to open() the file separately for itself and get a new fd pointing to a new file description.
But still, when the child has read all the data that's currently in the file, read() will again return 0; it won't wait around for the parent to write some more. The fact that some other process has the file open for writing doesn't affect the semantics of read() on a regular file.
So what you'll need to do instead is that when read() returns 0, you manually sleep for a while and then try again. When there's more data in the file, read() will return a positive number, and you can then process the data you read. Or, there are more elegant but more complicated approaches using system-specific APIs like Linux's inotify, which can sleep until a file's contents change. You may be familiar with tail -f, which uses some combination of these approaches on different systems.
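For completeness, here is a minimal Linux-only sketch of the inotify idea: a hypothetical helper that blocks until the watched file is modified, which could replace the sleep(1) polling. (Setting up and tearing down the watch on every call is wasteful; a real program would keep the inotify descriptor around.)
#include <sys/inotify.h>
#include <limits.h>
#include <unistd.h>
#include <stdio.h>
#include <stdlib.h>

// Hypothetical helper: block until the file at 'path' is modified.
static void wait_for_modification(const char *path) {
    int ifd = inotify_init();
    if (ifd < 0) { perror("inotify_init"); exit(2); }
    if (inotify_add_watch(ifd, path, IN_MODIFY) < 0) {
        perror("inotify_add_watch");
        exit(2);
    }
    char ev[sizeof(struct inotify_event) + NAME_MAX + 1];
    if (read(ifd, ev, sizeof(ev)) < 0)  // blocks until an IN_MODIFY event arrives
        perror("read inotify event");
    close(ifd);
}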
Another dangerous bug is that if someone else writes text to the file that doesn't contain a null byte where expected, your child will read more data than the buffer can fit, thus overrunning it. This can be an exploitable security vulnerability.
Here is a version of the code that fixes these bugs and works for me:
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/stat.h>
#include <fcntl.h>
void readLine(int fd, char *str, size_t max) {
    size_t pos = 0;
    while (pos < max) {
        ssize_t n = read(fd, str + pos, 1);
        if (n == 0) {
            sleep(1);
        } else if (n == 1) {
            if (str[pos] == '\0') {
                return;
            }
            pos++;
        } else {
            perror("read() failure");
            exit(2);
        }
    }
    fprintf(stderr, "Didn't receive null terminator in time\n");
    exit(2);
}

int main(int argc, char **argv) {
    int fd = open("sharedFile", O_CREAT|O_RDWR|O_TRUNC, 0600);
    if (fd < 0) {
        perror("parent opening sharedFile");
        exit(2);
    }
    pid_t pid = fork();
    if (pid == 0) {
        int newfd = open("sharedFile", O_RDONLY);
        if (newfd < 0) {
            perror("child opening sharedFile");
            exit(2);
        }
        char buf[1000];
        while (1) {
            readLine(newfd, buf, 1000);
            printf("%s\n", buf);
        }
    } else if (pid > 0) {
        while (1) {
            sleep(1);
            write(fd, "abcd", strlen("abcd")+1);
        }
    } else {
        perror("fork");
        exit(2);
    }
    return 0;
}
I'll post my code first, then explain the problem I'm having:
#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>
#include <string.h>
#include <stdlib.h>
#include <fcntl.h>
#define MAX_ARGS 20
#define BUFSIZE 1024
int get_args(char *cmdline, char *args[])
{
    int i = 0;

    /* if no args */
    if ((args[0] = strtok(cmdline, "\n\t ")) == NULL)
        return 0;

    while ((args[++i] = strtok(NULL, "\n\t ")) != NULL) {
        if (i >= MAX_ARGS) {
            printf("Too many arguments!\n");
            exit(1);
        }
    }
    /* the last one is always NULL */
    return i;
}
void execute(char *cmdline)
{
    int pid, async, oneapp;
    char *args[MAX_ARGS];
    char *args2[] = {"-l", NULL};
    int nargs = get_args(cmdline, args);
    if (nargs <= 0) return;

    if (!strcmp(args[0], "quit") || !strcmp(args[0], "exit")) {
        exit(0);
    }

    printf("before the if\n");
    printf("%s\n", args[nargs - 2]);

    int i = 0;
    // EDIT: THIS IS WHAT WAS SUPPOSED TO BE COMMENTED OUT
    /*
    while (args[i] != ">" && i < nargs - 1) {
        printf("%s\n", args[i]);
        i++;
    }
    */

    // Presence of ">" token in args
    // causes errors in execvp() because ">" is not
    // a built-in Unix command, so remove it from args
    args[i - 1] = NULL;
    printf("Escaped the while\n");

    // File descriptor array for the pipe
    int fd[2];
    // PID for the forked process
    pid_t fpid1;

    // Open the pipe
    pipe(fd);

    // Here we fork
    fpid1 = fork();

    if (fpid1 < 0)
    {
        // The case where the fork fails
        perror("Fork failed!\n");
        exit(-1);
    }
    else if (fpid1 == 0)
    {
        //dup2(fd[1], STDOUT_FILENO);
        close(fd[1]);
        //close(fd[0]);

        // File pointer for the file that'll be written to
        FILE *file;

        // freopen() redirects stdin to args[nargs - 1],
        // which contains the name of the file we're writing to
        file = freopen(args[nargs - 1], "w+", stdin);

        // If we include this line, the functionality works
        //execvp(args[0], args);

        // We're done writing to the file, so close it
        fclose(file);

        // We're done using the pipe, so close it (unnecessary?)
        //close(fd[1]);
    }
    else
    {
        // Wait for the child process to terminate
        wait(0);
        printf("This is the parent\n");

        // Connect write end of pipe (fd[1]) to standard output
        dup2(fd[1], STDOUT_FILENO);

        // We don't need the read end, so close it
        close(fd[0]);

        // args[0] contains the command "ls", which is
        // what we want to execute
        execvp(args[0], args);

        // This is just a test line I was using before to check
        // whether anything was being written to stdout at all
        printf("Exec was here\n");
    }

    // This is here to make sure program execution
    // doesn't continue into the original code, which
    // currently causes errors due to incomplete functionality
    exit(0);

    /* check if async call */
    printf("Async call part\n");
    if (!strcmp(args[nargs-1], "&")) { async = 1; args[--nargs] = 0; }
    else async = 0;

    pid = fork();
    if (pid == 0) { /* child process */
        execvp(args[0], args);
        /* return only when exec fails */
        perror("exec failed");
        exit(-1);
    } else if (pid > 0) { /* parent process */
        if (!async) waitpid(pid, NULL, 0);
        else printf("this is an async call\n");
    } else { /* error occurred */
        perror("fork failed");
        exit(1);
    }
}
int main(int argc, char *argv[])
{
    char cmdline[BUFSIZE];

    for (;;) {
        printf("COP4338$ ");
        if (fgets(cmdline, BUFSIZE, stdin) == NULL) {
            perror("fgets failed");
            exit(1);
        }
        execute(cmdline);
    }
    return 0;
}
So, what's the problem? Simple: the code above creates a file with the expected name, i.e. the name provided in the command line, which gets placed at args[nargs - 1]. For instance, running the program and then typing
ls > test.txt
creates a file called test.txt... but it doesn't actually write anything to it. I did manage to get the program to print garbage characters to the file more than a few times, but this only happened during bouts of desperate hail-mary coding where I was basically just trying to get the program to write SOMETHING to the file.
I do think I've managed to narrow down the cause of the problems to this area of the code:
else if (fpid1 == 0)
{
    printf("This is the child.\n");
    //dup2(fd[1], STDOUT_FILENO);
    close(fd[1]);
    //close(fd[0]);

    // File pointer for the file that'll be written to
    FILE *file;

    // freopen() redirects stdin to args[nargs - 1],
    // which contains the name of the file we're writing to
    file = freopen(args[nargs - 1], "w+", stdout);

    // If we include this line, the functionality works
    //execvp(args[0], args);

    // We're done writing to the file, so close it
    fclose(file);

    // We're done using the pipe, so close it (unnecessary?)
    //close(fd[1]);
}
else
{
    // Wait for the child process to terminate
    wait(0);
    printf("This is the parent\n");

    // Connect write end of pipe (fd[1]) to standard output
    dup2(fd[1], STDOUT_FILENO);

    // We don't need the read end, so close it
    close(fd[0]);

    // args[0] contains the command "ls", which is
    // what we want to execute
    execvp(args[0], args);

    // This is just a test line I was using before to check
    // whether anything was being written to stdout at all
    printf("Exec was here\n");
}
More specifically, I believe the problem is with the way I'm using (or trying to use) dup2() and the piping functionality. I basically found this out by process of elimination. I spent a few hours commenting things out, moving code around, adding and removing test code, and I've found the following things:
1.) Removing the calls to dup2() and using execvp(args[0], args) prints the result of the ls command to the console. The parent and child processes begin and end properly. So, the calls to execvp() are working properly.
2.) The line
file = freopen(args[nargs - 1], "w+", stdout)
successfully creates a file with the correct name, so the call to freopen() isn't failing. While this doesn't immediately prove that the function is working properly as it's written now, consider fact #3:
3.) In the child process block, if we make freopen redirect to the output file from stdin (rather than stdout) and uncomment the call to execvp(args[0], args), like so:
// freopen() redirects stdin to args[nargs - 1],
// which contains the name of the file we're writing to
file = freopen(args[nargs - 1], "w+", stdin);
// If we include this line, the functionality works
execvp(args[0],args);
and run the program, then it works and the result of the ls command is successfully written to the output file. Knowing this, it seems pretty safe to say that freopen() isn't the problem either.
In other words, the only thing I haven't been able to successfully do is pipe the output of the execvp() call that's done in the parent process to stdout, and then from stdout to the file using freopen().
Any help is appreciated. I've been at this since 10 AM yesterday and I'm completely out of ideas. I just don't know what I'm doing wrong. Why isn't this working?
I'm trying to write a program that reads some text from a file and prints it to the screen. The parent will read the content of the file and write it to n pipes, and the children will read from them and then print it.
So far this is what I've got:
#include <unistd.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <string.h>
int main (void)
{
    pid_t pid;
    char c;
    FILE *fd;
    char buf[100];
    int N_CHILDREN = 2;
    int p[N_CHILDREN][2];
    int i, j;

    for (i = 0; i < N_CHILDREN; i++)
    {
        pipe(p[i]);
    }

    fd = fopen("123.txt", "r");

    for (j = 0; j < N_CHILDREN; j++)
    {
        pid = fork();
        if (pid == 0)
        {
            close(p[j][1]);
            while (read(p[j][0], &fd, sizeof(buf)) > 0)
                printf("\n%c", &fd);
        }
        if (pid < 0)
        {
            // Fork Failed
            fprintf(stderr, "Fork failure.\n");
            return EXIT_FAILURE;
        }
        if (pid > 0) // Parent
        {
            close(p[j][0]);
            write(p[j][1], fd, sizeof(buf));
        }
    }
}
The problem is that it's not really reading the content from the file. I've tried sending a string of characters instead of reading from a file, and it worked as intended: both children printed the message one time and the program ended.
Any thoughts about it? After reading the manuals I still can't see where the problem is.
You are confusing C Standard I/O streams (created with fopen(); written to with fprintf() et al., read with fscanf() et al.) with Unix file descriptor I/O (created with open() or pipe() et al., written to with write() et al., read with read() et al.)
Standard I/O functions take an opaque FILE * as a handle; Unix I/O functions take a file descriptor (a small int) as a handle.
Once you understand the conceptual difference, I'm sure you will realize that
FILE *fd = ...
read(..., &fd, ...);
is reading into a pointer-to-FILE -- not terribly useful :-)
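What the child presumably meant to do is read into a character buffer and use the byte count that read() returns, along these lines (a sketch using the names from the question's code):
char buf[100];
ssize_t n = read(p[j][0], buf, sizeof buf); // fill a real buffer, not a FILE * variable
if (n > 0)
    fwrite(buf, 1, (size_t)n, stdout);      // print exactly the n bytes received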
Several problems here:
- you misuse the read() function by passing &fd, which is a FILE *. read() needs a pointer to the buffer to fill; here I guess that's buf.
- you don't check for errors, for example if fopen() fails.
- you never read data from your file, so you have "nothing" to send to the children.
- you have to use the return value of read() (in the children), because it is the effective amount of data you got, and therefore the amount of data you have to print afterwards (to stdout).
So here is some example code; see the comments inside:
// all the needed includes (see the manpages of the functions used)
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <fcntl.h>
#include <sys/types.h>
// it is better to create a function for the child: the code
// is easier to read

// the child just gets the file descriptor to read (the pipe)
void child(int fd) {
    char buf[100]; // buffer to store data read
    int ret;       // the number of bytes that are read

    // we read from 'fd' into 'buf'. read() returns the number of bytes
    // really read (could be smaller than the size). Returns <= 0 when over
    while ((ret = read(fd, buf, sizeof(buf))) > 0) {
        // write the 'ret' bytes to STDOUT (which has file descriptor 1)
        write(1, buf, ret);
    }
}
int main(void) {
    pid_t pid;
    char buf[100];
    int N_CHILDREN = 2;
    int p[N_CHILDREN][2];
    int i, j, ret;
    int fdi;

    // create the pipes
    for (i = 0; i < N_CHILDREN; i++) {
        if (pipe(p[i]) == -1) {
            perror("pipe"); // ALWAYS check for errors
            exit(1);
        }
    }

    // open the file (with 'open', not 'fopen': more suitable for
    // reading raw data)
    fdi = open("123.txt", O_RDONLY);
    if (fdi < 0) {
        perror("open"); // ALWAYS check for errors
        exit(1);
    }

    // just spawn the children
    for (j = 0; j < N_CHILDREN; j++) {
        pid = fork();
        if (pid < 0) {
            perror("fork"); // ALWAYS check for errors
            exit(1);
        }
        if (pid == 0) {       // child
            close(p[j][1]);   // close the writing part
            child(p[j][0]);   // call child function with corresp. FD
            exit(0);          // leave: the child should do nothing else
        }
    }

    // the parent doesn't need the read part of the pipes
    for (j = 0; j < N_CHILDREN; j++) {
        close(p[j][0]); // close the read part of the pipes
    }

    // now read the file content, see comment in child() function
    while ((ret = read(fdi, buf, sizeof(buf))) > 0) {
        // write the data to all children
        for (j = 0; j < N_CHILDREN; j++) {
            write(p[j][1], buf, ret); // we write the size we got
        }
    }

    // close everything
    for (j = 0; j < N_CHILDREN; j++) {
        close(p[j][1]); // needed, see text after
    }
    close(fdi); // close the read file

    return 0;   // main returns an int, 0 is "ok"
}
You have to close every part of the pipes when it is not needed or when you are done with it. As long as a write end of a pipe is still open somewhere, a read on it will block the process; only when the last write counterpart is closed does read() return <= 0.
Note: 1. the correct usage of the read/write functions, 2. the error checking, 3. the reading from the file and writing to the pipe(s), 4. the handling of the effective amount of data read (the ret variable), so that you write the right amount of data (to the "screen" or to another file descriptor).
You're not reading anything into buf as far as I can tell.
The parent has opened a file to read; I fork two children to read from the file and write to different files.
Child 1 reads the first line, and child 2 reads nothing. When I do an ftell, the position is at the end of the file.
Can anyone please explain this behaviour?
f[0] = fopen("input", "r");
for ( i = 1; i <= 2; i++ ){
if ((pid = fork()) != 0){
waitpid(pid);
}
else
{
snprintf ( buffer, 10, "output%d", i );
printf("opening file %s \n",buffer);
f[i] = fopen( buffer, "w");
fgets(buff2, 10, f[0]);
fprintf(f[i], "%s", buff2);
fclose(f[i]);
_exit(0);
}
}
fclose(f[0]);
Your problem is buffering. stdio reads files in fully buffered mode by default, which means a call to fgets(3) will actually read a huge block of characters from the file, buffer everything, and then return the first line, while leaving the rest in the buffer, in the perspective of being called again in the future (remember that stdio strives to minimize the number of read(2) and write(2) syscalls). Note that stdio buffering is a user-space thing; all the kernel sees is a single process reading a huge block on that file, and so the file cursor is updated accordingly.
Common block sizes are 4096 and 8192; your input file is probably smaller than that, and so the first process that calls fgets(3) ends up reading the whole file, leaving the cursor at the end. Buffering is tricky.
What can you do? One solution I can think of is to disable buffering (since this is an input stream we're talking about, we can't use line buffered mode, because line buffering is meaningless for input streams). So if you disable buffering on the input stream before forking, everything will work. This is done with setvbuf(3).
Here's a working example:
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <errno.h>
static FILE *f[3];
static char buffer[128];
static char buff2[128];
int main(void) {
    pid_t pid;
    int i;

    if ((f[0] = fopen("input", "r")) == NULL) {
        perror("Error opening input file");
        exit(EXIT_FAILURE);
    }

    if (setvbuf(f[0], NULL, _IONBF, 0) < 0) {
        perror("setvbuf(3) failed");
        exit(EXIT_FAILURE);
    }

    for (i = 1; i <= 2; i++) {
        if ((pid = fork()) < 0) {
            perror("fork(2) failed");
            exit(EXIT_FAILURE);
        }
        if (pid != 0) {
            if (waitpid(pid, NULL, 0) < 0) {
                perror("waitpid(2) failed");
                exit(EXIT_FAILURE);
            }
        } else {
            snprintf(buffer, sizeof(buffer), "output%d", i);
            printf("opening file %s\n", buffer);

            if ((f[i] = fopen(buffer, "w")) == NULL) {
                perror("fopen(2) failed");
                exit(EXIT_FAILURE);
            }

            errno = 0;
            if (fgets(buff2, sizeof(buff2), f[0]) == NULL) {
                if (errno != 0) {
                    perror("fgets(3) error");
                    exit(EXIT_FAILURE);
                }
            }

            fprintf(f[i], "%s", buff2);
            fclose(f[i]);
            exit(EXIT_SUCCESS);
        }
    }

    fclose(f[0]);
    return 0;
}
Note that this may incur a significant performance hit. Your code will be making a lot more syscalls, and it might be too expensive for huge files, but it doesn't seem to be a problem since apparently you're dealing with relatively small input files.
Here's an extract of my fork() man page:
The child process has its own copy of the parent's descriptors. These descriptors reference the same underlying objects, so that, for instance, file pointers in file objects are shared between the child and the parent, so that an lseek(2) on a descriptor in the child process can affect a subsequent read or write by the parent. This descriptor copying is also used by the shell to establish standard input and output for newly created processes as well as to set up pipes.
So your behaviour is completely normal. If you want your child to have its own file descriptor, it should open its own file.
For example, you could do the following:
for ( i = 1; i <= 2; i++ )
{
if ((pid = fork()) != 0)
{
waitpid(pid);
}
else
{
f[0] = fopen("input", "r"); // New
snprintf ( buffer, 10, "output%d", i );
printf("opening file %s \n",buffer);
f[i] = fopen( buffer, "w");
fgets(buff2, 10, f[0]);
fprintf(f[i], "%s", buff2);
fclose(f[i]);
fclose(f[0]); //New
_exit(0);
}
}
Also, you should check for errors (almost all of the functions in your else branch can fail).
I need to fork a process and redirect its output (stdout and stderr) into a buffer. My code seems to work with most binaries, but not all. For example, I can run my code with a very long "ls" like ls -R /proc/ and it works perfectly. When I run the mke2fs process, my code does not work anymore.
If I run mke2fs in a fork and wait for it, it works perfectly. But as soon as I add the redirection code, my program never finishes running.
I wrote a little main to test this specific problem:
#include <sys/wait.h>
#include <unistd.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
int main ()
{
    pid_t pid;
    int status = -42;
    int pipefd_out[2];
    int pipefd_err[2];
    char buf_stderr[1024];
    char buf_stdout[1024];
    int count;
    int ret;

    pipe(pipefd_out);
    pipe(pipefd_err);

    memset(buf_stdout, 0, 1024);
    memset(buf_stderr, 0, 1024);

    pid = fork();
    if (pid == -1)
    {
        fprintf(stderr, "Error when forking process : /usr/sbin/mke2fs\n");
        return 1;
    }
    if (pid == 0)
    {
        close(pipefd_out[0]);
        close(pipefd_err[0]);
        dup2(pipefd_out[1], 1);
        dup2(pipefd_err[1], 2);
        close(pipefd_out[1]);
        close(pipefd_err[1]);

        char **args;
        args = malloc(sizeof(1024));
        args[0] = strdup("/usr/sbin/mke2fs");
        args[1] = strdup("/dev/sda4");
        args[2] = strdup("-t");
        args[3] = strdup("ext4");
        args[4] = NULL;
        execvp("/usr/sbin/mke2fs", args);
        /*
        args = malloc(sizeof(1024));
        args[0] = strdup("/bin/ls");
        args[1] = strdup("-R");
        args[2] = strdup("/proc/irq");
        args[3] = NULL;
        execvp("/bin/ls", args);
        */
        perror("execv");
        fprintf(stderr, "Error when execvp process /usr/sbin/mke2fs\n");
        return 1;
    }

    close(pipefd_out[1]);
    close(pipefd_err[1]);

    if (waitpid(pid, &status, 0) == -1)
    {
        fprintf(stderr, "Error when waiting pid : %d\n", pid);
        return 1;
    }

    do
    {
        count = read(pipefd_out[0], buf_stdout, sizeof(buf_stdout));
    } while (count != 0);

    do
    {
        count = read(pipefd_err[0], buf_stderr, sizeof(buf_stderr));
    } while (count != 0);

    ret = WEXITSTATUS(status);

    FILE *file = NULL;
    file = fopen("/root/TUTU", "w");
    if (file != NULL)
    {
        fwrite(buf_stdout, 1, sizeof(buf_stdout), file);
        fwrite(buf_stderr, 1, sizeof(buf_stdout), file);
        fclose(file);
    }
    return 0;
}
If I run ps, I can see my child process still running:
# ps | grep sda4
936 root 2696 S {mke2fs} /dev/sda4 -t ext4
I am not able to understand why I get this strange behavior. Not sure if it's related, but the output of mke2fs is not classic: instead of printing output and moving the prompt forward, the process seems to update its output as it computes, as a kind of progress bar. Not sure if my explanation is really clear.
Thanks,
Eva.
You can't wait for the program to finish (which is what you do with waitpid) before reading its stdout/stderr from the pipe. When the program writes to the pipe and the pipe is full, it will sleep until you read from the pipe to make space in it. So the program waits until there's more space in the pipe before it can continue and exit, while you're waiting for the program to exit before you read from the pipe to make space in it. That's a deadlock.
The simplest solution in this case would be to just move waitpid until after you're done reading from the pipes. It should be fine since the program you execute will close the pipes when exiting.
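In terms of the code above, that means draining both pipes first and only then calling waitpid(). A sketch of the reordered tail of main() (note the loops now also stop on a read error instead of spinning on -1):
// drain stdout, then stderr; the child can no longer block forever on a full pipe
do {
    count = read(pipefd_out[0], buf_stdout, sizeof(buf_stdout));
} while (count > 0);
do {
    count = read(pipefd_err[0], buf_stderr, sizeof(buf_stderr));
} while (count > 0);

// only now reap the child
if (waitpid(pid, &status, 0) == -1) {
    fprintf(stderr, "Error when waiting pid : %d\n", pid);
    return 1;
}
Draining the two pipes one after the other can in principle still deadlock if the child fills the stderr pipe while you are blocked reading stdout; select(2) or poll(2) is the general answer, but for a couple of kilobytes of output the simple ordering above is usually enough.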