Using the advice found here: Restoring stdout after using dup
I tried to restore stdin and stdout. However, when I used printf to check whether stdout had been restored, I saw no output.
The code is as follows. After restoring stdout as file descriptor 1, I tried printing "done".
#include <stdio.h>
#include <unistd.h>
void main(int argv, char *argc) {
    int stdin_copy = dup(STDIN_FILENO);
    int stdout_copy = dup(STDOUT_FILENO);
    int testpipe[2];
    pipe(testpipe);
    int PID = fork();
    if (PID == 0) {
        dup2(testpipe[0], 0);
        close(testpipe[1]);
        execl("./multby", "multby", "3", NULL); // Just multiplies argument 3 with the number in testpipe.
    } else {
        dup2(testpipe[1], 1);
        close(testpipe[0]);
        printf("5");
        fclose(stdout);
        close(testpipe[1]);
        char initialval[100];
        read(testpipe[0], initialval, 100);
        fprintf(stderr, "initial value: %s\n", initialval);
        wait(NULL);
        dup2(stdin_copy, 0);
        dup2(stdout_copy, 1);
        printf("done"); // was not displayed when I ran the code.
    }
}
However, I did not see a "done" when I ran the code. (There should be a "done" after the 15.)
This is my output:
initial value:
exec successful
a: 3
b: 5
multby successful
15
What did I do wrong when restoring stdout?
You're calling fclose(stdout), which closes the stdout FILE object as well as STDOUT_FILENO, the stdout file descriptor (they're two different things, linked together). So when you later call dup2(stdout_copy, 1), that restores the file descriptor, but the FILE object remains closed, so you can't use it to output anything. There's no way to reopen the FILE object to refer to a file descriptor [1], so your best bet is just to remove the fclose line. The dup2 will close the file descriptor you're replacing (so you don't really need a separate close), and you should then be able to see the output.
[1] You could possibly use freopen with /dev/fd on some systems, but /dev/fd is non-portable.
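To make that footnote concrete, here is a minimal sketch of the non-portable approach, assuming a system with a /dev/fd filesystem (Linux, some BSDs); it is illustrative only and not a drop-in fix for the question's code:

#include <stdio.h>
#include <unistd.h>

int main(void)
{
    int saved = dup(STDOUT_FILENO);            /* keep a handle on the original stdout */
    char path[32];
    snprintf(path, sizeof path, "/dev/fd/%d", saved);

    freopen("/dev/null", "w", stdout);         /* stdout now points somewhere else */
    printf("this line disappears into /dev/null\n");

    if (freopen(path, "w", stdout) == NULL)    /* non-portable: /dev/fd may not exist */
        return 1;
    printf("this line is visible again\n");
    close(saved);
    return 0;
}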
General remarks
Reiterating what I've said before on SO in other answers.
You aren't closing enough file descriptors in the child process.
Rule of thumb: If you dup2() one end of a pipe to standard input or standard output, close both of the original file descriptors returned by pipe() as soon as possible. In particular, you should close them before using any of the exec*() family of functions.
The rule also applies if you duplicate the descriptors with either dup() or fcntl() with F_DUPFD or F_DUPFD_CLOEXEC.
If the parent process will not communicate with any of its children via
the pipe, it must ensure that it closes both ends of the pipe early
enough (before waiting, for example) so that its children can receive
EOF indications on read (or get SIGPIPE signals or write errors on
write), rather than blocking indefinitely.
Even if the parent uses the pipe without using dup2(), it should
normally close at least one end of the pipe — it is extremely rare for
a program to read and write on both ends of a single pipe.
Note that the O_CLOEXEC option to open(), and the FD_CLOEXEC and F_DUPFD_CLOEXEC options to fcntl(), can also factor into this discussion.
If you use posix_spawn() and its extensive family of support functions (21 functions in total), you will need to review how to close file descriptors in the spawned process (posix_spawn_file_actions_addclose(), etc.).
Note that using dup2(a, b) is safer than using close(b); dup(a);
for a variety of reasons.
One is that if you want to force the file descriptor to a larger than
usual number, dup2() is the only sensible way to do that.
Another is that if a is the same as b (e.g. both 0), then dup2()
handles it correctly (it doesn't close b before duplicating a)
whereas the separate close() and dup() fails horribly.
This is an unlikely, but not impossible, circumstance.
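As a small illustration of that last point (hypothetical descriptor values, no error handling beyond the failing call):

#include <stdio.h>
#include <unistd.h>

int main(void)
{
    int fd = 0;      /* imagine the descriptor we want on standard input is already 0 */

    /* Safe: when fd == b, dup2() simply returns b and leaves the descriptor open. */
    if (dup2(fd, 0) == -1)
        perror("dup2");

    /* The close-then-dup sequence would be fatal here:
     *     close(0);     -- destroys the only copy of the descriptor
     *     dup(fd);      -- now fails with EBADF, because fd (== 0) is closed
     */
    return 0;
}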
Analysis of code
The question contains the code:
#include <stdio.h>
#include <unistd.h>
void main(int argv, char *argc) {
    int stdin_copy = dup(STDIN_FILENO);
    int stdout_copy = dup(STDOUT_FILENO);
    int testpipe[2];
    pipe(testpipe);
    int PID = fork();
    if (PID == 0) {
        dup2(testpipe[0], 0);
        close(testpipe[1]);
        execl("./multby", "multby", "3", NULL); // Just multiplies argument 3 with the number in testpipe.
    } else {
        dup2(testpipe[1], 1);
        close(testpipe[0]);
        printf("5");
        fclose(stdout);
        close(testpipe[1]);
        char initialval[100];
        read(testpipe[0], initialval, 100);
        fprintf(stderr, "initial value: %s\n", initialval);
        wait(NULL);
        dup2(stdin_copy, 0);
        dup2(stdout_copy, 1);
        printf("done"); // was not displayed when I ran the code.
    }
}
The line void main(int argv, char *argc) { should be int main(void) since you do not use the command line arguments. You also have the names argv and argc reversed from the normal convention — the first argument is normally called argc (argument count) and the second is normally called argv (argument vector). Additionally, the type for the second argument should be char **argv (or char **argc if you want to confuse all your casual readers). See also What should main() return in C and C++?
The next block of code that warrants discussion is:
if (PID == 0) {
    dup2(testpipe[0], 0);
    close(testpipe[1]);
    execl("./multby", "multby", "3", NULL); // Just multiplies argument 3 with the number in testpipe.
}
This breaks the rule of thumb. You should also put error handling code after the execl().
if (PID == 0)
{
    dup2(testpipe[0], STDIN_FILENO);
    close(testpipe[0]);
    close(testpipe[1]);
    execl("./multby", "multby", "3", (char *)NULL);
    fprintf(stderr, "failed to execute ./multby\n");
    exit(EXIT_FAILURE);
}
The next block of code to analyze is:
dup2(testpipe[1], 1);
close(testpipe[0]);
printf("5");
fclose(stdout);
close(testpipe[1]);
In theory, you should use STDOUT_FILENO instead of 1, but I have considerable sympathy with the use of 1 (not least because when I first learned C, there was no such symbolic constant). You do actually close both ends of the pipe, but I'd prefer to see both closes immediately after the dup2() call, in line with the rule of thumb. The printf() without a newline doesn't send anything down the pipe; it stashes the 5 in the I/O buffer.
As Chris Dodd identified in their answer, the fclose(stdout) call is a source of much trouble. You should probably simply replace it with fflush(stdout).
Moving on:
char initialval[100];
read(testpipe[0], initialval, 100);
fprintf(stderr, "initial value: %s\n", initialval);
wait(NULL);
dup2(stdin_copy, 0);
dup2(stdout_copy, 1);
printf("done");
You didn't check whether the read() worked. It didn't; it failed with EBADF, because just above this you use close(testpipe[0]);. What you print on stderr there is an uninitialized string — that's not good. In practice, if you want to read information from the child reliably, you need two pipes, one for parent-to-child communication and the other for child-to-parent communication. Otherwise, there's no guarantee that the parent won't read what it wrote. If you waited for the child to die before reading, you'd be in with a decent chance of it working, but you can't always rely on being able to do that.
The first of the two dup2() calls is pointless; you didn't change the redirection for standard input (so in fact stdin_copy is unnecessary). The second changes the assignment of the standard output file descriptor, but you had already closed stdout, so there is no easy way to reopen it so that the printf() would work. The message should end with a newline — most printf() format strings should end with a newline, unless it is deliberately being used to build up a single line of output piecemeal. However, if you paid attention to the return value, you'd find that it failed (-1) and the chances are good that you'd find errno == EBADF again.
Fixing the code — fragile solution
Given this code for multby.c:
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    if (argc != 2)
    {
        fprintf(stderr, "Usage: %s number\n", argv[0]);
        return 1;
    }
    int multiplier = atoi(argv[1]);
    int number;
    if (scanf("%d", &number) == 1)
        printf("%d\n", number * multiplier);
    else
        fprintf(stderr, "%s: failed to read a number from standard input\n", argv[0]);
    return 0;
}
and this code (as pipe23.c, compiled to pipe23):
#include <errno.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    int stdout_copy = dup(STDOUT_FILENO);
    int testpipe[2];
    pipe(testpipe);
    int PID = fork();
    if (PID == 0)
    {
        dup2(testpipe[0], 0);
        dup2(testpipe[1], 1);
        close(testpipe[1]);
        close(testpipe[0]);
        execl("./multby", "multby", "3", NULL); // Just multiplies argument 3 with the number in testpipe.
        fprintf(stderr, "failed to execute ./multby (%d: %s)\n", errno, strerror(errno));
        exit(EXIT_FAILURE);
    }
    else
    {
        dup2(testpipe[1], 1);
        close(testpipe[1]);
        printf("5\n");
        fflush(stdout);
        close(1);
        wait(NULL);
        char initialval[100];
        read(testpipe[0], initialval, 100);
        close(testpipe[0]);
        fprintf(stderr, "initial value: [%s]\n", initialval);
        dup2(stdout_copy, 1);
        printf("done\n");
    }
}
the combination barely works — it is not a resilient solution. For example, I added the newline after the 5. The child waits for another character after the 5 to determine that it has finished reading the number. It doesn't get EOF because it holds the write end of the pipe open for sending the response to the parent, even though it is hung reading from the pipe and so will never write to it. But because it only attempts to read one number, it is OK.
The output is:
initial value: [15
]
done
Fixing the code — robust solution
If you were dealing with arbitrary quantities of numbers, you'd need to use two pipes — it is the only reliable way of doing the task. This would also work for a single number passed to the child, of course.
Here's a modified multby.c which loops on reading:
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    if (argc != 2)
    {
        fprintf(stderr, "Usage: %s number\n", argv[0]);
        return 1;
    }
    int multiplier = atoi(argv[1]);
    int number;
    while (scanf("%d", &number) == 1)
        printf("%d\n", number * multiplier);
    return 0;
}
and here's a modified pipe23.c that uses two pipes and writes 3 numbers to the child, and gets back three results. Note that it doesn't need to put a newline after the third number with this organization (though there'd be no harm done if it did include a newline). Also, if you're devious, the second space in the list of numbers is unnecessary too; the - isn't part of the second number, so the scanning stops after the 0.
#include <errno.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    int stdout_copy = dup(STDOUT_FILENO);
    int p_to_c[2];
    int c_to_p[2];
    if (pipe(p_to_c) != 0 || pipe(c_to_p) != 0)
    {
        fprintf(stderr, "failed to open a pipe (%d: %s)\n", errno, strerror(errno));
        exit(EXIT_FAILURE);
    }
    int PID = fork();
    if (PID == 0)
    {
        dup2(p_to_c[0], 0);
        dup2(c_to_p[1], 1);
        close(c_to_p[0]);
        close(c_to_p[1]);
        close(p_to_c[0]);
        close(p_to_c[1]);
        execl("./multby", "multby", "3", NULL); // Just multiplies argument 3 with the numbers read from standard input.
        fprintf(stderr, "failed to execute ./multby (%d: %s)\n", errno, strerror(errno));
        exit(EXIT_FAILURE);
    }
    else
    {
        dup2(p_to_c[1], 1);
        close(p_to_c[1]);
        close(p_to_c[0]);
        close(c_to_p[1]);
        printf("5 10 -15");
        fflush(stdout);
        close(1);
        char initialval[100];
        int n = read(c_to_p[0], initialval, 100);
        if (n < 0)
        {
            fprintf(stderr, "failed to read from the child (%d: %s)\n", errno, strerror(errno));
            exit(EXIT_FAILURE);
        }
        close(c_to_p[0]);
        wait(NULL);
        fprintf(stderr, "initial value: [%.*s]\n", n, initialval);
        dup2(stdout_copy, 1);
        printf("done\n");
    }
}
Note that there are lots of calls to close() in there — two for each of the 4 descriptors involved in handling two pipes. This is normal. Not taking care to close the file descriptors can easily lead to hung systems.
The output from running this pipe23 is this, which is what I wanted:
initial value: [15
30
-45
]
done
Related
I'm trying to understand what is behind this behaviour in my parent process.
Basically, I create a child process and connect its stdout to my pipe. The parent process continuously reads from the pipe and does some stuff.
I noticed that when I insert the while loop in the parent, stdout seems to be lost: nothing appears on the terminal. I thought that the output of stdout was somehow going to the pipe (maybe an issue with dup2), but that doesn't seem to be the case. If I don't continuously fflush(stdout) in the parent process, whatever I'm trying to get to the terminal just won't show. Without the while loop in the parent it works fine, but I'm really not sure why it's happening or whether the rest of my implementation is problematic somehow.
Nothing past the read system call seems to be going to stdout in the parent process. Assuming the output of inotifywait in the pipe is small enough (fewer than 30 bytes), what exactly is wrong with this program?
What I expect to happen is the stdout of inotifywait to go to the pipe, then for the parent to read the message, run strtok and print the file name (which only appears in stdout when I fflush)
Running the program with inotify installed and creating any file in the current directory of the program should be enough. Removing the while loop does print the created file's name (as expected).
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <signal.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/stat.h>
#include <sys/wait.h>
#include <fcntl.h>
#include <errno.h>
int main(void) {
    char b[100];
    int pipefd;
    if (mkfifo("fifo", 0666) == -1) {
        if (errno != EEXIST) {
            perror("mkfifo");
            exit(EXIT_FAILURE);
        }
    }
    pid_t pid = fork();
    if (pid < 0) {
        perror("fork");
        exit(1);
    }
    if ((pipefd = open("fifo", O_RDWR)) < 0) {
        perror("open pipe");
        exit(EXIT_FAILURE);
    }
    if (pid == 0) {
        dup2(pipefd, 1);
        const char* dir = ".";
        const char* args[] = {"inotifywait", dir, "-m", "-e",
                              "create", "-e", "moved_to", NULL};
        execvp("inotifywait", (char**)args);
        perror("inotifywait");
    } else {
        while (1) {
            fflush(stdout); // the output only appears in stdout with this here
            if (read(pipefd, b, 30) < 0) {
                perror("problem # read");
                exit(1);
            }
            char filename[30];
            printf("anything");
            sscanf(b, "./ CREATE %s", filename);
            printf("%s", filename);
        }
    }
}
The streams used by the C standard library are designed in such a way that they are normally buffered (except for the standard error stream stderr).
The standard output stream is normally line buffered, unless the output device is not an interactive device, in which case it is normally fully buffered. Therefore, in your case, it is probably line buffered.
This means that the buffer will only be flushed:
- when it is full,
- when an \n character is encountered,
- when the stream is closed (e.g. during normal program termination),
- when reading input from an unbuffered or line-buffered stream (in certain situations), or
- when you explicitly call fflush.
This explains why you are not seeing the output, because none of the above are happening in your infinite loop (when you don't call fflush). Although you are reading input, you are not doing this from a C standard library FILE * stream. Instead, you are bypassing the C runtime library (e.g. glibc) by using the read system call directly (i.e. you are using a file descriptor instead of a stream).
The simplest solution to your problem would probably be to replace the line
printf("%s", filename);
with:
printf("%s\n", filename);
If stdout is line-buffered (which should be the case if it is connected to a terminal), then the output will automatically be flushed after every line, and an explicit call to fflush should no longer be necessary.
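If you would rather not depend on line buffering at all (stdout is fully buffered when it is redirected to a file or pipe), another common option, sketched here rather than taken from the answer above, is to change the buffering mode explicitly with setvbuf():

#include <stdio.h>

int main(void)
{
    /* Must be called before any output on the stream.
       _IONBF disables buffering entirely; _IOLBF would force line buffering. */
    setvbuf(stdout, NULL, _IONBF, 0);

    printf("appears immediately, even without a trailing newline");
    return 0;
}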
Throughout my years as a C programmer, I've always been confused about the standard stream file descriptors. Some places, like Wikipedia[1], say:
In the C programming language, the standard input, output, and error streams are attached to the existing Unix file descriptors 0, 1 and 2 respectively.
This is backed up by unistd.h:
/* Standard file descriptors. */
#define STDIN_FILENO 0 /* Standard input. */
#define STDOUT_FILENO 1 /* Standard output. */
#define STDERR_FILENO 2 /* Standard error output. */
However, this code (on any system):
write(0, "Hello, World!\n", 14);
Will print Hello, World! (and a newline) to STDOUT. This is odd because STDOUT's file descriptor is supposed to be 1. write-ing to file descriptor 1
also prints to STDOUT.
Performing an ioctl on file descriptor 0 changes standard input[2], and on file descriptor 1 changes standard output. However, performing termios functions on either 0 or 1 changes standard input[3][4].
I'm very confused about the behavior of file descriptors 1 and 0. Does anyone know why:
write-ing to 1 or 0 writes to standard output?
Performing ioctl on 1 modifies standard output and on 0 modifies standard input, but performing tcsetattr/tcgetattr on either 1 or 0 works for standard input?
I guess it is because on my Linux system, both 0 and 1 are by default opened read/write on /dev/tty, which is the controlling terminal of the process. So it is indeed possible to even read from stdout.
However this breaks as soon as you pipe something in or out:
#include <unistd.h>
#include <errno.h>
#include <stdio.h>
int main() {
    errno = 0;
    write(0, "Hello world!\n", 14);
    perror("write");
}
and run with
% ./a.out
Hello world!
write: Success
% echo | ./a.out
write: Bad file descriptor
termios functions always work on the actual underlying terminal object, so it doesn't matter whether 0 or 1 is used, as long as the descriptor is open on a tty.
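A small sketch of that observation: as long as both descriptors are open on the same terminal, tcgetattr() succeeds on either one (run it with and without redirection to see the difference):

#include <stdio.h>
#include <termios.h>
#include <unistd.h>

int main(void)
{
    struct termios t0, t1;

    /* Both calls reach the same underlying terminal object when descriptors
       0 and 1 are open on the controlling tty; whichever descriptor has been
       redirected to a pipe or file fails with ENOTTY instead. */
    printf("tcgetattr(0): %s\n", tcgetattr(STDIN_FILENO,  &t0) == 0 ? "ok" : "failed");
    printf("tcgetattr(1): %s\n", tcgetattr(STDOUT_FILENO, &t1) == 0 ? "ok" : "failed");
    return 0;
}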
Let's start by reviewing some of the key concepts involved:
File description
In the operating system kernel, each file, pipe endpoint, socket endpoint, open device node, and so on, has a file description. The kernel uses these to keep track of the position in the file, the flags (read, write, append, close-on-exec), record locks, and so on.
The file descriptions are internal to the kernel, and do not belong to any process in particular (in typical implementations).
File descriptor
From the process viewpoint, file descriptors are integers that identify open files, pipes, sockets, FIFOs, or devices.
The operating system kernel keeps a table of descriptors for each process. The file descriptor used by the process is simply an index to this table.
The entries in the file descriptor table refer to a kernel file description.
Whenever a process uses dup() or dup2() to duplicate a file descriptor, the kernel only duplicates the entry in the file descriptor table for that process; it does not duplicate the file description it keeps to itself.
When a process forks, the child process gets its own file descriptor table, but the entries still point to the exact same kernel file descriptions. (This is essentially a shallow copy, with all file descriptor table entries being references to file descriptions. The references are copied; the referred-to targets remain the same.)
When a process sends a file descriptor to another process via a Unix domain socket ancillary message, the kernel allocates a new descriptor in the receiving process, and that descriptor refers to the same file description as the one that was transferred.
It all works very well, although it is a bit confusing that "file descriptor" and "file description" are so similar.
What does all that have to do with the effects the OP is seeing?
Whenever new processes are created, it is common to open the target device, pipe, or socket, and dup2() the descriptor to standard input, standard output, and standard error. This leads to all three standard descriptors referring to the same file description, and thus whatever operation is valid using one file descriptor, is valid using the other file descriptors, too.
This is most common when running programs on the console, as then the three descriptors all definitely refer to the same file description; and that file description describes the slave end of a pseudoterminal character device.
Consider the following program, run.c:
#define _POSIX_C_SOURCE 200809L
#include <stdlib.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/stat.h>
#include <fcntl.h>
#include <string.h>
#include <errno.h>

static void wrerrp(const char *p, const char *q)
{
    while (p < q) {
        ssize_t n = write(STDERR_FILENO, p, (size_t)(q - p));
        if (n > 0)
            p += n;
        else
            return;
    }
}

static inline void wrerr(const char *s)
{
    if (s)
        wrerrp(s, s + strlen(s));
}

int main(int argc, char *argv[])
{
    int fd;

    if (argc < 3) {
        wrerr("\nUsage: ");
        wrerr(argv[0]);
        wrerr(" FILE-OR-DEVICE COMMAND [ ARGS ... ]\n\n");
        return 127;
    }

    fd = open(argv[1], O_RDWR | O_CREAT, 0666);
    if (fd == -1) {
        const char *msg = strerror(errno);
        wrerr(argv[1]);
        wrerr(": Cannot open file: ");
        wrerr(msg);
        wrerr(".\n");
        return 127;
    }

    if (dup2(fd, STDIN_FILENO) != STDIN_FILENO ||
        dup2(fd, STDOUT_FILENO) != STDOUT_FILENO) {
        const char *msg = strerror(errno);
        wrerr("Cannot duplicate file descriptors: ");
        wrerr(msg);
        wrerr(".\n");
        return 126;
    }

    if (dup2(fd, STDERR_FILENO) != STDERR_FILENO) {
        /* We might not have standard error anymore.. */
        return 126;
    }

    /* Close fd, since it is no longer needed. */
    if (fd != STDIN_FILENO && fd != STDOUT_FILENO && fd != STDERR_FILENO)
        close(fd);

    /* Execute the command. */
    if (strchr(argv[2], '/'))
        execv(argv[2], argv + 2);  /* Command has /, so it is a path */
    else
        execvp(argv[2], argv + 2); /* command has no /, so it is a filename */

    /* Whoops; failed. But we have no stderr left.. */
    return 125;
}
It takes two or more parameters. The first parameter is a file or device, and the second is the command, with the rest of the parameters supplied to the command. The command is run, with all three standard descriptors redirected to the file or device named in the first parameter. You can compile the above with gcc using e.g.
gcc -Wall -O2 run.c -o run
Let's write a small tester utility, report.c:
#define _POSIX_C_SOURCE 200809L
#include <stdlib.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/stat.h>
#include <fcntl.h>
#include <string.h>
#include <stdio.h>
#include <errno.h>

int main(int argc, char *argv[])
{
    char buffer[16] = { "\n" };
    ssize_t result;
    FILE *out;

    if (argc != 2) {
        fprintf(stderr, "\nUsage: %s FILENAME\n\n", argv[0]);
        return EXIT_FAILURE;
    }

    out = fopen(argv[1], "w");
    if (!out)
        return EXIT_FAILURE;

    result = write(STDIN_FILENO, buffer, 1);
    if (result == -1) {
        const int err = errno;
        fprintf(out, "write(STDIN_FILENO, buffer, 1) = -1, errno = %d (%s).\n", err, strerror(err));
    } else {
        fprintf(out, "write(STDIN_FILENO, buffer, 1) = %zd%s\n", result, (result == 1) ? ", success" : "");
    }

    result = read(STDOUT_FILENO, buffer, 1);
    if (result == -1) {
        const int err = errno;
        fprintf(out, "read(STDOUT_FILENO, buffer, 1) = -1, errno = %d (%s).\n", err, strerror(err));
    } else {
        fprintf(out, "read(STDOUT_FILENO, buffer, 1) = %zd%s\n", result, (result == 1) ? ", success" : "");
    }

    result = read(STDERR_FILENO, buffer, 1);
    if (result == -1) {
        const int err = errno;
        fprintf(out, "read(STDERR_FILENO, buffer, 1) = -1, errno = %d (%s).\n", err, strerror(err));
    } else {
        fprintf(out, "read(STDERR_FILENO, buffer, 1) = %zd%s\n", result, (result == 1) ? ", success" : "");
    }

    if (ferror(out))
        return EXIT_FAILURE;
    if (fclose(out))
        return EXIT_FAILURE;
    return EXIT_SUCCESS;
}
It takes exactly one parameter, a file or device to write to, to report whether writing to standard input, and reading from standard output and error work. (We can normally use $(tty) in Bash and POSIX shells, to refer to the actual terminal device, so that the report is visible on the terminal.) Compile this one using e.g.
gcc -Wall -O2 report.c -o report
Now, we can check some devices:
./run /dev/null ./report $(tty)
./run /dev/zero ./report $(tty)
./run /dev/urandom ./report $(tty)
or on whatever we wish. On my machine, when I run this on a file, say
./run some-file ./report $(tty)
writing to standard input, and reading from standard output and standard error all work -- which is as expected, as the file descriptors refer to the same, readable and writable, file description.
The conclusion, after playing with the above, is that there is no strange behaviour here at all. It all behaves exactly as one would expect, if file descriptors as used by processes are simply references to operating system internal file descriptions, and standard input, output, and error descriptors are duplicates of each other.
I'm experimenting with the dup2 system call in Linux. I've written the following code:
#include <stdio.h>
#include <unistd.h>
#include <string.h>
int main()
{
    int pipe1_ends[2];
    int pipe2_ends[2];
    char string[] = "this \n is \n not \n sorted";
    char buffer[100];
    pid_t pid;
    pipe(pipe1_ends);
    pipe(pipe2_ends);
    pid = fork();
    if(pid > 0) { /* parent */
        close(pipe1_ends[0]);
        close(pipe2_ends[1]);
        write(pipe1_ends[1],string,strlen(string));
        read(pipe2_ends[0], buffer, 100);
        printf("%s",buffer);
        return 0;
    }
    if(pid == 0) { /* child */
        close(pipe1_ends[1]);
        close(pipe2_ends[0]);
        dup2(pipe1_ends[0], 0);
        dup2(pipe2_ends[1],1);
        char *args[2];
        args[0] = "/usr/bin/sort";
        args[1] = NULL;
        execv("/usr/bin/sort",args);
    }
    return 0;
}
I expect this program to behave as follows:
It should fork a child and replace its image with the sort process. And since stdin and stdout are replaced with the dup2 calls, I expect sort to read input from one pipe and write the output into the other pipe, which is then printed by the parent. But the sort program doesn't seem to be reading any input. If no command-line argument is given, sort reads from stdin, right? Can someone help me with this problem, please?
Many thanks!
Hm. What's happening is that you aren't finishing your write: after sending data to the child process, you have to tell it you're done writing by closing pipe1_ends[1] (for a pipe, closing the write end is what delivers EOF to the reader). You should also call write/read in a loop, since it's quite likely in the general case that read at least won't give you all the results in one go. Obviously the full code checks all return values, doesn't it?
One final thing: Your printf is badly broken. It can only accept null-terminated strings, and the result returned by read will not be null-terminated (it's a buffer-with-length, the other common way of knowing where the end is). You want:
int n = read(pipe2_ends[0], buffer, 99);
if (n < 0) { perror("read"); exit(1); }
buffer[n] = 0;
printf("%s",buffer);
I am trying to read from a file, write it to a pipe, and, in a child process, read from the pipe and write it to a new file. The program is passed two parameters: the name of the input file and the name of the file to be copied to. This is a homework project, but I have spent hours online and have only found ways of making it more confusing. We were given two assignments, this and matrix multiplication with threads. I got the matrix multiplication working with no problems, but this one, which should be fairly easy, is giving me so much trouble. I get the first word of the file that I am copying, but then a whole bunch of garbled text.
#include <stdio.h>
#include <stdlib.h>
#include <fcntl.h>
#include <unistd.h>
int main(int argc, char *argv[]) {
    if(argc < 3) {
        printf("Not enough arguments: FileCopy input.txt copy.txt\n");
        exit(0);
    }
    char buffer[200];
    pid_t pid;
    int fds[2];
    pipe(fds);
    pid = fork();
    if (pid == 0) { /* The child process */
        //wait(NULL);
        write(1, "hi i am in child\n", 17);
        int copy = open(argv[2], O_WRONLY | O_CREAT, S_IWUSR | S_IRUSR | S_IXUSR | S_IRGRP);
        FILE* stream;
        close(fds[1]);
        stream = fdopen(fds[0], "r");
        while (fgets(buffer, sizeof(buffer), stream) != NULL) {
            //printf("%s\n", buffer);
            write(copy, buffer, 200);
            //printf("kjlkjljljlkj\n");
            //puts(buffer);
        }
        close(copy);
        close(fds[0]);
        exit(0);
    }
    else {
        write(1, "hi i am in parent\n", 18);
        FILE* input = fopen(argv[1], "r");
        FILE* stream;
        close(fds[0]);
        stream = fdopen(fds[1], "w");
        /*while (fscanf(input, "%s", buffer) != EOF) {
            //printf("%s\n", buffer);
            fprintf(stream, "%s\n", buffer);
            fflush(stream);
            //printf("howdy doody\n");
        }*/
        fgets(buffer, sizeof(buffer), input);
        printf("%s", buffer);
        fprintf(stream, "%s", buffer);
        fflush(stream);
        close(fds[1]);
        fclose(input);
        wait(NULL);
        exit(0);
    }
    return 0;
}
Am I doing the reads and writes wrong?
Am I doing the reads and writes wrong?
Yes.
In the child, you are mixing string-oriented buffered I/O (fgets()) with block-oriented binary I/O. (That is, write().) Either approach will work, but it would be normal practice to pick one or the other.
If you mix them, you have to consider more aspects of the problem. For example, in the child, you are reading just one line from the pipe but then you write the entire buffer to the file. This is the source of the garbage characters you are probably seeing in the file.
In the parent, you are sending only a single line with no loop. And after that, you close the underlying file descriptor before you fclose() the buffered I/O system. This means when fclose tries to flush the buffer, the now-closed descriptor will not work to write any remaining data.
You can either use write()/read()/close(), which are the Posix-specified kernel-level operations, or you can use fdopen/puts/gets/fclose which are the ISO C - specified standard I/O library operations. Now, there is one way of mixing them that will work. If you use stdio in the parent, you could still use read/write in the child, but then you would be making kernel calls for each line, which would not usually be an ideal practice.
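To make the "pick one or the other" advice concrete, here is a minimal sketch that stays with stdio (fdopen()/fgets()/fputs()) on both sides of the pipe; the argument handling follows the question, and the error handling is abbreviated:

#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

int main(int argc, char *argv[])
{
    if (argc < 3) {
        fprintf(stderr, "Usage: %s input.txt copy.txt\n", argv[0]);
        return 1;
    }

    int fds[2];
    if (pipe(fds) == -1) { perror("pipe"); return 1; }

    char buffer[200];
    pid_t pid = fork();

    if (pid == 0) {                     /* child: read lines from the pipe, write the copy */
        close(fds[1]);
        FILE *in  = fdopen(fds[0], "r");
        FILE *out = fopen(argv[2], "w");
        if (!in || !out) return 1;
        while (fgets(buffer, sizeof(buffer), in) != NULL)
            fputs(buffer, out);         /* write only the line, not the whole buffer */
        fclose(out);
        fclose(in);
        return 0;
    }

    close(fds[0]);                      /* parent: read the input file, write lines to the pipe */
    FILE *input = fopen(argv[1], "r");
    FILE *out   = fdopen(fds[1], "w");
    if (!input || !out) return 1;
    while (fgets(buffer, sizeof(buffer), input) != NULL)
        fputs(buffer, out);
    fclose(input);
    fclose(out);                        /* flushes and closes fds[1], so the child sees EOF */
    wait(NULL);
    return 0;
}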
You should generally read and write pipes only with the read()/write() calls.
You should close the appropriate ends of the pipe in the child (which only reads) and in the parent (which only writes).
Afterwards, write from the parent into the pipe using the write() system call, and read in the child using the read() system call, as sketched below.
Look here for a good explanation.
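Here is a rough sketch of that approach, using only the kernel-level read()/write() calls on both sides, with the same command-line arguments as the question (error handling kept short):

#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

int main(int argc, char *argv[])
{
    if (argc < 3) {
        fprintf(stderr, "Usage: %s input.txt copy.txt\n", argv[0]);
        return 1;
    }

    int fds[2];
    if (pipe(fds) == -1) { perror("pipe"); return 1; }

    pid_t pid = fork();
    char buffer[200];
    ssize_t n;

    if (pid == 0) {                     /* child: pipe -> output file */
        close(fds[1]);                  /* child only reads from the pipe */
        int copy = open(argv[2], O_WRONLY | O_CREAT | O_TRUNC, 0644);
        if (copy < 0) { perror("open"); return 1; }
        while ((n = read(fds[0], buffer, sizeof(buffer))) > 0)
            write(copy, buffer, n);     /* write only the bytes actually read */
        close(copy);
        close(fds[0]);
        return 0;
    }

    close(fds[0]);                      /* parent: input file -> pipe */
    int input = open(argv[1], O_RDONLY);
    if (input < 0) { perror("open"); return 1; }
    while ((n = read(input, buffer, sizeof(buffer))) > 0)
        write(fds[1], buffer, n);
    close(input);
    close(fds[1]);                      /* EOF for the child */
    wait(NULL);
    return 0;
}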
I'm writing a little program, and here is what it should do.
In the main process I have to create a new one, and that one should execute another program which only does a printf("text"). I want to redirect the pipe write end onto stdout, and the main process should read from the pipe's read end and print it on stdout. I wrote the code, but again and again I get a segmentation fault when the parent process tries to read from the pipe.
#include <sys/types.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <stdlib.h>
void write_to(FILE *f){
    char buf[50];
    fprintf(f,"KOMA");
}

int main(){
    int cpPipe[2];
    int child1_fd;
    int child2_fd;
    if(pipe(cpPipe) == -1){
        fprintf(stderr,"ERROR PIPE creation");
        exit(1);
    }else{printf("pipe couldn't be created\n");}
    child1_fd = fork();
    if(child1_fd < 0){
        fprintf(stderr, " CHILD creation error");
        exit(1);
    }
    if(child1_fd == 0){
        printf("*CHILD*\n");
        char program[] = "./Damn";
        int dupK;
        printf("stdout %d \n", STDOUT_FILENO);
        printf("stdin %d \n", STDIN_FILENO);
        printf("pipe1 %d \n", cpPipe[1]);
        printf("pipe0 %d \n", cpPipe[0]);
        // closing pipe write
        close(cpPipe[0]);
        close(1);
        dup(cpPipe[1]);
        printf("and");
        close(cpPipe[1]);
        exit(0);
    }else{
        printf("*Parent*\n");
        char *p;
        char *buf;
        FILE *pipe_read;
        close(cpPipe[1]);
        pipe_read = fdopen(cpPipe[0],"r");
        while((buf = fgets(p,30,pipe_read)) != NULL){
            printf("buf %s \n", buf);
        }
        wait();
        printf("Child is done\n");
        fclose(pipe_read);
        exit(0);
    }
}
Do I have to close the pipe write end when I redirect stdout to it?
Uhm,... the reason for your segmentation fault is here:
buf = fgets(p,30,pipe_read);
p is a pointer to essentially nowhere of importance. Its content is whatever is in the stack at the time of execution; you never initialize it. You need it to point to a chunk of memory you can use! Assign the return of a malloc() call to it, or declare it as char p[LEN].
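For example, a minimal change to the parent branch, keeping the question's variable names and a similar small line buffer:

char p[31];                             /* real storage instead of an uninitialized pointer */
char *buf;
FILE *pipe_read = fdopen(cpPipe[0], "r");

while ((buf = fgets(p, sizeof p, pipe_read)) != NULL) {
    printf("buf %s \n", buf);
}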
Edit: you are also reopening already open file descriptors. Check the documentation on fgets and pipe, I think you are confused as to how they work.
Now, that said, the flow of your function is kinda confusing. Try working on clarifying it! Remember, code is meant to express intentions, ideas of functionality. Try using pencil and paper to organize your program, and then write it as actual code :).
Cheers!
Do I have to close the pipe write end when I redirect stdout to it?
In general, yes, because while there is a process with the write end of the pipe open, the processes reading the pipe will not get EOF and will hang. It is also tidy to close file descriptors you aren't going to use, of course.
Your code also says "pipe could not be created" in the success path.