Communicate with child process stdout/stdin - c

I am trying to communicate with a process (one that itself reads from stdin and writes to stdout to interact with a user in a terminal), i.e. write to its stdin and read from its stdout, in C.
In other words, I want to stand in for the interactive user programmatically. A metaphorical example: imagine I want to drive vim from C for some reason. Then I need to send it commands (its stdin) and read what the editor prints back (its stdout).
Initially I thought this might be a trivial task, but it seems there's no standard approach. int system(const char *command); just executes a command and sets the command's stdin/stdout to those of the calling process.
Because this leads nowhere, I looked at FILE *popen(const char *command, const char *type); but the manual page states:
Since a pipe is by definition unidirectional, the type argument may specify only reading or writing, not both; the resulting stream is correspondingly read-only or write-only.
and its implication:
The return value from popen() is a normal standard I/O stream in all respects save that it must be closed with pclose() rather than fclose(3). Writing to such a stream writes to the standard input of the command; the command's standard output is the same as that of the process that called popen(), unless this is altered by the command itself. Conversely, reading from a "popened" stream reads the command's standard output, and the command's standard input is the same as that of the process that called popen().
Hence it wouldn't be completely impossible to use popen(), but it appears very inelegant to me, because I would have to parse the stdout of the calling process (the code that called popen()) in order to get at the data sent by the popened command (when using popen with type 'w').
Conversely, when popen is called with type 'r', I would need to write to the calling process's stdin in order to send data to the popened command. It's not even clear to me whether both processes would receive the same data on stdin in that case...
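For completeness, the unidirectional case I do understand; something like the following (read-only) works fine, but it gives me no handle on the command's stdin (ls is just an example command here):
#include <stdio.h>

int main(void) {
    char line[256];
    /* Read-only popen: we can read the command's stdout, but its stdin
       is still the stdin of this (the calling) process. */
    FILE *p = popen("ls -l", "r");
    if (p == NULL) {
        perror("popen");
        return 1;
    }
    while (fgets(line, sizeof line, p) != NULL)
        fputs(line, stdout);
    return pclose(p);
}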
I just need to control stdin and stdout of a program. I mean can't there be a function like:
stdin_of_process, stdout_of_process = real_popen("/path/to/bin", "rw")
// write some data to the process stdin
write("hello", stdin_of_process)
// read the response of the process
read(stdout_of_process)
So my first question: what is the best way to implement the functionality above?
Currently I am trying the following approach to communicate with another process:
Set up two pipes with int pipe(int fildes[2]);. One pipe to read the stdout of the process, the other pipe to write to the stdin of the process.
Fork.
Execute the process that I want to communicate with in the forked child process using int execvp(const char *file, char *const argv[]);.
Communicate with the child using the two pipes in the original process.
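In outline, and ignoring error handling, the skeleton I have in mind looks roughly like this (/path/to/bin is a placeholder):
#include <stdio.h>
#include <unistd.h>

int main(void) {
    int to_child[2], from_child[2];      /* [0] = read end, [1] = write end */
    char buf[256];
    ssize_t n;

    pipe(to_child);
    pipe(from_child);

    if (fork() == 0) {                   /* child */
        dup2(to_child[0], STDIN_FILENO);     /* child reads what we write   */
        dup2(from_child[1], STDOUT_FILENO);  /* child's output comes to us  */
        close(to_child[0]);   close(to_child[1]);
        close(from_child[0]); close(from_child[1]);
        char *argv[] = { "/path/to/bin", NULL };   /* placeholder */
        execvp(argv[0], argv);
        _exit(127);                      /* only reached if exec fails */
    }

    /* parent */
    close(to_child[0]);
    close(from_child[1]);
    write(to_child[1], "hello\n", 6);              /* goes to the child's stdin */
    n = read(from_child[0], buf, sizeof buf - 1);  /* the child's stdout        */
    if (n > 0) {
        buf[n] = '\0';
        printf("child said: %s", buf);
    }
    return 0;
}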
That's easily said but not so trivially implemented (at least for me). I oddly managed to get it working in one case, but when I tried to understand what I am doing with a simpler example, I failed. Here is my current problem:
I have two programs. The first just writes an incremented number to its stdout every 100 ms:
#include <unistd.h>
#include <time.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

void sleepMs(uint32_t ms) {
    struct timespec ts;
    ts.tv_sec = 0 + (ms / 1000);
    ts.tv_nsec = 1000 * 1000 * (ms % 1000);
    nanosleep(&ts, NULL);
}

int main(int argc, char *argv[]) {
    long int cnt = 0;
    char buf[0x10] = {0};
    while (1) {
        sleepMs(100);
        sprintf(buf, "%ld\n", ++cnt);
        if (write(STDOUT_FILENO, buf, strlen(buf)) == -1)
            perror("write");
    }
}
Now the second program is supposed to read the stdout of the first one. (Please keep in mind that I eventually want to read AND write to a process; popen() would technically be sufficient here only because I simplified my experiment to just capturing the first program's stdout.) I expect the program below to read whatever data the program above writes to its stdout, but it does not read anything. What could be the reason? (Second question.)
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <string.h>
#include <stdint.h>
#include <time.h>

void sleepMs(uint32_t ms) {
    struct timespec ts;
    ts.tv_sec = 0 + (ms / 1000);
    ts.tv_nsec = 1000 * 1000 * (ms % 1000);
    nanosleep(&ts, NULL);
}

int main() {
    int pipe_fds[2];
    int n;
    char buf[0x100] = {0};
    pid_t pid;

    pipe(pipe_fds);
    char *cmd[] = {"/path/to/program/above", NULL};

    if ((pid = fork()) == 0) { /* child */
        dup2(pipe_fds[1], 1); // set stdout of the process to the write end of the pipe
        execvp(cmd[0], cmd);  // execute the program
        fflush(stdout);
        perror(cmd[0]);       // only reached in case of error
        exit(0);
    } else if (pid == -1) { /* failed */
        perror("fork");
        exit(1);
    } else { /* parent */
        while (1) {
            sleepMs(500); // Wait a bit to let the child program run a little
            printf("Trying to read\n");
            if ((n = read(pipe_fds[0], buf, 0x100)) >= 0) { // Try to read the child's stdout from the read end of the pipe
                buf[n] = 0; /* terminate the string */
                fprintf(stderr, "Got: %s", buf); // this should print "1 2 3 4 5 6 7 8 9 10 ..."
            } else {
                fprintf(stderr, "read failed\n");
                perror("read");
            }
        }
    }
}

Here is a (C++11-flavored) complete example:
//
// Example of communication with a subprocess via stdin/stdout
// Author: Konstantin Tretyakov
// License: MIT
//
#include <ext/stdio_filebuf.h> // NB: Specific to libstdc++
#include <sys/wait.h>
#include <unistd.h>
#include <iostream>
#include <memory>
#include <stdexcept>
#include <cstdlib>

// Wrapping the pipe in a class makes sure it is closed when we leave scope
class cpipe {
private:
    int fd[2];
public:
    const inline int read_fd() const { return fd[0]; }
    const inline int write_fd() const { return fd[1]; }
    cpipe() { if (pipe(fd)) throw std::runtime_error("Failed to create pipe"); }
    void close() { ::close(fd[0]); ::close(fd[1]); }
    ~cpipe() { close(); }
};

//
// Usage:
//   spawn s(argv)
//   s.stdin << ...
//   s.stdout >> ...
//   s.send_eof()
//   s.wait()
//
class spawn {
private:
    cpipe write_pipe;
    cpipe read_pipe;
public:
    int child_pid = -1;
    std::unique_ptr<__gnu_cxx::stdio_filebuf<char> > write_buf = NULL;
    std::unique_ptr<__gnu_cxx::stdio_filebuf<char> > read_buf = NULL;
    std::ostream stdin;
    std::istream stdout;

    spawn(const char* const argv[], bool with_path = false, const char* const envp[] = 0): stdin(NULL), stdout(NULL) {
        child_pid = fork();
        if (child_pid == -1) throw std::runtime_error("Failed to start child process");
        if (child_pid == 0) {   // In child process
            dup2(write_pipe.read_fd(), STDIN_FILENO);
            dup2(read_pipe.write_fd(), STDOUT_FILENO);
            write_pipe.close();
            read_pipe.close();
            int result;
            if (with_path) {
                if (envp != 0) result = execvpe(argv[0], const_cast<char* const*>(argv), const_cast<char* const*>(envp));
                else result = execvp(argv[0], const_cast<char* const*>(argv));
            } else {
                if (envp != 0) result = execve(argv[0], const_cast<char* const*>(argv), const_cast<char* const*>(envp));
                else result = execv(argv[0], const_cast<char* const*>(argv));
            }
            if (result == -1) {
                // Note: no point writing to stdout here, it has been redirected
                std::cerr << "Error: Failed to launch program" << std::endl;
                exit(1);
            }
        } else {
            close(write_pipe.read_fd());
            close(read_pipe.write_fd());
            write_buf = std::unique_ptr<__gnu_cxx::stdio_filebuf<char> >(new __gnu_cxx::stdio_filebuf<char>(write_pipe.write_fd(), std::ios::out));
            read_buf  = std::unique_ptr<__gnu_cxx::stdio_filebuf<char> >(new __gnu_cxx::stdio_filebuf<char>(read_pipe.read_fd(), std::ios::in));
            stdin.rdbuf(write_buf.get());
            stdout.rdbuf(read_buf.get());
        }
    }

    void send_eof() { write_buf->close(); }

    int wait() {
        int status;
        waitpid(child_pid, &status, 0);
        return status;
    }
};

// ---------------- Usage example -------------------- //
#include <string>
using std::string;
using std::getline;
using std::cout;
using std::endl;

int main() {
    const char* const argv[] = {"/bin/cat", (const char*)0};
    spawn cat(argv);
    cat.stdin << "Hello" << std::endl;
    string s;
    getline(cat.stdout, s);
    cout << "Read from program: '" << s << "'" << endl;
    cat.send_eof();
    cout << "Waiting to terminate..." << endl;
    cout << "Status: " << cat.wait() << endl;
    return 0;
}
For many practical purposes, however, the Expect library could probably be a good choice (check out the code in the example subdirectory of its source distribution).

You've got the right idea, and I don't have time to analyze all of your code to pin down the specific problem, but I do want to mention a few things you may have overlooked about how programs and terminals work.
The idea of a terminal as a "file" is naïve. Programs like vi use a library (ncurses) to send special control characters (and to change terminal device driver settings). For example, vi puts the terminal device driver itself into a mode where it reads one character at a time, among other things.
It is very non-trivial to "control" a program like vi this way.
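If you really must drive a full-screen program like vi, you would normally hand it a pseudo-terminal rather than plain pipes. A rough sketch using forkpty() (Linux, link with -lutil; vi and the ":q!" keystrokes are only illustrative):
#include <pty.h>        /* forkpty(); on Linux link with -lutil */
#include <unistd.h>
#include <stdio.h>
#include <sys/wait.h>

int main(void) {
    int master;                          /* our end of the pseudo-terminal */
    pid_t pid = forkpty(&master, NULL, NULL, NULL);

    if (pid == 0) {                      /* child: the pty becomes its terminal */
        execlp("vi", "vi", (char *)NULL);
        _exit(127);
    }

    /* parent: everything vi draws arrives on 'master', and bytes written
       to 'master' look like keystrokes to vi */
    char buf[1024];
    ssize_t n = read(master, buf, sizeof buf);
    if (n > 0)
        fwrite(buf, 1, (size_t)n, stdout);   /* raw escape sequences, not plain text */
    write(master, ":q!\n", 4);               /* "type" a quit command */
    waitpid(pid, NULL, 0);
    return 0;
}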
On your simplified experiment...
Your buffer is one byte too small: you read up to 0x100 bytes and then store the terminating zero at buf[n], which can land one past the end of the array. Also, be aware that I/O is sometimes line buffered, so make sure the newline is actually getting transferred (you could use printf instead of sprintf/strlen/write, since the child's stdout is already hooked up to your pipe); otherwise you might not see data until a newline arrives. I don't remember pipes being line buffered, but it is worth a shot.
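One way to patch the read in your parent loop (a sketch using your own variable names):
/* leave one byte of buf free for the terminating '\0' */
if ((n = read(pipe_fds[0], buf, sizeof(buf) - 1)) > 0) {
    buf[n] = 0;                                     /* now always within bounds */
    fprintf(stderr, "Got: %s", buf);
} else if (n == 0) {
    fprintf(stderr, "child closed its stdout\n");   /* EOF */
} else {
    perror("read");
}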

Related

How to get the error stream pipe of the child process?

I'm trying to execute an external program via libpipeline, but I can't get the error stream of the child process. The code is shown below:
#include <pipeline.h>
#include <stdlib.h>
#include <stdio.h>
#include <pwd.h>
#include <unistd.h>
#include <string.h>

int main() {
    pipeline *cmd;
    const char *line;

    // get user info
    struct passwd *uinfo = getpwuid(geteuid());
    if (!uinfo) {
        perror(NULL);
        return EXIT_FAILURE;
    }
    printf("login shell: %s\n", uinfo->pw_shell);

    // Create the pipeline for the external application; NULL indicates the end of the argument list.
    cmd = pipeline_new_command_args(uinfo->pw_shell, "-c", "echo 'Hello World' 1>&2", NULL);
    pipeline_want_out(cmd, -1);
    pipeline_start(cmd);
    line = pipeline_peekline(cmd);
    if (!strstr(line, "coding: UTF-8")) printf("Unicode text follows:0\n");
    while ((line = pipeline_readline(cmd))) printf("stdout: %s", line);
    printf("exit code: %d\n", pipeline_wait(cmd));
    return EXIT_SUCCESS;
}
How can I read the error stream of the child process?
The environment is as follows:
operating system: Linux, 5.15.60-1-MANJARO
gcc version: 12.1.1 20220730 (GCC)
shell: zsh 5.9
Here is a complete application using only low-level POSIX calls (the example itself is C++, but nothing beyond the C library and iostream is used).
I based it off this tutorial.
#include <iostream>
#include <string>
#include <unistd.h>
#include <sys/wait.h>
#include <string.h>
#include <errno.h>
#include <signal.h>
#include <stdio.h>
#include <stdlib.h>

// Child process executes this
void child( int out[2], int err[2] ) {
    // Duplicate stdout and stderr in the child, taking care to retry on EINTR
    while ((dup2(out[1], STDOUT_FILENO) == -1) && (errno == EINTR)) {}
    while ((dup2(err[1], STDERR_FILENO) == -1) && (errno == EINTR)) {}
    // Child does not need to access any pipe ends as it is using
    // stdout/stderr
    ::close(out[1]);
    ::close(err[1]);
    ::close(out[0]);
    ::close(err[0]);
    // Print something nice to papa
    std::cerr << "Hello World" << std::endl;
}

// This is the parent executing
void parent( pid_t pid, int out[2], int err[2] ) {
    // Parent does not need the write end of the child pipes (only read)
    ::close(out[1]);
    ::close(err[1]);
    // Read stderr
    while ( true ) {
        char buf[4096];
        ssize_t nb = ::read( err[0], buf, sizeof(buf));
        if ( nb>0 ) {
            std::cerr << "Read " << nb << " bytes" << std::endl;
            std::cerr << " [" << std::string(buf,nb) << "]" << std::endl;
        } else if ( nb==0 ) {
            // pipe broke or was signaled
            break;
        } else if ( nb<0 ) {
            std::cerr << "Error " << strerror(errno) << std::endl;
            break;
        }
    }
    // wait for the child to end cleanly
    while( true ) {
        int pstatus = 0;
        pid_t res = ::waitpid(pid, &pstatus, 0);
        if ( res==-1 ) {
            perror("waitpid");
            exit(EXIT_FAILURE);
        }
        if ( res==pid ) {
            std::cout << "Process exited with status "
                      << WEXITSTATUS(pstatus) << std::endl;
            break;
        }
        if(WIFEXITED(pstatus)) break;
    }
}

int main() {
    // We don't want SIGPIPE signals
    signal(SIGPIPE, SIG_IGN);
    // Create the pipes between child and parent
    int err[2];
    int out[2];
    ::pipe(err);
    ::pipe(out);
    // fork
    pid_t pid = fork();
    if ( pid==0 ) {
        child(out,err);
        return 69;
    } else {
        parent(pid, out, err);
        return 0;
    }
}
It produces the following on stdout:
Process exited with status 69
and on stderr:
Read 11 bytes
 [Hello World]
Read 1 bytes
 [
]
Complete code: https://godbolt.org/z/hjaf8fYE6
How can I read the error stream of the child process?
As far as I can determine, libpipeline does not perform any stderr redirection, except as requested on a per-command basis via function pipecmd_discard_err(), or as you manually inject with the help of pipecmd_pre_exec() or pipeline_install_post_fork(). There is no built-in facility for reading the standard error output of any of the commands in the pipeline.
But it looks like you indeed can use some of the aforementioned mechanisms to do what you ask. Supposing that you want to capture the stderr output of all of the commands in the pipeline, you should be able to do this:
write a function to redirect stderr to the write end of a pipe. Something like this, maybe:
void redirect_stderr(void *pipe) {
    int *pipe_fds = pipe;
    // Hope that the following doesn't fail, because there aren't any
    // particularly good choices for how to handle that in this
    // context.
    dup2(pipe_fds[1], STDERR_FILENO);
    close(pipe_fds[1]);
}
Before you set up the pipeline, create a pipe for stderr redirection:
int err_pipe_fds[2];
int status;

status = pipe(err_pipe_fds);
// handle any error ...
Use the above to set up a pre-exec handler on each command in your pipeline:
pipecmd_pre_exec(command1, redirect_stderr, NULL, err_pipe_fds);
pipecmd_pre_exec(command2, redirect_stderr, NULL, err_pipe_fds);
// ...
(A pipeline-wide post-fork handler is not a good fit for this, because those do not accept arguments.)
Then, once the pipeline is running, you should be able to read the error output of all the commands from the read end of the pipe (file descriptor err_pipe_fds[0]). If you prefer stream I/O for that, then use fdopen() to wrap the file descriptor in a stream.
HOWEVER, do note that you should set up a separate thread to consume the stderr data, and you must start that thread before starting the pipeline. Otherwise, there is a risk that the pipeline will deadlock on account of the stderr pipe's buffer filling.
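A minimal sketch of such a consumer thread, using plain POSIX calls and the err_pipe_fds array from above (the function name and buffer size are only illustrative):
#include <pthread.h>
#include <stdio.h>

/* Reads everything written to the read end of the stderr pipe and
   forwards it line by line. Start this before pipeline_start(). */
static void *drain_stderr(void *arg) {
    int fd = *(int *)arg;                 /* err_pipe_fds[0] */
    FILE *err = fdopen(fd, "r");          /* wrap the descriptor in a stream */
    char line[1024];
    while (err && fgets(line, sizeof line, err))
        fprintf(stderr, "stderr: %s", line);
    if (err)
        fclose(err);
    return NULL;
}

/* usage sketch, before pipeline_start(cmd):
       pthread_t t;
       pthread_create(&t, NULL, drain_stderr, &err_pipe_fds[0]);
   after pipeline_start(cmd), close err_pipe_fds[1] in the parent so the
   reader eventually sees EOF, and after pipeline_wait(cmd):
       pthread_join(t, NULL);
*/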

is it possible to read and write with the same file descriptor in C

I am trying to write to a file and have another process display what I wrote. The code I came up with:
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <fcntl.h>

void readLine (int fd, char *str) {
    int n;
    do {
        n = read (fd, str, 1);
    } while (*str++ != '\0');
}

int main(int argc, char **argv){
    int fd = open("sharedFile", O_CREAT|O_RDWR|O_TRUNC, 0600);
    if (fork() == 0) {
        char buf[1000];
        while (1) {
            readLine(fd, buf);
            printf("%s\n", buf);
        }
    } else {
        while (1) {
            sleep(1);
            write(fd, "abcd", strlen("abcd")+1);
        }
    }
}
The output I want (each line one second apart from the next):
abcd
abcd
abcd
....
Unfortunately this code doesn't work; it seems that the child process (the reader of the file "sharedFile") reads junk from the file, because somehow it reads values even when the file is empty.
When trying to debug the code, the readLine function never reads the written file correctly; it always reads 0 bytes.
Can someone help?
First of all, when a file descriptor becomes shared after forking, both the parent and child are pointing to the same open file description, which means in particular that they share the same file position. This is explained in the fork() man page.
So whenever the parent writes, the position is updated to the end of the file, and thus the child is always attempting to read at the end of the file, where there's no data. That's why read() returns 0, just as normal when you hit the end of a file.
(When this happens, you should not attempt to do anything with the data in the buffer. It's not that you're "reading junk", it's that you're not reading anything but are then pretending that whatever junk was in the buffer is what you just read. In particular your code utterly disregards the return value from read(), which is how you're supposed to tell what you actually read.)
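A tiny demonstration of the shared file position, if you want to see it for yourself (error handling omitted):
#include <stdio.h>
#include <unistd.h>
#include <fcntl.h>
#include <sys/wait.h>

int main(void) {
    int fd = open("sharedFile", O_CREAT | O_RDWR | O_TRUNC, 0600);
    if (fork() == 0) {              /* child shares the same open file description */
        sleep(1);                   /* let the parent write first */
        printf("child sees offset %ld\n", (long)lseek(fd, 0, SEEK_CUR));
        _exit(0);                   /* prints 5, not 0: the parent's write moved it */
    }
    write(fd, "abcd", 5);           /* parent advances the shared offset */
    wait(NULL);
    return 0;
}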
If you want the child to have an independent file position, then the child needs to open() the file separately for itself and get a new fd pointing to a new file description.
But still, when the child has read all the data that's currently in the file, read() will again return 0; it won't wait around for the parent to write some more. The fact that some other process has the file open for writing doesn't affect the semantics of read() on a regular file.
So what you'll need to do instead is that when read() returns 0, you manually sleep for a while and then try again. When there's more data in the file, read() will return a positive number, and you can then process the data you read. Or, there are more elegant but more complicated approaches using system-specific APIs like Linux's inotify, which can sleep until a file's contents change. You may be familiar with tail -f, which uses some combination of these approaches on different systems.
Another dangerous bug is that if someone else writes text to the file that doesn't contain a null byte where expected, your child will read more data than the buffer can fit, thus overrunning it. This can be an exploitable security vulnerability.
Here is a version of the code that fixes these bugs and works for me:
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/stat.h>
#include <fcntl.h>

void readLine (int fd, char *str, size_t max) {
    size_t pos = 0;
    while (pos < max) {
        ssize_t n = read(fd, str + pos, 1);
        if (n == 0) {
            sleep(1);
        } else if (n == 1) {
            if (str[pos] == '\0') {
                return;
            }
            pos++;
        } else {
            perror("read() failure");
            exit(2);
        }
    }
    fprintf(stderr, "Didn't receive null terminator in time\n");
    exit(2);
}

int main(int argc, char ** argv){
    int fd = open("sharedFile", O_CREAT|O_RDWR|O_TRUNC, 0600);
    if (fd < 0) {
        perror("parent opening sharedFile");
        exit(2);
    }
    pid_t pid = fork();
    if (pid == 0){
        int newfd = open("sharedFile", O_RDONLY);
        if (newfd < 0) {
            perror("child opening sharedFile");
            exit(2);
        }
        char buf[1000];
        while (1) {
            readLine(newfd, buf, 1000);
            printf("%s\n", buf);
        }
    } else if (pid > 0) {
        while (1){
            sleep(1);
            write(fd, "abcd", strlen("abcd")+1);
        }
    } else {
        perror("fork");
        exit(2);
    }
    return 0;
}

c - spawned a bash shell. Shell died but pipe not broken?

Problem
I'm trying to pipe contents from the main routine to an execvp'd bash shell. I'm encountering a problem where, when I write "exit" into the subshell, my program doesn't tell me that the pipe is broken. It should be, though, right? The process died, so reads from the pipe should return EOF (or writes should raise SIGPIPE). It doesn't happen, however; the program just keeps on reading/writing as normal.
Code
The code is attached here:
/************************************************************
 * Includes:
 *  ioctl - useless(?)
 *  termios, tcsetattr, tcgetattr - are for setting the
 *    noncanonical, character-at-a-time terminal.
 *  fork, exec - creating the child process for part 2.
 *  pthread, pipe - creating the pipe process to communicate
 *    with the child shell.
 *  kill - to exit the process
 *  atexit - does some cleanups. Used in termios, tcsetattr,
 *    tcgetattr.
 ************************************************************/
#include <sys/ioctl.h>   // ioctl
#include <termios.h>     // termios, tcsetattr, tcgetattr
#include <unistd.h>      // fork, exec, pipe
#include <sys/wait.h>    // waitpid
#include <pthread.h>     // pthread
#include <signal.h>      // kill
#include <stdlib.h>      // atexit
#include <stdio.h>       // fprintf and other utility functions
#include <getopt.h>      // getopt

/**********************
 * GLOBALS
 **********************/
pid_t pid;

/**********************
 * CONSTANTS
 **********************/
static const int BUFFER_SIZE = 16;
static const int STDIN_FD  = 0;
static const int STDOUT_FD = 1;
static const int STDERR_FD = 2;

// these attributes are reverted to later
struct termios saved_attributes;

// to revert the saved attributes
void
reset_input_mode (void) {
    tcsetattr (STDIN_FILENO, TCSANOW, &saved_attributes);
}

// to set the input mode to correct non-canonical mode.
void
set_input_mode (void) {
    struct termios tattr;

    /* Make sure stdin is a terminal. */
    if (!isatty (STDIN_FILENO))
    {
        fprintf (stderr, "Not a terminal.\n");
        exit (EXIT_FAILURE);
    }

    /* Save the terminal attributes so we can restore them later. */
    tcgetattr (STDIN_FILENO, &saved_attributes);
    atexit (reset_input_mode);

    /* Set the funny terminal modes. */
    tcgetattr (STDIN_FILENO, &tattr);
    tattr.c_lflag &= ~(ICANON|ECHO); /* Clear ICANON and ECHO. */
    tattr.c_cc[VMIN] = 1;
    tattr.c_cc[VTIME] = 0;
    tcsetattr (STDIN_FILENO, TCSAFLUSH, &tattr);
}

// pthread 1 will read from pipe_fd[0], which
// is really the child's pipe_fd[1] (stdout).
// It then prints out the contents.
void* thread_read(void* arg){
    int* pipe_fd = ((int *) arg);
    int read_fd = pipe_fd[0];
    int write_fd = pipe_fd[1];
    char c;
    while(1){
        int bytes_read = read(read_fd, &c, 1);
        if(bytes_read > 0){
            putchar(c);
        }
        else{
            close(read_fd);
            close(write_fd);
            fprintf(stdout, "The read broke.");
            fflush(stdout);
            break;
        }
    }
}

// pthread 2 will write to child_pipe_fd[1], which
// is really the child's stdin.
// but in addition to writing to child_pipe_fd[1],
// we must also print to stdout what our
// argument was into the terminal. (so pthread 2
// does extra).
void* thread_write(void* arg){
    set_input_mode();
    int* pipe_args = ((int *) arg);
    int child_read_fd   = pipe_args[0];
    int child_write_fd  = pipe_args[1];
    int parent_read_fd  = pipe_args[2];
    int parent_write_fd = pipe_args[3];
    char c;
    while(1) {
        int bytes_read = read(STDIN_FD, &c, 1);
        write(child_write_fd, &c, bytes_read);
        putchar(c);
        if(c == 0x04){
            // If an EOF has been detected, then
            // we need to close the pipes.
            close(child_write_fd);
            close(child_read_fd);
            close(parent_write_fd);
            close(parent_read_fd);
            kill(pid, SIGHUP);
            break;
        }
    }
}

int main(int argc, char* argv[]) {
    /***************************
     * Getopt process here for --shell
     **************************/
    int child_pipe_fd[2];
    int parent_pipe_fd[2];
    pipe(child_pipe_fd);
    pipe(parent_pipe_fd);

    // We need to spawn a subshell.
    pid = fork();

    if(pid < 0){
        perror("Forking was unsuccessful. Exiting");
        exit(EXIT_FAILURE);
    }
    else if(pid == 0){ // is the child.
        // We dup the fd and close the pipe.
        close(0);                  // close stdin. child's pipe should read.
        dup(child_pipe_fd[0]);     // pipe_fd[0] is the read. Make read the stdin.
        close(child_pipe_fd[0]);

        close(1);                  // close stdout
        dup(parent_pipe_fd[1]);    // pipe_fd[1] is the write. Make write the stdout.
        close(parent_pipe_fd[1]);

        char* BASH[] = {"/bin/bash", NULL};
        execvp(BASH[0], BASH);
    }
    else{ // is the parent
        // We dup the fd and close the pipe.
        //
        // create 2 pthreads.
        // pthread 1 will read from pipe_fd[0], which
        // is really the child's pipe_fd[1] (stdout).
        // It then prints out the contents.
        //
        // pthread 2 will write to pipe_fd[1], which
        // is really the child's pipe_fd[0] (stdin)
        // but in addition to writing to pipe_fd[1],
        // we must also print to stdout what our
        // argument was into the terminal. (so pthread 2
        // does extra).
        //
        // We also need to take care of signal handling:
        signal(SIGINT, sigint_handler);
        /*signal(SIGPIPE, sigpipe_handler);*/

        int write_args[] = {child_pipe_fd[0], child_pipe_fd[1],
                            parent_pipe_fd[0], parent_pipe_fd[1]};
        pthread_t t[2];
        pthread_create(t, NULL, thread_read, parent_pipe_fd);
        pthread_create(t+1, NULL, thread_write, write_args);
        pthread_join(t[0], NULL);
        pthread_join(t[1], NULL);

        int status;
        if (waitpid(pid, &status, 0) == -1) {
            perror("Waiting for child failed.");
            exit(EXIT_FAILURE);
        }
        printf("Subshell exited with the error code %d", status);
        exit(0);
    }
    return 0;
}
The program basically pipes input from the terminal into the subshell, executes it there, and returns the output. To write to the pipe, I have a pthread that forwards stdin into the subshell. To read from the pipe, I have a pthread that reads the subshell's output back in the parent. To detect the pipe breaking when the subshell dies (calls exit), I look for EOF in the read thread.
My attempts
I added a check for the 0x04 character (EOF) and checked for bytes_read == 0 and bytes_read < 0. The reader never seems to get the memo unless I explicitly close the pipes on the writing end myself. It only sees EOF if I type ^D (which my code handles by closing all of the child's and parent's pipe ends).
Any comments would be appreciated! Thank you.
Your parent process is holding copies of the child's file descriptors. Thus, even after the child has exited, those FDs are still open -- so the other ends of those pipes remain open as well, preventing any EOF or SIGPIPE.
Modify your code as follows:
else {
    // pid > 0; this is the parent
    close(child_pipe_fd[0]);    // ADD THIS LINE
    close(parent_pipe_fd[1]);   // ADD THIS LINE
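With those descriptors closed, the reader thread's read() returns 0 as soon as bash exits, and any later write() into child_pipe_fd[1] raises SIGPIPE. If you would rather get an error code than a signal, one common pattern (optional, not part of the fix above; the helper names are only illustrative) is:
#include <signal.h>
#include <errno.h>
#include <stdio.h>
#include <unistd.h>

/* Ignore SIGPIPE so that writing to a broken pipe fails with EPIPE
   instead of killing the whole process. */
void setup_pipe_handling(void) {
    signal(SIGPIPE, SIG_IGN);
}

/* then, in the writer thread: */
void write_or_report(int fd, const char *c) {
    if (write(fd, c, 1) == -1 && errno == EPIPE)
        fprintf(stderr, "The shell has exited; pipe is broken.\n");
}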

Capture stdout to a string and output it back to stdout in C

C language is used. I have a function that writes to stdout.
I would like to capture that output, modify it a bit (replacing some strings), and then output it again to stdout. So I want to start with:
char huge_string_buf[MASSIVE_SIZE];
freopen("NUL", "a", stdout);   /* or: freopen("/dev/null", "a", stdout); */
setbuf(stdout, huge_string_buf);
/* modify huge_string_buf */
The question is now: how do I output huge_string_buf back to the original stdout?
One idea is to mimic the functionality of the standard Unix utility tee, but to do so entirely within your program, without relying on outside redirection.
So I've written a simple function, mytee(), which seems to work. It uses shmget(), pipe(), fork(), and dup2():
#include <stdlib.h>
#include <stdio.h>
#include <unistd.h>
#include <sys/ipc.h>
#include <sys/shm.h>

static char *mytee(int size) {
    int shmid = shmget(IPC_PRIVATE, size + 1, 0660 | IPC_CREAT);
    int pipe_fds[2];
    pipe(pipe_fds);

    switch (fork()) {
    case -1:  // = error
        perror("fork");
        exit(EXIT_FAILURE);

    case 0: { // = child
        char *out = shmat(shmid, 0, 0), c;
        int i = 0;
        out[0] = 0;
        dup2(pipe_fds[0], 0);            // redirect pipe to child's stdin
        setvbuf(stdout, 0, _IONBF, 0);
        while (read(0, &c, 1) == 1 && i < size) {
            printf("<%c>", c);           // pass parent's stdout to real stdout,
            out[i++] = c;                // and then buffer in mycapture buffer
            out[i] = 0;                  // (the extra <> are just for clarity)
        }
        _exit(EXIT_SUCCESS);
    }

    default:  // = parent
        dup2(pipe_fds[1], 1);            // replace stdout with output to child
        setvbuf(stdout, 0, _IONBF, 0);
        return shmat(shmid, 0, 0);       // return the child's capture buffer
    }
}
My test program is:
int main(void) {
    char *mycapture = mytee(100);   // capture first 100 bytes
    printf("Hello World");          // sample test string
    sleep(1);
    fprintf(stderr, "\nCaptured: <%s>\n", mycapture);
    return 0;
}
The output is:
<H><e><l><l><o>< ><W><o><r><l><d>
Captured: <Hello World>
To use this in your application, in mytee() you'll need to replace the test statement printf("<%c>", c) with just write(1, &c, 1). And you may need to handle signals in the call to read. And after each of the two dup2()'s, you may want to add:
close(pipe_fds[0]);
close(pipe_fds[1]);
For a reference on this sort of stuff, see for example the excellent and short 27-year-old 220-page O'Reilly book Using C on the Unix System by Dave Curry.
The Unix way to do this is really to just write a little program that does the input processing you need, and then pipe the output of that other program to it on the command line.
If you insist on keeping it all in your C program, what I'd do instead is rewrite that function to have it send its output to a given char buffer (preferably returning the buffer's char *), so that it can be sent to stdout or processed as the client desires.
For example, the old way:
void usage () {
    printf ("usage: frob sourcefile [-options]\n");
}
...and the new way:
char * usage(char * buffer) {
    strcpy (buffer, "usage: frob sourcefile [-options]\n");
    return buffer;
}
I really don't like tricky games with file descriptors. Can't you modify the function so that it returns its data some other way than by writing to stdout?
If you don't have access to the source code, and you can't do that, then I would suggest breaking out the code that writes to stdout into a small separate program, and run that as another process. It is easy and clean to redirect output from a process (maybe through a named pipe), and then you will have no problem with outputting to stdout from the process that receives the data.
Also, depending on the sort of editing you wish to do, you might be better off using a high-level language like Python to edit the data.
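For the named-pipe variant, the reader side is only a few calls; a rough sketch (the FIFO path and the processing step are placeholders, and the writer program is assumed to have been started with its stdout redirected to that path):
#include <stdio.h>
#include <sys/types.h>
#include <sys/stat.h>

int main(void) {
    /* e.g. the writer was started as:  ./writer > /tmp/capture_fifo */
    const char *fifo_path = "/tmp/capture_fifo";      /* placeholder */
    mkfifo(fifo_path, 0600);                          /* create the named pipe */

    FILE *in = fopen(fifo_path, "r");                 /* blocks until a writer opens it */
    char line[1024];
    while (in && fgets(line, sizeof line, in)) {
        /* ... edit the line here, then forward it to our real stdout ... */
        fputs(line, stdout);
    }
    if (in)
        fclose(in);
    return 0;
}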
char huge_string_buf[MASSIVE_SIZE];
/* keep a handle on the original stdout before redirecting it */
FILE *stdout_ori = fdopen(dup(fileno(stdout)), "a");
freopen("/dev/null", "a", stdout);   /* "NUL" on Windows */
setbuf(stdout, huge_string_buf);
/* modify huge_string_buf */
/* then write the result to stdout_ori */
If you are on a unix system, you can use pipes (you don't need to use fork). I'll try to thoroughly comment the code below so it doesn't look like I am doing any "magic."
#include <stdio.h>
#include <unistd.h>

int main()
{
    // Flush stdout first if you've previously printed something
    fflush(stdout);

    // Save stdout so it can be restored later
    int temp_stdout = dup(fileno(stdout));

    // Redirect stdout to a new pipe
    int pipes[2];
    pipe(pipes);
    dup2(pipes[1], fileno(stdout));

    // Do whatever here. stdout will be captured.
    func_that_prints_something();

    // Terminate the captured output with a zero byte
    write(pipes[1], "", 1);

    // Restore stdout
    fflush(stdout);
    dup2(temp_stdout, fileno(stdout));

    // Print the captured output
    while (1)
    {
        char c;
        read(pipes[0], &c, 1);
        if (c == 0)
            break;
        putc(c, stdout);
    }

    // Alternatively, because the output is zero terminated, you could
    // read it into a buffer and print it with printf instead of the
    // byte-by-byte loop above. You just need to make sure you read
    // enough of the captured output:
    //
    //     char buffer[1024];
    //     read(pipes[0], buffer, sizeof buffer);
    //     printf("%s", buffer);

    return 0;
}

How to output to a file with information from a pipe in C?

I'm confused about what I'm doing wrong when I'm attempting to output to a file after I've execed a second program.
Say I have an input file with the following names:
Marty B. Beach 7 8
zachary b. Whitaker 12 23
Ivan sanchez 02 15
Jim Toolonganame 9 03
After my programs finish, the students' names will have been converted to their usernames and written to a file such as this:
mbb0708
zbw1223
is0215
jt0903
As my program currently stands, it outputs nothing to the file and the terminal seems to be stuck in an infinite loop, even though I tested my converter program on its own beforehand and made sure it writes the usernames correctly to stdout.
I'm not sure what I'm doing wrong here; this is my first time programming with pipes. I know I have to use read and write to move the data, but with the dup2 call in place, is that necessary for the read alone?
manager.c
#include <sys/types.h>
#include <unistd.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>

int main(int argc, char** argv)
{
    pid_t pid;
    int nbytes;
    /* Buffer to hold data from the pipe */
    char buffer[BUFSIZ + 1];
    /* Pipe information */
    int commpipe[2];

    if (pipe(commpipe))
    {
        fprintf(stderr, "Pipe failed.\n");
        return EXIT_FAILURE;
    }

    if ((pid = fork()) == -1)
    {
        fprintf(stderr, "Fork error. Exiting.\n");
        exit(1);
    }
    else if (pid == 0)
    {
        /* This is the child process. Close our copy of the write end of the file descriptor. */
        close(commpipe[1]);
        /* Connect the read end of the pipe to standard input */
        dup2(commpipe[0], STDIN_FILENO);
        /* Program will convert the students' names to their respective usernames */
        execl("converter", "converter", NULL);
        /* Exit if failure appears */
        exit(EXIT_FAILURE);
    }
    else
    {
        FILE *file;
        file = fopen("usernames.txt", "a+"); // append to a file (add text to a file or create it if it does not exist)
        /* Close our copy of the read end of the file descriptor */
        //close(commpipe[1]);
        nbytes = write(commpipe[1], buffer, BUFSIZ);
        // Read from pipe here first?
        // Output to usernames.txt the usernames from the pipe.
        fprintf(file, "%s", buffer);
        /* Wait for the child process to finish */
        waitpid(pid, NULL, 0);
    }
    return 0;
}
One problem is that after manager has sent all the data to converter, the manager is not closing commpipe[1]. Because of that, converter will never get EOF on stdin so will not exit.
Most likely manager isn't getting any data back from converter because of buffering. Some implementations of stdio use full buffering (as opposed to line buffering) when not writing to a terminal. Once you fix the previous error and the converter process exits, its stdout will be flushed. You can also consider adding fflush(stdout) after your puts line.
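Concretely, once manager has written everything converter should see, the parent branch could end along these lines (using the question's variable names; nbytes here stands for the number of bytes you actually filled in, which the original code doesn't track):
/* after filling 'buffer' with the data converter should read ... */
write(commpipe[1], buffer, nbytes);   /* nbytes = real data, not BUFSIZ          */
close(commpipe[1]);                   /* converter now sees EOF on its stdin      */
waitpid(pid, NULL, 0);                /* converter exits, flushing its own stdout */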
Have a look at the OpenGroup site; there's an example that looks similar to yours. I suggest you get the sample working first with some hard-coded data. Once that is working, add the code to read and write the results.
I made some minor changes to get the example working:
#include <stdlib.h>
#include <unistd.h>
#include <stdio.h>
#include <assert.h>

int main(int argc, char** argv){
    int fildes[2];
    const int BSIZE = 100;
    char buf[BSIZE];
    ssize_t nbytes;
    int status;

    status = pipe(fildes);
    if (status == -1 ) {
        /* an error occurred */
        printf("Error!\n");
        exit(-1);
    }

    printf("Forking!\n");
    switch (fork()) {
    case -1: /* Handle error */
        printf("Broken Handle :(\n");
        break;

    case 0:  /* Child - reads from pipe */
        printf("Child!\n");
        close(fildes[1]);                      /* Write end is unused */
        nbytes = read(fildes[0], buf, BSIZE);  /* Get data from pipe */
        /* At this point, a further read would see end of file ... */
        assert(nbytes < BSIZE);                /* Prevent buffer overflow */
        buf[nbytes] = '\0';                    /* buf won't be NUL terminated */
        printf("Child received %s", buf);
        close(fildes[0]);                      /* Finished with pipe */
        fflush(stdout);
        exit(EXIT_SUCCESS);

    default: /* Parent - writes to pipe */
        printf("Parent!\n");
        close(fildes[0]);                      /* Read end is unused */
        write(fildes[1], "Hello world\n", 12); /* Write data on pipe */
        close(fildes[1]);                      /* Child will see EOF */
        /* Note that the Parent should wait for a response from the
           child here, because the child process will be terminated once
           the parent exits */
        exit(EXIT_SUCCESS);
    }
    return 0;
}
As I understand it, your converter program reads lines from stdin and writes them to stdout. As a pipe is a unidirectional entity, you will need TWO of them to communicate with the manager - one to send data to the converter and one to receive output from it (sketched below).
Maybe you should also consider enhancing the converter to take the names of an input and an output file as optional arguments.
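A rough sketch of that two-pipe layout in manager.c, shown as a fragment (names are illustrative and error handling is omitted):
int to_conv[2], from_conv[2];            /* manager -> converter, converter -> manager */
pipe(to_conv);
pipe(from_conv);

if (fork() == 0) {                       /* converter */
    dup2(to_conv[0], STDIN_FILENO);
    dup2(from_conv[1], STDOUT_FILENO);
    close(to_conv[0]);   close(to_conv[1]);
    close(from_conv[0]); close(from_conv[1]);
    execl("converter", "converter", (char *)NULL);
    _exit(127);
}

/* manager */
close(to_conv[0]);
close(from_conv[1]);
/* write the student names into to_conv[1], then close it,
   then read the usernames from from_conv[0] and append them
   to usernames.txt */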
