I am trying to make a program that takes a command, including pipes, and then executes it. This is a simplified version of it where I'm trying to pipe the ls and wc commands:
#include <stdio.h>
#include <string.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <fcntl.h>
int main(){
char* arglist1[] = {"ls", NULL}; // writing process
char* arglist2[] = {"wc", NULL}; // reading process
int pipefd[2];
pid_t p1, p2;
if (pipe(pipefd) < 0) {
printf("\nPipe could not be initialized");
return 0;
}
p1 = fork();
if (p1 < 0) {
printf("\nCould not fork");
return 0;
}
if (p1 == 0) { // Child 1 executing it needs to write at the write end
close(pipefd[0]);
dup2(pipefd[1], STDOUT_FILENO);
close(pipefd[1]);
if (execvp(arglist1[0], arglist1) < 0) {
printf("\nCould not execute command 1..");
exit(0);
}
} else { // Parent executing
p2 = fork();
if (p2 < 0) {
printf("\nCould not fork");
return 0;
}
if (p2 == 0) { // Child 2 executing it needs to read at the read end
close(pipefd[1]);
dup2(pipefd[0], STDIN_FILENO);
close(pipefd[0]);
if (execvp(arglist2[0], arglist2) < 0) {
printf("\nCould not execute command 2..");
exit(0);
}
} else { // parent executing, waiting for two children
wait(NULL);
wait(NULL);
}
}
printf("\n");
return 0;
}
Although there is error handling in the program, it neither shows anything nor ends. Where is it blocking?
Your problem is that the parent doesn't close both of the pipe's file descriptors. The wc process won't die until it gets EOF on the pipe, and that won't happen until every process that has the write end of the pipe open has closed it. You need to close both ends of the pipe in the parent before waiting for the children to die.
Rule of thumb: If you dup2() one end of a pipe to standard input or standard output, close both of the original file descriptors returned by pipe() as soon as possible. In particular, you should close them before using any of the exec*() family of functions.
The rule also applies if you duplicate the descriptors with either dup() or fcntl() with F_DUPFD or F_DUPFD_CLOEXEC.
If the parent process will not communicate with any of its children via
the pipe, it must ensure that it closes both ends of the pipe early
enough (before waiting, for example) so that its children can receive
EOF indications on read (or get SIGPIPE signals or write errors on
write), rather than blocking indefinitely.
Even if the parent uses the pipe without using dup2(), it should
normally close at least one end of the pipe — it is extremely rare for
a program to read and write on both ends of a single pipe.
Note that the O_CLOEXEC option to open(), and the FD_CLOEXEC and F_DUPFD_CLOEXEC options to fcntl(), can also factor into this discussion.
If you use posix_spawn() and its extensive family of support functions (21 functions in total), you will need to review how to close file descriptors in the spawned process (posix_spawn_file_actions_addclose(), etc.).
Note that using dup2(a, b) is safer than using close(b); dup(a);
for a variety of reasons.
One is that if you want to force the file descriptor to a larger than
usual number, dup2() is the only sensible way to do that.
Another is that if a is the same as b (e.g. both 0), then dup2() handles it correctly (it doesn't close b before duplicating a), whereas the separate close() and dup() sequence fails horribly.
This is an unlikely, but not impossible, circumstance.
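A tiny sketch of that corner case (the descriptor numbers are chosen purely for illustration):
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <unistd.h>

int main(void) {
    int fd = 0;   /* suppose the descriptor we want on standard input is already 0 */

    /* dup2() copes with a == b: it simply returns b and leaves the descriptor open. */
    if (dup2(fd, 0) == 0)
        fprintf(stderr, "dup2(0, 0) succeeded; fd 0 is still open\n");

    /* close() followed by dup() destroys the only copy first, so dup() fails. */
    close(0);
    if (dup(fd) < 0)
        fprintf(stderr, "dup(0) after close(0) failed: %s\n", strerror(errno));
    return 0;
}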
Side notes:
Error messages should be written to stderr, not stdout, and should end with a newline. They don't normally need to start with a newline.
You don't need to test the return value from the exec*() family of functions. If they succeed, they don't return; if they return, they failed. But it is important to have code after the exec*() call to trap the error.
The program should exit with a non-zero status (e.g. EXIT_FAILURE) if the exec*() function fails. Exiting with status zero reports success.
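Putting that advice together, a corrected version of the question's program might look like this sketch (the parent closes both pipe ends before waiting, errors go to stderr, and a failed exec exits with a non-zero status):
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void) {
    char *arglist1[] = { "ls", NULL };   /* writing process */
    char *arglist2[] = { "wc", NULL };   /* reading process */
    int pipefd[2];

    if (pipe(pipefd) < 0) {
        perror("pipe");
        return EXIT_FAILURE;
    }

    pid_t p1 = fork();
    if (p1 < 0) { perror("fork"); return EXIT_FAILURE; }
    if (p1 == 0) {                       /* child 1 writes to the pipe */
        dup2(pipefd[1], STDOUT_FILENO);
        close(pipefd[0]);
        close(pipefd[1]);
        execvp(arglist1[0], arglist1);
        fprintf(stderr, "Could not execute command 1\n");
        exit(EXIT_FAILURE);
    }

    pid_t p2 = fork();
    if (p2 < 0) { perror("fork"); return EXIT_FAILURE; }
    if (p2 == 0) {                       /* child 2 reads from the pipe */
        dup2(pipefd[0], STDIN_FILENO);
        close(pipefd[0]);
        close(pipefd[1]);
        execvp(arglist2[0], arglist2);
        fprintf(stderr, "Could not execute command 2\n");
        exit(EXIT_FAILURE);
    }

    /* Parent: close both pipe ends before waiting, so wc can see EOF. */
    close(pipefd[0]);
    close(pipefd[1]);
    wait(NULL);
    wait(NULL);
    return 0;
}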
I'm new in Unix systems programming and I'm struggling to understand file descriptors and pipes. Let's consider this simple code:
#include <stdlib.h>
#include <unistd.h>
#include <sys/wait.h>
#include <string.h>
int main() {
int fd[2], p;
char *m = "123456789\n", c;
pipe(fd);
p = fork();
if (p == 0) {
// child
while(read(fd[0], &c, 1) > 0) write(1, &c, 1);
}
else {
// parent
write(fd[1], m, strlen(m));
close(fd[1]);
wait(NULL);
}
exit (0);
}
When I compile and run the code, it outputs 123456789 but the process never ends unless I issue ^C. Actually, both processes appear as stopped in htop.
If the child closes fd[1] prior to read() then it seems to work OK, but I don't understand why. The fds are shared between both processes and the parent closes fd[1] after writing. Why then doesn't the child get EOF when reading?
Thank you in advance!
Well, first of all, your parent process is waiting for the child to terminate in the wait(2) system call, while your child is blocked in read(2) waiting for another character from the pipe. Both processes are blocked, so you need to act externally to get them out of that state. The problem is that the child process doesn't close its writing descriptor of the pipe (the parent also doesn't close its reading descriptor, but that doesn't matter here). Quite simply, the pipe blocks any reader while at least one writing descriptor is still open; only when all writing descriptors are closed does read() return 0 to the reader.
When you did the fork(2), both pipe descriptors (fd[0] and fd[1]) were duplicated into the child process, so you have a pipe with two open file descriptors for writing (one in the parent, one in the child) and two open descriptors for reading (again, one in the parent, one in the child). As long as one writer keeps the pipe open for writing (the child process in this case), the read made by the child still blocks. The kernel cannot treat this as an anomaly, because the child could still write to the pipe if another thread (or a signal handler) wanted to.
By the way, let me comment on a few things that are wrong in your code:
First, you consider only two outcomes of fork(), the parent case and the child case, but if the fork fails it returns -1, and then you'd have a parent process writing to a pipe with no reading process, which would probably block (as I said, that is not what happens in your case, but it is still an error). You always have to check system calls for errors; don't assume your fork() call can never fail (note that -1 compares unequal to 0 and so falls through to the parent's code). There's only one system call you can reasonably execute without checking for errors, and that is close(2) (although even that is controversial).
The same applies to read() and write(). A better solution to your problem would be to use a larger buffer (not just one char, to reduce the number of system calls made by your program and so speed it up) and to use the return value of read() as the count parameter of the write() call.
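For example, the child's copy loop could read into a buffer and hand read()'s return value straight to write(), along these lines (a sketch, with error handling still omitted):
// child: copy the pipe to stdout a buffer at a time instead of one byte at a time
char buf[4096];
ssize_t n;
while ((n = read(fd[0], buf, sizeof buf)) > 0)
    write(1, buf, n);    /* pass read()'s byte count on to write() */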
Your program should work (it does on my system, indeed) by just inserting the following line:
close(fd[1]);
just before the while loop in the child code, as shown here:
#include <stdlib.h>
#include <unistd.h>
#include <sys/wait.h>
#include <string.h>
int main() {
int fd[2], p;
char *m = "123456789\n", c;
pipe(fd);
p = fork();
if (p == 0) {
// child
close(fd[1]); // <--- this close is fundamental for the pipe to work properly.
while(read(fd[0], &c, 1) > 0) write(1, &c, 1);
}
else if (p > 0) {
// parent
// another close(fd[0]); should be included here
write(fd[1], m, strlen(m));
close(fd[1]);
wait(NULL);
} else {
// include error processing for fork() here
}
exit (0);
}
If the child closes fd[1] prior to read() then it seems to work OK but I don't understand why.
That's what you need to do. There's not much more to it than that. A read from the read end of a pipe won't return 0 (signaling EOF) until the kernel is sure that nothing will ever write to the write end of that pipe again, and as long as it's still open anywhere, including the process doing the reading, it can't be sure of that.
I am trying to find out how I can send the output of one process into a child process. I have gone down a journey learning about file descriptors and pipes. I think I am almost there but am missing a key component.
This is what I have so far:
#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <string.h>
#include <unistd.h>
int main(int argc, char *argv[]) {
int fd[2];
pid_t sort_pid;
/* Create the pipe */
if(pipe(fd) == -1) {
fprintf(stderr, "Pipe failed\n");
exit(EXIT_FAILURE);
}
/* create child process that will sort */
sort_pid = fork();
if(sort_pid < 0) { // failed to fork
fprintf(stderr, "Child Fork failed\n");
exit(EXIT_FAILURE);
}
else if(sort_pid == 0) { // child process
close(0); // close stdin
dup2(fd[0], 0); // make stdin same as fd[0]
close(fd[1]); // don't need this end of the pipe
execlp("D:/Cygwin/bin/sort", "sort", NULL);
}
else { // parent process
close(1); // close stdout
dup2(fd[1], 1); // make stdout same as fd[1]
close(fd[0]); // don't need this end of the pipe
printf("Hello\n");
printf("Bye\n");
printf("Hi\n");
printf("G'day\n");
printf("It Works!\n");
wait(NULL);
}
return EXIT_SUCCESS;
}
This doesn't work, as it seems to go into an endless loop or something. I tried combinations of wait(), but that doesn't help either.
I am doing this to learn how to apply this idea in my actual program. In my actual program I read files, parse them line by line and save the processed data to a static array of structs. I want to be able to then generate output based on these results and use the fork() and execv() syscalls to sort the output.
This is ultimately for a project in uni.
These are similar examples which I dissected to get to the stage I am at so far:
pipe() and fork() in c
How to call UNIX sort command on data in pipe
Using dup,pipe,fifo to communicate with the child process
Furthermore, I read the manual pages on the relevant syscalls to try to understand them. I will admit my knowledge of pipes and how to use them is still basically nothing, as this is my first ever try with them.
Any help is appreciated, even further sources of information I could look into myself. I seem to have exhausted most of the useful stuff a Google search gives me.
sort will read until it encounters end-of-file. You therefore have to close the write-end of the pipe if you want it to complete. Because of the dup2, you have two copies of the open file description, so you need
close(fd[1]); anytime after the call to dup2
close(1); after you're done writing to (the new) stdout
Make sure to fflush(stdout) before the second of these to ensure that all your data actually made it into the pipe.
(This is a simple example of a deadlock: sort is waiting on the pipe to close, which will happen when the parent exits. But the parent won't exit until it finishes waiting on the child to exit…)
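Concretely, the parent branch of the question's program might end up like this sketch (the child branch stays as it is):
else { // parent process
    close(1);            // close stdout
    dup2(fd[1], 1);      // make stdout same as fd[1]
    close(fd[0]);        // don't need this end of the pipe
    close(fd[1]);        // the duplicate on fd 1 is the only copy we need
    printf("Hello\n");
    printf("Bye\n");
    printf("Hi\n");
    printf("G'day\n");
    printf("It Works!\n");
    fflush(stdout);      // push any buffered output into the pipe
    close(1);            // now sort sees EOF and can finish
    wait(NULL);
}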
My task is to write a C program that executes the command "ls -l /bin/?? | grep rwxr-xr-x | sort". There are 3 child processes where each of them executes one of the commands separately and sends the result through a pipe to the next child process. I'm using a Swedish-localized version of Debian, so the error message is in Swedish, but I'll translate the error I get; it's something along the lines of: sort: failed to stat -: unknown file identifier.
Maybe it's my pipes that do not work as intended; I'm not too sure about the close() calls. I'm pretty sure the error comes from the pipes. I would be grateful if someone could run the program and get the English error message.
#include <stdio.h>
#include <sys/types.h>
#include <stdlib.h>
#include <unistd.h>
#include <fcntl.h>
#include <assert.h>
#include <errno.h>
#include <string.h>
int main()
{
int ret;
int fds1[2], fds2[2], fds3[2];
char buf[20];
pid_t pid;
///initiating pipes
ret=pipe(fds1);
if(ret == -1){
perror("could not pipe");
exit(1);
}
ret=pipe(fds2);
if( ret == -1){
perror("could not pipe");
exit(1);
}
ret=pipe(fds3);
if (ret == -1){
perror("could not pipe");
exit(1);
}
pid=fork();
if(pid==-1){
fprintf(stderr,"fork failed");
exit(0);
}
if(pid==0){
///CHILD 1
close(1);
dup(fds1[1]);
close(fds1[0]);
close(fds1[1]);
close(0);
execlp("/bin/sh","bin/sh", "ls-l /bin/??", (char *)NULL);
}
else{
wait(0);
}
pid=fork();
if(pid==-1){
fprintf(stderr,"fork failed");
exit(0);
}
if(pid==0){
close(0);
dup(fds1[0]);
close(fds1[0]);
close(fds1[1]);
close(1);
dup(fds2[1]);
close(fds2[0]);
close(fds2[1]);
execlp("/usr/share/grep/", "grep", "rwxr-xr-x", NULL);
}
else{
wait(0);
}
close(fds1[0]);
close(fds1[1]);
pid=fork();
if(pid==-1){
fprintf(stderr,"fork failed");
exit(0);
}
if(pid==0){
close(0);
dup(fds2[0]);
close(fds2[0]);
close(fds2[1]);
execlp("sort", "sort", NULL);
}
else{
wait(0);
}
close(fds2[0]);
close(fds2[1]);
}
Your code has several problems, but before I discuss them, let me introduce you to a flavor of one of my favorite preprocessor macros:
#define DO_OR_DIE(x, s) do { \
if ((x) < 0) { \
perror(s); \
exit(1); \
} \
} while (0)
Using this macro where it is applicable can clarify your code by replacing all the boilerplate error checking. For example, this:
ret=pipe(fds1);
if(ret == -1){
perror("could not pipe");
exit(1);
}
becomes just
DO_OR_DIE(pipe(fds1), "pipe");
That makes it a lot easier to see and focus on the key parts of the code, and it's easier to type, too. As a result, it also reduces the temptation to skip error checks, such as those for your calls to dup().
Now, as to your code. For me, it exhibits not just the one misbehavior you now describe in your question, but three:
It emits an error message "bin/sh: ls-l /bin/??: No such file or directory".
It emits the error message you describe, "sort: stat failed: -: Bad file descriptor".
It does not terminate.
The first error message pertains to multiple problems in the arguments to your first execlp() call. If you want to launch a shell and specify a command for it to run, as opposed to a file from which to read commands, then you must pass the -c option to it. Additionally, you've omitted mandatory whitespace between the ls and its arguments. It looks like you want this:
execlp("/bin/sh","sh", "-c", "ls -l /bin/??", (char *)NULL);
Setting aside the second problem for the moment, let's turn to the failure to terminate. You have several problems in this area, falling into these categories:
Holding pipe ends open where you should ensure them closed
Calling wait() at the wrong points
When you set up a pipe between two processes, you generally want to make sure that there are no open file descriptors on either end of the pipe other than one on the write end held by one process, and one on the read end held by the other process. Each end should be open exactly once, in exactly one process. Since the processes being connected invariably inherit these file descriptors from their parent, it is essential that the parent close its copies (except that the parent will want to keep one open in the event that it itself is one of the communicating processes).
The process on the read end of a pipe will not see EOF on that pipe until all open file descriptors on the write end are closed. Child processes running programs such as grep and sort that read their input to its end will hang indefinitely if the write end of the pipe is not completely closed.
That can be a particularly perverse problem when the child reading the pipe also has a copy of the write end of that pipe, unused, or if one of its siblings does.
Additionally, the whole point of a pipeline is that the processes involved run concurrently. If you wait() after starting one before starting the next, then at minimum you prevent such concurrency. Worse, however, that can also cause your program to hang, because a pipe has finite buffer capacity. If the child is writing output to a pipe, but no one is reading it, then the pipe's buffer can fill to capacity, at which point the child blocks. If the parent is waiting for the child to finish before launching the process that will drain the pipe, then you have a deadlock. Therefore, you should start all the processes in the pipeline first, then wait for them all.
Having fixed such problems in your code, I find that the program emits a different error for me:
execlp: No such file or directory
(The specifics of this message derive from the nature of my fixes.) This should be especially concerning, because if execlp() fails then it returns in the process in which it was called. In your case, control will then fall right out of your if statement, into the code intended only for the parent to execute. For this reason, it is essential to handle errors from execlp(). At minimum, add a call to exit() or _Exit() immediately after.
But what's failing? Well, it's the grep this time. Note that you specify the command to execute as "/usr/share/grep/" -- that trailing / is erroneous, and the path itself is suspect. On my system, the correct path is /usr/bin/grep, but since we're using execlp, which resolves the executable in the path, we might as well omit the path altogether:
execlp("grep", "grep", "rwxr-xr-x", (char *) NULL);
Et voilà! After making that correction as well, your program runs for me.
Additional advice: do not use dup() when you care what file descriptor number you want the duplicate to have, such as when you're trying to dup onto one of the standard streams. Use dup2() for that, which has the additional advantage that you don't need to close the specified file descriptor first.
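Putting all of those corrections together, one way the fixed program could be structured is the sketch below. It is not the answerer's exact code: it uses the DO_OR_DIE macro from above, only the two pipes the pipeline actually needs, dup2() instead of close()/dup(), and it waits for all the children only after every pipe descriptor in the parent is closed.
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/wait.h>

#define DO_OR_DIE(x, s) do { if ((x) < 0) { perror(s); exit(1); } } while (0)

int main(void)
{
    int fds1[2], fds2[2];
    pid_t pid;

    DO_OR_DIE(pipe(fds1), "pipe");
    DO_OR_DIE(pipe(fds2), "pipe");

    DO_OR_DIE(pid = fork(), "fork");
    if (pid == 0) {                              /* ls -l /bin/?? */
        DO_OR_DIE(dup2(fds1[1], 1), "dup2");
        close(fds1[0]); close(fds1[1]);
        close(fds2[0]); close(fds2[1]);
        execlp("/bin/sh", "sh", "-c", "ls -l /bin/??", (char *)NULL);
        perror("execlp");
        _exit(1);
    }

    DO_OR_DIE(pid = fork(), "fork");
    if (pid == 0) {                              /* grep rwxr-xr-x */
        DO_OR_DIE(dup2(fds1[0], 0), "dup2");
        DO_OR_DIE(dup2(fds2[1], 1), "dup2");
        close(fds1[0]); close(fds1[1]);
        close(fds2[0]); close(fds2[1]);
        execlp("grep", "grep", "rwxr-xr-x", (char *)NULL);
        perror("execlp");
        _exit(1);
    }

    DO_OR_DIE(pid = fork(), "fork");
    if (pid == 0) {                              /* sort */
        DO_OR_DIE(dup2(fds2[0], 0), "dup2");
        close(fds1[0]); close(fds1[1]);
        close(fds2[0]); close(fds2[1]);
        execlp("sort", "sort", (char *)NULL);
        perror("execlp");
        _exit(1);
    }

    /* Parent: close every pipe descriptor it holds, then wait for all children. */
    close(fds1[0]); close(fds1[1]);
    close(fds2[0]); close(fds2[1]);
    while (wait(NULL) > 0)
        ;
    return 0;
}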
I want to do a simple thing: my_process | proc2 | proc3, but programmatically, without using the shell, which can do this pretty easily. Is this possible? I cannot find anything :(
EDIT:
Well, without code, nobody will know what problem I'm trying to resolve. Actually, no output is coming out (I'm using printfs):
int pip1[2];
pipe(pip1);
dup2(pip1[1], STDOUT_FILENO);
int fres = fork();
if (fres == 0) {
close(pip1[1]);
dup2(pip1[0], STDIN_FILENO);
execlp("wc", "wc", (char*)0);
}
else {
close(pip1[0]);
}
Please learn about file descriptors and the pipe system call. Also, check read and write.
Your 'one child' code has some major problems, most noticeably that you configure the wc command to write to the pipe, not to your original standard output. It also doesn't close enough file descriptors (a common problem with pipes), and isn't really careful enough if the fork() fails.
You have:
int pip1[2];
pipe(pip1);
dup2(pip1[1], STDOUT_FILENO); // The process will write to the pipe
int fres = fork(); // Both the parent and the child will…
// Should handle fork failure
if (fres == 0) {
close(pip1[1]);
dup2(pip1[0], STDIN_FILENO); // Should close pip1[0] too
execlp("wc", "wc", (char*)0);
}
else { // Should duplicate pipe to stdout here
close(pip1[0]); // Should close pip1[1] too
}
You need:
fflush(stdout); // Print any pending output before forking
int pip1[2];
pipe(pip1);
int fres = fork();
if (fres < 0)
{
/* Failed to create child */
/* Report problem */
/* Probably close both ends of the pipe */
close(pip1[0]);
close(pip1[1]);
}
else if (fres == 0)
{
dup2(pip1[0], STDIN_FILENO);
close(pip1[0]);
close(pip1[1]);
execlp("wc", "wc", (char*)0);
}
else
{
dup2(pip1[1], STDOUT_FILENO);
close(pip1[0]);
close(pip1[1]);
}
Note that the amended code follows the rule of thumb:
If you use dup2() to duplicate one end of a pipe to standard input or standard output, you should close both ends of the original pipe.
This also applies if you use dup() or fcntl() with F_DUPFD.
The corollary is that if you don't duplicate one end of the pipe to a standard I/O channel, you typically don't close both ends of the pipe (though you usually still close one end) until you're finished communicating.
You might need to think about saving your original standard output before running the pipeline if you ever want to reinstate things.
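For instance, the parent could stash a duplicate of its original standard output before redirecting and put it back afterwards. A small sketch of the idea, using /dev/null as a stand-in for the pipe:
#include <stdio.h>
#include <unistd.h>
#include <fcntl.h>

int main(void) {
    fflush(stdout);
    int saved_stdout = dup(STDOUT_FILENO);       /* remember where stdout used to go */
    int devnull = open("/dev/null", O_WRONLY);   /* the write end of the pipe in real code */
    dup2(devnull, STDOUT_FILENO);
    close(devnull);

    printf("this line disappears into /dev/null\n");
    fflush(stdout);

    dup2(saved_stdout, STDOUT_FILENO);           /* reinstate the original stdout */
    close(saved_stdout);
    printf("and this line is visible again\n");
    return 0;
}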
As Alex answered, you'll need syscalls like pipe(2), dup2(2), perhaps poll(2) and some other syscalls(2) etc.
Read Advanced Linux Programming, it explains that quite well...
Also, play with strace(1) and study the source code of some simple free software shell.
See also popen(3), which is not enough in your case.
Recall that stdio(3) streams are buffered. You probably need to fflush(3) at appropriate places (e.g. before fork(2))
I'm implementing piping on a simulated file system in C++ (with mostly C). It needs to run commands in the host shell but perform the piping itself on the simulated file system.
I could achieve this with the pipe(), fork(), and system() system calls, but I'd prefer to use popen() (which handles creating a pipe, forking a process, and passing a command to the shell). This may not be possible because (I think) I need to be able to write from the parent process of the pipe, read on the child process end, write the output back from the child, and finally read that output from the parent. The man page for popen() on my system says a bidirectional pipe is possible, but my code needs to run on a system with an older version supporting only unidirectional pipes.
With the separate calls above, I can open/close pipes to achieve this. Is that possible with popen()?
For a trivial example, to run ls -l | grep .txt | grep cmds I need to:
Open a pipe and process to run ls -l on the host; read its output back
Pipe the output of ls -l back to my simulator
Open a pipe and process to run grep .txt on the host on the piped output of ls -l
Pipe the output of this back to the simulator (stuck here)
Open a pipe and process to run grep cmds on the host on the piped output of grep .txt
Pipe the output of this back to the simulator and print it
man popen
From Mac OS X:
The popen() function 'opens' a process by creating a bidirectional pipe, forking, and invoking the shell. Any streams opened by previous popen() calls in the parent process are closed in the new child process.
Historically, popen() was implemented with a unidirectional pipe; hence, many implementations of popen() only allow the mode argument to specify reading or writing, not both. Because popen() is now implemented using a bidirectional pipe, the mode argument may request a bidirectional data flow. The mode argument is a pointer to a null-terminated string which must be 'r' for reading, 'w' for writing, or 'r+' for reading and writing.
I'd suggest writing your own function to do the piping/forking/system-ing for you. You could have the function spawn a process and return read/write file descriptors, as in...
typedef void pfunc_t (int rfd, int wfd);
pid_t pcreate(int fds[2], pfunc_t pfunc) {
/* Spawn a process from pfunc, returning its pid. The fds array passed will
* be filled with two descriptors: fds[0] will read from the child process,
* and fds[1] will write to it.
* Similarly, the child process will receive a reading/writing fd set (in
* that same order) as arguments.
*/
pid_t pid;
int pipes[4];
/* Warning: I'm not handling possible errors in pipe/fork */
pipe(&pipes[0]); /* Parent read/child write pipe */
pipe(&pipes[2]); /* Child read/parent write pipe */
if ((pid = fork()) > 0) {
/* Parent process */
fds[0] = pipes[0];
fds[1] = pipes[3];
close(pipes[1]);
close(pipes[2]);
return pid;
} else {
close(pipes[0]);
close(pipes[3]);
pfunc(pipes[2], pipes[1]);
exit(0);
}
return -1; /* ? */
}
You can add whatever functionality you need in there.
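For example, a caller might use it along these lines. This is a hypothetical usage sketch (the shout() helper is made up for illustration) and assumes the pcreate() above is in scope:
#include <ctype.h>
#include <stdio.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/wait.h>

static void shout(int rfd, int wfd) {
    /* Child body: upper-case everything read from rfd and write it to wfd. */
    char c;
    while (read(rfd, &c, 1) == 1) {
        c = (char)toupper((unsigned char)c);
        write(wfd, &c, 1);
    }
}

int main(void) {
    int fds[2];
    char buf[32];

    pid_t pid = pcreate(fds, shout);        /* fds[0] reads from the child, fds[1] writes to it */
    write(fds[1], "hello\n", 6);
    close(fds[1]);                          /* let the child see EOF */
    ssize_t n = read(fds[0], buf, sizeof buf);
    if (n > 0)
        fwrite(buf, 1, (size_t)n, stdout);  /* prints "HELLO" */
    close(fds[0]);
    waitpid(pid, NULL, 0);
    return 0;
}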
You seem to have answered your own question. If your code needs to work on an older system that doesn't support popen opening bidirectional pipes, then you won't be able to use popen (at least not the one that's supplied).
The real question would be about the exact capabilities of the older systems in question. In particular, does their pipe support creating bidirectional pipes? If they have a pipe that can create a bidirectional pipe, but a popen that doesn't, then I'd write the main stream of the code to use popen with a bidirectional pipe, and supply an implementation of popen that can use a bidirectional pipe that gets compiled in and used where needed.
If you need to support systems old enough that pipe only supports unidirectional pipes, then you're pretty much stuck with using pipe, fork, dup2, etc., on your own. I'd probably still wrap this up in a function that works almost like a modern version of popen, but instead of returning one file handle, fills in a small structure with two file handles, one for the child's stdin, the other for the child's stdout.
POSIX stipulates that the popen() call is not designed to provide bi-directional communication:
The mode argument to popen() is a string that specifies I/O mode:
If mode is r, when the child process is started, its file descriptor STDOUT_FILENO shall be the writable end of the pipe, and the file descriptor fileno(stream) in the calling process, where stream is the stream pointer returned by popen(), shall be the readable end of the pipe.
If mode is w, when the child process is started its file descriptor STDIN_FILENO shall be the readable end of the pipe, and the file descriptor fileno(stream) in the calling process, where stream is the stream pointer returned by popen(), shall be the writable end of the pipe.
If mode is any other value, the result is unspecified.
Any portable code will make no assumptions beyond that. The BSD popen() is similar to what your question describes.
Additionally, pipes are different from sockets and each pipe file descriptor is uni-directional. You would have to create two pipes, one configured for each direction.
In one of the netresolve backends I'm talking to a script, and therefore I need to write to its stdin and read from its stdout. The following function executes a command with stdin and stdout redirected to a pipe. You can use it and adapt it to your liking.
static bool
start_subprocess(char *const command[], int *pid, int *infd, int *outfd)
{
int p1[2], p2[2];
if (!pid || !infd || !outfd)
return false;
if (pipe(p1) == -1)
goto err_pipe1;
if (pipe(p2) == -1)
goto err_pipe2;
if ((*pid = fork()) == -1)
goto err_fork;
if (*pid) {
/* Parent process. */
*infd = p1[1];
*outfd = p2[0];
close(p1[0]);
close(p2[1]);
return true;
} else {
/* Child process. */
dup2(p1[0], 0);
dup2(p2[1], 1);
close(p1[0]);
close(p1[1]);
close(p2[0]);
close(p2[1]);
execvp(*command, command);
/* Error occurred. */
fprintf(stderr, "error running %s: %s\n", *command, strerror(errno));
abort();
}
err_fork:
close(p2[1]);
close(p2[0]);
err_pipe2:
close(p1[1]);
close(p1[0]);
err_pipe1:
return false;
}
https://github.com/crossdistro/netresolve/blob/master/backends/exec.c#L46
(I used the same code in popen simultaneous read and write)
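A hypothetical usage sketch, assuming the start_subprocess() above is defined in the same file: run rev, feed it one line, and read the reversed line back.
#include <stdio.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void) {
    char *cmd[] = { "rev", NULL };
    int pid, infd, outfd;
    char buf[64];

    if (!start_subprocess(cmd, &pid, &infd, &outfd)) {
        fprintf(stderr, "failed to start rev\n");
        return 1;
    }
    write(infd, "hello\n", 6);
    close(infd);                              /* rev sees EOF and flushes its output */
    ssize_t n = read(outfd, buf, sizeof buf);
    if (n > 0)
        fwrite(buf, 1, (size_t)n, stdout);    /* prints "olleh" */
    close(outfd);
    waitpid(pid, NULL, 0);
    return 0;
}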
Here's the code (C++, but can be easily converted to C):
#include <unistd.h>
#include <cstdlib>
#include <cstdio>
#include <cstring>
#include <utility>
// Like popen(), but returns two FILE*: child's stdin and stdout, respectively.
std::pair<FILE *, FILE *> popen2(const char *__command)
{
// pipes[0]: parent writes, child reads (child's stdin)
// pipes[1]: child writes, parent reads (child's stdout)
int pipes[2][2];
pipe(pipes[0]);
pipe(pipes[1]);
if (fork() > 0)
{
// parent
close(pipes[0][0]);
close(pipes[1][1]);
return {fdopen(pipes[0][1], "w"), fdopen(pipes[1][0], "r")};
}
else
{
// child
close(pipes[0][1]);
close(pipes[1][0]);
dup2(pipes[0][0], STDIN_FILENO);
dup2(pipes[1][1], STDOUT_FILENO);
execl("/bin/sh", "/bin/sh", "-c", __command, NULL);
exit(1);
}
}
Usage:
int main()
{
auto [p_stdin, p_stdout] = popen2("cat -n");
if (p_stdin == NULL || p_stdout == NULL)
{
printf("popen2() failed\n");
return 1;
}
const char msg[] = "Hello there!";
char buf[32];
printf("I say \"%s\"\n", msg);
fwrite(msg, 1, sizeof(msg), p_stdin);
fclose(p_stdin);
fread(buf, 1, sizeof(buf), p_stdout);
fclose(p_stdout);
printf("child says \"%s\"\n", buf);
return 0;
}
Possible Output:
I say "Hello there!"
child says " 1 Hello there!"
No need to create two pipes and waste a file descriptor in each process. Just use a socket instead. https://stackoverflow.com/a/25177958/894520
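For reference, a sketch of that approach: a single AF_UNIX stream socket pair gives each process one bidirectional descriptor, and shutdown() provides the half-close that would otherwise need a separate pipe. The child command (tr a-z A-Z) is just an illustration.
#include <stdio.h>
#include <sys/socket.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    int sv[2];
    if (socketpair(AF_UNIX, SOCK_STREAM, 0, sv) == -1) {
        perror("socketpair");
        return 1;
    }
    pid_t pid = fork();
    if (pid == 0) {                        /* child: sv[1] serves as both stdin and stdout */
        close(sv[0]);
        dup2(sv[1], STDIN_FILENO);
        dup2(sv[1], STDOUT_FILENO);
        close(sv[1]);
        execl("/bin/sh", "sh", "-c", "tr a-z A-Z", (char *)NULL);
        _exit(1);
    }
    close(sv[1]);
    write(sv[0], "hello\n", 6);
    shutdown(sv[0], SHUT_WR);              /* half-close: child sees EOF but can still reply */
    char buf[32];
    ssize_t n = read(sv[0], buf, sizeof buf);
    if (n > 0)
        fwrite(buf, 1, (size_t)n, stdout); /* prints "HELLO" */
    close(sv[0]);
    waitpid(pid, NULL, 0);
    return 0;
}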