Reading my child pipe created by exec() with C

I'm just starting to learn C programming and I have some uncertainty about fork(), exec(), pipe(), etc.
I've written this code, but when I execute it the variable c remains empty, so I don't know whether the child isn't writing to the pipe or the parent isn't reading from it.
Could you help me please? This is the code:
int main() {
    int pid = 0;
    int pipefd[2];
    char *c = (char *)malloc(sizeof(char));
    FILE *fp;

    pipe(pipefd);
    pid = fork();
    if (pid == 0) {
        close(pipefd[0]);
        dup2(pipefd[1], 1);
        close(pipefd[1]);
        execl("ls -l | cut -c28", "ls -l | cut -c28", (char *) 0);
    }
    else {
        close(pipefd[1]);
        read(pipefd[0], c, 1);
        char *path = "/home/random";
        char *txt = ".txt";
        char *root = malloc(strlen(path) + strlen(txt) + sizeof(char));
        strcpy(root, path);
        strcat(root, c);
        strcat(root, txt);
        close(pipefd[0]);
        fp = fopen(root, "w+");
        (...)
    }
The problem is that the final root string is only "/home/random.txt", because nothing ends up in c; what I want is to open the file "/home/random(number stored in c).txt".

execl executes a single command, and is not aware of shell concepts such as pipes. If you want to execute a shell command, you will have to execute a shell, as follows:
execl("/bin/sh","/bin/sh","-c","ls -l | cut -c28", (char*) 0);

Always check the return values of system calls (like execve(2) and derived functions like execl(3)), and use errno(3) to figure out what went wrong.
In your case, the execl line fails.

Using strcpy/strcat here is more complex than it needs to be; snprintf can turn those 3 lines into one.
snprintf( root, size_of_buf, "/home/random%s.txt", c );
(Note that c must be NUL-terminated for %s, and for the original strcat, to work; the single byte read from the pipe is not.)
Additionally, check your error codes. As noted, execl is failing and you don't know it. fork, dup2, etc. can also fail; you want to know sooner rather than later.
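For instance, a sketch of the same setup with the checks added (the exit codes are arbitrary):

if (pipe(pipefd) == -1) {
    perror("pipe");
    exit(EXIT_FAILURE);
}
pid = fork();
if (pid == -1) {
    perror("fork");
    exit(EXIT_FAILURE);
}
if (pid == 0) {
    close(pipefd[0]);
    if (dup2(pipefd[1], 1) == -1) {
        perror("dup2");
        _exit(EXIT_FAILURE);
    }
    /* ... exec as above ... */
}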

Related

write Error: Broken Pipe in C using execlp

I'm having trouble creating a simple C program that takes arguments from the command line, where the last argument is the path to a file. The program runs the cat command on the given file, then runs tr on the result of cat. tr gets its arguments from the command line (all arguments other than the last). I am getting these errors:
Missing operand.
write error: Broken Pipe.
I am not sure where the mistake is...
#include <unistd.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define WRITE_END 1
#define READ_END 0

int main(int argc, char* argv[]){
    if(argc < 2){
        printf("\nPROVIDE AN ARGUMENT\n");
        return 1;
    }
    const char *file = argv[argc - 1];
    char **args = calloc(argc - 2, sizeof(char*));
    for(int i = 1; i < argc - 2; i++){
        args[i - 1] = argv[i];
    }
    int fd[2];
    pipe(fd);
    pid_t child;
    if((child = fork()) == -1) return 2;
    if(child == 0){
        dup2(fd[WRITE_END], STDOUT_FILENO);
        close(fd[READ_END]);
        close(fd[WRITE_END]);
        execlp("cat", file, (char*)NULL);
        exit(1);
    }
    else{
        dup2(fd[READ_END], STDIN_FILENO);
        close(fd[WRITE_END]);
        close(fd[READ_END]);
        execlp("tr", "tr", *args, (char*)NULL);
        exit(1);
    }
    close(fd[0]);
    close(fd[1]);
    wait(0);
    wait(0);
    return 0;
}
There are a few problems here that are keeping you from getting this to work. First, as mentioned by Nate Eldredge in a comment, there are problems with the allocation and copying of the all-but-last arguments into the variable args. Second, your use of execlp has a slight problem: the arguments should include an extra argument corresponding to the name of the program being run (not the same as the file opened as the executable; lots of people get confused on this point). Third, as also mentioned by Nate, you need to call execvp in the branch of the if-else corresponding to the parent process (the "else" branch). Its second argument must be an array of pointers to character, the last of which is NULL.
So taking these one at a time. First, you need to allocate argc slots for args to use it in something like the way you intend:
char ** args = calloc(argc, sizeof(char*));
memcpy(args, argv, sizeof(char*)*(argc -1));
The first line allocates an array of character pointers the same size as the argument list. The second line copies all but the last pointer in argv to the corresponding locations in args and leaves the last one as NULL (calloc zero-initializes the storage, and you need the last pointer in args to be a null pointer if you're going to pass it to execvp, which you will). Note that you're not duplicating all of the storage under argv, just the pointers in the first dimension (remember: argv[0] is a pointer and argv[0][0] is the first character in the program name).
Note that your use of close and dup was fine. I don't know why anyone objected to that unless they forgot that allocating a file descriptor always takes the lowest-numbered descriptor that is unused. That's about the most important thing about descriptor tables as originally used in UNIX.
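That lowest-free-descriptor rule is what makes the classic close-then-dup redirection idiom work; a sketch, using this question's names:

close(STDOUT_FILENO);   /* fd 1 is now the lowest unused descriptor */
dup(fd[WRITE_END]);     /* dup() returns the lowest free fd, so the
                           pipe's write end becomes the new stdout  */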
Next, the call to execlp that overlays the child process created by fork with "cat" is missing an argument. It should be:
execlp("cat", "cat", file, (char*)NULL);
That extra "cat" in there is the value cat will receive when it enters main() as argv[0]. You're probably noticing that this looks like you could lie about the name of the program you're running with the exec__ functions, and you can (but you can't completely hide having done it).
Finally, that second execlp call. You can't pass arguments through as if they were typed on the command line, in one big string: exec in any form doesn't use a shell to invoke the other program and it's not going to parse the command line for you. In addition, the way you were (apparently, if I've read your intent correctly) trying to concatenate the argument strings was also not right (see above comments about args allocation and the memcpy call). You have to break out individual arguments and pass them to it. So if you have an array of pointer to character and the last one is NULL, like you'll have in args after the changes I indicated for allocating and copying data, then you can just pass args to execvp:
execvp("tr", args);
These aren't huge errors and a lot of people make these kinds of mistakes when starting out with manipulating the argument list and using the fork and exec functions. A lot of people make mistakes trying to use a pipe between parent and child processes but you seem to have gotten that part right.
One last thing: the lines downstream in execution from the exec__ calls only get executed if there's an error performing the actual replacement of the running program with the new one. Errors on the command line of "cat" or "tr", for example, won't cause exec__ to fail. Errors like lack of permission to execute the file given as the first argument or absence of the file will cause the exec__ functions to fail. Unless exec returns an error, nothing downstream of the exec call is executed in the process in which it is executed (a successful exec never returns).
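Putting those changes together, the whole program might look roughly like this (a sketch; the usage check, the perror/exit error paths, and setting args[0] to "tr" are my additions):

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

#define WRITE_END 1
#define READ_END 0

int main(int argc, char *argv[]) {
    if (argc < 3) {
        fprintf(stderr, "usage: %s TR-ARGS... FILE\n", argv[0]);
        return 1;
    }
    const char *file = argv[argc - 1];

    /* argc pointers; calloc leaves the trailing slot NULL for execvp */
    char **args = calloc(argc, sizeof(char *));
    memcpy(args, argv, sizeof(char *) * (argc - 1));
    args[0] = "tr";                      /* cosmetic: tr's argv[0] */

    int fd[2];
    if (pipe(fd) == -1) { perror("pipe"); return 2; }

    pid_t child = fork();
    if (child == -1) { perror("fork"); return 2; }

    if (child == 0) {                    /* child: cat FILE, stdout -> pipe */
        dup2(fd[WRITE_END], STDOUT_FILENO);
        close(fd[READ_END]);
        close(fd[WRITE_END]);
        execlp("cat", "cat", file, (char *)NULL);
        perror("execlp cat");
        _exit(127);
    }

    /* parent: stdin <- pipe, then become tr */
    dup2(fd[READ_END], STDIN_FILENO);
    close(fd[WRITE_END]);
    close(fd[READ_END]);
    execvp("tr", args);
    perror("execvp tr");
    return 127;
}

Invoked as, say, ./prog -d '0-9' file.txt, this would behave like cat file.txt | tr -d '0-9'.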

Perl, how do I create a pipe to my exec'd child?

I am trying to pass data from my Perl script to my C program using a pipe (uni-directional).
I need to find a way to do this without messing with the child program's STDIN or STDOUT, so I try creating a new handle and passing the fd.
I create 2 IO::Handles and create a pipe. I write to one end of the pipe and attempt to pass the file descriptor of the other end of the pipe to my child program that is being exec'd. I pass the file descriptor by setting an ENV variable. Why does this not work? (It does not print out 'hello world'.) As far as I know, file descriptors and pipes are inherited by the child when exec'd.
Perl script:
#!/opt/local/bin/perl
use IO::Pipe;
use IO::Handle;
my $reader = IO::Handle->new();
my $writer = IO::Handle->new();
$reader->autoflush(1);
$writer->autoflush(1);
my $pipe = IO::Pipe->new($reader, $writer);
print $writer "hello world";
my $fh = $reader->fileno;
$ENV{'MY_FD'} = $fh;
exec('./child') or print "error opening app\n";
# No more code after this since exec replaces the current process
C Program, app.c (Compiled with gcc app.c -o child):
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

int main(int argc, char ** argv) {
    int fd = atoi(getenv("MY_FD"));
    char buf[12];
    read(fd, buf, 11);
    buf[11] = '\0';
    printf("fd: %d\n", fd);
    printf("message: %s\n", buf);
}
Output:
fd: 3
message:
The message is never passed through the pipe to the C program. Any suggestions?
Your pipe file descriptors are set FD_CLOEXEC, and so are closed upon exec().
Perl's $^F controls this behavior. Try something like this, before you call IO::Pipe->new:
$^F = 10; # Assumes we don't already have a zillion FDs open
Alternatively, you can clear the FD_CLOEXEC flag yourself with Fcntl after creating the pipe.
I found the solution. Some people said that it was not possible with exec, that the child would not see the pipes or file descriptors, but that was not correct.
It turns out that Perl closes/invalidates all file descriptors > 2 automatically unless you say otherwise.
Adding the following flags to the fd fixes this problem (where READ is the handle here, NOT STDIN):
my $flags = fcntl(READ, F_GETFD, 0);
fcntl(READ, F_SETFD, $flags & ~FD_CLOEXEC);
Your program is failing because exec calls another program and never returns. It isn't designed for communication with another process at all.
You probably wrote the above code based on the IO::Pipe documentation, which says "ARGS are passed to exec". That isn't what it means, though. IO::Pipe is for communication between two processes within your Perl script, which are created by fork. They mean the execution of the new process, rather than a call to exec in your own code.
Edit: for one-directional communication, all you need is open with a pipe:
open my $prog, '|-', './child' or die "can't run program: $!";
print {$prog} "Hello, world!";
Rodrigo, I can tell you that your file descriptor is no longer valid when you exec into the C app.
Be aware that I only say the descriptor is INVALID; MY_FD=3 still exists among the environment variables, and will continue to exist until the whole process ends.
You can check the fd by fcntl. The code is listing below
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <fcntl.h>

int main(int argc, char ** argv) {
    int fd = atoi(getenv("MY_FD"));
    char buf[12];
    read(fd, buf, 11);
    buf[11] = '\0';
    printf("fd: %d, if fd still valid: %d\n", fd, fcntl(fd, F_GETFD));
    printf("strlen %d\n", (int)strlen(buf));
    printf("message: %s\n", buf);
}
You can see that MY_FD=3 will always be in the environment as long as the process exists, so you can still read fd as 3. But that file descriptor is invalid, so the result of fcntl(fd, F_GETFD) will be -1, and the length you read from fd will be 0.
That's why you will never see the "hello world" message.
One more thing: #dan1111 is right, but you don't need to open a new pipe, since you have already created one.
All you need to do is set MY_FD=0, like
$ENV{'MY_FD'} = 0;
STDIN/STDOUT always exist and are inherited across exec, so the pipe does not break when your Perl app execs into the C app. That's why the C app can read whatever you feed to its standard input.
If you want to feed the program through some other file handle instead, arrange for that handle to stay open across the exec, just like STDIN.

Limit execution time of a piped program

I want to execute a Linux command from a C program and read (parse) the command's stdout in the program. The code below works, but I don't know how to limit the execution time of the command, in addition to the string and byte-count read limits. Any ideas?
FILE *ps_pipe;
ssize_t bytes_read;
size_t nbytes = 100;
char *my_string = NULL;
char message[1024];

snprintf(message, sizeof message, "any command here");
ps_pipe = popen(message, "r");
my_string = malloc(nbytes + 1);
/* getdelim takes a single delimiter character, not a string */
bytes_read = getdelim(&my_string, &nbytes, '\n', ps_pipe);
pclose(ps_pipe);
free(my_string);
You could do that with select(). Select can "wait" on one or more file descriptors for an event to happen (readable, writable, ...), with an optional time-out. Since it operates on file descriptors, you'll also need fileno(ps_pipe).
Keep in mind however that you won't be able to kill the forked process easily, because popen hides certain details of the child process. If you need such control, you'll need to use lower level functions fork(), pipe(), dup(), exec(), wait() and possibly kill().
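For example, a sketch of a time-limited wait on the popen stream (the helper name and the timeout parameter are mine; note this bounds the wait for output, not the child's total runtime):

#include <stdio.h>
#include <sys/select.h>

/* Wait up to timeout_sec for data on the pipe; returns >0 if readable,
   0 on timeout, -1 on error. Assumes ps_pipe came from popen(). */
int wait_readable(FILE *ps_pipe, int timeout_sec) {
    int fd = fileno(ps_pipe);
    fd_set rfds;
    struct timeval tv;

    FD_ZERO(&rfds);
    FD_SET(fd, &rfds);
    tv.tv_sec = timeout_sec;
    tv.tv_usec = 0;
    return select(fd + 1, &rfds, NULL, NULL, &tv);
}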

C: How to redirect named pipe to stdin/out of child process

Basically I want to do in C (and without buffering) the same as this bash-script:
#!/bin/sh
cat ./fifo_in | myprogram > ./fifo_out
In other words I want to exec "myprogram" and redirect its stdin and stdout to two pipes which have been created previously.
Another program is feeding data into fifo_in and reading out of fifo_out.
Of course it would be easy to just read from ./fifo_in, buffer it in the parent and write to myprogram's stdin (and reverse for stdout and ./fifo_out) but I think there is probably a way to let "myprogram" read/write directly from/to the fifos without buffering in the parent process.
Edit:
Eugen's answer seems to be the correct one, but I cannot get it to work.
I use this function on the C-side, which seems correct to me:
pid_t execpipes(const char *wd, const char *command, const char *pipename)
{
    char pipename_in[FALK_NAMESIZE];
    char pipename_out[FALK_NAMESIZE];
    strcpy(pipename_in, FALKPATH);
    strcat(pipename_in, "/");
    strcat(pipename_in, FALK_FIFO_PATH);
    strcat(pipename_in, "/");
    strncat(pipename_in, pipename, FALK_NAMESIZE-2);
    strcpy(pipename_out, pipename_in);
    strcat(pipename_out, "R");

    pid_t pid;
    pid = fork();
    if (pid < 0)
    {   // Error occurred
        perror("fork");
        exit(1);
    }
    if (pid == 0)
    {
        chdir(wd);
        d("execpipes: pipename_in=\"%s\"\n", pipename_in);
        d("           pipename_out=\"%s\"\n", pipename_out);
        freopen(pipename_in, "r", stdin);
        freopen(pipename_out, "w", stdout);
        d("execpipes: command=\"%s\"\n", command);
        execl("/bin/sh", "sh", "-c", command, (char *)NULL); // using execv is probably faster
        // Should never get here
        perror("execl");
        exit(1);
    }
    return pid;
}
I read and write the pipes from a PHP-script (only relevant part posted):
$pipe_in = fopen($fp.$pipename, "w");
$DEBUG .= "Write to pipe_in\n";
$ret = fwrite($pipe_in, $in);
$pipe_out = fopen($fp.$pipename.'R', "r");
$DEBUG .= "Read from pipe_out\n";
$atext = fread($pipe_out, 200000); // Program hangs here
The program is started correctly, receives the input via $pipe_in correctly, processes the data correctly and (because it ran fine for many months) I assume it puts out the data correctly to stdout, but when I try to read from $pipe_out, it hangs. I know that the pipes themselves are set up correctly because if I don't open $pipe_out, the program does not get any input - which makes sense because there is no reader for $pipe_out and therefore the pipeline is not complete. So I can open $pipe_out, but I cannot read anything from it, which is quite strange.
Edit2:
The program works now, thanks guys. For some reason the first pipe has to be closed before you can read from the second pipe:
$pipe_in = fopen($fp.$pipename, "w");
$pipe_out = fopen($fp.$pipename.'R', "r");
$DEBUG .= "Write to pipe_in\n";
$ret = fwrite($pipe_in, $in);
fclose($pipe_in);
$DEBUG .= "Read from pipe_out\n";
$atext = fread($pipe_out, 200000);
fclose($pipe_out);
unlink($fp.$pipename);
unlink($fp.$pipename.'R');
I'd write a small wrapper for myprogram that does
freopen("./fifo_in", "r", stdin);
freopen("./fifo_out", "w", stdout);
(Of course not with constant paths!), then execve myprogram.
Korn shell supports coprocesses, which I think effectively does what you ask: read from a pipe and write to a pipe (which can be stdout and stdin of a C process)
http://www.dartmouth.edu/~rc/classes/ksh/coprocesses.html
How about
myprogram < ./fifo_in > ./fifo_out
?
As for getting rid of the buffering: since your program reads/writes the pipes directly, the buffering shouldn't hurt you.
An important point is that the process which writes fifo_in should flush properly so you don't have to wait. The same goes for your output: as soon as a "work unit" is complete, flush your stdout, which makes the data available to whoever reads the output pipe.
But you can't do anything in myprogram to make the writer of fifo_in flush its buffers.
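On myprogram's side that just means flushing once per completed unit, e.g. (line is a placeholder for whatever one unit of output is):

printf("%s\n", line);   /* one complete "work unit" */
fflush(stdout);         /* push the data through the fifo to the reader now */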
[EDIT] To do this from C (without the help of a shell), use code like this:
- Put the names of the two pipes into local variables on the stack
- Call fork(). If it returns 0, open the two fifos with freopen(), like Eugen suggested
- Call execve to launch the real program
That's (in a nutshell) what the shell does when it runs commands. Make sure the parent process (the one where fork() returns a PID != 0) handles the signal SIGCHLD; see the sketch below.
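A compact sketch of those steps (the fifo paths are examples, and execl of "/bin/sh -c" stands in for a direct execve):

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

pid_t spawn_between_fifos(const char *cmd)
{
    const char *fifo_in  = "./fifo_in";
    const char *fifo_out = "./fifo_out";

    pid_t pid = fork();
    if (pid == 0) {
        /* child: wire stdin/stdout to the fifos, then replace ourselves */
        if (!freopen(fifo_in, "r", stdin) || !freopen(fifo_out, "w", stdout)) {
            perror("freopen");
            _exit(1);
        }
        execl("/bin/sh", "sh", "-c", cmd, (char *)NULL);
        perror("execl");
        _exit(127);
    }
    return pid;   /* parent should reap the child (SIGCHLD / waitpid) */
}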
Perhaps you are looking for a named pipe? For example:
mkfifo fifo_in
As a test stub for my_program.c, to read fifo_in via the buffered stdin:
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(void) {
    char buf[80];
    if (!freopen("./fifo_in", "r", stdin)) {
        perror("freopen");
        exit(EXIT_FAILURE);
    }
    while (!ferror(stdin)) {
        while (fgets(buf, sizeof buf, stdin))
            fputs(buf, stdout);
        sleep(1);
    }
    return 0;
}
Then as a test for the writer, using the bash shell:
for x in {1..10}; do
    echo $x
    echo $x >> fifo_in
    sleep 1
done
Notes:
I'd prefer to use unbuffered I/O.
The writer, at least on my machine, blocks until there is a reader.
The reader, in this sample, cannot tell when the writer is finished.

How to intercept SSH stdin and stdout? (not the password)

I realize this question is asked frequently, mainly by people who want to intercept the password-asking phase of SSH. This is not what I want. I'm after the post-login text.
I want to write a wrapper for ssh that acts as an intermediary between SSH and the terminal. I want this configuration:
(typing on keyboard / stdin) ----> (wrapper) ----> (ssh client)
and the same for output coming from ssh:
(ssh client) -----> (wrapper) -----> stdout
I seem to be able to attain the effect I want for stdout by doing a standard trick I found online (simplified code):
pipe(fd);
if (!fork()) {
    close(fd[READ_SIDE]);
    close(STDOUT_FILENO);     // close stdout (fd #1)
    dup(fd[WRITE_SIDE]);      // duplicate the writing side of my pipe (to the lowest free fd, 1)
    close(STDERR_FILENO);
    dup(fd[WRITE_SIDE]);
    execv(argv[1], argv + 1); // run ssh
} else {
    close(fd[WRITE_SIDE]);
    output = fdopen(fd[READ_SIDE], "r");
    while ((c = fgetc(output)) != EOF) {
        printf("%c", c);
        fflush(stdout);
    }
}
Like I said, I think this works. However, I can't seem to do the opposite. I can't close(STDIN_FILENO) and dup the read side of a pipe; it seems that SSH detects this and refuses to cooperate. I've read I can use the "-t -t" option to force SSH to ignore the non-tty nature of its input, but when I try this it still doesn't work.
Any hints?
Thanks very much!
Use popen (instead of execv) to execute the ssh cmd and be able to read and write to the session.
A pipe will not work if you want to allow any interactive use of ssh with the interceptor in place. In this case, you need to create a pseudo-tty. Look up the posix_openpt, ptsname, and grantpt functions. There's also a nonstandard but much-more-intuitive function called openpty, and a wrapper for it called forkpty, which make what you're trying to do extremely easy.
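For illustration, a minimal forkpty sketch (the host string is a placeholder; forkpty is nonstandard, living in <pty.h> on glibc, where you link with -lutil, and in <util.h> or <libutil.h> on the BSDs):

#include <stdio.h>
#include <unistd.h>
#include <pty.h>   /* forkpty; link with -lutil on glibc */

int main(void)
{
    int master;
    pid_t pid = forkpty(&master, NULL, NULL, NULL);
    if (pid == -1) {
        perror("forkpty");
        return 1;
    }
    if (pid == 0) {
        /* child: ssh now believes it is talking to a real terminal */
        execlp("ssh", "ssh", "user@example.com", (char *)NULL);
        perror("execlp");
        _exit(127);
    }
    /* parent: everything ssh prints arrives on `master`; writes to
       `master` appear to ssh as keystrokes */
    char buf[256];
    ssize_t n;
    while ((n = read(master, buf, sizeof buf)) > 0)
        write(STDOUT_FILENO, buf, (size_t)n);
    return 0;
}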
Python's Paramiko does all of this with SSH but it is in Python source code. However, for a C programmer, reading Python is a lot like reading pseudocode so go to the source and learn exactly what works.
Here's a working example that writes to ssh:
#include <unistd.h>
#include <sys/wait.h>

int main(int argc, char **argv)
{
    int pid;
    int fds[2];
    if (pipe(fds))
        return -1;
    pid = fork();
    if (!pid)
    {
        close(fds[1]);
        close(STDERR_FILENO);
        dup2(fds[0], STDIN_FILENO);
        execvp(argv[1], argv + 1);
    }
    else
    {
        char buf[256];
        int rc;
        close(fds[0]);
        while ((rc = read(STDIN_FILENO, buf, 256)) > 0)
        {
            write(fds[1], buf, rc);
        }
    }
    wait(NULL);
    return 0;
}
This line is probably wrong:
execv(argv[1], argv + 1); // run ssh
The array must be terminated by a NULL pointer; if you are passing argv[], the parameter from main(), I wasn't sure there was any guarantee of that. Edit: I just checked the C99 standard, and argv is NULL-terminated.
execv() does not search the path for the file to execute, so if you pass ssh as the parameter, it is equivalent to ./ssh, which is probably not what you want. You could use execvp(), but that is a security risk if a malicious program called ssh appears in $PATH before /bin/ssh. Better to use execv() and force the correct path.
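Concretely (the path is an example; check where ssh actually lives on your system):

execv("/usr/bin/ssh", argv + 1);   /* no $PATH search: this exact binary or nothing */
perror("execv");                   /* reached only if execv failed */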
