I am writing a toy version of SSH in C, so I need to redirect stdin, stdout and stderr of the server shell to the client shell (using pipes and sockets). Right now I have redirected only stdout. If I send commands like ls, netstat, etc., there are no problems: the client receives the right output.
But if I write a C program which, for example, asks for an int n and prints the numbers between 1 and n, and call it through the client, I get this behavior: the client doesn't receive the string with the request for n, but if I type n on the server shell the client then receives the request string and the numbers.
I have tried the same thing leaving stdout on the server shell, and in that case it works correctly. I have tried using fflush, fsync and setvbuf without solving it.
Another strange behavior: if I call the httrack program through the client without redirecting stdout, the server shell shows the normal wizard which asks for project name, URL, etc.
But if I redirect stdout, the client doesn't receive the wizard output; instead it receives the httrack help page.
This is the code that performs the redirection. As you can see, I tried both system and execlp, but the behavior is the same.
pipe(pipefd);
if(fork() > 0)
{
    /* parent: keep the read end and hand it back as a FILE* */
    close(pipefd[1]);
    *_result = fdopen(pipefd[0], "r");
    return 0;
}
else
{
    /* child: send stdout into the pipe, then run the command */
    close(pipefd[0]);
    close(1);
    dup(pipefd[1]);
    system(buf);
    //execlp(buf, buf, NULL);
    exit(0);
}
EDIT:
After this I call the function below, which reads the stream associated with the read end of the pipe and puts it on the socket. The other parameters are used by AES and the log file.
sock_send encrypts the buffer and writes it to the socket through sock_write. sock_write(_wsock, NULL, 0) only signals the end of the output. Anyway, as I pointed out above, the output of commands like ls is fine.
void write_result(int _wsock, aes_context *_enc_ctx, unsigned char *_iv, FILE *_result, char *_ip_addr)
{
    unsigned char buf[BUF_SIZE];
    char lstr[MAX_LOG_LEN];
    while(fgets((char *)buf, BUF_SIZE, _result) != NULL)
    {
        // write them to the socket
        if(sock_send(_wsock, _enc_ctx, buf, strlen((char *)buf) + 1, _iv) == -1)
        {
            sprintf(lstr, WRITE_SOCK_ERR, _ip_addr);
            log_str(lstr);
            close_connection(_wsock, _ip_addr);
        }
    }
    sock_write(_wsock, NULL, 0);
    fclose(_result);
}
I want to redirect stdout and stdin to a specific file given in the argv array.
For instance, when I enter a command like ./shell ls > test,
the output should be redirected to the "test" file. Now I am a bit confused, because without writing any code it automatically redirects to that file, and I want to do it manually. Secondly, when I enter a command like ./shell ls < test, stdin should be redirected. I tried to find the file name and the ">" or "<" sign using argv[argc-1] and argv[argc-2], but it seems that when I use ">" followed by a filename, the output (of the arguments before the ">" or "<" sign) is printed to that file instead of my program receiving the name and the sign.
Basically, I am creating a shell command using execvp() and fork().
Here is my code; I am able to redirect stdout to a static file.
void call_system(char *argv[], int argc)
{
    int pid;
    int status = 0;
    signal(SIGCHLD, SIG_IGN);
    int background;
    /* two processes are created */
    pid = fork();
    background = 0;
    if(pid < 0)
    {
        fprintf(stderr, "unsuccessful fork\n");
        exit(EXIT_FAILURE);
    }
    else if(pid == 0)
    {
        //system(argv[1]);
        /* the argument will be executed */
        freopen("CON", "w", stdout);
        char *bname;
        char *path2 = strdup(*argv);
        bname = basename(path2);
        execvp(bname, argv);
        fclose(stdout);
    }
    else if(pid > 0)
    {
        /* wait until the child process finishes */
        //waitpid(pid,&status,0);
        wait(&status);
        //int tempid;
        //tempid=waitpid(pid,&status,WNOHANG);
        //while(tempid!= pid);// non-blocking wait
        if(!WIFEXITED(status) || WEXITSTATUS(status))
            printf("error");
        exit(EXIT_SUCCESS);
    }
}
Try using dup() or dup2() or dup3().
The dup() system call creates a copy of the file descriptor oldfd,
using the lowest-numbered unused descriptor for the new descriptor.
FILE *fp = fopen(argv[1], "r");
int fd = fileno(fp);
dup2(fd, 0);        // dup2(fd, STDIN_FILENO): redirect the file onto stdin
char buff[256];     // buffer for the redirected input
scanf("%s", buff);  // now reads from the file
Similarly, output can also be redirected. From the manual, this information may be useful:
On program startup, the integer file descriptors associated with the
streams stdin, stdout, and stderr are 0, 1, and 2, respectively. The
preprocessor symbols STDIN_FILENO, STDOUT_FILENO, and STDERR_FILENO
are defined with these values in <unistd.h>.
Suppose you want to redirect stdout to this file.
dup2(fd, 1);        // dup2(fd, STDOUT_FILENO)
printf("%s", buff); // this will write it to the file
stdio redirection is handled by the shell, not by the launched program. The relevant syscalls are pipe, open and dup2; the last of these is used to point the stdio file descriptors at the pipe or file to be read from or written to.
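As a minimal sketch of what that looks like (the command ls and the file name test are just examples taken from the question): the shell opens the target file, points stdout at it with dup2(), and only then execs the command, so the program itself never sees the > test part.

#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    pid_t pid = fork();
    if (pid == 0)
    {
        int fd = open("test", O_WRONLY | O_CREAT | O_TRUNC, 0644);
        if (fd == -1) { perror("open"); _exit(1); }
        dup2(fd, STDOUT_FILENO);           /* stdout now refers to the file */
        close(fd);
        execlp("ls", "ls", (char *)NULL);  /* ls inherits the redirected stdout */
        perror("execlp");
        _exit(1);
    }
    waitpid(pid, NULL, 0);
    return 0;
}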
I want to run node.js as a subprocess and feed it input. Using C, here is some sample code of mine that does that.
The issue I have is that although the subprocess's stdout is still directed to the terminal, I see nothing after feeding the subprocess's stdin a print 'Hello World' line. Even if I fflush() the pipe, I see no output. However, if I close the pipe's write end, then the 'Hello World' appears on the terminal.
The subprocess seems to simply buffer - why is that?
I would like to eventually redirect the subprocess's stdout to another pipe as well and read it in from main().
#include <errno.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/wait.h>
#include <unistd.h>

int main(int argc, char* argv[]) {
    int toNode[2];
    pipe(toNode);

    pid_t child_pid = fork();
    if (child_pid == 0) { // child
        // close write end
        close(toNode[1]);
        // connect read end to stdin
        dup2(toNode[0], STDIN_FILENO);
        // run node executable
        char* arg_list[] = { "/usr/bin/node", NULL };
        execvp(arg_list[0], arg_list);
        fprintf(stderr, "process failed to start: %s\n", strerror(errno));
        abort();
    }
    else { // parent
        FILE* stream;
        // close read end
        close(toNode[0]);
        // convert write fd to FILE object
        stream = fdopen(toNode[1], "w");
        fprintf(stream, "console.log('Hello World');\n");
        fflush(stream);
        //close(toNode[1]);
        waitpid(child_pid, NULL, 0);
    }
    return 0;
}
There's no problem with the pipe being read. The problem is that /usr/bin/node only invokes the REPL (read-eval-print loop), by default, if it detects that stdin is interactive. If you have a sufficiently recent version of nodejs, then you can provide the -i or --interactive command line flag, but that will do more than just execute each line as it is read; it also really will act as a console, including inserting ANSI colour sequences into the output and printing the value of each expression.
See this forum thread for more information.
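If your node build supports the flag, the only change needed in the sample above is the arg_list in the child branch; a sketch (flag availability depends on your node version):

char* arg_list[] = { "/usr/bin/node", "-i", NULL };  /* force the REPL even when stdin is a pipe */
execvp(arg_list[0], arg_list);

Keep in mind the -i REPL also prints prompts, colour sequences and expression values, as described above.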
I wrote this piece of code that is supposed to redirect something written to stdout by one function to stdin, so that it can be read by another function. I cannot change these functions, so this is the only way I can use them.
mpz_fput(stdout, c) is one of these functions. It just prints to stdout something contained in the c data structure.
Everything worked fine while debugging, because before the following code I had a printf() followed by an fflush(stdout) (needed to print debugging messages).
Now that I have removed those two lines, I can see (using gdb) that this code sits idle on the read() call (the last line of this piece of code).
char buffer[BUFSIZ] = "";
int out_pipe[2];
int in_pipe[2];
int saved_stdout;
int saved_stdin;

// REDIRECT STDIN
saved_stdin = dup(STDIN_FILENO);           /* save stdin for later */
if (pipe(in_pipe) != 0) {                  /* make a pipe */
    printf("\n%s", strerror(errno));
    exit(1);
}
close(STDIN_FILENO);
dup2(in_pipe[0], STDIN_FILENO);            /* redirect pipe to stdin */

// REDIRECT STDOUT
saved_stdout = dup(STDOUT_FILENO);         /* save stdout for display later */
if (pipe(out_pipe) != 0) {                 /* make a pipe */
    printf("\n%s", strerror(errno));
    exit(1);
}
dup2(out_pipe[1], STDOUT_FILENO);          /* redirect stdout to the pipe */
close(out_pipe[1]);

mpz_fput(stdout, c);                       // put c on stdout
read(out_pipe[0], buffer, BUFSIZ);         // read c from stdout pipe into buffer
Any idea why that is?
It seems you are using blocking I/O: out_pipe[0] is a blocking descriptor, so read() blocks, waiting for something to come out of out_pipe[0].
Besides, I think this has to do with fflush():
For output streams, fflush() forces a write of all user-space buffered data
for the given output or update stream via the stream's underlying write
function. For input streams, fflush() discards any buffered data that has
been fetched from the underlying file, but has not been consumed by the application.
The open status of the stream is unaffected.
In your case, you redirected stdout into the pipe, then called fflush() so that everything buffered for stdout was actually written into the pipe and became available to read(). Then you called read() to read it out. Without the fflush(), nothing reaches the pipe, so there is nothing for read() to return; and since the descriptor is blocking, read() simply waits.
That is the theory in brief; I suggest reading the Linux man page for more details.
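A minimal sketch of the fix, using the variables from the question (restoring stdout is optional but makes later printf() calls visible again):

mpz_fput(stdout, c);                  /* writes into the pipe that now backs stdout */
fflush(stdout);                       /* push stdio's user-space buffer into the pipe */
dup2(saved_stdout, STDOUT_FILENO);    /* optional: restore the real stdout */
ssize_t n = read(out_pipe[0], buffer, BUFSIZ - 1);
if (n > 0)
    buffer[n] = '\0';                 /* make the result usable as a C string */

Note that if mpz_fput() ever writes more than the pipe capacity, a single read() will not be enough and the writer could block; for small outputs like this one read is fine.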
Basically I want to do in C (and without buffering) the same as this bash-script:
#!/bin/sh
cat ./fifo_in | myprogram > ./fifo_out
In other words I want to exec "myprogram" and redirect its stdin and stdout to two pipes which have been created previously.
Another program is feeding data into fifo_in and reading out of fifo_out.
Of course it would be easy to just read from ./fifo_in, buffer it in the parent and write to myprogram's stdin (and the reverse for stdout and ./fifo_out), but I think there is probably a way to let "myprogram" read/write directly from/to the fifos without buffering in the parent process.
Edit:
Eugen's answer seems to be the correct one, but I cannot get it to work.
I use this function on the C-side, which seems correct to me:
pid_t execpipes(const char *wd, const char *command, const char *pipename)
{
    char pipename_in[FALK_NAMESIZE];
    char pipename_out[FALK_NAMESIZE];

    strcpy(pipename_in, FALKPATH);
    strcat(pipename_in, "/");
    strcat(pipename_in, FALK_FIFO_PATH);
    strcat(pipename_in, "/");
    strncat(pipename_in, pipename, FALK_NAMESIZE-2);
    strcpy(pipename_out, pipename_in);
    strcat(pipename_out, "R");

    pid_t pid;
    pid = fork();
    if (pid < 0)
    {   // Error occurred
        perror("fork");
        exit(1);
    }
    if (pid == 0)
    {
        chdir(wd);
        d("execpipes: pipename_in=\"%s\"\n", pipename_in);
        d("           pipename_out=\"%s\"\n", pipename_out);
        freopen(pipename_in, "r", stdin);
        freopen(pipename_out, "w", stdout);
        d("execpipes: command=\"%s\"\n", command);
        execl("/bin/sh", "sh", "-c", command, (char *)NULL); // using execv is probably faster
        // Should never get here
        perror("execl");
        exit(1);
    }
    return pid;
}
I read and write the pipes from a PHP-script (only relevant part posted):
$pipe_in = fopen($fp.$pipename, "w");
$DEBUG .= "Write to pipe_in\n";
$ret = fwrite($pipe_in, $in);
$pipe_out = fopen($fp.$pipename.'R', "r");
$DEBUG .= "Read from pipe_out\n";
$atext = fread($pipe_out, 200000); // Program hangs here
The program is started correctly, receives the input via $pipe_in correctly, processes the data correctly and (because it ran fine for many months) I assume it puts out the data correctly to stdout, but when I try to read from $pipe_out, it hangs. I know that the pipes themselves are set up correctly because if I don't open $pipe_out, the program does not get any input - which makes sense because there is no reader for $pipe_out and therefore the pipeline is not complete. So I can open $pipe_out, but I cannot read anything from it, which is quite strange.
Edit2:
The program works now, thanks guys. For some reason the first pipe has to be closed before you can read from the second pipe:
$pipe_in = fopen($fp.$pipename, "w");
$pipe_out = fopen($fp.$pipename.'R', "r");
$DEBUG .= "Write to pipe_in\n";
$ret = fwrite($pipe_in, $in);
fclose($pipe_in);
$DEBUG .= "Read from pipe_out\n";
$atext = fread($pipe_out, 200000);
fclose($pipe_out);
unlink($fp.$pipename);
unlink($fp.$pipename.'R');
I'd write a small wrapper for myprogram that does
freopen("./fifo_in", "r", stdin);
freopen("./fifo_out", "w", stdout);
(of course not with constant paths!) and then calls execve to run myprogram.
Korn shell supports coprocesses, which I think effectively do what you ask: read from a pipe and write to a pipe (which can be the stdout and stdin of a C process).
http://www.dartmouth.edu/~rc/classes/ksh/coprocesses.html
How about
myprogram < ./fifo_in > ./fifo_out
?
As for getting rid of the buffering: Since your program directly reads/writes the pipes, the buffering shouldn't hurt you.
An important point is that the process which writes fifo_in should flush properly so you don't have to wait. The same goes for your output: as soon as a "work unit" is complete, flush your stdout, which will make the data available to whoever reads the output pipe.
But you can't do anything in myprogram to make the writer of fifo_in flush its buffers.
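On the writer side, a minimal sketch of the two usual options (the printed string is just a placeholder for a "work unit"):

#include <stdio.h>

int main(void)
{
    /* Option 1: turn off stdio buffering for stdout entirely. */
    setvbuf(stdout, NULL, _IONBF, 0);

    /* Option 2: keep the default buffering but flush after each complete unit. */
    printf("one complete work unit\n");
    fflush(stdout);
    return 0;
}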
[EDIT] To do this from C (without the help of a shell), use code like this (a sketch follows below):
- Put the names of the two pipes into local variables on the stack
- Call `fork()`. If that returns 0, then open the two fifos with `freopen()`, like Eugen suggested
- Call `execve` to launch the real program
That's (in a nutshell) what the shell does when it runs commands. Make sure the parent process (the one where fork() returns a PID != 0) handles the SIGCHLD signal.
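Here is a sketch of that sequence, assuming the fifo paths from the question and a placeholder ./myprogram:

#include <signal.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

static void reap(int sig)
{
    (void)sig;
    while (waitpid(-1, NULL, WNOHANG) > 0)
        ;                                 /* collect finished children */
}

int main(void)
{
    signal(SIGCHLD, reap);                /* parent reaps the child when it exits */

    pid_t pid = fork();
    if (pid == 0)
    {
        if (!freopen("./fifo_in", "r", stdin) ||
            !freopen("./fifo_out", "w", stdout))
        {
            perror("freopen");
            _exit(1);
        }
        execl("./myprogram", "myprogram", (char *)NULL);
        perror("execl");                  /* only reached if exec fails */
        _exit(1);
    }
    pause();                              /* parent: wait here until a signal arrives */
    return 0;
}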
Perhaps you are looking for a named pipe? For example:
mkfifo fifo_in
As a test stub for my_program.c, to read fifo_in via the buffered stdin:
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(void) {
    char buf[80];
    if (!freopen("./fifo_in", "r", stdin)) {
        perror("freopen");
        exit(EXIT_FAILURE);
    }
    while (!ferror(stdin)) {
        while (fgets(buf, sizeof buf, stdin))
            fputs(buf, stdout);
        sleep(1);
    }
    return 0;
}
Then as a test for the writer, using the bash shell:
for x in {1..10}; do
    echo $x
    echo $x >> fifo_in
    sleep 1
done
Notes:
- I'd prefer to use unbuffered I/O (see the sketch below).
- The writer, at least on my machine, blocks until there is a reader.
- The reader, in this sample, cannot tell when the writer is finished.
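As a sketch of the unbuffered variant mentioned in the first note, the reader can bypass stdio and use the raw read() syscall on the fifo; a return value of 0 then tells it that the writer has closed its end:

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    char buf[80];
    ssize_t n;
    int fd = open("./fifo_in", O_RDONLY);   /* blocks until a writer opens the fifo */
    if (fd == -1) { perror("open"); return 1; }
    while ((n = read(fd, buf, sizeof buf)) > 0)
        write(STDOUT_FILENO, buf, n);        /* pass the data straight through */
    close(fd);                               /* n == 0: the writer closed its end */
    return 0;
}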
I realize this question is asked frequently, mainly by people who want to intercept the password-asking phase of SSH. This is not what I want. I'm after the post-login text.
I want to write a wrapper for ssh, that acts as an intermediary between SSH and the terminal. I want this configuration:
(typing on keyboard / stdin) ----> (wrapper) ----> (ssh client)
and the same for output coming from ssh:
(ssh client) -----> (wrapper) -----> stdout
I seem to be able to attain the effect I want for stdout by doing a standard trick I found online (simplified code):
pipe(fd);
if (!fork()) {
    close(fd[READ_SIDE]);
    close(STDOUT_FILENO);     // close stdout ( fd #1 )
    dup(fd[WRITE_SIDE]);      // duplicate the writing side of my pipe ( to lowest # free fd, 1 )
    close(STDERR_FILENO);
    dup(fd[WRITE_SIDE]);
    execv(argv[1], argv + 1); // run ssh
} else {
    close(fd[WRITE_SIDE]);
    output = fdopen(fd[READ_SIDE], "r");
    while ((c = fgetc(output)) != EOF) {
        printf("%c", c);
        fflush(stdout);
    }
}
Like I said, I think this works. However, I can't seem to do the opposite: I can't close(STDIN_FILENO) and dup the read side of a pipe. It seems that SSH detects this and prevents it. I've read that I can use the "-t -t" option to force SSH to ignore the non-terminal nature of its input, but when I try this it still doesn't work.
Any hints?
Thanks very much!
Use popen() (instead of execv()) to execute the ssh command and be able to read from or write to the session.
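A minimal popen() sketch for the reading direction (the ssh command string is only a placeholder). Note that a single popen() stream is one-way: you open it with "r" to read the command's output or with "w" to write to its stdin, not both.

#include <stdio.h>

int main(void)
{
    char line[512];
    FILE *p = popen("ssh user@host uptime", "r");   /* placeholder command */
    if (!p) { perror("popen"); return 1; }
    while (fgets(line, sizeof line, p))             /* read ssh's stdout line by line */
        fputs(line, stdout);
    return pclose(p);
}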
A pipe will not work if you want to allow any interactive use of ssh with the interceptor in place. In this case, you need to create a pseudo-tty. Look up the posix_openpt, ptsname, and grantpt functions. There's also a nonstandard but much-more-intuitive function called openpty, and a wrapper for it called forkpty, which make what you're trying to do extremely easy.
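A sketch of the forkpty() approach (on Linux it lives in <pty.h> and glibc needs -lutil at link time; the host and the injected command are placeholders):

#include <pty.h>        /* forkpty(); link with -lutil on glibc */
#include <unistd.h>

int main(void)
{
    int master;
    pid_t pid = forkpty(&master, NULL, NULL, NULL);
    if (pid == 0)
    {
        /* child: stdin/stdout/stderr are now the slave side of a pty,
           so ssh believes it is talking to a real terminal */
        execlp("ssh", "ssh", "user@host", (char *)NULL);   /* placeholder host */
        _exit(1);
    }

    /* parent: bytes written to `master` look like keyboard input to ssh,
       and everything ssh prints can be read back from `master` */
    write(master, "echo hello\n", 11);

    char buf[256];
    ssize_t n;
    while ((n = read(master, buf, sizeof buf)) > 0)
        write(STDOUT_FILENO, buf, n);
    return 0;
}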
Python's Paramiko does all of this with SSH, but it is Python source code. However, for a C programmer, reading Python is a lot like reading pseudocode, so go to the source and learn exactly what works.
Here's a working example that writes to ssh:
#include <sys/wait.h>
#include <unistd.h>

int main(int argc, char **argv)
{
    int pid;
    int fds[2];

    if (pipe(fds))
        return -1;

    pid = fork();
    if (!pid)
    {
        close(fds[1]);
        close(STDERR_FILENO);
        dup2(fds[0], STDIN_FILENO);
        execvp(argv[1], argv + 1);
    }
    else
    {
        char buf[256];
        int rc;
        close(fds[0]);
        while ((rc = read(STDIN_FILENO, buf, 256)) > 0)
        {
            write(fds[1], buf, rc);
        }
    }
    wait(NULL);
    return 0;
}
This line is probably wrong:
execv(argv[1], argv + 1); // run ssh
The array must be terminated by a NULL pointer. If you are using argv[], the parameter from main(), I don't think there is any guarantee that this is the case. Edit: I just checked the C99 standard, and argv is guaranteed to be NULL-terminated.
execv() does not search the path for the file to execute, so if you are passing ssh as the parameter, it is equivalent to ./ssh, which is probably not what you want. You could use execvp(), but that is a security risk if a malicious program called ssh appears in $PATH before /bin/ssh. Better to use execv() and give the full path.
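As a tiny sketch of that: build a NULL-terminated argument vector and give execv() an absolute path (adjust /bin/ssh to wherever ssh actually lives on your system, and the host is a placeholder):

#include <stdio.h>
#include <unistd.h>

int main(void)
{
    char *args[] = { "ssh", "user@host", NULL };   /* argv[0], the arguments, then NULL */
    execv("/bin/ssh", args);                       /* absolute path: no $PATH lookup */
    perror("execv");                               /* only reached if exec fails */
    return 1;
}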