I realize this question is asked frequently, mainly by people who want to intercept the password-asking phase of SSH. This is not what I want. I'm after the post-login text.
I want to write a wrapper for ssh that acts as an intermediary between SSH and the terminal. I want this configuration:
(typing on keyboard / stdin) ----> (wrapper) ----> (ssh client)
and the same for output coming from ssh:
(ssh client) -----> (wrapper) -----> stdout
I seem to be able to attain the effect I want for stdout by doing a standard trick I found online (simplified code):
#define READ_SIDE 0
#define WRITE_SIDE 1

int fd[2];
int c; // must be int, not char, to detect EOF
FILE *output;

pipe(fd);
if (!fork()) {
close(fd[READ_SIDE]);
close(STDOUT_FILENO); // close stdout ( fd #1 )
dup(fd[WRITE_SIDE]); // duplicate the writing side of my pipe ( to lowest # free fd, 1 )
close(STDERR_FILENO); // same trick for stderr ( fd #2 )
dup(fd[WRITE_SIDE]);
execv(argv[1], argv + 1); // run ssh
} else {
close(fd[WRITE_SIDE]);
output = fdopen(fd[READ_SIDE], "r");
while ((c = fgetc(output)) != EOF) {
printf("%c", c);
fflush(stdout);
}
}
Like I said, I think this works. However, I can't seem to do the opposite: I can't close(STDIN_FILENO) and dup the read side of a pipe. It seems that SSH detects this and prevents it. I've read I can use the "-t -t" option to force SSH to ignore the non-tty nature of its input, but when I try this it still doesn't work.
Any hints?
Thanks very much!
Use popen() (instead of execv()) to execute the ssh command and read from or write to the session. Note that a popen() stream is one-directional ("r" or "w"), so intercepting both directions needs a pipe per direction.
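A minimal sketch of the read direction, with a hypothetical remote command:

#include <stdio.h>

int main(void)
{
    /* "ssh user@host ls" is a placeholder command string. */
    FILE *out = popen("ssh user@host ls", "r");
    if (out == NULL)
        return 1;
    char buf[256];
    while (fgets(buf, sizeof buf, out) != NULL)
        fputs(buf, stdout);   /* intercept/forward the session output here */
    return pclose(out);
}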
A pipe will not work if you want to allow any interactive use of ssh with the interceptor in place. In this case, you need to create a pseudo-tty. Look up the posix_openpt, grantpt, unlockpt, and ptsname functions. There's also a nonstandard but much-more-intuitive function called openpty, and a wrapper for it called forkpty, which make what you're trying to do extremely easy.
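For example, a minimal sketch of the forkpty() route, assuming Linux (forkpty() lives in <pty.h>; older glibc needs -lutil at link time) and a hypothetical user@host. A real wrapper would also forward the keyboard to master, e.g. via select() on both STDIN_FILENO and master:

#include <pty.h>       /* forkpty() on Linux; <util.h> on the BSDs/macOS */
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(void)
{
    int master;
    pid_t pid = forkpty(&master, NULL, NULL, NULL);
    if (pid < 0) {
        perror("forkpty");
        exit(1);
    }
    if (pid == 0) {
        /* Child: stdin/stdout/stderr are the pty's slave side, so ssh
           believes it is talking to a real terminal. */
        execlp("ssh", "ssh", "user@host", (char *)NULL);  /* hypothetical host */
        perror("execlp");
        exit(1);
    }
    /* Parent: ssh's output arrives on master; writes to master appear to
       ssh as keystrokes. */
    char buf[256];
    ssize_t n;
    while ((n = read(master, buf, sizeof buf)) > 0)
        write(STDOUT_FILENO, buf, (size_t)n);  /* intercept/forward output here */
    return 0;
}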
Python's Paramiko does all of this with SSH, but it is written in Python. Still, for a C programmer, reading Python is a lot like reading pseudocode, so go to the source and see exactly what works.
Here's a working example that writes to ssh:
#include <sys/wait.h>
#include <unistd.h>
int main(int argc, char **argv)
{
int pid;
int fds[2];
if (pipe(fds))
return -1;
pid = fork();
if (!pid)
{
close(fds[1]); /* child keeps only the read end of the pipe */
close(STDERR_FILENO); /* silence the child's stderr */
dup2(fds[0], STDIN_FILENO); /* the pipe's read end becomes the child's stdin */
execvp(argv[1], argv + 1);
}
else
{
char buf[256];
int rc;
close(fds[0]); /* parent keeps only the write end */
while ((rc = read(STDIN_FILENO, buf, sizeof buf)) > 0)
{
write(fds[1], buf, rc);
}
}
wait(NULL);
return 0;
}
This line is probably wrong:
execv(argv[1], argv + 1); // run ssh
The array must be terminated by a NULL pointer; if you are using argv[], the parameter from main(), I didn't think there was any guarantee that this is the case. Edit: I just checked the C99 standard, and argv is NULL-terminated.
execv() does not search $PATH for the file to execute, so if you are passing ssh as the parameter, it is equivalent to ./ssh, which is probably not what you want. You could use execvp(), but that is a security risk if a malicious program called ssh appears in $PATH before /bin/ssh. Better to use execv() and force the correct path.
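For example, a minimal sketch of the fixed-path call; /usr/bin/ssh and the host are assumptions here, so check the real location with "which ssh":

/* Hypothetical fixed path; verify with "which ssh" on your system. */
char *args[] = { "/usr/bin/ssh", "user@host", NULL };
execv(args[0], args);   /* no $PATH search, so no lookalike binaries */
perror("execv");        /* reached only if execv() failed */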
Line #15, { printf("This goes to the terminal\n"); }, is not getting printed anywhere: not in the terminal, nor in the file.
//inputs argc = 3 :- ./executable_file output_file command
#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>

int main(int argc, char **argv)
{
if(argc < 3)
{
return 0;
}
int stdout_copy = dup(1);
int fd = open(argv[1], O_CREAT | O_RDWR | O_TRUNC, 0644);
if (fd < 0)
{
printf("ERROR\n");
return 0;
}
printf("This goes to the standard output(terminal).\n");
printf("Now the standard output will go to \"%s\" file .\n", argv[1]);
dup2(fd, 1);
printf("This output goes to \"%s\"\n",argv[1]);
close(fd);
execvp(argv[2],argv+2);
dup2(stdout_copy,1);
printf("This goes to the terminal\n");
return 0;
}
Apologies for the previous question: I'm really sorry, it was my mistake in analysing it.
And special thanks for all the answers and hints.
Problem in writing to the terminal after using the execvp and dup2 syscalls
Neither:
execvp(argc[2],argc+2);
dup2(stdout_copy,1);
printf("This goes to the terminal\n");
Nor:
dup2(stdout_copy,1);
execvp(argc[2],argc+2);
printf("This goes to the terminal\n");
...will output to stdout if the call to execvp(argc[2],argc+2); succeeds.
However, both will output to stdout if it fails.
(Unless command line arguments are incorrect, dup2() likely has nothing to do with failure to output to stdout. See additional content below for how to check this.)
Read all about it here: execvp.
In a nutshell, execvp() replaces the current process with a new process. If it is successful the current process is no longer what you are viewing on the terminal. Only when it is not successful will the commands following it be executed.
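If the goal is to print something after running the command, the usual pattern is to fork() first so that only the child is replaced; a minimal sketch (the ls command is just an illustration):

#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    pid_t pid = fork();
    if (pid == 0)
    {
        execlp("ls", "ls", "-l", (char *)NULL);  /* replaces the child on success */
        perror("execlp");                        /* reached only on failure */
        exit(1);
    }
    waitpid(pid, NULL, 0);                       /* parent survives the exec */
    printf("This goes to the terminal\n");
    return 0;
}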
The following suggestions are not precisely on-topic, but important nonetheless...
Change:
int main(int argv, char **argc)
To:
int main(int argc, char **argv) //or int main(int argc, char *argv[]), either are fine.
Getting this right is the foundation for seeing normal behavior. Anything else is very confusing to future maintainers of your code, and to people trying to understand what you are doing here.
These names are easily remembered by keeping in mind that argc is used for the count of command line arguments, and argv is the vector that is used to store them.
Also, your code shows no indications that you are checking/validating these arguments, but given the nature of your program, they should be validated before going on. For example:
//verify required number of command line arguments was entered
if(argc != 3)//requires exactly two additional command line arguments
{
printf("Usage: prog.exe output_file command\nInclude both arguments and try again.\nProgram will exit.\n");
return 0;
}
//check if file exists before going on
if( access( argv[1], F_OK ) != -1 )
{
// file exists
} else {
// file doesn't exist
}
//do same for argv[2]
(The 2nd example, checking for a file in a Linux environment, is from here.)
BTW, knowing the command line arguments that were passed into the program would help to provide a more definitive answer here. Their syntax and content, and whether or not the files that they reference exist, determine how the call to execvp will behave.
Suggestions
It is generally good practice to always look at the return values of functions that have them. But execvp() has unique behavior: if it is successful it does not return, and if it fails it will always return -1. So in this case pay special attention to the value of errno for error indications; all of which is covered in the link above.
As mentioned in comments (in two places), it is a good idea to use fflush(stdout) to empty buffers when intermixing standard I/O and file descriptor I/O, and before using any of the exec*() family of calls.
Take time to read the man pages for the functions and shell commands that are used. It will save time, and guide you during debugging sessions.
I am writing a toy version of SSH in C, so I need to redirect stdin, stdout and stderr of the server shell to the client shell (by using pipes and sockets). Right now I have redirected only stdout. If I send commands like ls, netstat, etc there are no problems, the client receives the right output.
But if I write a C program which, for example, asks for an int n and prints the numbers between 1 and n, and call it through the client, I get this behavior: the client doesn't receive the string with the request for n, but if I type n on the server shell the client receives the request string and the numbers.
I have tried the same thing leaving stdout on the server shell, and in this case it works correctly. I have tried to use fflush, fsync and setvbuf without solving it.
Another strange behavior: if I call the program httrack through the client without redirecting stdout, the server shell shows the normal wizard which asks for project name, url, etc.
But if I redirect stdout, the client doesn't receive the wizard output; instead it receives the httrack help page.
This is the code that performs redirection. As you can see I used both system and execlp call, but the behavior is the same.
pipe(pipefd);
if(fork() > 0)
{
close(pipefd[1]);
*_result = fdopen(pipefd[0], "r");
return 0;
}
else
{
close(pipefd[0]);
close(1);
dup(pipefd[1]);
system(buf);
//execlp(buf, buf, NULL);
exit(0);
}
EDIT:
After this I call this function that reads the stream associated to the read end of the pipe and puts it on the socket. The other parameters are used by AES and log file.
sock_send encrypts the buffer and writes it to the socket through sock_write. sock_write(_wsock, NULL, 0) only signals the end of the output. Anyway, as I have pointed out before, the output of commands like ls etc. is OK.
void write_result(int _wsock, aes_context *_enc_ctx, unsigned char *_iv, FILE *_result, char *_ip_addr)
{
unsigned char buf[BUF_SIZE];
char lstr[MAX_LOG_LEN];
while(fgets((char *)buf, BUF_SIZE, _result) != NULL)
{
//write it to the socket
if(sock_send(_wsock, _enc_ctx, buf, strlen((char *)buf) + 1, _iv) == -1)
{
sprintf(lstr, WRITE_SOCK_ERR, _ip_addr);
log_str(lstr);
close_connection(_wsock, _ip_addr);
}
}
sock_write(_wsock, NULL, 0);
fclose(_result);
}
I'm just starting to learn C programming and I have some uncertainty about fork(), exec(), pipe(), etc.
I've developed this code, but when I execute it, the variable c remains empty, so I don't know if the child isn't writing to the pipe, or the parent isn't reading from it.
Could you help me please? This is the code:
int main() {
int pid=0;
int pipefd[2];
char* c=(char *)malloc(sizeof(char));
FILE *fp;
pipe(pipefd);
pid=fork();
if (pid==0){
close(pipefd[0]);
dup2(pipefd[1],1);
close(pipefd[1]);
execl("ls -l | cut -c28","ls -l | cut -c28", (char *) 0);
}
else{
close(pipefd[1]);
read(pipefd[0], c, 1);
char* path="/home/random";
char* txt=".txt";
char* root=malloc(strlen(path) + strlen(txt) + 2); // +2: one char from c plus the terminating '\0'
strcpy(root,path);
strcat(root,c);
strcat(root,txt);
close(pipefd[0]);
fp=fopen(root,"w+");
(...)
}
The problem is that the final root string is only "/home/random.txt" because there is nothing in the char c, and what I want is to open the file "/home/random(number stored in char c).txt".
execl executes a single command, and is not aware of shell concepts such as pipes. If you want to execute a shell command, you will have to execute a shell, as follows:
execl("/bin/sh","/bin/sh","-c","ls -l | cut -c28", (char*) 0);
Always check the return value of the system calls (like execve(2) and derived functions like execl(3)), and use errno(3) to figure out what went wrong.
In your case the execl line fails.
Using strcpy/strcat seems excessively complex. snprintf can turn those 3 lines into one.
snprintf( root, size_of_buf, "/home/random%s", c );
Additionally, check your error codes. As noted, execl is failing and you don't know it. fork, dup2, ...,can also fail, you want to know sooner rather than later.
Basically I want to do in C (and without buffering) the same as this bash-script:
#!/bin/sh
cat ./fifo_in | myprogram > ./fifo_out
In other words I want to exec "myprogram" and redirect its stdin and stdout to two pipes which have been created previously.
Another program is feeding data into fifo_in and reading out of fifo_out.
Of course it would be easy to just read from ./fifo_in, buffer it in the parent and write to myprogram's stdin (and reverse for stdout and ./fifo_out) but I think there is probably a way to let "myprogram" read/write directly from/to the fifos without buffering in the parent process.
Edit:
Eugen's answer seems to be the correct one, but I cannot get it to work.
I use this function on the C-side, which seems correct to me:
pid_t execpipes(const char *wd, const char *command, const char *pipename)
{
char pipename_in[FALK_NAMESIZE];
char pipename_out[FALK_NAMESIZE];
strcpy(pipename_in, FALKPATH);
strcat(pipename_in, "/");
strcat(pipename_in, FALK_FIFO_PATH);
strcat(pipename_in, "/");
strncat(pipename_in, pipename, FALK_NAMESIZE-2);
strcpy(pipename_out, pipename_in);
strcat(pipename_out, "R");
pid_t pid;
pid = fork();
if (pid < 0)
{ //Error occured
perror("fork");
exit(1);
}
if (pid == 0)
{
chdir(wd);
d("execpipes: pipename_in=\"%s\"\n", pipename_in);
d(" pipename_out=\"%s\"\n", pipename_out);
freopen(pipename_in,"r",stdin);
freopen(pipename_out,"w",stdout);
d("execpipes: command=\"%s\"\n", command);
execl("/bin/sh", "sh", "-c", command, (char *)NULL); // using execv is probably faster
// Should never get here
perror("execl");
exit(1);
}
return pid;
}
I read and write the pipes from a PHP-script (only relevant part posted):
$pipe_in = fopen($fp.$pipename, "w");
$DEBUG .= "Write to pipe_in\n";
$ret = fwrite($pipe_in, $in);
$pipe_out = fopen($fp.$pipename.'R', "r");
$DEBUG .= "Read from pipe_out\n";
$atext = fread($pipe_out, 200000); // Program hangs here
The program is started correctly, receives the input via $pipe_in correctly, processes the data correctly and (because it ran fine for many months) I assume it puts out the data correctly to stdout, but when I try to read from $pipe_out, it hangs. I know that the pipes themselves are set up correctly because if I don't open $pipe_out, the program does not get any input - which makes sense because there is no reader for $pipe_out and therefore the pipeline is not complete. So I can open $pipe_out, but I cannot read anything from it, which is quite strange.
Edit2:
Program works now, thanks guys - for some reason the first pipe has to be closed before you can read from the second pipe (presumably the program reads its input up to EOF before writing its output, so the writer has to close first):
$pipe_in = fopen($fp.$pipename, "w");
$pipe_out = fopen($fp.$pipename.'R', "r");
$DEBUG .= "Write to pipe_in\n";
$ret = fwrite($pipe_in, $in);
fclose($pipe_in);
$DEBUG .= "Read from pipe_out\n";
$atext = fread($pipe_out, 200000);
fclose($pipe_out);
unlink($fp.$pipename);
unlink($fp.$pipename.'R');
I'd write a small wrapper for myprogram, that does
freopen("./fifo_in","r",stdin)
freopen("./fifo_out","w",stdout)
(Of course not with constant paths!), then execve myprogram
Korn shell supports coprocesses, which I think effectively does what you ask: read from a pipe and write to a pipe (which can be stdout and stdin of a C process)
http://www.dartmouth.edu/~rc/classes/ksh/coprocesses.html
How about
myprogram < ./fifo_in > ./fifo_out
?
As for getting rid of the buffering: Since your program directly reads/writes the pipes, the buffering shouldn't hurt you.
An important point is that the process which writes fifo_in should flush properly so you don't have to wait. The same goes for your output: As soon as a "work unit" is complete, flush your stdout which will make the data available to whoever reads the output pipe.
But you can't do anything in myprogram to make the writer of fifo_in flush its buffers.
[EDIT] To do this from C (without the help of a shell), use code like this:
- Put the names of the two pipes into local variables on the stack
- Call fork(). If that returns 0, then open the two fifos with freopen(), like Eugen suggested
- Call execve to launch the real program.
That's (in a nutshell) what the shell is doing when it runs commands. Make sure the parent process (the one where fork() returns a PID != 0) handles the signal SIGCHLD.
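A minimal sketch of those steps, with hypothetical FIFO paths and program name; SIGCHLD is handled the simplest way, by ignoring it so the kernel reaps the child automatically:

#include <signal.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

/* Sketch only: fifo_in, fifo_out and the program path are placeholders. */
int spawn_filter(const char *fifo_in, const char *fifo_out)
{
    signal(SIGCHLD, SIG_IGN);          /* no zombies; use a handler if you need exit codes */
    pid_t pid = fork();
    if (pid < 0)
        return -1;
    if (pid == 0)
    {
        if (!freopen(fifo_in, "r", stdin) || !freopen(fifo_out, "w", stdout))
        {
            perror("freopen");
            _exit(1);
        }
        execl("/usr/local/bin/myprogram", "myprogram", (char *)NULL);
        perror("execl");               /* reached only if the exec failed */
        _exit(1);
    }
    return 0;                          /* parent continues immediately */
}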
Perhaps you are looking for a named pipe? For example:
mkfifo fifo_in
As a test stub for my_program.c, to read fifo_in via the buffered stdin:
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
int main(void) {
char buf[80];
if (!freopen("./fifo_in", "r", stdin)) {
perror("freopen");
exit(EXIT_FAILURE);
}
while (!ferror(stdin)) {
while (fgets(buf, sizeof buf, stdin))
fputs(buf, stdout);
sleep(1);
}
return 0;
}
Then as a test for the writer, using the bash shell:
for x in {1..10}; do
echo $x
echo $x >> fifo_in
sleep 1
done
Notes:
I'd prefer to use unbuffered I/O (see the sketch after these notes).
The writer, at least on my machine, blocks until there is a reader.
The reader, in this sample, cannot tell when the writer is finished.
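On the unbuffered-I/O note above, a sketch using open()/read() directly (same ./fifo_in path); here read() returning 0 means the writer closed the FIFO, which also addresses the last note:

#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(void) {
    int fd = open("./fifo_in", O_RDONLY);   /* blocks until a writer opens the FIFO */
    if (fd < 0) {
        perror("open");
        exit(EXIT_FAILURE);
    }
    char buf[80];
    ssize_t n;
    while ((n = read(fd, buf, sizeof buf)) > 0)   /* 0 means the writer closed */
        write(STDOUT_FILENO, buf, (size_t)n);
    close(fd);
    return 0;
}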
What is the best way to do this theoretically? I need to let the user enter the number of processes to send to a pipe, for instance "3", and as it loops through the three [three whats?] on each iteration I need to create a process, send it [what?] to the pipe and print it.
The next time the user enters another number, say "4", it should print the previous 3 + 1. I am working on this but can't understand how to do it. Here is my code. I just need guidance, no need to try to solve it for me (but suggestions would be much appreciated).
Right now I am able to send one through the pipe and return it but then the pipe closes and it does not allow for the other processes to get in there.
Suggestion #1: Use functions
Use functions, even for little jobs such as:
void create_fifo(const char *name)
{
/* Create the named pipe */
int ret_val = mkfifo(name, 0666);
if ((ret_val == -1) && (errno != EEXIST))
{
perror("Error creating the named pipe");
exit(1);
}
}
Now you can simply write in your main program:
create_fifo(PIPE1);
create_fifo(PIPE5);
This cuts down on the clutter in your main program. It also adheres to the Agile principle DRY - Don't Repeat Yourself.
Suggestion #2: Error check system calls.
You did that for creating the FIFOs, which is good. You don't for the open() calls, or the read() or write() calls. You probably should. I use a function similar to the following in my programs:
#include <stdarg.h>
#include <string.h>
#include <stdio.h>
#include <errno.h>
static const char *arg0 = "did not call err_setarg0(argv[0])";
void err_setarg0(const char *argv0)
{
arg0 = argv0;
}
void err_exit(const char *fmt, ...)
{
int errnum = errno; /* Capture errno before it is changed */
va_list args;
fprintf(stderr, "%s: ", arg0);
va_start(args, fmt);
vfprintf(stderr, fmt, args);
va_end(args);
if (errnum != 0)
fprintf(stderr, "%d: %s\n", errnum, strerror(errnum));
exit(1);
}
You can then use:
if ((rdfd1 = open(PIPE1, O_RDONLY)) < 0)
err_exit("Failed to open FIFO %s for reading: ", PIPE1);
if ((wrfd1 = open(PIPE5, O_WRONLY)) < 0)
err_exit("Failed to open FIFO %s for writing: ", PIPE5);
Suggestion #3: Make an iterative server
Your server program currently opens the FIFOs once, then reads from one, write to the other, and terminates. You need a loop around some portion of this code, maybe two nested loops. You have to decide whether you need an inner loop to read until EOF. You also need to know how you will terminate the server.
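One possible shape for that loop, sketched with the PIPE1/PIPE5 names used above; how the server decides to shut down is left open:

/* Skeleton of an iterative server; PIPE1/PIPE5 and the shutdown rule are
   carried over from the discussion above, not a finished design. */
for (;;)
{
    int rdfd = open(PIPE1, O_RDONLY);   /* blocks until a client connects */
    int wrfd = open(PIPE5, O_WRONLY);
    if (rdfd < 0 || wrfd < 0)
        break;                          /* or err_exit(...) as above */
    char buf[BUFSIZ];
    ssize_t n;
    while ((n = read(rdfd, buf, sizeof buf)) > 0)  /* inner loop: one client session */
    {
        if (write(wrfd, buf, (size_t)n) != n)
            break;
    }
    close(rdfd);                        /* EOF: this client is done; loop for the next */
    close(wrfd);
    /* decide here whether a sentinel message means "shut down the server" */
}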
Suggestion #4: Maybe the server needs pipe names as arguments
Your server currently works on fixed FIFO names. You probably need it to take input and output file names as command line arguments, so that when your client spawns multiple servers, each server can have its own set of FIFOs, rather than all processes sharing the same two FIFOs, which is going to lead to confusion and chaos.
Indeed, the need for generating names calls the whole design into question - are you sure using FIFOs is the best way to do this? It looks to me like a case where anonymous pipes would serve you better; you wouldn't have to invent names, and the server would simply read from its standard input and write the (modified?) data to its standard output, so you could even simply use cat or tr or sed or ... as your server.
Clearly, if you use pipes, you will need to do some careful plumbing, but you also need to do careful plumbing with the pairs of FIFOs per server.