Interpreting commands sent over SSH - C

Say that, from a PC, an SSH client will be sending commands (such as custom commands for my program, e.g. "show list of devices"). On the Linux end I want to have a program which will receive these commands sent over SSH and execute them.
I'm trying to write a C program running on the Linux side to interpret the custom commands sent and process them as per my requirements. Any suggestions as to how I would achieve this?
UPDATE:
There are two programs here. The first program runs on a PC (PC1) and gives a command-line interface through which the user can issue commands. The second program is on the Linux end (PC2) and has to receive these commands and process them. Right now I'm working out how the second program should get those commands.

You can do this in at least two different ways:
Execute the C program (say client) through ssh and send commands as arguments. The client parses arguments and does whatever. You need to run the client for each command.
Your C program reads commands from its standard input, so you execute the C program over ssh once and pipe your commands to ssh (a sketch of this follows below).
If your commands are not that frequent, use the first approach. You can even execute ssh instances in the background and effectively run client commands in parallel. If you have a lot of commands in sequence, implement the second way. It will be harder to run them in parallel and somewhat harder to parse commands, since the first way gives you each parameter as a separate argument. On the other hand, the second method is much more efficient and faster for frequent commands, since you are not establishing a connection and forking a process per command.
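A minimal sketch of the second approach on the Linux (PC2) side, assuming newline-delimited commands; handle_command() and its responses are placeholders for your own dispatch logic:

/* remote_cli.c - sketch of the stdin-driven interpreter (second approach).
 * Run once over ssh, e.g.:  ssh user@pc2 ./remote_cli
 * and pipe commands into it from the PC1 program. */
#include <stdio.h>
#include <string.h>

static void handle_command(const char *cmd)
{
    if (strcmp(cmd, "show list of devices") == 0)
        puts("device0\ndevice1");            /* stand-in for real output */
    else
        printf("unknown command: %s\n", cmd);
    fflush(stdout);                          /* push the reply back through ssh */
}

int main(void)
{
    char line[256];
    while (fgets(line, sizeof line, stdin)) {
        line[strcspn(line, "\n")] = '\0';    /* strip the trailing newline */
        handle_command(line);
    }
    return 0;
}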

This is really about communicating with another program and has very little to do with ssh, since ssh is just the "pipework". What you need to do is open two pipes (one for stdin, one for stdout) to your application [which happens to be ssh], write to the stdin pipe, and read from the stdout pipe. If you can do this with a regular (local) program, then all you need to do is add "ssh" to the command line you are executing, and you're done.
If you don't understand what this means, look up exec, popen, pipes, etc.
You obviously also need the program at the other end to read from its stdin and write to its stdout. But that's normal command line programming, so I don't see that as a huge problem.
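A rough sketch of that plumbing with pipe/fork/dup2/exec; "user@pc2" and "./remote_cli" are placeholder names for your own host and remote program:

/* Sketch: run ssh as a child process and keep pipes to its stdin and stdout. */
#include <stdio.h>
#include <string.h>
#include <sys/types.h>
#include <unistd.h>

int main(void)
{
    int to_child[2], from_child[2];
    if (pipe(to_child) == -1 || pipe(from_child) == -1) {
        perror("pipe");
        return 1;
    }

    pid_t pid = fork();
    if (pid == 0) {                          /* child: becomes ssh */
        dup2(to_child[0], STDIN_FILENO);     /* commands arrive from the parent */
        dup2(from_child[1], STDOUT_FILENO);  /* output goes back to the parent */
        close(to_child[1]);
        close(from_child[0]);
        execlp("ssh", "ssh", "user@pc2", "./remote_cli", (char *)NULL);
        perror("execlp");
        _exit(1);
    }

    close(to_child[0]);                      /* parent keeps the other ends */
    close(from_child[1]);

    const char *cmd = "show list of devices\n";
    write(to_child[1], cmd, strlen(cmd));
    close(to_child[1]);                      /* EOF tells the remote program we're done */

    char buf[512];
    ssize_t n;
    while ((n = read(from_child[0], buf, sizeof buf)) > 0)
        fwrite(buf, 1, (size_t)n, stdout);
    return 0;
}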

Related

C: how to remotely execute a command-line program and interact with it from a server?

I made a simple TCP client in C (on Windows, I should point out), which is controlled by netcat. I would like to be able to run a command-line executable (such as Strings, for example) remotely, and above all to be able to interact with this program from netcat or my server (in order to perform actions on the remote computer in particular).
What would be the best solution to do that?
Edit: Here is an example: I want to run the Strings program on the remote computer. To do that, I can simply write "strings" in netcat; this command would be interpreted by the client, and the client would execute the strings binary. The output of strings should be displayed in netcat.
To be clear, the program's binary can be on the remote computer, but it would be great if there were a way to execute it as a "real" remote program, without needing to put the executable on the remote machine.
First of all, your terminology is a bit off. You said you wrote a TCP client, but it seems you wrote a server, because this program should receive incoming TCP connections and requests and then send responses.
In order to execute commands, you can use the exec* syscalls.
But then you would need to have the executables available on the machine.
Then you would need to build some sort of loop around the TCP read that executes something for each line sent, plus a bit of setup to ensure that you redirect the output into the TCP connection. See the dup syscall.
Ultimately, if you do not want to write a full shell-like program, you could just exec the system shell (cmd.exe on Windows, I think) and redirect all input/output to it.
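A POSIX-flavoured sketch of that per-line loop (on Windows the analogue would be CreateProcess with redirected handles); connfd is assumed to be an already-accepted TCP connection:

/* Sketch: run one command per line received on the socket,
 * with the command's output redirected back to the peer. */
#include <stdio.h>
#include <string.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

static void run_command(int connfd, const char *line)
{
    pid_t pid = fork();
    if (pid == 0) {
        dup2(connfd, STDOUT_FILENO);         /* command output goes to the peer */
        dup2(connfd, STDERR_FILENO);
        execl("/bin/sh", "sh", "-c", line, (char *)NULL);
        _exit(127);                          /* exec failed */
    }
    waitpid(pid, NULL, 0);                   /* one command at a time */
}

static void serve(int connfd)
{
    FILE *in = fdopen(dup(connfd), "r");     /* line-oriented view of the socket */
    char line[512];
    if (in == NULL)
        return;
    while (fgets(line, sizeof line, in)) {
        line[strcspn(line, "\r\n")] = '\0';
        if (*line)
            run_command(connfd, line);
    }
    fclose(in);
}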

Can bash be executed with stdin and stdout being a TCP socket?

To give some context, I am trying to learn about pseudo-terminals (ptys). A pseudo-terminal appears to a user process (bash, for example) as if it were a real terminal. This makes possible all sorts of good stuff like telnet, ssh, etc.
My question is: for something like telnet, is it possible to just "exec" bash and set its stdin and stdout to be the TCP connection to the remote client machine? Because if that is possible, then I don't fully understand the value of using a pseudo-terminal.
Yes, it's possible - and in fact this is how lots of "shellcode" exploits against network services traditionally gave the attacker a shell - but you won't be able to control it interactively to the extent you normally would. This is because a socket is not a tty. It can't translate bytes sent over the line into signals for the attached process (things like ^C, ^Z, etc.), it can't send EOFs as data, it can't do job control (suspend on ^Z, suspend on input when in background, etc.), and it can't convey mode switches (canonical/"cooked" mode versus raw mode).
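For completeness, the non-pty form described above looks roughly like this (connfd is an already-connected socket); it works, but with all the tty-related limitations just mentioned:

/* Sketch: hand a connected socket to a shell as stdin/stdout/stderr.
 * No pty is involved, so there is no line editing, signal delivery,
 * or job control - exactly the caveats described above. */
#include <unistd.h>

static void shell_on_socket(int connfd)
{
    dup2(connfd, STDIN_FILENO);
    dup2(connfd, STDOUT_FILENO);
    dup2(connfd, STDERR_FILENO);
    execl("/bin/bash", "bash", "-i", (char *)NULL);
    _exit(1);                                /* only reached if exec fails */
}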

C program that connects to remote host and then executes system commands

I've looked around online about executing system commands through a C program, but nothing I found touched on executing a command after connecting to a remote host, like this (the connection prompts for the user's password):
sprintf(buffer1,"ssh -l %s %s ",userName,hostName);
system((char*)buffer1);
//Nothing below this executes because the connection has been established
sprintf(buffer2,"shasum sfin.exe > t.sha");
system((char*)buffer2);
Once the connection is closed, the program continues to execute. Is there a simple way to keep the execution going?
You'll want to use the function popen instead of system.
http://linux.die.net/man/3/popen
It runs a command and returns a FILE stream that you can write to with functions like fprintf, fwrite, etc.; whatever you write goes through the ssh process to the remote computer.
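For example (a sketch only; userName and hostName are placeholders, and ssh will still prompt for a password unless key-based authentication is set up):

/* Sketch: feed commands to the remote host through popen("ssh ...", "w"). */
#include <stdio.h>

int main(void)
{
    FILE *ssh = popen("ssh -l userName hostName", "w");
    if (ssh == NULL) {
        perror("popen");
        return 1;
    }
    fprintf(ssh, "shasum sfin.exe > t.sha\n");   /* runs on the remote host */
    fprintf(ssh, "exit\n");
    return pclose(ssh);                          /* wait for ssh to finish */
}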

Attach a program to a process stdout/stderr

I have a program where I need to attach to stdout/stderr of a given pid (fetched from a file).
How should this be done? Is this even possible?
EDIT 1:
I have a monitor program that starts/stops a server program. However, the monitor can be closed and reopened, and it should hook into the existing server's stdout to read errors written there (and also some output produced in response to monitor requests).
EDIT 2:
I built both the server and the monitor, so I have the sources of both. The problem is that the server "answers" some monitor requests on its stdout, and I don't want to add another interprocess communication mechanism.
There isn't a standard Unix way for one process to intercept another process's output and start capturing it after the target process has already been started.
If you are starting these processes yourself, via execve, you can simply set up a pipe via pipe(2) and redirect its descriptors (via dup2(2)) to the child process' stdin and stdout. This way the parent will be able to write/read to the child's stdin/stdout through the pipe.
Regarding your question after the edit: this seems like a good fit for a Unix fifo file.
A fifo file (or a named pipe) appears like a file, but is implemented as a pipe under the bonnet.
So just create a fifo file (with the mkfifo(1) command), start the server application by redirecting its stdin and stdout descriptors to that file (with the < and > operators of the shell), and you'll be able to read from it anytime.
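For instance, assuming the server's stdout was redirected into a fifo created beforehand (the path /tmp/server.out here is just an example), the monitor could reattach at any time with something like:

/* Sketch: monitor side, re-attaching to the server's stdout fifo. */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    int fd = open("/tmp/server.out", O_RDONLY);  /* blocks until a writer has it open */
    if (fd == -1) {
        perror("open");
        return 1;
    }

    char buf[512];
    ssize_t n;
    while ((n = read(fd, buf, sizeof buf)) > 0)
        fwrite(buf, 1, (size_t)n, stdout);       /* relay the server's output */

    close(fd);
    return 0;
}

One caveat: if no reader has the fifo open, the server's writes will block (or hit SIGPIPE/EPIPE once a reader has come and gone), so this fits best when the monitor is attached most of the time.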
I've never tried it, but you could look under the /proc/$pid/ directory to see whether it is possible (with the proper permissions) to attach to the file-descriptor entries there. Otherwise I can't imagine how this would be possible.
EDIT (after getting more details)
You state that your process will be responsible for starting/stopping that server process - THIS makes things a lot easier :)
As this is homework, I'll just draw the picture:
create named pipes for the server's stdin and stdout
when starting the server, connect its stdin/stdout with the named pipes
when starting your client, read/write from/to the named pipes
Do you have the option of configuring the server so that it sends output to a log file instead of, or in addition to, stdout? On a Unix box, you could run the server through tee to log stdout to a file:
$ server | tee server.log
Then it is a simple matter to tail server.log to get the latest output.
$ tail -f server.log

Problem with /bin/sh -i in a forked process, error: 'can't access tty, job control turned off'

I'm writing a cgi-bin program for my Sheevaplug (running the default Ubuntu install) that displays a shell in a browser page. It is a single C program that is placed in the cgi-bin folder and viewed in a browser. It automatically launches a daemon, and the daemon forks an instance of the shell. The cgi-bin program communicates with the daemon via a shared memory block, and the daemon communicates with the shell by redirecting its stdin/stdout to the shell's stdout/stdin. When you leave the page, it automatically shuts down the daemon.
It works if I launch it using "/bin/sh" and I send a whole command line at a time from the browser to it. But using that design it's not interactive.
So I changed it to send a character at a time to "/bin/sh" and added "-i" so the shell runs in interactive mode.
When the shell starts up it displays the error "can't access TTY, job control turned off."
It then displays the '$' when it is ready for input and seems to work, but sending delete characters to it just confuses it and it doesn't handle deletion properly. I'm not really sure whether it is in interactive mode or not. When I type 'su root' I get the error "must be run from a terminal".
Any ideas what I am doing wrong?
PS: When I have it done it will be released under the GPL.
For interactive mode, sh wants to be talking to a terminal, or something that emulates one (a pseudo-terminal), not just direct I/O pipes. Consider using forkpty to start the process you launch the shell from, and talk to the streams provided by that.
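A minimal sketch of that approach, assuming Linux/glibc where forkpty lives in <pty.h> and the program is linked with -lutil:

/* Sketch: run an interactive shell on a pseudo-terminal via forkpty.
 * Build with:  gcc pty_shell.c -lutil  (glibc; other systems vary). */
#include <pty.h>
#include <stdio.h>
#include <string.h>
#include <sys/types.h>
#include <unistd.h>

int main(void)
{
    int master;
    pid_t pid = forkpty(&master, NULL, NULL, NULL);

    if (pid == -1) {
        perror("forkpty");
        return 1;
    }
    if (pid == 0) {                          /* child: the pty slave is now its controlling tty */
        execl("/bin/sh", "sh", "-i", (char *)NULL);
        _exit(1);
    }

    /* Parent: bytes written to 'master' look like keyboard input to the shell,
     * and the shell's output (including its prompt) is read back from 'master'. */
    const char *cmd = "echo hello from the pty\n";
    write(master, cmd, strlen(cmd));

    char buf[256];
    ssize_t n = read(master, buf, sizeof buf);
    if (n > 0)
        fwrite(buf, 1, (size_t)n, stdout);
    return 0;
}

In the setup described above, the daemon would shuttle bytes between its shared-memory channel and the master descriptor instead of plain pipes, which is what gives the shell proper line editing, signal handling, and a tty for things like su.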

Resources