Problem with bin/sh -i in a forked process, error: 'can't access tty, job control turned off'

I'm writing a cgi-bin program for my Sheevaplug (running the default Ubuntu install) that displays a shell in a browser page. It is a single C program placed in the cgi-bin folder and viewed in a browser. It automatically launches a daemon, and the daemon forks an instance of the shell. The cgi-bin program communicates with the daemon via a shared memory block, and the daemon communicates with the shell by redirecting its own stdin/stdout to the shell's stdout/stdin. When you leave the page the daemon is automatically shut down.
It works if I launch it using "/bin/sh" and I send a whole command line at a time from the browser to it. But using that design it's not interactive.
So I changed it to send a character at a time to "/bin/sh" and added "-i" so the shell runs in interactive mode.
When the shell starts up it displays the error "can't access TTY, job control turned off."
It then displays the '$' prompt when it is ready for input and seems to work, but it mishandles delete characters, so deleting doesn't work properly. I'm not really sure whether it is in interactive mode or not. When I type 'su root' I get the error 'must be run from a terminal'.
Any ideas what I am doing wrong?
PS: When I have it done it will be released under the GPL.

For interactive mode, sh wants to be talking to a terminal or something that emulates one (a pseudo-terminal), not just direct IO pipes. Consider using forkpty to start the process you launch the shell from, and talking to the streams provided by that.

Related

WebExtensions Native messaging shuts down targeted application too quickly

In a background.js file I use the following to start a program on my computer:
var sending = browser.runtime.sendNativeMessage("program",json_obj);
It shuts down the program only after a few seconds even though the program needs to run for a little longer. On other computers that I have tested, the program runs fast enough to complete execution.
The documentation says:
A new instance of the application is launched for each call to runtime.sendNativeMessage(). The browser will terminate the native application after getting a reply. To terminate a native application, the browser will close the pipe, give the process a few seconds to exit gracefully, and then kill it if it has not exited.
So it seems like a reply message from the program is leading to the shutdown.
I am using an example similar to the one shown here: https://developer.mozilla.org/en-US/Add-ons/WebExtensions/Native_messaging
Except that I am running a jar file instead of a Python one. I have not put @echo off in the batch file because then the Java program does not start at all. I need to delay the reply from the native application so that the Java program can finish executing.
Thanks.
If you use connectNative() instead of sendNativeMessage(), the native application will be kept alive as long as the created Port is alive in the browser (i.e. until you explicitly close it or the page from which it was created is unloaded).

Why would a command called from system() wait to execute?

Working on an embedded system running a customized, stripped-down Linux 2.6.39.4, with a web page front end using JavaScript code to communicate with a back end coded in C via CGI scripts. (Inherited this project; it's a nightmare.)
The Situation: A web page calls a cgi script, which in turn fires up some C code, which makes a few system() calls, and then sleeps waiting for data. For reference, this particular section of code is initializing a WLAN module.
The Issue: At the beginning of the C code, a system() call is made to start the udhcpc process. After the calling function exits, it calls a second function which calls sleep() to wait for data. However, it appears that the udhcpc process started earlier does not execute until after the sleep. I am lost.
The specific system command being executed is:
system("udhcpc -n -i eth1 -q");
I was under the impression that system() should wait there until the command finishes, at which point it returns and execution continues. In my terminal window, I do not see the expected output from udhcpc until after my sleep executes (known from logging some debug statements to follow the execution path).
Am I misunderstanding how the system call works? Or does this have anything to do with the fact that it's a C module being executed via CGI?
For clarity, this is the current order of operations:
Button is clicked on web page, which calls CGI script
CGI script calls into compiled C application
C application uses system call to start udhcpc
----> I would expect to see output from udhcpc in terminal here
C application sleeps and waits for data
----> Output from udhcpc is not seen until now, after sleep finishes

C program that connects to remote host and then executes system commands

I've looked around online about executing system commands from a C program, but none of the examples touched on executing commands after connecting to a remote host, as in the following (this connection prompts for a user password):
sprintf(buffer1,"ssh -l %s %s ",userName,hostName);
system((char*)buffer1);
//Nothing below this executes because the connection has been established
sprintf(buffer2,"shasum sfin.exe > t.sha");
system((char*)buffer2);
Once the connection is closed, the program continues executing. Is there a simple way to keep execution going on the remote host?
You'll want to use the function popen instead of system.
http://linux.die.net/man/3/popen
It runs a command and returns a FILE * stream that you can write to with functions like fprintf, fwrite, etc., and those commands will go through the ssh process to the remote computer.

Interpreting commands sent over ssh

Say from a PC, an SSH client will be sending commands (such as custom commands for my program, e.g. "show list of devices"). On my Linux end I want to have a program which will receive these commands sent over SSH and execute them.
I'm trying to write a C program running on the Linux side to interpret the custom commands sent and process them as per my requirements. Any suggestions as to how I would achieve this?
UPDATE:
There are 2 programs here. The 1st program, running on PC1, gives a command-line interface by which the user can issue commands. The second program, on the Linux end (PC2), has to receive these commands and process them. Right now I'm stuck on how the second program should receive those commands.
You can do this in at least two different ways:
Execute the C program (say client) through ssh and send commands as arguments. The client parses arguments and does whatever. You need to run the client for each command.
Your C program reads commands from the standard input, so you execute the C program over ssh once, and pipe your commands to ssh.
If your commands are not so frequent, use the 1st approach. You can even execute ssh instances in the background and effectively run client commands in parallel. If you have a lot of commands in sequence, implement the 2nd way. It will be harder to run them in parallel, and relatively harder to parse commands, since the 1st way hands you each parameter as a separate argument. The second method will be much more efficient and faster for frequent commands, since you will not be establishing a connection and forking a process per command.
This is really about communicating with another program, and has very little to do with ssh - since ssh is just the "pipework" - what you need to do is open two pipes (one for stdin, one for stdout) to your application [which happens to be ssh], and write to the stdin pipe, and read from the stdout pipe. If you can do this with a regular (local) program, then all you need to do is add "ssh" to the line you are executing, and you're done.
If you don't understand what this means, look up exec, popen, pipes, etc.
You obviously also need the program at the other end to read from its stdin and write to its stdout. But that's normal command line programming, so I don't see that as a huge problem.

Attach a program to a process stdout/stderr

I have a program where I need to attach to stdout/stderr of a given pid (fetched from a file).
How should this be done? Is this even possible?
EDIT 1:
I have a monitor program that starts/stops a server program. However, the monitor can be closed/reopened and should hook into the existing server's stdout to read errors written there (and also some output produced in response to monitor requests).
EDIT 2:
I built both the server and the monitor, so I have the sources of both. The problem is that the server "answers" some monitor requests on stdout, and I don't want to add another interprocess communication layer.
While a process is running, there isn't a standard Unix way to intercept its output from another process and start capturing it after the target process has been started.
If you are starting these processes yourself, via execve, you can simply set up a pipe via pipe(2) and redirect its descriptors (via dup2(2)) to the child process' stdin and stdout. This way the parent will be able to write/read to the child's stdin/stdout through the pipe.
Regarding your question after the edit: this seems like a good fit for a Unix fifo file.
A fifo file (or a named pipe) appears like a file, but is implemented as a pipe under the bonnet.
So just create a fifo file (with the mkfifo(1) command), start the server application by redirecting its stdin and stdout descriptors to that file (with the < and > operators of the shell), and you'll be able to read from it anytime.
Never tried it, but you might look at the /proc/$pid/fd/ directory to see whether it is possible (with proper permissions) to attach to the file descriptor entries there. Otherwise I couldn't imagine how this would be possible.
EDIT (after getting more details)
You state that your process will be responsible for starting/stopping that server process - THIS makes things a lot easier :)
As this is homework, I'll just draw the picture:
create named pipes for the server's stdin and stdout
when starting the server, connect its stdin/stdout with the named pipes
when starting your client, read/write from/to the named pipes
Do you have the option of configuring the server so that it sends output to a log file instead of, or in addition to, stdout? On a Unix box, you could run the server through tee to log stdout to a file:
$ server | tee server.log
Then it is a simple matter to tail server.log to get the latest output.
$ tail -f server.log