I have a program where I need to attach to stdout/stderr of a given pid (fetched from a file).
How should this be done? Is this even possible?
EDIT 1:
I have a monitor program that starts/stops a server program. However, the monitor can be closed and reopened, and it should then hook into the existing server's stdout to read errors that are written there (and also some output produced in response to monitor requests).
EDIT 2:
I built both the server and the monitor, so I have the sources of both. The problem is that the server "answers" some monitor requests on its stdout, and I don't want to add another interprocess communication mechanism.
There is no standard Unix way for one process to intercept the stdout/stderr of another process after that process has already been started.
If you are starting these processes yourself, via execve, you can set up pipes via pipe(2) and redirect their descriptors (via dup2(2)) onto the child process's stdin and stdout - one pipe per direction, since a pipe is unidirectional. This way the parent can write to and read from the child's stdin/stdout through the pipes.
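Here is a minimal sketch of the read direction, assuming the parent launches a hypothetical ./server binary itself:

#include <stdio.h>
#include <unistd.h>

int main(void)
{
    int fds[2];
    if (pipe(fds) < 0) { perror("pipe"); return 1; }

    pid_t pid = fork();
    if (pid < 0) { perror("fork"); return 1; }

    if (pid == 0) {                        /* child: the server */
        dup2(fds[1], STDOUT_FILENO);       /* its stdout now feeds the pipe */
        close(fds[0]);
        close(fds[1]);
        execlp("./server", "./server", (char *)NULL); /* placeholder path */
        _exit(127);
    }

    close(fds[1]);                         /* parent: read the child's output */
    char buf[256];
    ssize_t n;
    while ((n = read(fds[0], buf, sizeof buf)) > 0)
        fwrite(buf, 1, (size_t)n, stdout);
    return 0;
}

A second pipe, dup2()'d onto the child's STDIN_FILENO the same way, would give the parent a write channel.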
Regarding your question after the edit: this seems like a good fit for a Unix fifo file.
A fifo file (or named pipe) appears as a regular file, but is implemented as a pipe under the bonnet.
So just create a fifo file for each descriptor (with the mkfifo(1) command), start the server application with its stdin and stdout redirected to them (with the < and > operators of the shell), and you'll be able to read from it anytime.
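For example (the file names here are just illustrations):

$ mkfifo /tmp/server.in /tmp/server.out
$ ./server < /tmp/server.in > /tmp/server.out &
$ echo "status" > /tmp/server.in
$ cat /tmp/server.out

One caveat: opening a fifo normally blocks until the other end is opened as well, so the server won't actually make progress until a reader shows up on the output fifo.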
Never tried it, but you could look at the /proc/$pid/fd directory to see whether it is possible (with proper permissions) to attach to the file descriptor entries there. Otherwise I can't imagine how this would be possible.
EDIT (after getting more details)
You state that your process will be responsible for starting/stopping that server process - THIS makes things a lot easier :)
As this is homework, I'll just draw the picture:
create named pipes for the server's stdin and stdout
when starting the server, connect its stdin/stdout with the named pipes
when starting your client, read/write from/to the named pipes
Do you have the option of configuring the server so that it sends output to a log file instead of, or in addition to, stdout? On a Unix box, you could run the server through tee to log stdout to a file:
$ server | tee server.log
Then it is a simple matter to tail server.log to get the latest output.
$ tail -f server.log
I'm creating a C application daemon that I want to be able to interact with through named pipes. For example, I write in the shell echo hello > /tmp/appname/interface, and the application reads it and does stuff with what I just echoed into the named pipe.
I've successfully managed to implement this with a single named pipe using read(). However, I want to be able to listen to many different named pipes at the same time. I don't think spawning a new thread for each named pipe is a very good solution. I am also concerned that constantly polling the named pipes with read() will use an unnecessarily high amount of CPU. Is there a better way to approach this?
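For reference, the usual answer to this situation is poll(2) (or select/epoll), which blocks, using no CPU, until one of the descriptors has data. A minimal sketch, assuming the fifos already exist (the second path is made up for illustration, and the O_RDWR-on-a-fifo trick is Linux-specific):

#include <fcntl.h>
#include <poll.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    const char *paths[2] = { "/tmp/appname/interface", "/tmp/appname/control" };
    struct pollfd fds[2];

    for (int i = 0; i < 2; i++) {
        /* O_RDWR keeps the fifo open even when every writer has gone,
           so read() doesn't return endless EOFs between writers. */
        fds[i].fd = open(paths[i], O_RDWR | O_NONBLOCK);
        fds[i].events = POLLIN;
    }

    for (;;) {
        /* Sleeps until at least one fifo has data */
        if (poll(fds, 2, -1) < 0) { perror("poll"); return 1; }

        for (int i = 0; i < 2; i++) {
            if (fds[i].revents & POLLIN) {
                char buf[256];
                ssize_t n = read(fds[i].fd, buf, sizeof buf - 1);
                if (n > 0) {
                    buf[n] = '\0';
                    printf("%s: %s", paths[i], buf);
                }
            }
        }
    }
}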
In a Unix process, I am planning to write code that provides terminal access, so that I can log in to the process and run a few commands.
For example,
I can do telnet 0:2000 to get a terminal, and from there I can issue commands that dump process information.
In my research, I saw that I can use /dev/pts or /dev/tty to access a terminal for the process. A user can log in to a terminal through these, but I'm not clear on how it works.
To create a new pseudoterminal, you need to call the following functions in order (a minimal sketch follows the list):
posix_openpt (To get a new master)
grantpt (To fix permissions for the new slave)
unlockpt (To unlock the slave)
ptsname (To get the name of the slave)
open (To open the slave)
setsid (optional, to enter a new session and process group - typically after fork when you are running a separate process on the slave)
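A minimal sketch putting those calls together; it only creates the pair and prints the slave's name, where a real program would fork and run a shell on the slave:

#define _XOPEN_SOURCE 600
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(void)
{
    /* Get a new master */
    int master = posix_openpt(O_RDWR | O_NOCTTY);
    if (master < 0) { perror("posix_openpt"); return 1; }

    /* Fix permissions and unlock the slave */
    if (grantpt(master) < 0 || unlockpt(master) < 0) {
        perror("grantpt/unlockpt");
        return 1;
    }

    /* Name of the slave, e.g. /dev/pts/3 */
    char *name = ptsname(master);
    if (name == NULL) { perror("ptsname"); return 1; }
    printf("slave pty: %s\n", name);

    /* Open the slave */
    int slave = open(name, O_RDWR);
    if (slave < 0) { perror("open"); return 1; }

    /* A child process running on the slave would call setsid() here,
       dup2() the slave fd onto 0/1/2 and then exec a shell. */

    close(slave);
    close(master);
    return 0;
}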
I want to redirect stdout and stderr to a socket that I can then use to remotely monitor the status of my application over Ethernet. Currently I accomplish this by using ssh and watching the output in the shell console. I'd prefer to remove the middle man if possible and just send the entire stdout and stderr output to a udp or tcp/ip port and have my monitoring computer connect to it.
I cannot use UART or any other wired connection. It has to be Ethernet. Also, if possible, I'd like to accomplish this via a bash script, to prevent having to rebuild my application.
Thanks for the help.
The way you describe it, it sounds like you're going to need either your existing application to open a passive socket and wait for connections, or you're going to have to wrap your application in something that sets up a listening socket. This post suggests that is not possible in just Bash, but it does show ways to do it from the command line with netcat or perl. For example, you could do something like this with netcat:
$ nc -l -p <port> -c "tail -F /var/log/blah"
On the monitored application side, there is a way to redirect both outputs to an outbound connection, using netcat:
$ ./application 2>&1 | nc <remote-host> <remote-port>
This way, you redirect stderr to stdout and then pipe it all to netcat, which takes care of setting up the socket, establishing the connection with the remote host and all that stuff.
However, bear in mind that you can suffer from printf()'s buffering, if that's the function you're using to write to stdout. In my local tests, data sent to stderr by the application appears immediately on the listening end, but data sent to stdout is only transmitted when the application exits or there's enough data in the buffer to flush it all at once. So, if you care about the order and availability of the info on the monitoring side, I'd suggest placing calls to fflush(stdout); whenever you print something interesting to stdout, or replacing the calls to printf(), fprintf() and the like with write(), which does not buffer. The downside is that you have to touch the application's code, of course; I don't know of any way to externally force flushing of an application's output buffers (i.e. from bash).
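A tiny illustration of the difference, for a hypothetical program run as ./app 2>&1 | nc <host> <port>:

#include <stdio.h>
#include <unistd.h>

int main(void)
{
    /* stderr is unbuffered: this arrives on the monitoring side at once */
    fprintf(stderr, "error: something went wrong\n");

    /* stdout is fully buffered when piped: without the fflush() below,
       this line would sit in the buffer until it fills or the app exits */
    printf("status: running\n");
    fflush(stdout);

    sleep(30); /* both lines are already visible on the remote end */
    return 0;
}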
Say, from a PC, an SSH client will be sending commands (such as custom commands for my program, e.g. "show list of devices"). On my Linux end I want to have a program which will receive these commands sent over SSH and execute them.
I'm trying to write a C program running on the Linux side to interpret the custom commands sent and process them as per my requirements. Any suggestions as to how I would achieve this?
UPDATE:
There are two programs here. The first runs on PC1 and gives a command-line interface through which the user can issue commands. The second is on the Linux end (PC2) and has to receive these commands and process them. Right now I'm trying to figure out how the second program gets those commands.
You can do this in at least two different ways:
Execute the C program (say client) through ssh and send commands as arguments. The client parses arguments and does whatever. You need to run the client for each command.
Your C program reads commands from the standard input, so you execute the C program over ssh once, and pipe your commands to ssh.
If your commands are not so frequent, go with the first way. You can even execute ssh instances in the background and effectively run client commands in parallel. If you have a lot of commands in sequence, implement the second way. It will be harder to run them in parallel, and relatively harder to parse the commands, since the first way gives you each parameter as a separate argument. But the second method will be much more efficient and faster at processing frequent commands, since you will not be establishing a connection and forking a process per command.
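A minimal sketch of the second approach - a command loop on PC2 reading from stdin, driven over ssh from PC1 (the command names are made up for illustration):

#include <stdio.h>
#include <string.h>

int main(void)
{
    char line[256];
    while (fgets(line, sizeof line, stdin)) {
        line[strcspn(line, "\n")] = '\0';      /* strip trailing newline */

        if (strcmp(line, "show list of devices") == 0)
            puts("device0\ndevice1");          /* placeholder output */
        else if (strcmp(line, "quit") == 0)
            break;
        else
            printf("unknown command: %s\n", line);

        fflush(stdout);                        /* push the reply through ssh */
    }
    return 0;
}

Driving it from PC1 would then look like:

$ echo "show list of devices" | ssh user@pc2 ./client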
This is really about communicating with another program, and has very little to do with ssh - since ssh is just the "pipework" - what you need to do is open two pipes (one for stdin, one for stdout) to your application [which happens to be ssh], and write to the stdin pipe, and read from the stdout pipe. If you can do this with a regular (local) program, then all you need to do is add "ssh" to the line you are executing, and you're done.
If you don't understand what this means, look up exec, popen, pipes, etc.
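A minimal sketch of that pipework, assuming a hypothetical remote program ./client that answers on its stdout ("user@pc2" is a placeholder):

#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
    int to_child[2], from_child[2];
    if (pipe(to_child) < 0 || pipe(from_child) < 0) { perror("pipe"); return 1; }

    pid_t pid = fork();
    if (pid < 0) { perror("fork"); return 1; }

    if (pid == 0) {                          /* child: becomes ssh */
        dup2(to_child[0], STDIN_FILENO);     /* commands arrive from the parent */
        dup2(from_child[1], STDOUT_FILENO);  /* replies go back to the parent */
        close(to_child[1]);
        close(from_child[0]);
        execlp("ssh", "ssh", "user@pc2", "./client", (char *)NULL);
        _exit(127);
    }

    close(to_child[0]);                      /* parent keeps the other ends */
    close(from_child[1]);

    const char *cmd = "show list of devices\n";
    write(to_child[1], cmd, strlen(cmd));
    close(to_child[1]);                      /* EOF ends the remote loop */

    char buf[512];
    ssize_t n;
    while ((n = read(from_child[0], buf, sizeof buf)) > 0)
        fwrite(buf, 1, (size_t)n, stdout);
    return 0;
}

Drop the "ssh user@pc2" arguments and the same code drives a local program, which is a good way to test it first.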
You obviously also need the program at the other end to read from its stdin and write to its stdout. But that's normal command line programming, so I don't see that as a huge problem.
I found an interesting problem on the net. I'll reproduce it here for reference.
I'm writing a daemon process to execute programs and then restart them if they exit with a status of something other than EXIT_SUCCESS; but these programs will probably not want to be daemon processes themselves. If I use fork() and then call execv(), will the new child process be a daemon process too?
I tried running firefox and it didn't work. So, in which case, how can I start the child processes as normal processes?
The solutions offered on that site somehow don't convince me. Any ideas?
If by daemon process you mean the file descriptors of stdin, stdout and stderr are not connected to any tty or pts, then yes. So just opening something for stdin, stdout and stderr should work.
However, you should have tried it yourself first: firefox (here) opens perfectly with stdin, stdout and stderr redirected to /dev/null. I think the main thing is that you call execv() or execve() and keep the DISPLAY variable.
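A minimal sketch of that, assuming firefox is on the PATH:

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    pid_t pid = fork();
    if (pid < 0) { perror("fork"); return 1; }

    if (pid == 0) {
        /* Detach the standard descriptors from any terminal */
        int fd = open("/dev/null", O_RDWR);
        if (fd >= 0) {
            dup2(fd, STDIN_FILENO);
            dup2(fd, STDOUT_FILENO);
            dup2(fd, STDERR_FILENO);
            if (fd > STDERR_FILENO) close(fd);
        }
        /* execvp() searches the PATH and inherits the environment,
           so DISPLAY is kept */
        char *argv[] = { "firefox", NULL };
        execvp("firefox", argv);
        _exit(127);
    }
    return 0;  /* the daemon would waitpid() here and restart on failure */
}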
Edit
If you're asking how to reconnect to the original descriptor destinations, then there's at least no portable solution. Obviously you can't reconnect to a pipe. You can, however, reconnect (under Linux at least) to the tty/pts you came from, or even to the file (using the /proc filesystem and readlink()). You will have to guess the "seek" position, though (e.g. if the original command was foo 2>> bar).