C: run external program and get the result

In C, how should I execute an external program and get its results as if it were run in the console?
If there is an executable called dummy, and it displays a 4-digit number in the command prompt when executed, I want to know how to run that executable and capture the 4-digit number that it generated. In C.

popen() handles this quite nicely. For instance if you want to call something and read the results line by line:
char buffer[140];
FILE *in;

if (!(in = popen(somecommand, "r"))) {
    exit(1);
}
while (fgets(buffer, sizeof(buffer), in) != NULL) {
    /* buffer now holds one line of the command's output; do with it what you will */
}
pclose(in);
This has worked for me before; hopefully it's helpful. Make sure to include <stdio.h> in order to use this.

You can use popen() on UNIX.

This is not actually something ISO C can do on its own (by that I mean the standard itself doesn't provide this capability) - possibly the most portable solution is to simply run the program, redirecting its standard output to a file, like:
system ("myprog >myprog.out");
then use the standard ISO C fopen/fread/fclose to read that output into a variable.
This is not necessarily the best solution since that may depend on the underlying environment (and even the ability to redirect output is platform-specific) but I thought I'd add it for completeness.
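For instance, a minimal sketch of that approach (using the myprog name from above; the output file name is arbitrary):

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    char result[64];

    /* Run the program with its standard output redirected to a file. */
    if (system("myprog >myprog.out") == -1) {
        return EXIT_FAILURE;
    }

    /* Read the captured output back with standard ISO C calls. */
    FILE *fp = fopen("myprog.out", "r");
    if (fp == NULL) {
        return EXIT_FAILURE;
    }
    if (fgets(result, sizeof(result), fp) != NULL) {
        printf("myprog printed: %s", result);
    }
    fclose(fp);
    return EXIT_SUCCESS;
}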

There is popen() on unix as mentioned before, which gives you a FILE* to read from.
Alternatively on unix, you can use a combination of pipe(), fork(), exec(), select(), and read(), and wait() to accomplish the task in a more generalized/flexible way.
The popen library call invokes fork and pipe under the hood to do its work. Using it, you're limited to simply reading whatever the process dumps to stdout (which you could use the underlying shell to redirect). Using the lower-level functions you can do pretty much whatever you want, including reading stderr and writing stdin.
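A rough sketch of that lower-level approach on POSIX might look like this (error handling kept minimal; "ls -l" is just a stand-in command):

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void)
{
    int fds[2];
    if (pipe(fds) == -1) {
        perror("pipe");
        return EXIT_FAILURE;
    }

    pid_t pid = fork();
    if (pid == 0) {                       /* child */
        dup2(fds[1], STDOUT_FILENO);      /* stdout now goes into the pipe */
        close(fds[0]);
        close(fds[1]);
        execlp("ls", "ls", "-l", (char *)NULL);
        _exit(127);                       /* only reached if exec fails */
    }

    close(fds[1]);                        /* parent reads from the other end */
    char buf[4096];
    ssize_t n;
    while ((n = read(fds[0], buf, sizeof(buf))) > 0) {
        fwrite(buf, 1, (size_t)n, stdout);
    }
    close(fds[0]);
    waitpid(pid, NULL, 0);
    return EXIT_SUCCESS;
}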
On Windows, see calls like CreatePipe() and CreateProcess(), with the I/O members of STARTUPINFO set to your pipe handles. You can get a file descriptor to do read()s on using _open_osfhandle() with the pipe's read handle. Depending on the app, you may need to read multi-threaded, or it may be okay to block.

Related

Stdout redirecting (to a file for instance) with a static library in C

I already know how to implement the usual freopen(), popen() or similar stdout/stdin/stderr-based redirection mechanisms, but I wondered how I should apply said mechanisms to my own static libraries in C. Say I want to use a library to capture any program's printf() output into a file (for instance) without letting it appear on the console: are there things I need to be aware of before applying simple fd dups and just calling the library from the main program? Even piping seems complex, seeing as exec'ing here is risky...
Thanks in advance.
There's an old-timers' trick to force the entire process, regardless of what library the code comes from, to have one of the standard I/O descriptors connected to a different file handle. You simply close the descriptor in question, then open a new file. If you close(1), then open("some_file", O_WRONLY | O_CREAT, 0644), then ALL calls that would result in a write to stdout will go to some_file from that point forward.
This works because open() always returns the lowest-numbered file descriptor that isn't currently in use. Presuming that you haven't closed stdin (fd 0), the call to open() will get file descriptor 1.
There are some caveats. FILE streams that haven't flushed their buffers can behave unpredictably (buffered data may end up in the new file or be lost), but you probably won't be doing this in the middle of execution. Set it up as your process starts and you'll be golden.
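A minimal sketch of that trick on a POSIX system (the file name is just an example):

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    /* Close fd 1, then open a file; open() returns the lowest free fd, i.e. 1. */
    close(1);
    if (open("some_file", O_WRONLY | O_CREAT | O_TRUNC, 0644) == -1) {
        return 1;
    }

    /* Everything the process writes to stdout now lands in some_file,
       no matter which library the write comes from. */
    printf("this goes to some_file, not the console\n");
    return 0;
}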

Executing bash command and getting the output in C

Hello, I have seen some solutions on the internet, and all of them basically create a file; however I want to store the output in an array of char. Speed is really important for me and I don't want to spend any time working with the hard drive, so popen() is not a real solution for me.
Here is a working code snippet:
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

char bash_cmd[256] = "ls -l";
char buffer[1000];
FILE *pipe;
int len;

pipe = popen(bash_cmd, "r");
if (NULL == pipe) {
    perror("pipe");
    exit(1);
}
if (fgets(buffer, sizeof(buffer), pipe) != NULL) {
    len = strlen(buffer);
    buffer[len - 1] = '\0';   /* strip the trailing newline */
}
pclose(pipe);
If you would read the manpage of popen, you would notice the following:
The popen() function opens a process by creating a pipe, forking,
and invoking the shell. [...] The return value from popen() is a
normal standard I/O stream in all respects save that it must be
closed with pclose() rather than fclose(3). [...] reading from a
"popened" stream reads the command's standard output, and the
command's standard input is the same as that of the process that
called popen().
(emphasis mine)
As you can see, a call to popen results in the stdout of the command being piped into your program through an I/O stream, which has nothing to do with disk I/O at all, but rather with interprocess communication managed by the operating system.
(As a sidenote: it's generally a good idea to rely on the basic functionality of the operating system, within reason, to solve common problems. And since popen is part of POSIX.1-2001 you can rely on it being available on all standards-compliant operating systems; even Windows provides it, as _popen.)
EDIT: if you want to know more, read this: http://linux.die.net/man/3/popen
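If the goal is to end up with the whole output in a single array of char rather than processing it line by line, one possible sketch is a helper like the following (the function name and the growing-buffer strategy are mine, not from the original answer):

#include <stdio.h>
#include <stdlib.h>

/* Read everything a command prints to stdout into one malloc'd string.
   Returns NULL on failure; the caller frees the result. */
char *read_all_output(const char *cmd)
{
    FILE *p = popen(cmd, "r");
    if (p == NULL)
        return NULL;

    size_t cap = 4096, used = 0;
    char *out = malloc(cap);
    if (out == NULL) {
        pclose(p);
        return NULL;
    }

    size_t n;
    while ((n = fread(out + used, 1, cap - used - 1, p)) > 0) {
        used += n;
        if (cap - used < 2) {           /* grow when nearly full */
            cap *= 2;
            char *tmp = realloc(out, cap);
            if (tmp == NULL) {
                free(out);
                pclose(p);
                return NULL;
            }
            out = tmp;
        }
    }
    out[used] = '\0';
    pclose(p);
    return out;
}

A caller would do something like char *out = read_all_output("ls -l"); use it; then free(out).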
Never forget Knuth's saying that "premature optimization is the root of all evil". Don't worry about performance until it matters, and then measure before doing anything. Except for very rare situations, the value of your time is much higher than the cost of the program runs.
Jon Bentley's "Writing Efficient Programs" (sadly out of print; one chapter of his "Programming Pearls" is a summary) is a detailed discussion of how to make programs run faster (if it is worthwhile); and only as the very last measure, to squeeze out the last possible 2% of performance (after cutting run time down by half), does it recommend changes like the one you propose. The cited book includes some very entertaining war stories of "performance optimizations" that were a complete waste (optimizing code that isn't ever used, optimizing the code that runs while the operating system twiddles its thumbs, ...).
If speed is important to you, you can write your own version of popen.
It may make sense, since popen()
- creates a pipe
- forks
- executes the shell (very expensive!)
- the shell then creates a pipe, forks, and executes your program
Your customized version could reduce the procedure to:
- creates a pipe
- forks
- executes your program
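For illustration, a minimal shell-free popen-style helper might look like this (the name my_popen_read and the simplified error handling are mine; see the linked code further down for a full implementation):

#include <stdio.h>
#include <sys/types.h>
#include <unistd.h>

/* Run argv[0] with the given argument vector (no shell involved) and
   return a FILE* connected to the child's standard output. */
FILE *my_popen_read(char *const argv[], pid_t *child)
{
    int fds[2];
    if (pipe(fds) == -1)
        return NULL;

    pid_t pid = fork();
    if (pid == -1) {
        close(fds[0]);
        close(fds[1]);
        return NULL;
    }
    if (pid == 0) {                      /* child */
        dup2(fds[1], STDOUT_FILENO);
        close(fds[0]);
        close(fds[1]);
        execvp(argv[0], argv);
        _exit(127);                      /* exec failed */
    }

    close(fds[1]);                       /* parent */
    *child = pid;
    return fdopen(fds[0], "r");
}

The caller reads from the returned stream, fclose()s it, and then waitpid()s on the child, which is roughly what pclose() would otherwise do for you.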
You could even extend popen to control the command's stdout, stderr and stdin separately.
I wrote such a routine, see https://github.com/rockdaboot/mget/blob/master/libmget/pipe.c
It is GPL'ed.
You call mget_popen3() with FILE pointers or mget_fd_popen3() with file descriptors.
At least, it should give you an idea on how to do it.
Do you mind having more than one C program? If you don't, you can make use of the command line arguments. In the first C program you can do the following:
system("SecondProgram $(YourCommand)");
SecondProgram will be the executable of the second C program you will be writing. The shell substitutes the output of YourCommand into the command line, so the second program receives it as command line arguments. For that purpose you may begin the main() of the second C program as below:
int main(int argc, char *argv[])
The array argv will hold the (whitespace-split) output of YourCommand, and argc will contain the number of elements in argv.
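For illustration, a minimal SecondProgram along those lines could just echo the arguments it received (this sketch is mine, not part of the original answer):

#include <stdio.h>

int main(int argc, char *argv[])
{
    /* argv[1]..argv[argc-1] hold the words produced by YourCommand */
    for (int i = 1; i < argc; i++) {
        printf("argument %d: %s\n", i, argv[i]);
    }
    return 0;
}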

C - Proper way to close files when using both open() and fdopen()

So I'm building a Unix minishell in C, and am implementing input, output, and err redirection, and have come across a problem with files. I open my files in a loop where I find redirection operators, and use open(), which returns an fd. I then assign the child's fd accordingly, and call an execute function.
When my shell is just going out and finding programs, and executing them with execvp(), I don't have much of a problem. The only problem is knowing whether or not I need to call close() on the file descriptors before prompting for the next command line. I'm worried about having an fd leak, but don't exactly understand how it works.
My real problem arises when using builtin commands. I have a builtin command called "read" that takes one argument, an environment variable name (which could be one that doesn't yet exist). Read then prompts for a value and assigns that value to the variable. Here's an example:
% read TESTVAR
test value test value test value
% echo ${TESTVAR}
test value test value test value
Well, let's say that I try something like this:
% echo here's another test value > f1
% read TESTVAR < f1
% echo ${TESTVAR}
here's another test value
This works great. Keep in mind that read executes inside the parent process; I don't call read with execvp since it's builtin. Read uses fgets, which requires a stream, not an fd. So after poking around on the IRC forums a bit I was told to use fdopen to get a stream from the file descriptor. So before calling fgets, I call:
rdStream = fdopen(inFD, "r");
then call
if (fgets(buffer, envValLen, rdStream) != buffer)
{
    if (inFD) fclose(rdStream);
    return -1;
}
if (inFD) fclose(rdStream);
As you can see, at the moment I'm closing the stream with fclose(), unless the descriptor is stdin (fd 0). Is this necessary? Do I need to close the stream? Or just the file descriptor? Or both? I'm quite confused about which I should close, since they both refer to the same file in a different manner. At the moment I'm not closing the fd; however I think that I definitely should. I would just like somebody to help make sure my shell isn't leaking any files, as I want it to be able to execute several thousand commands in a single session without leaking memory.
Thanks, if you guys want me to post any more code just ask.
The standard says:
The fclose() function shall perform the equivalent of a close() on the
file descriptor that is associated with the stream pointed to by
stream.
So calling fclose is enough; it will also close the descriptor.
FILE is a buffering object from the standard C library. When you do fclose (standard C function) it will eventually call close (Unix system function), but only after making sure the C library buffers are flushed. So I would say: if you use fopen and fwrite, then you should use fclose, and not just close, otherwise you risk losing some data.
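To make the ownership rule concrete, here is a small sketch (the file name f1 is taken from the question's example):

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    int fd = open("f1", O_RDONLY);
    if (fd == -1)
        return 1;

    FILE *stream = fdopen(fd, "r");      /* the stream now owns fd */
    if (stream == NULL) {
        close(fd);                       /* fdopen failed: fd is still ours to close */
        return 1;
    }

    char line[256];
    if (fgets(line, sizeof(line), stream) != NULL)
        printf("read: %s", line);

    fclose(stream);                      /* flushes and closes the underlying fd too */
    /* do NOT also call close(fd) here: it is already closed */
    return 0;
}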

Streams printing and redirection

I have a program which prints (via printf) some data to stdout and also calls a function foo,
which also prints some data to stdout [the way printing is implemented inside foo is unknown and I can't see the code of foo].
I have to redirect everything from stdout to a buffer or a file. I tried to do it in several ways:
freopen("file.txt", "w", stdout) - only my code's prints are written to file.txt. What was printed from foo is lost.
setbuf(stdout, buffer) - only my code's prints are written to the buffer. What was printed from foo still appears on stdout (it appears on the screen).
What can explain this behavior? How can the problem be solved?
Note: this code has to work cross-OS (Linux/Windows and Mac OS). I use gcc to compile the code and I have Cygwin.
It's likely that foo isn't using stdio for printing and is directly calling the OS instead.
I don't know about win32, but on POSIX you could use dup2 to take care of it.
/* Before the function foo is called, make `STDOUT_FILENO` refer to `fd` */
int fd;
fd = open(...);
dup2(fd, STDOUT_FILENO);
EDIT
Much to my surprise, win32 has _dup2 but it does something else.
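On POSIX, a slightly fuller sketch of that idea saves and restores the original stdout around the call; foo() here is a stand-in for the opaque function from the question:

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

/* stand-in for the opaque function from the question; note it bypasses stdio */
void foo(void) { write(STDOUT_FILENO, "hello from foo\n", 15); }

int main(void)
{
    int fd = open("file.txt", O_WRONLY | O_CREAT | O_TRUNC, 0644);
    if (fd == -1)
        return 1;

    fflush(stdout);                      /* flush anything already buffered */
    int saved = dup(STDOUT_FILENO);      /* remember the real stdout */
    dup2(fd, STDOUT_FILENO);             /* fd 1 now refers to file.txt */
    close(fd);

    foo();                               /* its writes to fd 1 land in file.txt */

    fflush(stdout);
    dup2(saved, STDOUT_FILENO);          /* restore the original stdout */
    close(saved);
    return 0;
}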
How do you know that foo() is printing to stdout? Have you tried redirecting standard output to a file at the shell and seeing whether the output from foo() still appears on the screen?
If the file redirection sends foo()'s output to the file, then you may have to rejig the file descriptor level, as in cnicutar's answer.
If the file redirection does not send foo()'s output to the file, then it may be writing to stderr or it may be opening and using /dev/tty or something similar. You can test for stderr by redirecting it separately from stdout:
your_program >/tmp/stdout.me 2>/tmp/stderr.me
If it is opening /dev/tty, the output will still appear on your screen.
Which platform are you on? If you can trace system calls (strace on Linux, truss on Solaris, ...), then you may be able to see from the trace what the foo() function is doing. You can help things by writing a message before and after calling the function, and ensuring you flush the output:
printf("About to call foo()\n");
fflush(0);
foo();
printf("Returned from foo()\n");
fflush(0);
The printf/fflush calls will be visible in the trace output, so what appears between is done by foo().
What can explain this behavior?
I have seen this sort of behavior when the code you are calling into uses a different C library than yours. On Windows I used to see this sort of thing when one DLL is compiled with GCC and another with Visual C++. The implementation of stdio for these is apparently different enough such that this can be problematic.
Another possibility is that the code you are calling is not using stdio. If you are on Unix you can use dup2 to get around this, e.g. dup2(my_file_descriptor, 1). On many implementations, if you have a FILE* you can say dup2(fileno(f), 1). This may not be portable.

How can I capture another process's output using C?

How can I capture another process's output using pure C? Can you provide sample code?
EDIT: let's assume Linux. I would be interested in "pretty portable" code. All I want to do is execute a command, capture its output and process it in some way.
There are several options, but it does somewhat depend on your platform. That said, popen should work in most places, e.g.
#include <stdio.h>
FILE *stream;
stream = popen("acommand", "r");
/* use fread, fgets, etc. on stream */
pclose(stream);
Note that this has a very specific use: it creates the process by running the command acommand and attaches its standard output in such a way as to make it accessible from your program through the stream FILE*.
If you need to connect to an existing process, or need to do richer operations, you may need to look into other facilities. Unix has various mechanisms for hooking up a process's stdout, etc.
Under Windows you can use the CreateProcess API to create a new process and hook up its standard output handle to what you want. Windows also supports popen (as _popen).
There's no plain C way to do this that I know of, so it's always going to be somewhat dependent on platform-specific APIs.
Based on your edits popen seems ideal: it is "pretty portable" (I don't think there's a Unix-like OS without it; indeed it is part of the Single UNIX Specification and POSIX), and it lets you do exactly what you want: execute a process, grab its output and process it.
If you can use system pipes, simply pipe the other process's output to your C program, and in your C program, just read the standard input.
otherprocess | your_c_program
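The C side of that pipeline is then just a loop over standard input, for example:

#include <stdio.h>

int main(void)
{
    char line[1024];

    /* Whatever `otherprocess` writes arrives here on standard input. */
    while (fgets(line, sizeof(line), stdin) != NULL) {
        printf("got: %s", line);
    }
    return 0;
}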
Which OS are you using? On a *nix-type OS, if your process is outputting to stdout or stderr you can obviously use pipes.
