Hello, I have seen some solutions on the internet, but all of them basically create a file; I want to store the output in an array of char instead. Speed is really important for me and I don't want to spend any time working on the hard drive, so popen() is not a real solution for me.
Here is a working code snippet:
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

char bash_cmd[256] = "ls -l";
char buffer[1000];
FILE *pipe;
int len;

pipe = popen(bash_cmd, "r");
if (NULL == pipe) {
    perror("pipe");
    exit(1);
}
if (fgets(buffer, sizeof(buffer), pipe) != NULL) {
    len = strlen(buffer);
    buffer[len-1] = '\0';   /* strip the trailing newline */
}
pclose(pipe);
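If you need the whole output in a single char array rather than just the first line, you can keep fread()ing from the same stream into a growing buffer. A minimal sketch (the function name read_all and the growth policy are mine, not from any library):

#include <stdio.h>
#include <stdlib.h>

char *read_all(const char *cmd)
{
    FILE *pipe = popen(cmd, "r");
    if (pipe == NULL)
        return NULL;

    size_t size = 0, cap = 4096;
    char *out = malloc(cap);
    size_t n;

    while (out != NULL && (n = fread(out + size, 1, cap - size - 1, pipe)) > 0) {
        size += n;
        if (cap - size < 2) {                /* nearly full: double the buffer */
            char *tmp = realloc(out, cap *= 2);
            if (tmp == NULL) {
                free(out);
                out = NULL;
            } else {
                out = tmp;
            }
        }
    }
    if (out != NULL)
        out[size] = '\0';
    pclose(pipe);
    return out;   /* caller is responsible for free() */
}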
If you read the manpage of popen, you will notice the following:
The popen() function opens a process by creating a pipe, forking,
and invoking the shell. [...] The return value from popen() is a
normal standard I/O stream in all respects save that it must be
closed with pclose() rather than fclose(3). [...] reading from a
"popened" stream reads the command's standard output, and the
command's standard input is the same as that of the process that
called popen().
As you can see, a call to popen results in the stdout of the command being piped into your program through an I/O stream, which has nothing to do with disk I/O at all, but rather with interprocess communication managed by the operating system.
(As a sidenote: it's generally a good idea to rely on the basic functionality of the operating system, within reason, to solve common problems. And since popen is part of POSIX.1-2001, you can rely on it to be available on all standards-compliant operating systems, even Windows.)
EDIT: if you want to know more, read this: http://linux.die.net/man/3/popen
Never forget Knuth's saying that "premature optimization is the root of all evil". Don't worry about performance until it matters, and then measure before doing anything. Except for very rare situations, the value of your time is much higher than the cost of the program's run time.
Jon Bentley's "Writing Efficient Programs" (sadly out of print; one chapter of his "Programming Pearls" is a summary) is a detailed discussion of how to make programs run faster (if it is worthwhile); and only as the very last measure, to squeeze out the last possible 2% of performance (after cutting run time down by half), does it recommend changes like the one you propose. The cited book includes some very entertaining war stories of "performance optimizations" that were a complete waste (optimizing code that isn't ever used, optimizing the code run while the operating system twiddles its thumbs, ...).
If speed is important to you, you can write your own version of popen.
It may make sense, since popen()
- creates a pipe
- forks
- executes the shell (very expensive!)
- the shell then creates a pipe, forks, and executes your program
Your customized version could reduce the procedure to the following (see the sketch after this list):
- creates a pipe
- forks
- executes your program
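For illustration, here is a minimal sketch of such a shell-free popen() replacement for reading a command's stdout. The name popen_noshell and the stripped-down error handling are mine; a production version would also need a matching pclose() replacement that calls waitpid() on the child.

#include <stdio.h>
#include <sys/types.h>
#include <unistd.h>

FILE *popen_noshell(char *const argv[])
{
    int fds[2];
    pid_t pid;

    if (pipe(fds) == -1)
        return NULL;

    pid = fork();
    if (pid == -1) {
        close(fds[0]);
        close(fds[1]);
        return NULL;
    }
    if (pid == 0) {                  /* child */
        close(fds[0]);               /* close the read end */
        dup2(fds[1], STDOUT_FILENO); /* stdout now feeds the pipe */
        close(fds[1]);
        execvp(argv[0], argv);       /* run the program, no shell involved */
        _exit(127);                  /* only reached if exec failed */
    }
    close(fds[1]);                   /* parent: close the write end */
    return fdopen(fds[0], "r");
}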
You could even extend popen to control the command's stdout, stderr and stdin separately.
I wrote such a routine, see https://github.com/rockdaboot/mget/blob/master/libmget/pipe.c
It is GPL'ed.
You call mget_popen3() with FILE pointers or mget_fd_popen3() with file descriptors.
At least, it should give you an idea of how to do it.
Do you mind having more than one C program? If you don't, you can make use of command-line arguments. In the first C program you can do the following:
system("YourCommand | SecondProgram");
The SecondProgram will be the "executable" of the second C program you will be writing. In the second C program you can receive the output of the command YourCommand as a command line argument in the SecondProgram. For that purpose you may begin the main() of second C program as below
main(int argc,char *argv[])
The array argv will have the output of the YourCommand and argc will contain the number of elements in the array argv.
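A minimal sketch of such a SecondProgram, just printing the arguments it received:

#include <stdio.h>

int main(int argc, char *argv[])
{
    int i;

    /* argv[1..argc-1] hold the words of YourCommand's output */
    for (i = 1; i < argc; i++)
        printf("argv[%d] = %s\n", i, argv[i]);
    return 0;
}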
First, I have the following macro:
#define MSG_UPDATE_DATA 70
Then I open a pipe with popen:
FILE *SensServer = popen("./SensServer", "w");
In the following code, the putc(...) call that writes to the pipe blocks the program, and the lines of code following it do not execute:
void requestTempAndPress(int pid) {
    printf("Temp and pressure requested. msg_type: %d\n", MSG_UPDATE_DATA);
    int n = putc(MSG_UPDATE_DATA, SensServer);
    printf("Data sent: %d\n", MSG_UPDATE_DATA);
}
It outputs "Temp and pressure requested. msg_type: 70" fine, but not the "Data sent..." line.
As per the man pages,
a pipe from pipe(2) is a file descriptor, of type int, while
putc() needs a FILE * stream as its argument.
So possibly your code is supplying the wrong type of argument to putc(), creating the issue. (Note that popen() does return a FILE *, so the call as shown above is type-correct if SensServer really holds that return value.)
Given the information (and lack of a sample program), this sounds like a question asking how to make pipes non-blocking. This has been discussed before, usually for nonblocking reads, e.g.,
Non-blocking pipe using popen?
Correct Code - Non-blocking pipe with popen (C++)
The first link mentions fcntl and the O_NONBLOCK flag which the manual page says can be applied to both reads and writes.
However, popen wraps the pipe in buffered I/O, while the operations addressed by fcntl are non-buffered read and write (you really cannot mix the two). If the program were changed to use the low-level pipe(2) (as in the example from the first link) and consistently used non-buffered I/O, it would give the intended behavior.
Here are links to more general discussion on the topic:
Introduction to non-blocking I/O
Blocking and Non-Blocking I/O
On the other hand (noting comments), if the program fragment is for example part of some larger system doing handshaking (expecting a timely response back from the server), that will run into problems. The fragment is writing a single character across the pipe. However, popen opens a (block-)buffered stream. Nothing will be sent directly to the server as single-character writes unless some help is provided. For instance, one could flush the output stream after each putc, e.g.,
fflush(SensServer);
Alternatively, one could make the stream unbuffered by changing it immediately after the successful call to popen, e.g., using setvbuf:
setvbuf(SensServer, NULL, _IONBF, 0);
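Putting the pieces together, a minimal sketch of the writing side (assuming, as in the question, that ./SensServer reads single-byte commands from its stdin):

#include <stdio.h>

#define MSG_UPDATE_DATA 70

int main(void)
{
    FILE *SensServer = popen("./SensServer", "w");
    if (SensServer == NULL)
        return 1;

    setvbuf(SensServer, NULL, _IONBF, 0); /* unbuffered: each byte leaves immediately */
    putc(MSG_UPDATE_DATA, SensServer);    /* no fflush() needed now */

    pclose(SensServer);
    return 0;
}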
Here are links for further reading about buffering in pipes:
Turn off buffering in pipe
Unix buffering delays output to stdout, ruins your day
Force line-buffering of stdout when piping to tee
The problem was due to the fact that I initialised the variable SensServer in the parent process but not in the child, which meant that in the child the pointer was 0 or, I guess, a random memory location.
This seems like a bit of a computing systems 101 question, but I'm stumped.
I am integrating existing code from C/C++ project A into my own project B. Both A and B will be linked into a single executable, threaded process. Project A's code makes extensive use of printf for output. This is fine, but I also want to capture that output into my own buffers. Is there a way I can read from stdout once the printf calls have written to it? I cannot fork the process or use a pipe, and my efforts to poll() stdout, or to dup() it, have not succeeded (I may be doing something wrong there).
You can use freopen to redirect the stream to a file.
#include <stdio.h>

int main(int argc, char **argv)
{
    FILE *fp = freopen("output.txt", "w", stdout);
    printf("Hello\n");   /* goes to output.txt, not the screen */
    fclose(fp);
    return 0;
}
If you run that you'll see the printf output in output.txt and nothing will go to your screen.
You can now open the file to read the data or you could even mmap it into your memory space and process it that way.
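For the mmap route, a rough sketch (error handling trimmed; it reuses the output.txt produced by the example above):

#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void)
{
    struct stat st;
    int fd = open("output.txt", O_RDONLY);

    if (fd == -1)
        return 1;
    fstat(fd, &st);

    char *data = mmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
    if (data != MAP_FAILED) {
        fwrite(data, 1, (size_t)st.st_size, stderr); /* process the bytes in place */
        munmap(data, st.st_size);
    }
    close(fd);
    return 0;
}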
Before you printf(), you could close fd 1, and dup2() a pipe that you've created into fd 1.
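A rough single-threaded sketch of that idea follows. One caveat: a pipe holds only a limited amount of data (often 64 KiB), so in a single thread you must read it back before the writer fills it, or the process will block on its own pipe.

#include <stdio.h>
#include <unistd.h>

int main(void)
{
    int fds[2];
    char buf[4096];
    ssize_t n;

    pipe(fds);
    fflush(stdout);               /* don't lose already-buffered output */
    int saved = dup(1);           /* remember the real stdout */
    dup2(fds[1], 1);              /* fd 1 now feeds the pipe */

    printf("captured!\n");
    fflush(stdout);               /* push it into the pipe */

    dup2(saved, 1);               /* restore the real stdout */
    close(saved);
    close(fds[1]);

    n = read(fds[0], buf, sizeof(buf) - 1);
    if (n > 0) {
        buf[n] = '\0';
        printf("got: %s", buf);   /* back on the real stdout */
    }
    close(fds[0]);
    return 0;
}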
There is also a handy U-Streams C source code library that makes redirecting stdout and stderr quite trivial. You can even redirect them very easily to multiple destinations, and you can create your own additional streams that behave exactly like stdout and stderr.
Look for the U-Streams C Library.
Once it's gone out, it's gone. If you want to compile it all into a single executable, you'll have to go through the code for A with a search and replace and change all those printf calls into ones to your own stream, where you can copy them and then pass them on to stdout.
In C, how should I execute an external program and get its results as if it were run in the console?
If there is an executable called dummy that displays a 4-digit number in the command prompt when executed, I want to know how to run that executable and get the 4-digit number that it generated. In C.
popen() handles this quite nicely. For instance if you want to call something and read the results line by line:
char buff[140];
FILE *in;

if (!(in = popen(somecommand, "r"))) {
    exit(1);
}
while (fgets(buff, sizeof(buff), in) != NULL) {
    /* buff is now one line of your command's output; do with it what you will */
}
pclose(in);
This has worked for me before; hopefully it's helpful. Make sure to include stdio.h in order to use this.
You can use popen() on UNIX.
This is not actually something ISO C can do on its own (by that I mean the standard itself doesn't provide this capability) - possibly the most portable solution is to simply run the program, redirecting its standard output to a file, like:
system ("myprog >myprog.out");
then use the standard ISO C fopen/fread/fclose to read that output into a variable.
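For example (a sketch; myprog and the 4 KiB cap are placeholders):

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    char buf[4096];
    size_t n = 0;
    FILE *fp;

    if (system("myprog >myprog.out") == -1)
        return 1;

    fp = fopen("myprog.out", "r");
    if (fp != NULL) {
        n = fread(buf, 1, sizeof(buf) - 1, fp);
        fclose(fp);
    }
    buf[n] = '\0';   /* buf now holds (up to 4 KiB of) the program's output */
    printf("%s", buf);
    return 0;
}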
This is not necessarily the best solution since that may depend on the underlying environment (and even the ability to redirect output is platform-specific) but I thought I'd add it for completeness.
There is popen() on unix as mentioned before, which gives you a FILE* to read from.
Alternatively on unix, you can use a combination of pipe(), fork(), exec(), select(), read(), and wait() to accomplish the task in a more generalized/flexible way.
The popen library call invokes fork and pipe under the hood to do its work. Using it, you're limited to simply reading whatever the process dumps to stdout (which you could use the underlying shell to redirect). Using the lower-level functions you can do pretty much whatever you want, including reading stderr and writing stdin.
On Windows, see calls like CreatePipe() and CreateProcess(), with the IO members of STARTUPINFO set to your pipes. You can get a file descriptor to do read()s on using _open_osfhandle() with the pipe handle. Depending on the app, you may need to read multi-threaded, or it may be okay to block.
How can I capture another process's output using pure C? Can you provide sample code?
EDIT: let's assume Linux. I would be interested in "pretty portable" code. All I want to do is to execute a command, capture its output and process it in some way.
There are several options, but it does somewhat depend on your platform. That said popen should work in most places, e.g.
#include <stdio.h>

int main(void)
{
    FILE *stream = popen("acommand", "r");
    /* use fread, fgets, etc. on stream */
    pclose(stream);
    return 0;
}
Note that this has a very specific use: it creates the process by running the command acommand and attaches its standard output in such a way as to make it accessible from your program through the stream FILE *.
If you need to connect to an existing process, or need to do richer operations, you may need to look into other facilities. Unix has various mechanisms for hooking up a process's stdout etc.
Under Windows you can use the CreateProcess API to create a new process and hook up its standard output handle to what you want. Windows also supports popen.
There's no plain C way to do this that I know of, though, so it's always going to be somewhat dependent on platform-specific APIs.
Based on your edits popen seems ideal, it is "pretty portable", I don't think there's a unix like OS without it, indeed it is part of the Single Unix Specification, and POSIX, and it lets you do exactly what you want, execute a process, grab its output and process it.
If you can use system pipes, simply pipe the other process's output to your C program, and in your C program, just read the standard input.
otherprocess | your_c_program
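For example, your_c_program could be as simple as this sketch, which just tags each line it receives:

#include <stdio.h>

int main(void)
{
    char line[256];

    while (fgets(line, sizeof(line), stdin) != NULL) {
        /* each line of otherprocess's output arrives here */
        printf("got: %s", line);
    }
    return 0;
}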
Which OS are you using? On a *nix-type OS, if your process is outputting to STDOUT or STDERR, you can obviously use pipes.
I've coded a program in C that sends messages to stdout using printf, and I'm having trouble redirecting the output to a file (running from bash).
I've tried:
./program argument >> program.out
./program argument > program.out
./program >> program.out argument
./program > program.out argument
In each case, the file program.out is created but it remains empty. After the execution ends the file size is 0.
If I omit the redirection when executing the program:
./program argument
Then, all messages sent to stdout using printf are shown in the terminal.
I have other C programs for which I've no problem redirecting the output this way.
Does it have to do with the program itself? With the argument passing?
Where should I look for the problem?
Some details about the C program:
- It does not read anything from stdin
- It uses BSD Internet Domain sockets
- It uses POSIX threads
- It assigns a special handler function for the SIGINT signal using sigaction
- It sends lots of newlines to stdout (for those of you thinking I should flush)
Some code:
int main(int argc, char** argv)
{
    printf("Execution started\n");
    do
    {
        /* lots of printf here */
    } while (1);

    /* Code never reached */
    pthread_exit(EXIT_SUCCESS);
}
Flushing after newlines only works when printing to a terminal, but not necessarily when printing to a file. A quick Google search revealed this page with further information: http://www.pixelbeat.org/programming/stdio_buffering/
See the section titled "Default Buffering modes".
You might have to add some calls to fflush(stdout), after all.
You could also set the buffer size and behavior using setvbuf.
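For example, a sketch that forces line-buffering on stdout even when it is redirected to a file, so each newline pushes the output out:

#include <stdio.h>

int main(void)
{
    setvbuf(stdout, NULL, _IOLBF, 0);  /* line-buffered even when not a terminal */
    printf("this line reaches program.out right away\n");
    /* ... rest of the program ... */
    return 0;
}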
Flushing the buffers is normally handled by the exit() function, which is usually called implicitly by a return from main(). You are ending your program by raising SIGINT, and apparently the default SIGINT handler does not flush the buffers.
Take a look at this article:
Applying Design Patterns to Simplify Signal Handling. The article is mostly C++, but there is a useful C example in the 2nd section, which shows how to use SIGINT to exit your program gracefully.
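In that spirit, a minimal sketch of the pattern: the handler only sets a flag, and main() returns normally so that exit() can flush the stdio buffers.

#include <signal.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

static volatile sig_atomic_t got_sigint = 0;

static void on_sigint(int sig)
{
    (void)sig;
    got_sigint = 1;   /* only set a flag; stdio is not async-signal-safe */
}

int main(void)
{
    struct sigaction sa;

    memset(&sa, 0, sizeof(sa));
    sa.sa_handler = on_sigint;
    sigemptyset(&sa.sa_mask);
    sigaction(SIGINT, &sa, NULL);

    while (!got_sigint) {
        printf("working...\n");
        sleep(1);
    }
    return 0;   /* normal return runs exit(), which flushes stdio buffers */
}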
As for why the behavior of a terminal differs from a file,
take a look at Stevens' Advanced Programming in the UNIX Environment, Section 5.4, on Buffering. He says that:
Most implementations default to the following types of buffering.
Standard error is always unbuffered.
All other streams are line buffered if they refer to a terminal device; otherwise, they are fully buffered.
The four platforms discussed in this book follow these conventions for standard I/O buffering: standard error is unbuffered, streams open to terminal devices are line buffered, and all other streams are fully buffered.
Has the program terminated by the time you check the contents of the redirected file? If it's still running, your output might still be buffered somewhere up the chain, so you don't see it in the file.
Apart from that, and the other answers provided so far, I think it's time to show a representative example of the problem code. There are too many esoteric possibilities.
EDIT
From the look of the sample code, if you've got a relatively small amount of printing happening, then you're getting caught in the output buffer. Flush after each write to be sure that it's gone to disk. Typically you can have up to a page size's worth of unwritten data lying around otherwise.
In the absence of a flush, the only time you can be sure you've got everything on disk is when the program exits. Even a thread terminating won't do it, since output buffers like that aren't per-thread, they're per-process.
Suggestions:
- Redirect stderr to a file as well.
- Try tail -f on your output file(s).
- Open a file and fprintf your logging there (to help figure out what's going on).
- Search for any manual closing/duplication/piping of the std* FILE handles or of file descriptors 0-2.
- Reduce complexity: cut out big chunks of functionality until the printfs work, then re-add them until it breaks again. Continue until you identify the culprit code.
Just for the record, in Perl you would use:
use IO::Handle;
STDOUT->flush;          # flush pending output once
STDOUT->autoflush(1);   # or flush automatically after every write