I have been using dup and freopen to reroute stdout to a file, as below:
fflush(stdout);                      /* make sure nothing is left in the old buffer */
fgetpos(stdout, &pos);               /* remember the current position in the stream */
fd = dup(fileno(stdout));            /* keep a duplicate of the original stdout */
freopen("stdout.out", "w", stdout);  /* point stdout at the file */
What I would like to do is reroute it to a char[] so that I can manipulate it. Obviously this isn't very useful when writing with printf, but when using libraries that write to stdout it would be helpful to get the output back into my code so I can manipulate it if necessary.
Assigning to stdout is not guaranteed to work, but it may work on your platform; otherwise, see this answer based on shm_open, mmap, and fmemopen: https://stackoverflow.com/a/25327235/2332068
This is my first post on Stack Overflow!
So, I'm trying to use setbuf() to redirect stdout into a char buffer[BUFSIZ]. It works perfectly when I use printf(), but not at all when I use the system call write().
Here is an example:
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    char buffer[BUFSIZ];

    freopen("/dev/null", "a", stdout);
    setbuf(stdout, buffer);

    printf("This works\n");
    write(stdout->_file, "This doesn't.\n", 14);

    fflush(stdout);
    freopen("/dev/tty", "a", stdout);
    printf("Buffer content :\n%s", buffer);
    return 0;
}
And the output is
Buffer content :
This works
Do you have any idea why?
Since for now I don't see how to make this work, I'll pipe stdout to stdin and then read the result, which is not the cleanest way of doing this, I think.
Thank you, and have a nice day!
The write function is a low-level POSIX function that operates at a lower level than the C standard output functions.
By using write directly you bypass the stdio buffering. If you want to use the buffer, use the standard C fwrite function instead.
Also note that stdout is a FILE*, and FILE is an opaque data structure. You should never attempt to use members of it directly.
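For illustration, here is a sketch of the question's program with the raw write replaced by fwrite, keeping the /dev/null trick from the question. Note that reading the array handed to setbuf is not something the standard guarantees (its contents are formally indeterminate), so this only mirrors what the question already relies on:
#include <stdio.h>

int main(void)
{
    char buffer[BUFSIZ] = {0};  /* zero-filled so the captured text stays NUL-terminated */

    freopen("/dev/null", "a", stdout);  /* detach stdout from the terminal */
    setbuf(stdout, buffer);             /* stdio now buffers stdout into our array */

    printf("This works\n");
    fwrite("So does this.\n", 1, 14, stdout);  /* fwrite goes through stdio, so it lands in buffer too */

    fflush(stdout);
    /* print the captured text on stderr so the final print cannot disturb buffer */
    fprintf(stderr, "Buffer content:\n%s", buffer);
    return 0;
}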
Does anyone know how I can fix this?
char bash_cmd[256] = "curl";
char buffer[1000];
FILE *pipe;
int len;

pipe = popen(bash_cmd, "r");
if (NULL == pipe) {
    perror("pipe");
    exit(1);
}

fgets(buffer, sizeof(buffer), pipe);
printf("OUTPUT: %s", buffer);
pclose(pipe);
The above code snippet returns the following:
OUTPUT: (�3B
instead of what it should be returning, which is:
curl: try 'curl --help' or 'curl --manual' for more information
Something is wrong, and I can't figure out what. When I replace "curl" with, say, "ls -la", it works fine, but for whatever reason, when I use curl it doesn't properly save the output into buffer. What can I do to fix this? Thanks in advance.
Also, replacing "curl" with the full path to curl (/usr/bin/curl) doesn't work either. ;(
When I run your code, I find that the output is indeed approximately what you describe, but that the output you expect is also printed immediately before it. It seems highly likely, therefore, that curl is printing the usage message to its stderr rather than to its stdout, as indeed it should.
You do not check the return value of fgets(); I suspect you would find that it is NULL, indicating that the end of the stream occurred before any data was read. In that case, I do not think fgets() modifies the provided buffer.
If you want to capture curl's stderr in addition to its stdout, then you can apply I/O redirection to the problem:
char bash_cmd[256] = "curl 2>&1";
That would not work (directly) with the execve()-family functions, but popen() runs the given command via a shell, which should handle the redirection operator just fine.
For general purposes, however, combining curl's output and error streams may not be what you want. If both real output and real diagnostics were emitted then they would be intermingled.
The output you expect from curl is going to stderr, not stdout. In fact nothing is written to stdout, and the output you are printing is just the uninitialized contents of the buffer.
Your code should check the return value of fgets, which will be NULL if no characters were read (or if an error occurred).
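Putting both suggestions together, a sketch of the corrected snippet might look like this (the 2>&1 works because popen runs the command through a shell):
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    char bash_cmd[256] = "curl 2>&1";  /* send curl's stderr into the pipe as well */
    char buffer[1000];
    FILE *pipe;

    pipe = popen(bash_cmd, "r");
    if (NULL == pipe) {
        perror("popen");
        exit(1);
    }

    /* check the return value so an empty stream never prints an uninitialized buffer */
    while (fgets(buffer, sizeof(buffer), pipe) != NULL)
        printf("OUTPUT: %s", buffer);

    pclose(pipe);
    return 0;
}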
Have a look at this code:
#include <stdio.h>
#include <unistd.h>

int main()
{
    int pipefd[2], n;
    char buf[100];

    if (pipe(pipefd) < 0)
        printf("Pipe error");
    printf("\nRead fd:%d write fd:%d\n", pipefd[0], pipefd[1]);

    if (write(pipefd[1], "Hello Dude!\n", 12) != 12)
        printf("Write error");
    if ((n = read(pipefd[0], buf, sizeof(buf))) <= 0)
        printf("Read error");
    write(1, buf, n);
    return 0;
}
I expect the printf to print the read fd and write fd before Hello Dude! is read from the pipe, but that's not the case (see here). When I tried the same program in our college computer lab, my output was
Read fd:3 write fd:4
Hello Dude!
Also, a few of our friends observed that changing the printf statement to contain more \n characters changed the output order. For example, printf("\nRead fd:%d\n write fd:%d\n",pipefd[0],pipefd[1]); meant that Read fd is printed, then the message Hello Dude!, and then the write fd. What is this behaviour?
Note: Our lab uses a Linux server on which we run terminals; I don't remember the compiler version, though.
It's because printf to the standard output stream is buffered but write to the standard output file descriptor is not.
That means the behaviour can change based on what sort of buffering you have. In C, standard output is line buffered if it can be determined to be connected to an interactive device. Otherwise it's fully buffered (see here for a treatise on why this is so).
Line buffered means it will flush to the file descriptor when it sees a newline. Fully buffered means it will only flush when the buffer fills (for example, 4K worth of data), or when the stream is closed (or when you fflush).
When you run it interactively, the flush happens before the write because printf encounters the \n and flushes automatically.
However, when you run it otherwise (such as by redirecting output to a file or in an online compiler/executor where it would probably do the very same thing to capture data for presentation), the flush happens after the write (because printf is not flushing after every line).
In fact, you don't need all that pipe stuff in there to see this in action, as per the following program:
#include <stdio.h>
#include <unistd.h>
int main (void) {
printf ("Hello\n");
write (1, "Goodbye\n", 8);
return 0;
}
When I execute myprog ; echo === ; myprog >myprog.out ; cat myprog.out, I get:
Hello
Goodbye
===
Goodbye
Hello
and you can see the difference that the different types of buffering makes.
If you want line buffering regardless of redirection, you can try:
setvbuf (stdout, NULL, _IOLBF, BUFSIZ);
early on in your program. It's implementation-defined whether an implementation supports this, so it may have no effect, but I've not seen many where it doesn't work.
You shouldn't mix calls to write and printf on a single file descriptor. Change write to fwrite.
Functions which use FILE are buffered. Functions which use file descriptors are not. This is why you may get mixed order.
You can also try calling fflush before write.
When you write to the same file, pipe, or whatever by two means at once (direct I/O and an output stream), you can get this behaviour. The reason is that the output stream is buffered.
With fflush() you can control that behaviour.
What is happening is that printf writes to stdout in a buffered way -- the string is kept in a buffer before being output -- while the 'write' later on writes to stdout unbuffered. This can have the effect that the output from 'write' appears first if the buffer from the printf is only flushed later on.
You can explicitly flush using fflush() -- but even better would be not to mix buffered and non-buffered writes to the same output. Type man printf, man fflush, man fwrite etc. on your terminal to learn more about what these commands do exactly.
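If you really do have to mix them, a flush before each raw write keeps the order deterministic even when output is redirected; a minimal sketch based on the Hello/Goodbye example above:
#include <stdio.h>
#include <unistd.h>

int main (void) {
    printf ("Hello\n");
    fflush (stdout);             /* push the stdio buffer out before the raw write */
    write (1, "Goodbye\n", 8);
    return 0;
}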
stdout is line-buffered when connected to a terminal, but I remember reading somewhere that reading (at least from stdin) will automatically flush stdout. All C implementations that I have used have done this, but I can't find it in the standard now.
It does make sense that it works that way, otherwise code like this:
printf("Type some input: ");
fgets(line, sizeof line, stdin);
would need an extra fflush(stdout);
So is stdout guaranteed to be flushed here?
EDIT:
As several replies have said, there seems to be no guarantee in the standard that the output to stdout in my example will appear before the read from stdin, but on the other hand, this intent is stated in (my free draft copy of) the standard:
The input and output dynamics of interactive devices shall take place as specified in 7.19.3. The intent of these requirements is that unbuffered or line-buffered output appear as soon as possible, to ensure that prompting messages actually appear prior to a program waiting for input.
(ISO/IEC 9899:TC2 Committee Draft, May 6, 2005, page 14)
So it seems that there is no guarantee, but it will probably work in most implementations anyway. (Famous last words...)
No, it does not.
To answer your question, you do need the extra fflush(stdout); after your printf() call to make sure the prompt appears before your program tries to read input. Reading from stdin doesn't fflush(stdout); for you.
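In other words, the prompt example from the question needs something like this (a minimal sketch):
#include <stdio.h>

int main(void)
{
    char line[256];

    printf("Type some input: ");
    fflush(stdout);   /* make sure the prompt is visible before we block on input */
    if (fgets(line, sizeof line, stdin) != NULL)
        printf("You typed: %s", line);
    return 0;
}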
No. You need to fflush(stdout); many implementations will flush at every newline if they are sending output to a terminal.
No. stdin and stdout are buffered. You need to explicitly fflush(stdout) for the buffered data to be pushed out of the stdio buffer to the underlying device, such as a terminal. The buffering can be configured by calling setvbuf.
Edit: Thanks Jonathan; to answer the question, reading from stdin does not flush stdout. I may have gone off on a tangent here by including code demonstrating how to use setvbuf.
#include <stdio.h>

int main(void)
{
    FILE *input, *output;
    char bufr[512];

    input = fopen("file.in", "r+b");
    output = fopen("file.out", "w");
    if (input == NULL || output == NULL)
        return 1;                     /* bail out if either file failed to open */

    /* set up input stream for minimal disk access,
       using our own character buffer */
    if (setvbuf(input, bufr, _IOFBF, 512) != 0)
        printf("failed to set up buffer for input file\n");
    else
        printf("buffer set up for input file\n");

    /* set up output stream for line buffering using space that
       will be obtained through an indirect call to malloc */
    if (setvbuf(output, NULL, _IOLBF, 132) != 0)
        printf("failed to set up buffer for output file\n");
    else
        printf("buffer set up for output file\n");

    /* perform file I/O here */

    /* close files */
    fclose(input);
    fclose(output);
    return 0;
}
Hope this helps,
Best regards,
Tom.
No, that's not part of the standard. It's certainly possible that you've used a library implementation where the behavior you described did happen, but that's a non-standard extension that you shouldn't rely on.
No. Watch out for inter-process deadlocks when dealing with std streams when either read on stdin or write on stdout blocks.
I have a C application with many worker threads. It is essential that these do not block so where the worker threads need to write to a file on disk, I have them write to a circular buffer in memory, and then have a dedicated thread for writing that buffer to disk.
The worker threads do not block any more. The dedicated thread can safely block while writing to disk without affecting the worker threads (it does not hold a lock while writing to disk). My memory buffer is tuned to be sufficiently large that the writer thread can keep up.
This all works great. My question is, how do I implement something similar for stdout?
I could macro printf() to write into a memory buffer, but I don't have control over all the code that might write to stdout (some of it is in third-party libraries).
Thoughts?
NickB
I like the idea of using freopen. You might also be able to redirect stdout to a pipe using dup and dup2, and then use read to grab data from the pipe.
Something like so:
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

#define MAX_LEN 40

int main( int argc, char *argv[] ) {
    char buffer[MAX_LEN+1] = {0};
    int out_pipe[2];
    int saved_stdout;

    saved_stdout = dup(STDOUT_FILENO);  /* save stdout for display later */

    if( pipe(out_pipe) != 0 ) {         /* make a pipe */
        exit(1);
    }

    dup2(out_pipe[1], STDOUT_FILENO);   /* redirect stdout to the pipe */
    close(out_pipe[1]);

    /* anything sent to printf should now go down the pipe */
    printf("ceci n'est pas une pipe");
    fflush(stdout);

    read(out_pipe[0], buffer, MAX_LEN); /* read from pipe into buffer */

    dup2(saved_stdout, STDOUT_FILENO);  /* reconnect stdout for testing */
    printf("read: %s\n", buffer);

    return 0;
}
If you're working with the GNU libc, you might use memory streams (string streams), e.g. fmemopen or open_memstream.
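For instance, open_memstream hands back a FILE* whose output accumulates in a malloc'd buffer that grows as needed. Note that this only captures what is written to that particular stream, not to stdout itself, so it helps only if you can point the writing code at the stream; a minimal sketch:
#define _POSIX_C_SOURCE 200809L
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    char *buf = NULL;
    size_t size = 0;

    FILE *mem = open_memstream(&buf, &size);  /* output accumulates in buf */
    if (mem == NULL)
        return 1;

    fprintf(mem, "captured %d bytes of %s\n", 42, "output");
    fclose(mem);                              /* flushes and updates buf/size */

    printf("buffer (%zu bytes): %s", size, buf);
    free(buf);
    return 0;
}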
You can "redirect" stdout into file using freopen().
man freopen says:
The freopen() function opens the file whose name is the string pointed to by path and associates the stream pointed to by stream with it. The original stream (if it exists) is closed. The mode argument is used just as in the fopen() function. The primary use of the freopen() function is to change the file associated with a standard text stream (stderr, stdin, or stdout).
This file could well be a pipe; the worker threads would write to that pipe and the writer thread would listen on it.
Why don't you wrap your entire application in another? Basically, what you want is a smart cat that copies stdin to stdout, buffering as necessary. Then use standard stdin/stdout redirection. This can be done without modifying your current application at all.
~MSalters/# YourCurrentApp | bufcat
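bufcat is just a made-up name from the command line above; a rough sketch of such a copier is below. A real version would want its own writer thread or a much larger ring buffer so that a slow consumer does not stall the producer:
/* bufcat (hypothetical): copy stdin to stdout through a large buffer */
#include <stdio.h>
#include <unistd.h>

#define BUF_SIZE (64 * 1024)

int main(void)
{
    static char buf[BUF_SIZE];
    ssize_t n;

    while ((n = read(STDIN_FILENO, buf, sizeof buf)) > 0) {
        ssize_t off = 0;
        while (off < n) {                   /* write may be partial; loop until done */
            ssize_t w = write(STDOUT_FILENO, buf + off, (size_t)(n - off));
            if (w < 0)
                return 1;
            off += w;
        }
    }
    return n < 0 ? 1 : 0;
}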
You can change how buffering works with setvbuf() or setbuf(). There's a description here: http://publications.gbdirect.co.uk/c_book/chapter9/input_and_output.html.
[Edit]
stdout really is a FILE*. If the existing code works with FILE*s, I don't see what prevents it from working with stdout.
One solution (for both things you're doing) would be to use a gathering write via writev.
Each thread could, for example, sprintf into an iovec buffer and then pass the iovec pointers to the writer thread and have it simply call writev on stdout's file descriptor.
Here is an example of using writev from Advanced Unix Programming
Under Windows you would use WSASend for similar functionality.
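A minimal single-threaded sketch of the writev call itself (the buffer names here are made up for illustration):
#include <stdio.h>
#include <string.h>
#include <sys/uio.h>
#include <unistd.h>

int main(void)
{
    /* each worker thread could fill one of these; the writer thread emits them together */
    char part1[64], part2[64];
    snprintf(part1, sizeof part1, "worker 1: %d items\n", 3);
    snprintf(part2, sizeof part2, "worker 2: %d items\n", 7);

    struct iovec iov[2];
    iov[0].iov_base = part1;
    iov[0].iov_len  = strlen(part1);
    iov[1].iov_base = part2;
    iov[1].iov_len  = strlen(part2);

    /* gathering write: both buffers go to the stdout file descriptor in one call */
    writev(STDOUT_FILENO, iov, 2);
    return 0;
}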
The method using the 4096 bigbuf will only sort of work. I've tried this code, and while it does successfully capture stdout into the buffer, it's unusable in a real-world case. You have no way of knowing how long the captured output is, so no way of knowing where to terminate the string with '\0'. If you try to use the buffer, you can get 4000 characters of garbage spat out even though you only captured 96 characters of stdout output.
In my application, I'm using a Perl interpreter in the C program. I have no idea how much output is going to be produced by whatever document is thrown at the C program, so the code above would never allow me to cleanly print that output anywhere.
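One way around that, assuming the same dup/dup2-pipe technique as the example further up, is to restore stdout before draining the pipe (so read() can reach end-of-file instead of blocking) and to use the byte counts read() returns to terminate the string. This still assumes the captured output fits within the pipe's capacity; a sketch:
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

#define MAX_LEN 4096   /* hypothetical capture size, echoing the "4096 bigbuf" above */

int main(void)
{
    char buffer[MAX_LEN + 1];
    int out_pipe[2];
    int saved_stdout;

    saved_stdout = dup(STDOUT_FILENO);
    if (pipe(out_pipe) != 0)
        exit(1);
    dup2(out_pipe[1], STDOUT_FILENO);
    close(out_pipe[1]);

    printf("some captured output");   /* stand-in for whatever the library/interpreter prints */
    fflush(stdout);

    /* restore stdout FIRST: this closes the last write end of the pipe,
       so the read loop below sees end-of-file instead of blocking forever */
    dup2(saved_stdout, STDOUT_FILENO);

    size_t total = 0;
    ssize_t n;
    while (total < MAX_LEN &&
           (n = read(out_pipe[0], buffer + total, MAX_LEN - total)) > 0)
        total += (size_t)n;
    buffer[total] = '\0';             /* terminate using the actual byte count */

    printf("captured %zu bytes: %s\n", total, buffer);
    return 0;
}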