I have the following C code:
#include <gio/gio.h>
int main(void) {
GSubprocess *process;
gchar *output;
gchar *error;
process = g_subprocess_new(G_SUBPROCESS_FLAGS_STDOUT_PIPE, NULL, "./for.sh", NULL);
g_subprocess_communicate_utf8(process, NULL, NULL, &output, &error, NULL);
g_print("%s", output);
}
and the following bash code:
#!/bin/bash
for i in {1..3}
do
echo "$i"
sleep 1
done
The issue is that the C code waits until the bash script (or any other program, for that matter) finishes before printing its output. What I'd prefer is to have it printed in real time, line by line.
I have found a solution for this in Python but I don't know how to translate it to C. Any pointers will be appreciated.
The answer in code is this:
#include <gio/gio.h>
int main(void) {
GSubprocess *process;
GInputStream *stream;
char buffer[256];
gssize bytes_read;
process = g_subprocess_new(G_SUBPROCESS_FLAGS_STDOUT_PIPE, NULL, "./for.sh", NULL);
stream = g_subprocess_get_stdout_pipe(process);
/* g_input_stream_read() returns the number of bytes read (0 on EOF, -1 on
   error); the buffer is not null-terminated, so print exactly that many bytes */
while ((bytes_read = g_input_stream_read(stream, buffer, sizeof(buffer), NULL, NULL)) > 0)
    g_print("%.*s", (int) bytes_read, buffer);
return 0;
}
The funny thing was that when I printed the raw buffer with g_print("%s", buffer), the output was correct or accompanied by junk depending on the size of buffer[] (buffer[4] and buffer[12] were fine, buffer[8] produced junk). In hindsight that isn't weird at all: g_input_stream_read() does not null-terminate the buffer, so printing it as a string reads past the bytes that were actually filled. Printing only the number of bytes that g_input_stream_read() reports fixes it.
You need to get the stdout pipe from the GSubprocess using g_subprocess_get_stdout_pipe(), and then read from it using g_input_stream_read() (or some other input stream reading function) in a loop until the subprocess exits.
You might want to do the same thing with stderr, but in order to do that you’d need to poll both using a GMainLoop, since you can’t block on reading two streams at once.
Seeing your Python code might allow people to help you with the C translation of it.
I want to see the output of the command executed by popen() as soon as possible, so I changed the buffering mode of the FILE stream returned by popen() to line buffered. According to the documentation, setvbuf() should accomplish this. I ran a simple test on Ubuntu 16.04, but it makes no difference at all.
Here is the code snippet which I used to do the said test:
#include <stdio.h>
#include <string.h>
int main(void)
{
char buffer[1024];
memset(buffer, 0, 1024);
FILE* file = popen("bash -c \"for i in 1 2 3 4 5;do echo -e -n 'thanks a lot\n'; sleep 1; done\" ", "r");
if(NULL != file)
{
int size;
printf("setvbuf returns %d\n", setvbuf(file, NULL, _IOLBF, 0));
while((size = fread(buffer, 1, 1024, file))>0)
{
printf("size=%d\n", size);
memset(buffer, 0, 1024);
}
}
return 0;
}
Here is the output of the code snippet running on Ubuntu 16.04:
setvbuf returns 0
//about five seconds later
size=65
The documentation says:
The function setvbuf() returns 0 on success.
Judging by the output above, setvbuf(file, NULL, _IOLBF, 0) successfully set the buffering mode of the stream returned by popen() to line buffered. But the output of the code snippet indicates it still uses the default block buffering.
But when I tried getline(), it achieved the goal, which really surprised me.
#include <stdio.h>
#include <string.h>
#include <stdlib.h>
int main(void)
{
FILE* file = popen("bash -c \"for i in 1 2 3 4 5;do echo -e -n 'thanks a lot\n'; sleep 1; done\" ", "r");
char* line=NULL;
size_t len = 0;
if(NULL != file)
{
// std::cout << setvbuf(file, NULL, _IOLBF, 0) << std::endl; //setvbuf has not been called
while(getline(&line, &len, file)>0)
{
printf("strlen(line)=%zu\n", strlen(line));
free(line);
line = NULL;
}
}
free(line);
return 0;
}
Here is the output:
//one second later
strlen(line)=13
//one second later
strlen(line)=13
//one second later
strlen(line)=13
//one second later
strlen(line)=13
//one second later
strlen(line)=13
I am really curious why getline() can acquire the output of the pipe as soon as it is available, whereas setvbuf() plus fread() does not work.
getline stops reading once it gets to a newline. fread keeps reading until it reads as much data as you specified (in your case, 1024 bytes) or it encounters EOF or an error. This has nothing to do with buffering. You might want to look at read to see if it's closer to what you want.
Jeremy Friesner's answer:
In my experience, the call to setvbuf(stdout, NULL, _IONBF, 0) needs to be executed inside the child process in order to get the behavior you want. (Of course that's much easier to achieve if the child process's executable is one you control rather than /bin/bash.) popen() in the parent process can't give you data that the child process hasn't supplied to it yet.
On Linux (Raspbian on a Raspberry Pi) I would like to make it so that anything my C application prints using printf is sent back to me in a callback.
(No, I'm not talking about shell redirection with > some_file.txt. I'm talking about a C program making the decision by itself to send stdout (and therefore printf) to a callback within that same program.)
(Yes, I really do want to do this. I'm making a full-screen program using OpenGL and want to present any printf'd text to the user within that program, using my own rendering code. Replacing all printf calls with something else is not feasible.)
I feel like this should be easy. There are variations of this question on StackOverflow already, but none that I could find are exactly the same.
I can use fopencookie to get a FILE* that ends up calling my callback. So far, so good. The challenge is to get stdout and printf to go there.
I can't use freopen because it takes a string path. The FILE* I want to redirect to is not a file on the filesystem but rather just exists at runtime.
I can't use dup2 because the FILE* from fopencookie does not have a file descriptor (fileno returns -1).
The glibc documentation suggests that I can simply reassign stdout to my new FILE*: "stdin, stdout, and stderr are normal variables which you can set just like any others.". This does almost work. Anything printed with fprintf (stdout, "whatever") does go to my callback, and so does any printf that has any format specifiers. However, any call to printf with a string with no format specifiers at all still goes to the "original" stdout.
How can I achieve what I'm trying to do?
PS: I don't care about portability. This will only ever run on my current environment.
#define _GNU_SOURCE
#include <stdio.h>
#include <unistd.h>
#include <assert.h>
#include <stdarg.h>
#include <alloca.h>
#include <string.h>
static ssize_t my_write_func (void * cookie, const char * buf, size_t size)
{
fprintf (stderr, "my_write_func received %zu bytes\n", size);
char * copy = (char*) alloca (size + 1);
assert (copy);
copy[size] = 0;
strncpy (copy, buf, size);
fprintf (stderr, "Text is: \"%s\"\n", copy);
fflush (stderr);
return size;
}
static FILE * create_opencookie (void)
{
cookie_io_functions_t funcs;
memset (&funcs, 0, sizeof (funcs));
funcs.write = my_write_func;
FILE * f = fopencookie (NULL, "w", funcs);
assert (f);
return f;
}
int main (int argc, char ** argv)
{
FILE * f = create_opencookie ();
fclose (stdout);
stdout = f;
// These two DO go to my callback:
fprintf (stdout, "This is a long string, fprintf'd to stdout\n");
printf ("Hello world, this is a printf with a digit: %d\n", 123);
// This does not go to my callback.
// If I omit the fclose above then it gets printed to the console.
printf ("Hello world, this is plain printf.\n");
fflush (NULL);
return 0;
}
This appears to be a bug in GLIBC.
The reason that printf("simple string") works differently from printf("foo %d", 123) is that GCC transforms the former into a puts, with the notion that they are equivalent.
As far as I can tell, they should be equivalent. This man page states that puts outputs to stdout, just like printf does.
However, in GLIBC printf outputs to stdout here, but puts outputs to _IO_stdout here, and these are not equivalent. This has already been reported as a glibc bug (upstream bug).
To work around this bug, you could build with -fno-builtin-printf flag. That prevents GCC from transforming printf into puts, and on my system produces:
$ ./a.out
my_write_func received 126 bytes
Text is: "This is a long string, fprintf'd to stdout
Hello world, this is a printf with a digit: 123
Hello world, this is plain printf.
"
This workaround is of course incomplete: if you call puts directly, or link in object files that call printf("simple string") and were not compiled with -fno-builtin-printf (perhaps from 3rd-party library), then you'll still have a problem.
Unfortunately you can't assign to _IO_stdout (which is a macro). The only other thing you could do (that I can think of) is link in your own puts, which just returns printf("%s", arg). That should work if you are linking against libc.so.6, but may cause trouble if you link against libc.a.
You can redirect to a pipe instead and process the written data in a separate thread.
#include <pthread.h>
#include <ctype.h>
#include <unistd.h>
#include <stdio.h>
// this is the original program which you can't change
void print(void) {
printf("Hello, %d\n", 123);
puts("world");
printf("xyz");
}
int p[2];
void *render(void *arg) {
int nread;
char buf[1];
while((nread = read(p[0], buf, sizeof buf)) > 0) {
// process the written data, in this case - make it uppercase and write to stderr
for(int i = 0; i < nread; i++)
buf[i] = toupper(buf[i]);
write(2, buf, nread);
}
return NULL;
}
int main() {
setvbuf(stdout, NULL, _IONBF, 0);
pipe(p);
dup2(p[1], 1);
close(p[1]);
pthread_t t;
pthread_create(&t, NULL, render, NULL);
print();
close(1);
pthread_join(t, NULL);
}
On Debian stretch, putting:
setvbuf (f, NULL, _IOLBF, 0); // line buffered
after the create_opencookie call worked.
I have a sample program that takes in an input from the terminal and executes it in a cloned child in a subshell.
#define _GNU_SOURCE
#include <stdlib.h>
#include <sys/wait.h>
#include <sched.h>
#include <unistd.h>
#include <string.h>
#include <signal.h>
int clone_function(void *arg) {
execl("/bin/sh", "sh", "-c", (char *)arg, (char *)NULL);
return 1; /* only reached if execl() fails */
}
int main() {
while (1) {
char data[512] = {'\0'};
int n = read(0, data, sizeof(data));
// fgets(data, 512, stdin);
// int n = strlen(data);
if ((strcmp(data, "exit\n") != 0) && n > 1) {
char *line;
char *lines = strdup(data);
while ((line = strsep(&lines, "\n")) != NULL && strcmp(line, "") != 0) {
void *clone_process_stack = malloc(8192);
void *stack_top = clone_process_stack + 8192;
int clone_flags = CLONE_VFORK | CLONE_FS;
clone(clone_function, stack_top, clone_flags | SIGCHLD, (void *)line);
int status;
wait(&status);
free(clone_process_stack);
}
} else {
exit(0);
}
}
return 0;
}
The above code works on an older Linux system (with minimal RAM) but not on a newer one. By "not working" I mean that if I type a simple command like "ls" I don't see its output on the console, whereas on the older system I do.
Also, if I run the same code on gdb in debugger mode then I see the output printed onto the console in the newer system as well.
In addition, if I use fgets() instead of read() it works as expected in both systems without an issue.
I have been trying to understand the behavior and I couldn't figure it out. I tried doing an strace. The difference I see is that the wait() return has the output of the ls program in the cases it works and nothing for the cases it does not work.
The only thing I can think of is that read(), being a system call rather than a library function, might behave differently across systems, but I can't see how that would affect the output.
Can someone point me out to why I might be observing this behavior?
EDIT
The code is compiled as:
gcc test.c -o test
strace when it's not working as expected is shown below
strace when it's working as expected (only difference is I added a printf("%d\n", n); following the call for read())
There are multiple problems in your code:
a successful read system call can return any non zero number between 1 and the buffer size depending on the type of handle and available input. It does not stop at newlines like fgets(), so you might get line fragments, multiple lines, or multiple lines and a line fragment.
furthermore, if read fills the whole buffer, as it might when reading from a regular file, there is no trailing null terminator, so passing the buffer to string functions has undefined behavior.
the test if ((strcmp(data, "exit\n") != 0) && n > 1) { is performed in the wrong order: first test if read was successful, and only then test the buffer contents.
you do not set the null terminator after the last byte read by read, relying on buffer initialization, which is wasteful and insufficient if read fills the whole buffer. Instead you should make data one byte longer than the read size argument, and set data[n] = '\0'; if n > 0.
Here are ways to fix the code:
using fgets(), you can remove the line splitting code: just remove initial and trailing white space, ignore empty and comment lines, clone and execute the commands.
using read(), you could just read one byte at a time, collect these into the buffer until you have a complete line, null terminate the buffer and use the same rudimentary parser as above. This approach mimics fgets(), by-passing the buffering performed by the standard streams: it is quite inefficient but avoids reading from handle 0 past the end of the line, thus leaving pending input available for the child process to read.
It looks like 8192 is simply too small a value for stack size on a modern system. execl needs more than that, so you are hitting a stack overflow. Increase the value to 32768 or so and everything should start working again.
I am writing a C program where I am printing to stderr and also using putchar() within the code. I want the output on the console to show all of the stderr and then finally flush the stdout before the program ends. Does anyone know of a method that will stop stdout from flushing when a putchar('\n') occurs?
I suppose I could just add an if statement to make sure it never putchars any newlines, but I would prefer a line or two of code at the top of the program that stops all flushing until I call fflush(stdout) at the bottom.
What you're trying to do is horribly fragile. C provides no obligation for an implementation of stdio not to flush output, under any circumstances. Even if you get it to work for you, this behavior will be dependent on not exceeding the buffer size. If you really need this behavior, you should probably buffer the output yourself (possibly writing it to a tmpfile() rather than stdout) then copying it all to stdout as the final step before your program exits.
Run your command from the console using redirection:
my_command >output.txt
Everything written to stderr will appear immediately. The stuff written to stdout will go to output.txt.
Windows only. I'm still looking for the Unix solution myself if anyone has it!
Here is a minimal working example for Windows that sends a buffer to stdout without flushing. You can adjust the maximum buffer size before a flush occurs by changing max_buffer, though I imagine there's some upper limit!
#include <windows.h>
#include <string.h>
int main()
{
const char* my_buffer = "hello, world!";
HANDLE hStdout = GetStdHandle(STD_OUTPUT_HANDLE);
int max_buffer = 1000000;
int num_remaining = strlen(my_buffer);
while (num_remaining)
{
DWORD num_written = 0;
int buffer_size = num_remaining < max_buffer ? num_remaining : max_buffer;
int retval = WriteConsoleA(hStdout, my_buffer, buffer_size, &num_written, 0);
if (retval == 0 || num_written == 0)
{
// Handle error
}
num_remaining -= num_written;
if (num_remaining == 0)
{
break;
}
my_buffer += num_written;
}
}
You can use setvbuf() to fully buffer output to stdout and provide a large enough buffer size for your purpose:
#include <stdio.h>
int main() {
// issue this call before any output
setvbuf(stdout, NULL, _IOFBF, 16384);
...
return 0;
}
Output to stderr is unbuffered by default, so it should go to the console immediately.
Output to stdout is line buffered by default when attached to the terminal. Setting it to _IOFBF (fully buffered) should prevent putchar('\n') from flushing the pending output.
What I am trying to do is create a program that will, while running, open examplecliprogram.exe with "--exampleparameter --exampleparameter2" as cli input, wait for examplecliprogram.exe to terminate, and then take the output and do something useful with it. I would like examplecliprogram.exe to run in the background (instead of being opened in another window) while the output from examplecliprogram.exe is displayed in the window running the overhead program.
So far I've explored options such as popen(), ShellExecute(), and CreateProcess() but I can't seem to get any of them working properly.
Primarily, I want this program to be able to run independently in a Windows environment, and compatibility with Linux would be a bonus.
edit: I have found one solution by calling system("arguments"). I don't know if this is a good solution that will transfer well to a gui, but at the very least it solves the fundamental problem.
This code runs on Windows and Unix (I tested in Visual Studio, GCC on Cygwin, and GCC on Mac OS X).
I had to use a macro to define popen depending on the platform, because on Windows, the function is _popen, while on other platforms the function name is popen (note the underscore in the former).
#include <stdlib.h>
#include <stdio.h>
/* Change to whichever program you want */
//#define PROGRAM "program.exe --param1 --param2"
#define PROGRAM "dir"
#define CHUNK_LEN 1024
#ifdef _WIN32
#define popen _popen
#define pclose _pclose
#endif
int main(int argc, char **argv) {
/* Ensure previously buffered output does not interleave with the command's output */
fflush(stdout);
FILE *cmd_file = (FILE *) popen(PROGRAM, "r");
if (!cmd_file) {
printf("Error calling popen\n");
return 1;
}
char *buf = (char *) malloc(CHUNK_LEN);
long cmd_output_len = 0;
int bytes_read = 0;
do {
bytes_read = fread(buf + cmd_output_len, sizeof(char), CHUNK_LEN, cmd_file);
cmd_output_len += bytes_read;
buf = (char *) realloc(buf, cmd_output_len + CHUNK_LEN);
} while (bytes_read == CHUNK_LEN);
/* Nul terminate string */
*((char *) buf + cmd_output_len) = '\0';
/* Close file pointer */
pclose(cmd_file);
/* Do stuff with buffer */
printf("%s\n", buf);
/* Free buffer */
free(buf);
return 0;
}
You may want to have a look to this Microsoft example code. It was useful to me.
http://msdn.microsoft.com/en-us/library/ms682499%28VS.85%29.aspx
I used CreateProcess; unfortunately I can't recommend anything beyond careful reading of MSDN and starting simple before progressing to complex.
As for portability: if you haven't needed a cross-platform toolkit until now, I wouldn't recommend adopting one just for this. I would write a 'start process' wrapper and implement it on each platform in its native way.
The cleanest and most portable way of doing this is to use GLib's g_spawn_sync().
You can find the docs online.
gchar *argv[] = {"./my_program", "--foo", "123", "--bar", "22323", NULL};
gchar *std_out = NULL;
gchar *std_err = NULL;
gint exit_stat = 0;
GError *error = NULL;
/* argv[0] must be the program itself (pass G_SPAWN_SEARCH_PATH in the
   flags if you want it looked up in $PATH); the rest are its arguments. */
if (!g_spawn_sync(NULL, argv, NULL, G_SPAWN_DEFAULT, NULL, NULL,
                  &std_out, &std_err, &exit_stat, &error)) {
    fprintf(stderr, "Failed to spawn: %s\n", error->message);
    g_error_free(error);
}
/* std_out and std_err now point to the respective output; free them with g_free(). */