C & bash redirection: process communication - c

Look at this bash:
mkfifo fifo
./processA <fifo | ./processB >fifo
In my process A, I generate a file which is sent to process B. Then I want to process the result of process B.
So in A I just send info to B with printfs to stdout. Then I create a thread that just does read(stdin). After creating this thread, I send info to B via printf.
I do not understand why this whole thing blocks. The read never receives anything. Why? The two processes are tested and work fine separately. The whole pipeline also works perfectly (doesn't block) if I don't read (but then I can't process B's output).
Can somebody explain what I am misunderstanding?
Sorry for my approximate English. I am also interested in your clean solution if you have one (but I would prefer to understand why this one is not working).
//edit
Here is the main (process A):
// managing some argument treatment, constructing objects...
pthread_t thread; // creation of the thread supposed to read
if (pthread_create(&thread, NULL, IsKhacToolKit::threadRead, solver) != 0) {
    fprintf(stderr, "\nSomething went wrong while creating Reader thread\n");
}
solver->generateDimacFile(); // printing to stdout
pthread_exit(0);
}
The function executed by the thread is just supposed to read stdin and print the string obtained to stderr (for now). Nothing is printed to stderr right now.
generateDimacFile prints a char* to stdout (with fflush(stdout) at the end) that process B uses. Process B is this one: http://www.labri.fr/perso/lsimon/glucose/
Here is the function executed by the thread now:
char* satResult = (char*)malloc(sizeof(char) * solutionSize);
for (int i = 0; i < 2; i++) {
    read(0, satResult, solutionSize);
    fprintf(stderr, "\n%s\n", satResult);
}
DEBUGFLAG("Getting result from glucose");
OK, so now, thanks to Maxim Egorushkin, I discovered that the first read doesn't block but the next one does, using this bash instead:
./processA <fifo | stdbuf -o0 ./processB >fifo
And if I use this one:
stdbuf -o0 ./processA <fifo | stdbuf -o0 ./processB >fifo
most of the time I can read twice without blocking (sometimes it blocks). I still can't read 3 times. I don't understand why it changes anything, because I flush stdout in generateDimacFile.
Look at what's actually printed to stderr when it doesn't block (reading twice):
c
c This is glucose 4.0 -- based on MiniSAT (Many thanks to MiniSAT team)
c
c This is glucose 4.0 -- based on MiniSAT (Many thanks to MiniSAT team)
c
c Reading from standard input... Use '--help' for help.
s to MiniSAT team)
c
The corresponding expected result:
c
c This is glucose 4.0 -- based on MiniSAT (Many thanks to MiniSAT team)
c
c Reading from standard input... Use '--help' for help.
c | |
s UNSATISFIABLE

You have a potentially blocking race condition. If processB needs to read a large amount of data before it produces anything, then it is possible that processA will be starved of data before it has produced enough. Once that happens, there's a deadlock. Or, if processA never generates any data until it reads something, then both processes will just sit there. It really depends on what processA and processB are doing.
If the processes are sufficiently simple, what you are doing should work. For instance:
$ cat a.sh
#!/bin/sh
echo "$$"
while read line; do echo $(($line + 1 )); echo $$ read: $line >&2; sleep 1; done
$ ./a.sh < fifo | ./a.sh > fifo
26385 read: 26384
26384 read: 26385
26385 read: 26386
26384 read: 26385
26385 read: 26386
26384 read: 26387
26385 read: 26388
26384 read: 26387
^C

Using | or > in bash means the process's stdout is no longer a terminal, so the C library makes it fully (block) buffered: nothing is output until the buffer fills or fflush is invoked.
Try disabling all buffering with stdbuf -o0 ./processA <fifo | stdbuf -o0 ./processB >fifo.
stderr does not get redirected in your command line, so I am not sure why you write into it. Write into stdout.
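If you control the source of processA, an alternative to stdbuf is to disable stdout buffering programmatically; a minimal sketch:
#include <stdio.h>

int main(void)
{
    // Make stdout unbuffered before anything is printed, so each
    // printf reaches the pipe immediately (equivalent to stdbuf -o0).
    setvbuf(stdout, NULL, _IONBF, 0);
    // ... the rest of processA ...
    return 0;
}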
Another issue is that
read(0, satResult, solutionSize);
fprintf(stderr, "\n%s\n", satResult);
is incorrect: satResult is not zero-terminated and errors are not handled. A fix:
ssize_t r = read(0, satResult, solutionSize);
if (r > 0) {
    fwrite(satResult, r, 1, stdout);
} else {
    // Handle EOF (r == 0) or a read error (r == -1).
}
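Since a pipe can deliver the data in several pieces, a more complete (hedged) sketch keeps reading until the buffer is full or EOF; solutionSize and satResult are the names from the question:
size_t total = 0;
ssize_t r = 0;
while (total < solutionSize
       && (r = read(0, satResult + total, solutionSize - total)) > 0)
    total += r;                           // accumulate partial reads
if (r < 0)
    perror("read");                       // report errors instead of printing garbage
else
    fwrite(satResult, 1, total, stderr);  // write only the bytes actually received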

Related

Prevent read() system call from returning 0 when run as a background process

I have a piece of software that is able to read commands from stdin, for debug purposes, in a separate thread. When my software runs as a foreground process, read behaves as expected: it blocks and waits for input from the user, i.e. the thread sleeps.
When the software is run as a background process, read constantly returns 0 (possible EOF detected?).
The problem here is that this specific read is in a while(true) loop, so it runs as fast as it can and steals precious CPU load on my embedded device.
I tried redirecting /dev/null to the process but the behavior was the same. I am running my custom Linux on an ARM Cortex-A5 board.
The problematic piece of code follows and is run inside its own thread:
char bufferUserInput[256];
const int sizeOfBuffer = SIZE_OF_ARRAY(bufferUserInput);
while (1)
{
    int n = read(0, bufferUserInput, sizeOfBuffer); // filedes = 0 means reading from stdin
    printf("n is: %d\n", n);
    printf("Errno: %s", strerror(errno));
    if (n == 1)
    {
        continue;
    }
    if ((1 < n)
        && (n < sizeOfBuffer)
        && ('\n' == bufferUserInput[n - 1]))
    {
        printf("\r\n");
        bufferUserInput[n - 1] = '\0';
        ProcessUserInput(&bufferUserInput[0]);
    }
    else
    {
        n = 0;
    }
}
I am looking for a way to prevent read from constantly returning when running in the background, and to have it wait for user input (which of course will never come).
If you start your program in the background (as ./program &) from a shell script, its stdin will be redirected from /dev/null (with some exceptions).
Trying to read from /dev/null will always return 0 (EOF).
Example (on Linux):
sh -c 'ls -l /proc/self/fd/0 & wait'
... -> /dev/null
sh -c 'dd & wait'
... -> 0 bytes copied, etc
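The same can be seen from C directly; a minimal demo (my own sketch, not from the question) showing that read on /dev/null returns 0 immediately:
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    int fd = open("/dev/null", O_RDONLY);
    char buf[16];
    ssize_t n = read(fd, buf, sizeof buf); // returns 0: EOF, no blocking
    printf("read returned %zd\n", n);
    close(fd);
    return 0;
}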
The fix from the link above should also work for you:
#! /bin/sh
...
exec 3<&0
./your_program <&3 &
...
When stdin is not a terminal, read is returning with 0 because you are at the end of the file. read only blocks after reading all available input when there could be more input in the future, which is considered to be possible for terminals, pipes, sockets, etc. but not for regular files nor for /dev/null. (Yes, another process could make a regular file bigger, but that possibility isn't considered in the specification for read.)
Ignoring the various problems with your read loop that other people have pointed out (which you should fix anyway, as this will make reading debug commands from the user more reliable) the simplest change to your code that will fix the problem you're having right now is: check on startup whether stdin is a terminal, and don't launch the debug thread if it isn't. You do that with the isatty function, declared in unistd.h.
#include <stdio.h>
#include <unistd.h>

// ...

int main(void)
{
    if (isatty(fileno(stdin)))
        start_debug_thread();
    // ...
}
(Depending on your usage context, it might also make sense to run the debug thread when stdin is a pipe or a socket, but I would personally not bother, I would rely on ssh to provide a remote (pseudo-)terminal when necessary.)
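If you do want the debug thread for pipes and sockets too, here is a hedged sketch (stdin_is_interactive is a hypothetical helper, not from the question) using fstat to inspect what stdin actually is:
#include <sys/stat.h>
#include <unistd.h>

// Hypothetical helper: nonzero if stdin can plausibly deliver debug
// commands (a terminal, a FIFO/pipe, or a socket); zero for /dev/null,
// regular files, and anything else.
static int stdin_is_interactive(void)
{
    struct stat st;
    if (isatty(STDIN_FILENO))
        return 1;
    if (fstat(STDIN_FILENO, &st) == 0)
        return S_ISFIFO(st.st_mode) || S_ISSOCK(st.st_mode);
    return 0;
}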
read() doesn't return 0 when reading from the terminal in a backgrounded process.
It either continues to block while causing a SIGTTIN to be sent to the process (which may interrupt the blocking and cause retval=-1, errno=EINTR to be returned), or it fails with retval=-1, errno=EIO if SIGTTIN is ignored.
The snippet below demonstrates this:
#include <unistd.h>
#include <stdio.h>
#include <signal.h>

int main()
{
    char c[256];
    ssize_t nr;
    signal(SIGTTIN, SIG_IGN);
    nr = read(0, &c, sizeof(c));
    printf("%zd\n", nr);
    if (0 > nr) perror(0);
    fflush(stdout);
}
The code snippet you've shown can't possibly reveal 0-returns, since you never test the return value for zero.
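A hedged fix of the question's loop along those lines, treating 0 (EOF) and errors explicitly so the thread no longer spins (errno.h is assumed to be included, as in the original):
while (1)
{
    ssize_t n = read(0, bufferUserInput, sizeOfBuffer - 1);
    if (n == 0)
        break;                 // EOF: no more input is coming, stop the thread
    if (n < 0)
    {
        if (errno == EINTR)
            continue;          // interrupted by a signal: just retry
        break;                 // real error: stop the thread
    }
    bufferUserInput[n] = '\0'; // zero-terminate what was actually read
    if (n > 1 && bufferUserInput[n - 1] == '\n')
    {
        bufferUserInput[n - 1] = '\0';
        ProcessUserInput(bufferUserInput);
    }
}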

Why doesn't stdbuf line buffer the output of some simple c programs

I'm trying to use stdbuf to line buffer the output of a program but I can't seem to make it work as I would expect. Using this code:
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    int i = 0;
    for (i = 0; i < 10; i++)
    {
        printf("This is part one");
        fflush(stdout);
        sleep(1);
        printf(" and this is part two\n");
    }
    return 0;
}
I see "This is part one", a one-second wait, then " and this is part two\nThis is part one".
I expected that running it as
stdbuf --output=L ./test.out
would give a one-second delay and then "This is part one and this is part two\n", repeating at one-second intervals. Instead I see the same output as in the case when I don't use stdbuf.
Am I using stdbuf incorrectly, or does the call to fflush count as "adjusting" the buffering as described in the stdbuf man page?
If I can't use stdbuf to line buffer in this way, is there another command line tool that makes it possible?
Here are a couple of options that work for me, given the sample code, and run interactively (the output was to a pseudo-TTY):
./program | grep ^
./program | while IFS= read -r line; do printf "%s\n" "$line"; done
In a couple of quick tests, both output a complete line at a time. If you need to pipe it further, grep's --line-buffered option should be useful.
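Note also that the sample program partly defeats line buffering on its own: the explicit fflush(stdout) pushes "This is part one" out no matter what stdbuf requested. With the fflush removed, stdbuf --output=L should produce the behavior you expected; a sketch:
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    for (int i = 0; i < 10; i++)
    {
        printf("This is part one");        // held in the line buffer...
        sleep(1);
        printf(" and this is part two\n"); // ...until this newline flushes it
    }
    return 0;
}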

Terminate cat command immediately after using in C

So I am communicating with a device by using echo to send and cat to receive. Here's a snippet of my code:
fp = popen("echo \"xyz\" > /dev/ttyACM0 | cat - /dev/ttyACM0", "r");
while (fgets(ret_val, sizeof(ret_val) - 1, fp) != NULL)
{
    if (strcmp(ret_val, "response") == 0)
    {
        pclose(fp);
        return ret_val;
    }
}
OK, the problem is, cat seems to stay open, because when I run this code in a loop, it works the first time, then hangs at the spot where I call popen. Am I correct in assuming cat is the culprit?
Is there a way to terminate cat as soon as I run the command, so I just get the response from my device? Thanks!
In the command:
echo "xyz" > /dev/ttyACM0 | cat - /dev/ttyACM0
TTY devices normally do not open until carrier is present, or CLOCAL is set. The cat could be waiting on the open. Assuming the device opens, the cat will then hang waiting to read characters until either (1) it receives an EOF character such as control-D, (2) carrier is lost, or (3) you kill it.
Another problem here is that the pipe between echo and cat closes immediately: the output of the echo is redirected to the TTY device rather than into the pipe, so cat's stdin reaches EOF as soon as echo exits.
Generally, TTY devices are ornery beasts and require special handling to get the logic right. You would probably do better to read up on TTY devices, especially:
man termios
If you are doing something REALLY SIMPLE, you might get by with:
fp = popen("echo 'xyz' >/dev/ttyACM0 & (read x; echo \"$x\")", "r");
Keep in mind that both the echo and the read might hang waiting for carrier, that you will get at most one line of output from the popen, and that the read could hang waiting for an EOL character.
This whole approach is fraught with problems. TTY devices require delicate care. You are using a hammer in the dark.
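For reference, a minimal, hedged sketch of the termios setup such a device usually needs (open_serial is a hypothetical helper; cfmakeraw is available on glibc and the BSDs):
#include <fcntl.h>
#include <termios.h>
#include <unistd.h>

int open_serial(const char *path)
{
    // O_NOCTTY: don't let the device become our controlling terminal.
    int fd = open(path, O_RDWR | O_NOCTTY);
    if (fd < 0)
        return -1;
    struct termios tio;
    if (tcgetattr(fd, &tio) < 0) {
        close(fd);
        return -1;
    }
    cfmakeraw(&tio);                // raw mode: no echo, no line editing
    tio.c_cflag |= CLOCAL | CREAD;  // ignore modem control lines, enable receiver
    if (tcsetattr(fd, TCSANOW, &tio) < 0) {
        close(fd);
        return -1;
    }
    return fd;
}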
There's no easy way to kill the process launched by popen, as there's no API to get its pid -- there's only pclose, which waits until it ends of its own accord (and you should ALWAYS use pclose instead of fclose to close a FILE * opened by popen).
Instead, you're probably better off not using popen at all -- just use fopen and write what you want with fputs:
fp = fopen("/dev/ttyACM0", "r+");
fputs("xyz\n", fp); // include the newline explicitly
fflush(fp); // always flush after writing before reading
while (fgets(ret_val, sizeof(ret_val)-1, fp) != NULL) {
:
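Filled out under the same assumptions, a hedged sketch (send_and_wait is a hypothetical helper; note that fgets keeps the trailing newline, so compare against "response\n"):
#include <stdio.h>
#include <string.h>

// Hypothetical helper: send a command and wait for an expected reply line.
int send_and_wait(const char *cmd, const char *reply)
{
    char line[128];
    FILE *fp = fopen("/dev/ttyACM0", "r+");
    if (fp == NULL)
        return -1;
    fputs(cmd, fp);   // the command should end with '\n'
    fflush(fp);       // push it to the device before reading
    while (fgets(line, sizeof line, fp) != NULL)
    {
        if (strcmp(line, reply) == 0)
        {
            fclose(fp);
            return 0; // got the expected response
        }
    }
    fclose(fp);
    return -1;        // EOF or error before the reply arrived
}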

Linux C: what happens to unused file descriptors?

(apologies for not taking care of my accepts lately - will do so as soon as I get some time; just wanted to ask this question now that it occurred)
Consider the following C program:
#include <unistd.h>

int main(void) {
    write(3, "aaaaaa\n", 7);
    write(2, "bbbbbb\n", 7);
    write(1, "cccccc\n", 7);
    return 0;
}
I build and run it from the bash shell like this:
$ gcc -o wtest wtest.c
$ ./wtest 3>/dev/stdout
aaaaaa
bbbbbb
cccccc
The way I see it, due to the shell redirecting fd 3 to stdout, that file descriptor is now "used" (not sure about "opened", since there is no opening of files, in the C code at least) - and so we get the cccccc string output to the terminal, as expected.
If I don't use the redirection, then the output is this:
$ ./wtest
aaaaaa
bbbbbb
Now fd 3 is not redirected - and so the cccccc string is not output, again as expected.
My question is - what happened to those cccccc bytes? Did they disappear in the same sense as if I had redirected fd 3 to /dev/null? (as in:
$ ./wtest 3>/dev/null
)
In addition, assuming that in a particular case I'd like to "hide" the fd 3 output: would there be a performance difference between redirecting 3>/dev/null and not addressing fd 3 in the shell at all? That is, if fd 3 outputs a really long byte stream, would there be a per-byte penalty in the 3>/dev/null case, as opposed to not addressing fd 3?
Many thanks in advance for any answers,
Cheers!
My question is - what happened to those cccccc bytes?
Nothing. You failed to check the return value of write; it would tell you that there was an error, and errno would tell you what the error was.
You also seem to have a questionable concept of what is persistent: the "bytes" are still sitting in the string literal where the compiler put them in the first place. write copies bytes to a stream.
Jens is right. If you run your program under strace in both situations, you'll see that when you redirect, the write works - because the shell opened /dev/stdout on fd 3 on your behalf before forking your executable.
When you look at the strace without the redirection:
write(3, "aaaaaa\n", 7) = -1 EBADF (Bad file descriptor)
write(2, "bbbbbb\n", 7bbbbbb) = 7
write(1, "cccccc\n", 7cccccc) = 7
Which reminds us of the best practice - always check your return values.
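A hedged illustration of that practice applied to the program above:
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
    ssize_t n = write(3, "aaaaaa\n", 7);
    if (n == -1)
        // Without a 3>... redirection this prints "Bad file descriptor".
        fprintf(stderr, "write to fd 3 failed: %s\n", strerror(errno));
    return 0;
}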

Why doesn't printf work when piped in Bash?

I have a Bash script, work.sh, that gets something from STDIN and echoes it to STDOUT.
I also have a C program, return_input, that also gets something from STDIN and printfs it to STDOUT.
But when I chain them this way:
./work.sh | ./return_input
the printf in return_input only outputs to the screen when the program exits. Why?
Simplified:
[root# test]# cat work.sh
#!/bin/bash
for i in {1..5}
do
echo test
read
done
Output of cat return_input.c:
#include <stdio.h>

void return_input(void) {
    char array[30];
    gets(array);
    printf("%s\n", array);
    printf("%#p\n", *(long *)(array + 40));
}

int main() {
    while (1 == 1) return_input();
    return 0;
}
All I/O operations are usually buffered. This is why you get the output only after your program finishes, when there isn't enough data to overflow the buffer and force output during execution.
You can use the fflush function, which forces the pending I/O to complete and clears the buffer, if you want to see output in "real time".
You should post some code.
Try making sure that the output is flushed (using fflush(stdout); in C after you've written to it). Note that line feeds force a flush only while stdout is line-buffered, i.e. connected to a terminal; through a pipe, stdout is fully buffered, so newlines alone won't help.
Otherwise the output might be "stuck" in a buffer, which is an optimization over sending single bytes across the pipe between the processes.
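As a sketch of both fixes together, here is return_input with the unsafe gets replaced by fgets and an explicit flush after each line, so output crosses the pipe immediately:
#include <stdio.h>

int main(void)
{
    char array[30];
    while (fgets(array, sizeof array, stdin) != NULL)
    {
        fputs(array, stdout);  // echo the line back
        fflush(stdout);        // force it through the pipe right away
    }
    return 0;
}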
