First, I have the following macro:
#define MSG_UPDATE_DATA 70
Then I open a pipe with popen:
SensServer = popen("./SensServer", "w");
In the following code, the call to putc() blocks and the lines of code following it never execute:
void requestTempAndPress(int pid) {
    printf("Temp and pressure requested. msg_type: %d\n", MSG_UPDATE_DATA);
    int n = putc(MSG_UPDATE_DATA, SensServer);
    printf("Data sent: %d\n", MSG_UPDATE_DATA);
}
It outputs "Temp and pressure requested. msg_type: 70" fine, but not the "Data sent..." line.
As per the man page, a pipe is a file descriptor of type int, while putc() needs a FILE * stream as its argument.
So, most likely, your code is supplying the wrong type of argument to putc(), creating the issue.
Given the information (and the lack of a sample program), this sounds like a question about how to make pipes non-blocking. This has been discussed before, usually for non-blocking reads, e.g.:
Non-blocking pipe using popen?
Correct Code - Non-blocking pipe with popen (C++)
The first link mentions fcntl and the O_NONBLOCK flag, which the manual page says can be applied to both reads and writes.
However, popen creates a pipe that uses buffered I/O, while the operations addressed by fcntl are non-buffered read and write (you really cannot mix the two). If the program were changed to use the low-level pipe (as in the example from the first link) and consistently used non-buffered I/O, it would give the intended behavior.
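For reference, a minimal sketch of that fcntl call, assuming a low-level pipe(fds) pair (make_nonblocking is just an illustrative name):

#include <fcntl.h>

/* Set O_NONBLOCK on an existing descriptor, preserving its other flags. */
int make_nonblocking(int fd) {
    int flags = fcntl(fd, F_GETFL, 0);
    if (flags == -1)
        return -1;
    return fcntl(fd, F_SETFL, flags | O_NONBLOCK);
}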
Here are links to more general discussion on the topic:
Introduction to non-blocking I/O
Blocking and Non-Blocking I/O
On the other hand (noting the comments), if the program fragment is, for example, part of some larger system doing handshaking (expecting a timely response back from the server), it will run into problems. The fragment writes a single character across the pipe, but popen opens a (block-)buffered stream: nothing will be sent to the server as a single-character write unless some help is provided. For instance, one could flush the output stream after each putc, e.g.,
fflush(SensServer);
Alternatively, one could make the stream unbuffered by changing it immediately after the successful call to popen, e.g., using setvbuf:
setvbuf(SensServer, NULL, _IONBF, 0);
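Putting that together, here is a minimal sketch of the question's helper with the flush added (the unused pid parameter is dropped, and the error handling is illustrative):

#include <stdio.h>

#define MSG_UPDATE_DATA 70
extern FILE *SensServer;   /* opened earlier with popen("./SensServer", "w") */

void requestTempAndPress(void) {
    if (fputc(MSG_UPDATE_DATA, SensServer) == EOF) {
        perror("fputc");
        return;
    }
    fflush(SensServer);    /* push the byte through the block-buffered pipe now */
}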
Here are links for further reading about buffering in pipes:
Turn off buffering in pipe
Unix buffering delays output to stdout, ruins your day
Force line-buffering of stdout when piping to tee
The problem was due to the fact that I initialized the variable SensServer in the parent process but not in the child, which meant that in the child the pointer was 0 or, I guess, a random memory location.
Related
I'm writing a library that should execute a program in a child process, capture the output, and make the output available line by line (as string vectors). There is one vector for STDOUT, one for STDERR, and one for "STDCOMBINED", i.e. all output in the order it was printed by the program. The child process is connected to the parent process via two pipes, one for STDOUT and one for STDERR. In the parent process I read from the read ends of the pipes; in the child process I dup2()'ed STDOUT/STDERR to the write ends of the pipes.
My problem:
I'd like to capture STDOUT, STDERR, and "STDCOMBINED" (= both, in the order they appeared). But the order in the combined vector is different from the original order.
My approach:
I iterate until both pipes show EOF and the child process has exited. At each iteration I read exactly one line (or EOF) from STDOUT and exactly one line (or EOF) from STDERR. This works so far. But when I capture the lines as they arrive in the parent process, the order of STDOUT and STDERR is not the same as when I execute the program in a shell and look at the output.
Why is this so and how can I fix this? Is this possible at all? I know in the child process I could redirect STDOUT and STDERR both to a single pipe but I need STDOUT and STDERR separately, and "STDCOMBINED".
PS: I'm familiar with libc/unix system calls, like dup2(), pipe(), etc. Therefore I didn't post code. My question is about the general approach and not a coding problem in a specific language. I'm doing it in Rust against the raw libc bindings.
PPS: I made a simple test program that emits a mix of 5 stdout and 5 stderr messages. That's enough to reproduce the problem.
At each iteration I read exactly one line (or EOF) from STDOUT and exactly one line (or EOF) from STDERR.
This is the problem. This will only capture the correct order if that was exactly the order of output in the child process.
You need to embrace the asynchronous nature of the beast: make your pipe endpoints non-blocking, select* on the pipes, and read whatever data is present as soon as select returns. Then you'll capture the correct order of the output. Of course, now you can't read "exactly one line": you'll have to read whatever data is available and no more, so that you won't block, and maintain a per-pipe buffer where you append new data, extract any complete lines that are present, move the unprocessed remainder to the beginning, and repeat. You could also use a circular buffer to save a little memcpy-ing, but that's probably not very important. A sketch of this loop follows.
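A minimal C sketch of that read loop, assuming both read ends already have O_NONBLOCK set (pump, drain, and linebuf are names made up for illustration; a trailing line without a newline is silently dropped here):

#include <stdio.h>
#include <string.h>
#include <sys/select.h>
#include <unistd.h>

struct linebuf { char data[65536]; size_t len; };

/* Read whatever is available; returns 0 once the pipe reports EOF. */
static int drain(int fd, struct linebuf *lb, const char *tag) {
    ssize_t n = read(fd, lb->data + lb->len, sizeof lb->data - lb->len);
    if (n == 0)
        return 0;                          /* writer closed: EOF */
    if (n < 0)
        return 1;                          /* EAGAIN etc.: nothing right now */
    lb->len += (size_t)n;
    char *nl;
    while ((nl = memchr(lb->data, '\n', lb->len)) != NULL) {
        size_t linelen = (size_t)(nl - lb->data) + 1;
        printf("[%s] %.*s", tag, (int)linelen, lb->data);  /* one complete line */
        memmove(lb->data, lb->data + linelen, lb->len - linelen);
        lb->len -= linelen;
    }
    return 1;
}

void pump(int out_fd, int err_fd) {
    struct linebuf ob = {0}, eb = {0};
    int out_open = 1, err_open = 1;
    while (out_open || err_open) {
        fd_set rfds;
        FD_ZERO(&rfds);
        if (out_open) FD_SET(out_fd, &rfds);
        if (err_open) FD_SET(err_fd, &rfds);
        int maxfd = out_fd > err_fd ? out_fd : err_fd;
        if (select(maxfd + 1, &rfds, NULL, NULL, NULL) <= 0)
            break;                         /* error; real code would check errno */
        if (out_open && FD_ISSET(out_fd, &rfds)) out_open = drain(out_fd, &ob, "stdout");
        if (err_open && FD_ISSET(err_fd, &rfds)) err_open = drain(err_fd, &eb, "stderr");
    }
}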
Since you're doing this in Rust, I presume there's already a good asynchronous reaction pattern that you could leverage (I'm spoiled by Go, I guess, and project my hopes onto the unsuspecting).
*Always prefer platform-specific, higher-performance primitives: epoll on Linux, /dev/poll on Solaris, pollset &c. on AIX.
Another possibility is to launch the target process with LD_PRELOAD and a dedicated library that takes over glibc's POSIX write(), detects writes to the pipes, and encapsulates such writes (and only those) in a packet by prepending a header that carries an (atomically updated) process-wide incrementing counter as well as the size of the write. Such headers can easily be decoded on the other end of the pipe to reorder the writes with a higher chance of success.
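One possible shape of such an interposer, as a hedged sketch (the two-word header layout is invented for illustration; compile with something like cc -shared -fPIC interpose.c -o interpose.so -ldl and run the child with LD_PRELOAD=./interpose.so):

#define _GNU_SOURCE
#include <dlfcn.h>
#include <stdatomic.h>
#include <stdint.h>
#include <sys/uio.h>
#include <unistd.h>

static atomic_uint_fast64_t seq;   /* process-wide write counter */

ssize_t write(int fd, const void *buf, size_t count) {
    static ssize_t (*real_write)(int, const void *, size_t);
    if (!real_write)
        real_write = (ssize_t (*)(int, const void *, size_t))dlsym(RTLD_NEXT, "write");
    if (fd != 1 && fd != 2)            /* only wrap the dup2()'ed pipe fds */
        return real_write(fd, buf, count);
    uint64_t hdr[2] = { atomic_fetch_add(&seq, 1), count };
    struct iovec iov[2] = {
        { hdr, sizeof hdr },           /* header: counter + payload size */
        { (void *)buf, count },        /* original payload */
    };
    /* A single writev() keeps header and payload together; pipe writes of
       up to PIPE_BUF bytes are atomic, so small packets won't interleave. */
    ssize_t n = writev(fd, iov, 2);
    return n < 0 ? n : (ssize_t)count; /* report payload bytes to the caller */
}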
I think it's not possible to strictly do what you want to do.
If you think about how it's done when running a command in an interactive shell, what happens is that both stdout and stderr point to the same file descriptor (the TTY), so the total ordering is correct by means of synchronization against the same file.
To illustrate, imagine what happens if the child process has 2 completely independent threads, one only writing to stderr, and to other only writing to stdout. The total ordering would depend on however the scheduler decided to schedule these threads, and if you wanted to capture that, you'd need to synchronize those threads against something.
And of course, something can write thousands of lines to stdout before writing anything to stderr.
There are 2 ways to relax your requirements into something workable:
Have the user pass a flag waiving separate stdout and stderr streams in favor of a correct stdcombined, and then redirect both to a single file descriptor (see the sketch after this list). You might need to change the buffering settings (like stdbuf does) before you execute the process.
Assume that stdout and stderr are "reasonably interleaved", an assumption pointed out by @Nate Eldredge, in which case you can use @Unslander Monica's answer.
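For the first option, a minimal child-side sketch (redirect_both_to is an illustrative name; error handling omitted):

#include <unistd.h>

/* Point both standard streams at the same pipe write end, so the kernel
   preserves the interleaving for the parent. */
void redirect_both_to(int pipe_write_end) {
    dup2(pipe_write_end, STDOUT_FILENO);
    dup2(pipe_write_end, STDERR_FILENO);
    close(pipe_write_end);
    /* exec the target program here */
}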
I have a C program that receives data from another program over a (Linux) pipe. I want the program to behave differently if the pipe was closed before writing any data.
The natural way to do this is to try to read from the pipe and check if I get EOF, but that consumes some data from the pipe if there is any available, and (as far as I know) there's no way to put data "back" in a pipe.
The part of the program where I want to check if the pipe is empty is pretty far away from where I process the data, so I'd rather not have to deal with saving the data from my first read until then.
Is there any way to check if a pipe is empty (read would return EOF) without consuming any data in the case it's not empty?
Note: I do want this to block if the pipe has not been written to or closed yet.
If you used Unix domain stream sockets instead of pipes – meaning you replace your pipe(fds) calls with socketpair(AF_UNIX, SOCK_STREAM, 0, fds) – you could use recv(fd, dummybuffer, 1, MSG_PEEK) to receive one byte of data without removing it from the receive buffer.
You can combine MSG_PEEK with MSG_DONTWAIT if you don't want to block, or with MSG_WAITALL if you want to block until the entire buffer can be filled.
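A minimal sketch of that peek, assuming descriptors from socketpair() (peek_state is just an illustrative name):

#include <sys/types.h>
#include <sys/socket.h>

int peek_state(int fd) {
    char c;
    ssize_t n = recv(fd, &c, 1, MSG_PEEK);  /* blocks until data or EOF */
    if (n == 1) return 1;   /* data pending; it stays in the receive buffer */
    if (n == 0) return 0;   /* peer closed its end: EOF */
    return -1;              /* error (with MSG_DONTWAIT: EAGAIN when empty) */
}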
The differences between a Unix domain stream socket and a pipe are minimal. The stream socket is bidirectional, but you can use shutdown(fd, SHUT_WR) (or SHUT_RD) to close the "write end" (resp. "read end"), meaning that if the other end tries to read from the socket, they'll get an immediate end-of-stream (read(), recv(), etc. return 0). (Closing the "read end" means that when the other end tries to write to the socket, they'll get EPIPE.)
Right now, I cannot even think of a reason why a program that works with a pipe would not work with a Unix domain stream socket pair.
If you use named pipes, you do need to change mkfifo() and open() to socket(AF_UNIX, SOCK_STREAM, 0) followed by a bind() to the socket address. read(), write(), and even the higher-level standard I/O facilities work just fine on top of a Unix domain stream socket (use fdopen() to convert the socket descriptor to a FILE handle).
If you cannot modify the readers, you can create a minimal dynamic library that interposes openat() (that's what the current C library uses underneath fopen()), calling the original openat() for every path except the socket path (say, named in an environment variable), for which it instead creates a socket and binds to the socket path. When executing the reader binaries, you just set LD_PRELOAD to point to this interposing library.
In other words, I do believe there are no real obstacles to switching from pipes to Unix domain stream sockets.
You cannot use recv() with pipes, because pipes are implemented in Linux using a special filesystem, not sockets.
No, there is no way to do what you describe. The way to determine whether you have reached the end of a non-seekable file such as a pipe is to attempt to read from it. This is not just the natural way, it is the way.
but that consumes some data from the pipe if there is any available,
Yes.
and (as far as I know) there's no way to put data "back" in a pipe.
That depends. If you are reading with POSIX read(), then no. If you are wrapping the pipe end in a FILE and using stdio functions to read it, then there is ungetc().
Nevertheless, this:
The part of the program where I want to check if the pipe is empty is pretty far away from where I process the data
seems like a design problem. You cannot know whether you will ever get data until you actually do get data or see EOF. The process(es) at the write end of the pipe can delay an arbitrary amount of time before doing anything with the pipe, and even if that process is provided by you, you cannot be fully in control of this aspect of its behavior. Thus, it doesn't make much sense to try to check for EOF before you're ready, in some sense, to consume data, because you cannot rely on getting an answer without blocking.
, so I'd rather not have to deal with saving the data from my first read until then.
I suppose you must want to avoid performing some kind of heavyweight initialization in the event that there is no data to process. Ok, but I don't see what the big deal is. You need to provide storage into which to read the data anyway. What's wrong with something like this:
void consume_pipe_data(int fd) {
    char buffer[BUFFER_SIZE];
    ssize_t count;

    /* The initial read distinguishes EOF-before-data from available data. */
    count = read(fd, buffer, BUFFER_SIZE);
    if (count == 0) {
        handle_no_data();          /* writers closed the pipe without writing */
        return;
    } else if (count > 0) {
        perform_expensive_initialization();
    }
    do {
        if (count == -1) {
            handle_error();
            return;
        }
        consume_data(buffer, count);   /* pass the byte count along, too */
        count = read(fd, buffer, BUFFER_SIZE);
    } while (count);
}
The point is not that that's necessarily an appropriate structure for your program, but rather that it is possible to structure the program so that storing the data, if any, from the initial read is pretty clean and natural.
You can do a dummy write. If your stdin has reached EOF, this is a non-blocking way to determine it without any more sophisticated tools:
if (write(fileno(stdin), 0, 0) != 0)
    return 1; // Is end-of-file.
I have a POSIX thread which reads from a non-blocking anonymous pipe (marked with the O_NONBLOCK flag). When the thread is stopping (because of errors, for example) I want to check if there is something left in the pipe (in its internal buffer). If the pipe has data, start a new thread with the same read descriptor (it is shared between threads) so the new thread can continue reading from the pipe. If the pipe is empty, close the pipe and do nothing.
So I need to check whether the pipe is empty without removing data from it (as a regular read would do). Is there any way to do it?
P.S. I think setting count = 0 in read(int fd, void *buf, size_t count); may help, but the documentation says that it is some kind of undefined behavior:
If count is zero, read() may detect the errors described below. In the absence of any errors, or if read() does not check for errors, a read() with a count of 0 returns zero and has no other effects.
I believe you want poll or select, called with a zero timeout.
Short description from the select() docs:
select() and pselect() allow a program to monitor multiple file descriptors, waiting until one or more of the file descriptors become "ready" for some class of I/O operation (e.g., input possible).
...and the poll() docs:
poll() performs a similar task to select(2): it waits for one of a set of file descriptors to become ready to perform I/O.
I have a new file opened for read/write. One thread receives data from the network and appends it to the file; another thread reads from the same file to process the binary data. But read() always returns 0, so I can't read the data. If I append data with cat on the command line, though, the program can read and process it. I don't know why it doesn't notice the new data coming from the network. I'm using open(), read(), and write() in this program.
Use a pipe instead of an HDD file. Depending on your system (which you didn't tell us), there are only minor modifications to your code (which you didn't give us) needed to do that.
File operations are buffered. Try flushing the stream?
Assuming that your read() and write() functions are the POSIX ones, they share the file position, even when used from different threads. So your read-after-write was trying to read from the position at which the write had just finished. Don't use file I/O to communicate between threads. In most contexts I wouldn't even use pipes or sockets for that (one context where I would is when the reading thread is using poll/select with other file descriptors); simple shared memory and a mutex suffice.
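If the two threads really must share one descriptor, pread() sidesteps the shared file position, as in this sketch (read_at is an illustrative name):

#include <sys/types.h>
#include <unistd.h>

ssize_t read_at(int fd, void *buf, size_t len, off_t *my_offset) {
    ssize_t n = pread(fd, buf, len, *my_offset);  /* fd's shared offset is untouched */
    if (n > 0)
        *my_offset += n;                          /* advance only our own cursor */
    return n;
}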
I've coded a program in C that sends messages to stdout using printf, and I'm having trouble redirecting the output to a file (running from bash).
I've tried:
./program argument >> program.out
./program argument > program.out
./program >> program.out argument
./program > program.out argument
In each case, the file program.out is created but remains empty; after execution ends, its size is 0.
If I omit the redirection when executing the program:
./program argument
Then, all messages sent to stdout using printf are shown in the terminal.
I have other C programs for which I've no problem redirecting the output this way.
Does it have to do with the program itself? with the argument passing?
Where should I look for the problem?
Some details about the C program:
It does not read anything from stdin
It uses BSD Internet Domain sockets
It uses POSIX threads
It assigns a special handler function for SIGINT signal using sigaction
It sends lots of newlines to stdout (for those of you thinking I should flush)
Some code:
int main(int argc, char** argv)
{
    printf("Execution started\n");
    do
    {
        /* lots of printf here */
    } while (1);

    /* Code never reached */
    pthread_exit(EXIT_SUCCESS);
}
Flushing after newlines only works when printing to a terminal, but not necessarily when printing to a file. A quick Google search revealed this page with further information: http://www.pixelbeat.org/programming/stdio_buffering/
See the section titled "Default Buffering modes".
You might have to add some calls to fflush(stdout), after all.
You could also set the buffer size and behavior using setvbuf.
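For instance, a minimal sketch that forces line buffering on stdout even when it is redirected to a file (the call must happen before the first output):

#include <stdio.h>

int main(void) {
    setvbuf(stdout, NULL, _IOLBF, BUFSIZ);  /* line-buffered regardless of target */
    printf("this line reaches the file as soon as the newline is written\n");
    return 0;
}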
Flushing the buffers is normally handled by the exit() function, which is usually called implicitly by a return from main(). You are ending your program by raising SIGINT, and apparently the default SIGINT handler does not flush the buffers.
Take a look at this article:
Applying Design Patterns to Simplify Signal Handling. The article is mostly C++, but there is a useful C example in the 2nd section, which shows how to use SIGINT to exit your program gracefully.
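A common shape of that graceful-exit pattern (not taken from the article; the handler only sets a flag, and the normal return from main() lets exit() flush stdio):

#include <signal.h>
#include <stdio.h>

static volatile sig_atomic_t stop;

static void on_sigint(int sig) { (void)sig; stop = 1; }

int main(void) {
    struct sigaction sa = {0};
    sa.sa_handler = on_sigint;
    sigemptyset(&sa.sa_mask);
    sigaction(SIGINT, &sa, NULL);
    while (!stop)
        printf("working...\n");
    return 0;    /* normal return runs exit(), which flushes the buffers */
}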
As for why the behavior of a terminal differs from a file,
take a look at Stevens' Advanced Programming in the UNIX Environment, Section 5.4 on Buffering. He says that:
Most implementations default to the following types of buffering.
Standard error is always unbuffered.
All other streams are line buffered if they refer to a terminal device; otherwise, they are fully buffered.
The four platforms discussed in this book follow these conventions for standard I/O buffering: standard error is unbuffered, streams open to terminal devices are line buffered, and all other streams are fully buffered.
Has the program terminated by the time you check the contents of the redirected file? If it's still running, your output might still be buffered somewhere up the chain, so you don't see it in the file.
Apart from that, and the other answers provided so far, I think it's time to show a representative example of the problem code. There are too many esoteric possibilities.
EDIT
From the look of the sample code, if you've got a relatively small amount of printing happening, then you're getting caught in the output buffer. Flush after each write to be sure that it's gone to disk. Typically you can have up to a page size's worth of unwritten data lying around otherwise.
In the absence of a flush, the only time you can be sure you've got everything on disk is when the program exits. Even a thread terminating won't do it, since output buffers like that aren't per-thread, they're per-process.
Suggestions:
Redirect stderr to a file as well.
Try tail -f your output file(s).
Open a file and fprintf your logging (to help figure out what's going on).
Search for any manual closing/duplication/piping of the std* FILE handles or of file descriptors 1-3.
Reduce complexity; cut out big chunks of functionality until the printfs work. Then re-add them until it breaks again. Continue until you identify the culprit code.
Just for the record, in Perl you would use:
use IO::Handle;
STDOUT->flush;          # flush once
STDOUT->autoflush(1);   # or flush automatically after every write