read() doesn't block on empty FIFOs opened without O_NONBLOCK flag - c

pipe(7) says:
If a process attempts to read from an empty pipe, then read(2) will block until data is available. If a process attempts to write to a full pipe (see below), then write(2) blocks until sufficient data has been read from the pipe to allow the write to complete. Nonblocking I/O is possible by using the fcntl(2) F_SETFL operation to enable the O_NONBLOCK open file status flag.
Below are two simple C programs, compiled on Linux with gcc:
reader.c:
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/stat.h>
#include <fcntl.h>
#define STACKBUF_SIZE 128
#define FIFO_PATH "/home/bogdan/.imagedata"
int main(int argc, char **argv) {
    int fifo_fd = open(FIFO_PATH, O_RDONLY); // blocking... - notice no O_NONBLOCK flag
    if (fifo_fd != -1) {
        fprintf(stdout, "open() call succeeded\n");
    }
    while (1) {
        char buf[STACKBUF_SIZE] = {0};
        ssize_t bread = read(fifo_fd, buf, STACKBUF_SIZE);
        fprintf(stdout, "%zd - %s\n", bread, buf);
        sleep(1);
    }
    close(fifo_fd);
    return EXIT_SUCCESS;
}
writer.c:
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <fcntl.h>

#define STACKBUF_SIZE 128
#define FIFO_PATH "/home/bogdan/.imagedata"
#define DATA "data"

int main(void) {
    int fifo_fd = open(FIFO_PATH, O_WRONLY); // blocks until the read end is opened - but we always start the reader first, so...
    if (fifo_fd != -1) {
        ssize_t bwritten = write(fifo_fd, DATA, 5); // 4 characters plus the terminating NUL
        fprintf(stdout, "writer wrote %zd bytes\n", bwritten);
    }
    close(fifo_fd);
    return EXIT_SUCCESS;
}
The files are compiled into two separate binaries with gcc writer.c -Og -g -o ./writer, same for the reader.
From the shell I first execute the reader binary, and as expected, the initial open() call blocks until I also execute the writer. I then execute the writer, whose open() call immediately succeeds and it writes 5 bytes to the FIFO (which are correctly displayed by the reader), after which it closes the fd, leaving the FIFO empty (?).
However, the following read() calls in the while loop of the reader don't block at all, and instead just return 0.
Unless I am missing something (I probably am), this clashes with the semantics outlined in the pipe(7) manpage, as the FIFO fd was opened without the O_NONBLOCK flag in both the reader and the writer.

The section of the manual that you quoted only applies to pipes with open writers. Two paragraphs down, it says this:
If all file descriptors referring to the write end of a pipe have been closed, then an attempt to read(2) from the pipe will see end-of-file (read(2) will return 0).
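If the reader should go back to blocking once every writer has gone away, one option is to treat a 0 return as EOF and reopen the FIFO; the open() then blocks until the next writer arrives. A minimal sketch of that loop (one way to do it, reusing FIFO_PATH and STACKBUF_SIZE from the reader above):

while (1) {
    char buf[STACKBUF_SIZE] = {0};
    ssize_t bread = read(fifo_fd, buf, STACKBUF_SIZE - 1);
    if (bread > 0) {
        fprintf(stdout, "%zd - %s\n", bread, buf);
    } else if (bread == 0) { // 0 means EOF: every writer has closed, not "pipe is empty"
        close(fifo_fd);
        fifo_fd = open(FIFO_PATH, O_RDONLY); // blocks until a new writer opens the FIFO
    } else {
        perror("read");
        break;
    }
}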

Related

How to use a pseudo-terminal returned from posix_openpt?

I'm trying to use posix_openpt on Mac. The issue I'm seeing is that I get a file descriptor back from posix_openpt. I use the file descriptor for reading and create a copy using dup for writing. The issue I'm running into is that when I write to the master file descriptor, I read that data back out from the master. So no data ends up at the slave. I confirmed this by using posix_spawnp to run a program with stdin/stdout/stderr set to the slave file. The program hangs indefinitely waiting for input. Here is my code (note, all error handling was removed for legibility):
int master_fd = posix_openpt(O_RDWR);
grantpt(master_fd);
unlockpt(master_fd);
char *slave_filename_orig = ptsname(master_fd);
size_t slave_filename_len = strlen(slave_filename_orig);
char slave_filename[slave_filename_len + 1];
strcpy(slave_filename, slave_filename_orig);
posix_spawn_file_actions_t fd_actions;
posix_spawn_file_actions_init(&fd_actions);
posix_spawn_file_actions_addopen(&fd_actions, STDIN_FILENO, slave_filename, O_RDONLY, 0644);
posix_spawn_file_actions_addopen(&fd_actions, STDOUT_FILENO, slave_filename, O_WRONLY, 0644);
posix_spawn_file_actions_adddup2(&fd_actions, STDOUT_FILENO, STDERR_FILENO);
pid_t pid;
posix_spawnp(&pid, "wc", &fd_actions, NULL, NULL, NULL);
int master_fd_write = dup(master_fd);
char *data = "hello world";
write(master_fd_write, data, strlen(data));
close(master_fd_write);
char buffer[1024];
read(master_fd, buffer, 1024); // <- Issue Here
// buffer now contains hello world. It should contain the output of `wc`
(Note: The above was only tested on Linux; I don't have a Mac to work on, but I have no reason to believe it's any different in the details here.)
There are several problems with your code:
First, at least on Linux, calling posix_spawn() with a null pointer for the argument vector causes a crash. You need to provide all the arguments; even if Macs accept it the way you have it, providing them is a good idea.
Second, wc reading from standard input will not print the statistics it gathers until a read attempt returns an end-of-file condition, and your code never produces one. With a pty you can signal EOF without closing the master (closing it, as you would with a pipe, would make it impossible to read wc's output): write the terminal's EOF character to it. That byte typically has the value 4 (^D), but it can differ, so use the value the terminal reports instead of hardcoding it.
Third, the terminal's default settings include echoing the input; that echo is what you're reading back.
A cleaned-up version that addresses these issues and more (like yours, with most error checking omitted; real code should check all these calls for errors):
#define _XOPEN_SOURCE 700
#include <signal.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <fcntl.h>
#include <spawn.h>
#include <termios.h>
#include <unistd.h>

int main(void) {
    int master_fd = posix_openpt(O_RDWR);
    grantpt(master_fd);
    unlockpt(master_fd);
    char *slave_filename_orig = ptsname(master_fd);
    size_t slave_filename_len = strlen(slave_filename_orig);
    char slave_filename[slave_filename_len + 1];
    strcpy(slave_filename, slave_filename_orig);
    //printf("slave pty filename: %s\n", slave_filename);

    // Open the slave pty in this process
    int slave_fd = open(slave_filename, O_RDWR);

    // Set up the slave pty to not echo input
    struct termios tty_attrs;
    tcgetattr(slave_fd, &tty_attrs);
    tty_attrs.c_lflag &= ~ECHO;
    tcsetattr(slave_fd, TCSANOW, &tty_attrs);

    posix_spawn_file_actions_t fd_actions;
    posix_spawn_file_actions_init(&fd_actions);
    // Use adddup2 instead of addopen since we already have the pty open.
    posix_spawn_file_actions_adddup2(&fd_actions, slave_fd, STDIN_FILENO);
    posix_spawn_file_actions_adddup2(&fd_actions, slave_fd, STDOUT_FILENO);
    // Also close the master and the original slave fd in the child.
    posix_spawn_file_actions_addclose(&fd_actions, master_fd);
    posix_spawn_file_actions_addclose(&fd_actions, slave_fd);

    posix_spawnattr_t attrs;
    posix_spawnattr_init(&attrs);

    pid_t pid;
    extern char **environ;
    char *const spawn_argv[] = {"wc", NULL};
    posix_spawnp(&pid, "wc", &fd_actions, &attrs, spawn_argv, environ);

    close(slave_fd); // No longer needed in the parent process

    const char *data = "hello world\n";
    ssize_t len = strlen(data);
    if (write(master_fd, data, len) != len) {
        perror("write");
    }

    // Send the terminal's end-of-file character
    cc_t tty_eof = tty_attrs.c_cc[VEOF];
    if (write(master_fd, &tty_eof, sizeof tty_eof) != sizeof tty_eof) {
        perror("write EOF");
    }

    // Wait for wc to exit
    int status;
    waitpid(pid, &status, 0);

    char buffer[1024];
    ssize_t bytes = read(master_fd, buffer, 1024);
    if (bytes > 0) {
        fwrite(buffer, 1, bytes, stdout);
    }

    close(master_fd);
    return 0;
}
When compiled and run, it outputs:
1 2 12
There are two problems with this code.
First, you are seeing "hello world" on master_fd because by default terminals echo. You need to set the terminal to raw mode to suppress that.
Second, wc won't output anything until it sees an EOF, and it will not see an EOF until you close the master. Not just master_fd_write, mind you, but all copies of master_fd, including master_fd itself. However, once you close the master, you cannot read from it.
Choose some program other than wc to demonstrate the functionality of posix_openpt.
Edit: It is possible to raise the end-of-file condition on the slave without closing the master by writing ^D (EOT, ASCII 4).

Why doesn't my multi-process writing program trigger concurrent conflict?

I'm trying to trigger some concurrent conflicts by having several processes writing to the same file, but couldn't:
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <string.h>
#include <fcntl.h>
#include <sys/wait.h>

void concurrent_write()
{
    int create_fd = open("bar.txt", O_CREAT | O_TRUNC, 0644);
    close(create_fd);
    int repeat = 20;
    int num = 4;
    for (int process = 0; process < num; process++)
    {
        int rc = fork();
        if (rc == 0)
        {
            // child
            int write_fd = open("bar.txt", O_WRONLY | O_APPEND, 0644);
            for (int idx = 0; idx < repeat; idx++)
            {
                sleep(1);
                write(write_fd, "child writing\n", strlen("child writing\n"));
            }
            close(write_fd);
            exit(0);
        }
    }
    for (int process = 0; process < num; process++)
    {
        wait(NULL);
        // wait for all children to exit
    }
    printf("write to `bar.txt`\n%d lines written by %d processes\n", repeat * num, num);
    printf("wc:");
    if (fork() == 0)
    {
        // child
        char *args[3];
        args[0] = strdup("wc");
        args[1] = strdup("bar.txt");
        args[2] = NULL;
        execvp(args[0], args);
    }
}

int main(int argc, char *argv[])
{
    concurrent_write();
    return 0;
}
This program forks num children and has all of them write repeat lines to a file. But every time (however I change repeat and num) I get the same result: the length of bar.txt (the output file) matches the total number of lines written. Why are no concurrent conflicts triggered?
Writing to a file can be divided into a two-step process:
Locate where you want to write.
Write data into the file.
Opening the file with the O_APPEND flag makes that two-step process atomic, which is why the file always contains exactly the number of lines you expect.
See the open(2) man page:
O_APPEND
The file is opened in append mode. Before each write(2),
the file offset is positioned at the end of the file, as
if with lseek(2). The modification of the file offset and
the write operation are performed as a single atomic step.
In essence, one of the major design features of O_APPEND is precisely to prevent the sort of "concurrent conflicts" you mention. The typical example would be a log file that several processes must write to. Using O_APPEND ensures their messages do not overwrite each other.
Moreover, all data written by a single write call is written atomically, so provided that your write("child writing\n") successfully writes all its bytes (which for a regular file it usually would), they will not be interleaved with the bytes of any other such message.
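To see what O_APPEND is protecting you from, consider the non-atomic alternative: seeking to the end and then writing, as two separate system calls. Between the lseek() and the write(), another process can append its own line, leaving the first process with a stale offset whose write then overwrites the other's data. A hypothetical sketch of that racy pattern (for illustration only; do not use it for shared log files):

#include <fcntl.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
    // RACY: locate-then-write as two separate steps, with no O_APPEND.
    int fd = open("bar.txt", O_WRONLY);
    lseek(fd, 0, SEEK_END); // step 1: position at the current end of file
    // <-- another process may append here, making our offset stale
    write(fd, "child writing\n", strlen("child writing\n")); // step 2: write at the (possibly stale) offset
    close(fd);
    return 0;
}

With O_APPEND, the kernel performs both steps as a single atomic operation, so there is never a stale offset.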
First, write() calls with the O_APPEND flag should be atomic. Per POSIX write():
If the O_APPEND flag of the file status flags is set, the file offset shall be set to the end of the file prior to each write and no intervening file modification operation shall occur between changing the file offset and the write operation.
But that alone is not enough when multiple threads or processes make parallel write() calls on the same file: it does not say whether those write() calls may interleave with each other.
POSIX does, however, also guarantee that parallel write() calls are atomic with respect to each other:
All of the following functions shall be atomic with respect to each
other in the effects specified in POSIX.1-2017 when they operate on
regular files or symbolic links:
...
write()
...
See also Is file append atomic in UNIX?
Beware, though. Reading that question and its answers shows that Linux filesystems such as ext3 are not POSIX compliant once you get past a relatively small write size, or possibly if you cross page and/or filesystem sector boundaries. I suspect XFS and ZFS support write() atomicity much better, given their origins.
And none of this applies to Windows.

FIFO read() function gets stuck in c

I'm trying to read a text file's string from one process, then deliver the string to another process via named pipes on Linux. The problem is that when I type ./reader text.txt = receiver into the console, the receiving process's read() function returns an error if I include the line
fcntl(fd, F_SETFL, O_NONBLOCK);
or gets stuck in read() if I remove it.
Here's the process that reads the string (reader):
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/stat.h>
#include <errno.h>
#include <fcntl.h>
#include <sys/wait.h>
int main(int argc, char *argv1[]) {
    if (argc == 4 && strcmp(argv1[2], "=") == 0) {
        char mesaj[99999] = ""; // message to be delivered (must start empty for strcat)
        char line[150];
        FILE *fp = fopen(argv1[1], "r"); // reading from a text file
        if (fp == NULL) {
            printf("text file error!");
            exit(1);
        }
        while (fgets(line, sizeof(line), fp)) {
            strcat(mesaj, line); // append every line to the message
        }
        fclose(fp);
        mesaj[strlen(mesaj) - 1] = '\0'; // strip the trailing newline
        int n = strlen(mesaj) + 1;
        //printf("got the text %s\n", mesaj);
        if (mkfifo("myFifo", 0777) == -1 && errno != EEXIST) {
            printf("Pipe error");
            exit(1);
        }
        printf("opening\n");
        int fd = open("myFifo", O_RDWR);
        if (fd == -1) {
            printf("open error");
            exit(1);
        }
        printf("opened");
        if (write(fd, mesaj, sizeof(char) * n) == -1) {
            printf("write error");
            exit(1);
        }
        printf("written");
        close(fd);
        printf("closed");
        fflush(stdout);
        char mesajSizeChar[n];
        sprintf(mesajSizeChar, "%d", n);
        char *args[] = {mesajSizeChar, NULL}; // send the message size as argv[0] of the other process
        char program[256] = "./"; // room for "./" plus the program name
        strcat(program, argv1[3]); // program name received as a parameter
        execv(program, args); // replace this process with the receiver
        perror("execv");
    }
    return 0;
}
And here's the receiving process (receiver):
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/stat.h>
#include <errno.h>
#include <fcntl.h>
int main(int argc, char *argv1[]) {
    int mesajSize = atoi(argv1[0]); // convert the message size to an integer
    char mesaj[99999];
    printf("\ncame here\n");
    int fd = open("myFifo", O_RDWR);
    fcntl(fd, F_SETFL, O_NONBLOCK);
    printf("\nopen \n");
    if (fd == -1)
        printf("pipe error\n");
    if (read(fd, mesaj, sizeof(char) * mesajSize) == -1)
        printf("read error\n");
    printf("read \n");
    printf("\nworked: %s \n", mesaj);
    close(fd);
    return 0;
}
The problem is that you closed the pipe in the first process. A pipe doesn't have any permanent storage, it can only hold data while it's open by at least one process. When you close the pipe, the data that you've written to it is discarded.
As a result, when the second process tries to read from the pipe, there's nothing available.
You need to keep the pipe FD open when you execute the second process. Get rid of close(fd); in the reader program.
To use a FIFO or pipe, sender and receiver must run concurrently, but you are trying to run them sequentially. A FIFO or pipe has no persistent storage, so the system does not allow you to write to one unless at least one process has the read end open, so as to be able to read it.
Ordinarily, attempts to open a FIFO for writing will block while there are no readers, and vice versa. Your reader is working around this by opening the FIFO for both reading and writing, even though it intends only to write. You will find that if it tries to send too much data to the FIFO then it blocks, because nothing is reading the data, and pipes / FIFOs have limited buffer capacity. When it closes the FIFO's fd, leaving no process with it open, all data previously written to it are lost.
Your receiver also erroneously opens the FIFO for both reading and writing, whereas it should open it only for reading. There being no data to read from it, I would expect attempts to read from it to block indefinitely, unless you put it into non-blocking mode. This seems to be exactly what you describe.
To fix it, I would suggest
taking the code that starts the receiver out of the reader. Instead, start the reader and receiver separately. Alternatively, the reader may start out by fork()ing, with the resulting child process execv()ing the receiver.
The reader should open the FIFO with flag O_WRONLY, and the receiver should open it with flag O_RDONLY.
You should find a different way to convey the message length from reader to receiver, or, better, to avoid needing to tell it the message length in advance at all. You could, for instance, send an initial fixed-length message that conveys the length of the main message data, but more typical would be for the receiver to just keep reading data until it sees EOF.
The reader will cause the receiver to see EOF on the FIFO by closing it, either explicitly or by terminating. This depends on the receiver having it open in read-only mode, however, and there being no other writers.
The reader probably should not attempt to buffer the whole message in memory at once. It should not, in any case, assume that a write() call will transfer the full number of bytes requested -- the return value will tell you how many actually were transferred. You need to be prepared to use multiple write() calls in a loop to transfer all the data.
Similarly, the receiver cannot rely on a single read() call to transfer the full number of bytes requested, even if it has some way to know how many are coming. As with write(), you need to be prepared to use multiple read()s to transfer all the data; see the sketch below.
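Putting those last points together, the receiving side could look something like this sketch (illustrative names; it assumes the reader opens the FIFO O_WRONLY and closes it, or exits, when done):

#include <stdio.h>
#include <unistd.h>
#include <fcntl.h>

int main(void)
{
    int fd = open("myFifo", O_RDONLY); // blocks until a writer opens the FIFO
    if (fd == -1) {
        perror("open");
        return 1;
    }
    char buf[4096];
    ssize_t n;
    // Keep reading until read() returns 0 (EOF), which happens once
    // every writer has closed its end of the FIFO.
    while ((n = read(fd, buf, sizeof buf)) > 0) {
        fwrite(buf, 1, (size_t)n, stdout); // process each chunk; here we just print it
    }
    if (n == -1)
        perror("read");
    close(fd);
    return 0;
}

Note that no message length is passed at all: the receiver simply consumes data until EOF.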

Strange behavior performing library functions on STDOUT and STDIN's file descriptors

Throughout my years as a C programmer, I've always been confused about the standard stream file descriptors. Some places, like Wikipedia[1], say:
In the C programming language, the standard input, output, and error streams are attached to the existing Unix file descriptors 0, 1 and 2 respectively.
This is backed up by unistd.h:
/* Standard file descriptors. */
#define STDIN_FILENO 0 /* Standard input. */
#define STDOUT_FILENO 1 /* Standard output. */
#define STDERR_FILENO 2 /* Standard error output. */
However, this code (on any system):
write(0, "Hello, World!\n", 14);
will print Hello, World! (and a newline) to STDOUT. This is odd, because STDOUT's file descriptor is supposed to be 1; write()-ing to file descriptor 1 also prints to STDOUT.
Performing an ioctl on file descriptor 0 changes standard input[2], and on file descriptor 1 changes standard output. However, performing termios functions on either 0 or 1 changes standard input[3][4].
I'm very confused about the behavior of file descriptors 1 and 0. Does anyone know why:
write()-ing to 1 or 0 writes to standard output?
Performing ioctl on 1 modifies standard output and on 0 modifies standard input, but performing tcsetattr/tcgetattr on either 1 or 0 works for standard input?
I guess it is because on my Linux system, both 0 and 1 are by default opened read/write on /dev/tty, which is the controlling terminal of the process. So it is indeed possible to even read from stdout.
However this breaks as soon as you pipe something in or out:
#include <unistd.h>
#include <errno.h>
#include <stdio.h>
int main() {
errno = 0;
write(0, "Hello world!\n", 13);
perror("write");
}
and run with
% ./a.out
Hello world!
write: Success
% echo | ./a.out
write: Bad file descriptor
termios functions always work on the actual underlying terminal object, so it doesn't matter whether 0 or 1 is used for as long as it is opened to a tty.
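A quick way to convince yourself of that last point (a hypothetical check, assuming both descriptors are open on a terminal):

#include <stdio.h>
#include <termios.h>
#include <unistd.h>

int main(void)
{
    struct termios t0, t1;
    // Both calls reach the same underlying terminal object,
    // as long as fds 0 and 1 are open on the same tty.
    if (tcgetattr(STDIN_FILENO, &t0) == 0 && tcgetattr(STDOUT_FILENO, &t1) == 0)
        printf("termios calls succeed on both fd 0 and fd 1\n");
    return 0;
}

Run it directly on a terminal and both calls succeed; pipe its input or output and the corresponding call fails with ENOTTY.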
Let's start by reviewing some of the key concepts involved:
File description
In the operating system kernel, each file, pipe endpoint, socket endpoint, open device node, and so on, has a file description. The kernel uses these to keep track of the position in the file, the status flags (read, write, append), record locks, and so on. (The close-on-exec flag, by contrast, is a property of each descriptor, not of the shared description.)
The file descriptions are internal to the kernel, and do not belong to any process in particular (in typical implementations).
File descriptor
From the process viewpoint, file descriptors are integers that identify open files, pipes, sockets, FIFOs, or devices.
The operating system kernel keeps a table of descriptors for each process. The file descriptor used by the process is simply an index to this table.
The entries in the file descriptor table refer to kernel file descriptions.
Whenever a process uses dup() or dup2() to duplicate a file descriptor, the kernel only duplicates the entry in the file descriptor table for that process; it does not duplicate the file description it keeps to itself.
When a process forks, the child process gets its own file descriptor table, but the entries still point to the exact same kernel file descriptions. (This is essentially a shallow copy, with all file descriptor table entries being references to file descriptions. The references are copied; the referred-to targets remain the same.)
When a process sends a file descriptor to another process via a Unix domain socket ancillary message, the kernel allocates a new descriptor in the receiving process, referring to the same file description as the descriptor that was sent.
It all works very well, although it is a bit confusing that "file descriptor" and "file description" are so similar.
What does all that have to do with the effects the OP is seeing?
Whenever new processes are created, it is common to open the target device, pipe, or socket, and dup2() the descriptor to standard input, standard output, and standard error. This leads to all three standard descriptors referring to the same file description, and thus whatever operation is valid using one file descriptor, is valid using the other file descriptors, too.
This is most common when running programs on the console, as then the three descriptors all definitely refer to the same file description; and that file description describes the slave end of a pseudoterminal character device.
Consider the following program, run.c:
#define _POSIX_C_SOURCE 200809L
#include <stdlib.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/stat.h>
#include <fcntl.h>
#include <string.h>
#include <errno.h>

static void wrerrp(const char *p, const char *q)
{
    while (p < q) {
        ssize_t n = write(STDERR_FILENO, p, (size_t)(q - p));
        if (n > 0)
            p += n;
        else
            return;
    }
}

static inline void wrerr(const char *s)
{
    if (s)
        wrerrp(s, s + strlen(s));
}

int main(int argc, char *argv[])
{
    int fd;

    if (argc < 3) {
        wrerr("\nUsage: ");
        wrerr(argv[0]);
        wrerr(" FILE-OR-DEVICE COMMAND [ ARGS ... ]\n\n");
        return 127;
    }

    fd = open(argv[1], O_RDWR | O_CREAT, 0666);
    if (fd == -1) {
        const char *msg = strerror(errno);
        wrerr(argv[1]);
        wrerr(": Cannot open file: ");
        wrerr(msg);
        wrerr(".\n");
        return 127;
    }

    if (dup2(fd, STDIN_FILENO) != STDIN_FILENO ||
        dup2(fd, STDOUT_FILENO) != STDOUT_FILENO) {
        const char *msg = strerror(errno);
        wrerr("Cannot duplicate file descriptors: ");
        wrerr(msg);
        wrerr(".\n");
        return 126;
    }

    if (dup2(fd, STDERR_FILENO) != STDERR_FILENO) {
        /* We might not have standard error anymore.. */
        return 126;
    }

    /* Close fd, since it is no longer needed. */
    if (fd != STDIN_FILENO && fd != STDOUT_FILENO && fd != STDERR_FILENO)
        close(fd);

    /* Execute the command. */
    if (strchr(argv[2], '/'))
        execv(argv[2], argv + 2);  /* Command has /, so it is a path */
    else
        execvp(argv[2], argv + 2); /* Command has no /, so it is a filename */

    /* Whoops; failed. But we have no stderr left.. */
    return 125;
}
It takes two or more parameters. The first parameter is a file or device, and the second is the command, with the rest of the parameters supplied to the command. The command is run, with all three standard descriptors redirected to the file or device named in the first parameter. You can compile the above with gcc using e.g.
gcc -Wall -O2 run.c -o run
Let's write a small tester utility, report.c:
#define _POSIX_C_SOURCE 200809L
#include <stdlib.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/stat.h>
#include <fcntl.h>
#include <string.h>
#include <stdio.h>
#include <errno.h>

int main(int argc, char *argv[])
{
    char buffer[16] = { "\n" };
    ssize_t result;
    FILE *out;

    if (argc != 2) {
        fprintf(stderr, "\nUsage: %s FILENAME\n\n", argv[0]);
        return EXIT_FAILURE;
    }

    out = fopen(argv[1], "w");
    if (!out)
        return EXIT_FAILURE;

    result = write(STDIN_FILENO, buffer, 1);
    if (result == -1) {
        const int err = errno;
        fprintf(out, "write(STDIN_FILENO, buffer, 1) = -1, errno = %d (%s).\n", err, strerror(err));
    } else {
        fprintf(out, "write(STDIN_FILENO, buffer, 1) = %zd%s\n", result, (result == 1) ? ", success" : "");
    }

    result = read(STDOUT_FILENO, buffer, 1);
    if (result == -1) {
        const int err = errno;
        fprintf(out, "read(STDOUT_FILENO, buffer, 1) = -1, errno = %d (%s).\n", err, strerror(err));
    } else {
        fprintf(out, "read(STDOUT_FILENO, buffer, 1) = %zd%s\n", result, (result == 1) ? ", success" : "");
    }

    result = read(STDERR_FILENO, buffer, 1);
    if (result == -1) {
        const int err = errno;
        fprintf(out, "read(STDERR_FILENO, buffer, 1) = -1, errno = %d (%s).\n", err, strerror(err));
    } else {
        fprintf(out, "read(STDERR_FILENO, buffer, 1) = %zd%s\n", result, (result == 1) ? ", success" : "");
    }

    if (ferror(out))
        return EXIT_FAILURE;
    if (fclose(out))
        return EXIT_FAILURE;

    return EXIT_SUCCESS;
}
It takes exactly one parameter, a file or device to write its report to, and reports whether writing to standard input and reading from standard output and standard error work. (We can normally use $(tty) in Bash and POSIX shells to refer to the actual terminal device, so that the report is visible on the terminal.) Compile this one using e.g.
gcc -Wall -O2 report.c -o report
Now, we can check some devices:
./run /dev/null ./report $(tty)
./run /dev/zero ./report $(tty)
./run /dev/urandom ./report $(tty)
or on whatever we wish. On my machine, when I run this on a file, say
./run some-file ./report $(tty)
writing to standard input, and reading from standard output and standard error all work -- which is as expected, as the file descriptors refer to the same, readable and writable, file description.
The conclusion, after playing with the above, is that there is no strange behaviour here at all. It all behaves exactly as one would expect, if file descriptors as used by processes are simply references to operating system internal file descriptions, and standard input, output, and error descriptors are duplicates of each other.

Should a read from FIFO block after all the data was just read from that FIFO?

I'm learning about pipe programming in Linux, and am having trouble understanding pipe / FIFO management.
I wrote a small program which opens a FIFO I created (I did mkfifo newfifo in my terminal before executing the program). I then repeatedly read and dump my character buffer. I'm filling the FIFO using echo "message" > newfifo from another terminal's cmd line.
The problem is that when I write to the FIFO, I can read that data in the buffer, but then the read doesn't block anymore. My understanding was that after I read the data from the FIFO, the FIFO should be empty and the read should block. Am I thinking about this wrong, or am I incorrectly managing the FIFO?
Code is below:
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <fcntl.h>
#include <sys/types.h>

#define NEWPIPE "./newfifo"

int main(void)
{
    int great_success = 0;
    int fd;
    char buffer[20];

    fd = open(NEWPIPE, O_RDONLY);
    while (1) {
        great_success = read(fd, buffer, 20);
        if (great_success < 0) {
            printf("pipe failed\n");
        } else {
            printf("buffer : %s\n", buffer);
            printf("great_success = %d\n", great_success);
            great_success = 0;
        }
    }
}
Your understanding of how FIFOs work is incorrect. They are much like pipes: if the write end is closed (the echo command has terminated), the read end will read end-of-file (EOF), i.e. read() will return 0.
Note that when you first open the FIFO, it isn't read() that blocks. The blocking system call is open(), as explained in http://linux.die.net/man/4/fifo
Because the writing process (echo "message" > newfifo) is a short-lived program, it terminates quickly. Once it has terminated, the pipe has no write end, so the read end in the other process gets EOF.
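If you want read() to keep blocking between writers instead of spinning on EOF, one common trick is to open the FIFO O_RDWR in the reader, so the reader itself always holds a write end open and EOF never occurs. (POSIX leaves opening a FIFO O_RDWR undefined, but it works on Linux; the portable alternative is to reopen the FIFO after each EOF.) A sketch against the program above:

// The reader itself keeps a write end open, so read() blocks on an
// empty FIFO instead of returning 0. Linux-specific behavior.
fd = open(NEWPIPE, O_RDWR);
while (1) {
    ssize_t n = read(fd, buffer, sizeof(buffer) - 1);
    if (n > 0) {
        buffer[n] = '\0'; // read() does not NUL-terminate the data
        printf("buffer : %s\n", buffer);
    }
}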
