When the reading process quits, how do I detect that from the writing process before the write call blocks? Normally, when the read side closes, a write on the write side should return an error, right?
client:

while (!timeout)
{
    read(fd, message, BUFFER_SIZE);
}

server:

while (1)
{
    length = write(fd, message, strlen(message));
    if (length <= 0)
    {
        break;
    }
}
Read fifo(7) carefully:
When a process tries to write to a FIFO that is not opened for read
on the other side, the process is sent a SIGPIPE signal.
You could, and probably should, use poll(2) to test dynamically whether a fifo, pipe, or socket file descriptor is readable or writable (see this answer about a simplistic event loop using poll). See also write(2) and Advanced Linux Programming.
Related
I'm trying to understand how the pipe() function works, and I have the following example program:
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/types.h>
#include <unistd.h>

int main(void)
{
    int fd[2], nbytes;
    pid_t childpid;
    char string[] = "Hello, world!\n";
    char readbuffer[80];

    pipe(fd);

    if ((childpid = fork()) == -1)
    {
        perror("fork");
        exit(1);
    }

    if (childpid == 0)
    {
        /* Child process closes up input side of pipe */
        close(fd[0]);

        /* Send "string" through the output side of pipe */
        write(fd[1], string, (strlen(string)+1));
        exit(0);
    }
    else
    {
        /* Parent process closes up output side of pipe */
        close(fd[1]);

        /* Read in a string from the pipe */
        nbytes = read(fd[0], readbuffer, sizeof(readbuffer));
        printf("Received string: %s", readbuffer);
    }

    return 0;
}
My first question is: what benefit do we get from closing file descriptors with close(fd[0]) and close(fd[1]) in the child and parent processes? Second, we use write in the child and read in the parent, but what if the parent process reaches read before the child reaches write and tries to read from a pipe which has nothing in it? Thanks!
Daniel Jour gave you 99% of the answer already, in a very succinct and easy to understand manner:
Closing: Because it's good practice to close what you don't need. For the second question: These are potentially blocking functions. So reading from an empty pipe will just block the reader process until something gets written into the pipe.
I'll try to elaborate.
Closing:
When a process is forked, its open file descriptors are duplicated.
Each process has a limit on how many file descriptors it's allowed to have open. As stated in the documentation, each side of the pipe is a single fd, meaning a pipe requires two file descriptors, and in your example each process is only using one.
By closing the file descriptor you don't use, you're releasing resources that are in limited supply and which you might need further on down the road.
e.g., if you were writing a server, that extra fd means you can handle one more client.
Also, although releasing resources on exit is "optional", it's good practice. Resources that weren't properly released should be handled by the OS...
...but the OS was also written by us programmers, and we do make mistakes. So it only makes sense that the one who claimed a resource and knows about it will be kind enough to release the resource.
Race conditions (read before write):
POSIX defines a few behaviors that make read, write and pipes a good choice for thread and process concurrency synchronization. You can read more about it in the Rationale section for write, but here's a quick rundown:
By default, pipes (and sockets) are created in what is known as "blocking mode".
This means that the application will hang until the IO operation is performed.
Also, IO operations are atomic, meaning that:
You will never be reading and writing at the same time. A read operation will wait until a write operation completes before reading from the pipe (and vice versa).
If two threads call read at the same time, each will get a serial (not parallel) response, reading sequentially from the pipe (or socket). This makes pipes great tools for concurrency handling.
In other words, when your application calls:
read(fd[0], readbuffer, sizeof(readbuffer));
Your application will wait (potentially forever) for data to become available and for the read operation to complete. Note that read returns as soon as some bytes are available; it does not wait for the full 80 (sizeof(readbuffer)) bytes. It returns 0 only at EOF, i.e. once every write end of the pipe has been closed.
This is not a homework problem, I promise.
I'm writing a time series database implementation as a way to learn C.
I have a client/server pair that I've written. The server is currently an echo server listening on a port. The client connects to that port and sends lines of input to it. It uses readline to get input, sends it over the socket, recvs a line back, and prints that line to the terminal. Rinse, repeat. The client returns when it gets an EOF from recv, at which point it knows the connection is closed.
The problem is that readline blocks, so if the server process is killed (i.e. I SIGINT it), the client is still blocking on readline. Only after it sends, then recvs an EOF, will it know the server is gone.
What I want to happen is for the client to get signaled if there's an EOF on recv and immediately exit.
What I think I need to do is create 3 pthreads: 2 for a network client (send and recv) and 1 for the terminal. The terminal thread calls readline and blocks. When it gets input, it uses a pthread_cond_t to signal the waiting network send thread. The network recv thread is constantly recving, which blocks. If it gets an EOF, it raises SIGINT; the handler pthread_kills all 3 threads, fprintfs something like "Connection closed by server.", and calls exit (yes, I know exit terminates all threads anyway; it's an exercise in cleanliness and in seeing whether I understand C).
Is this the appropriate approach? Obviously network client terminals do this all the time. What's the right approach?
If I'm understanding you correctly, you would like to exit during a readline, which is before your write()/send(), when the server dies. Your approach is fine. The only way you will catch that the server has died is when you try to write()/send(): you'll get a SIGPIPE, which can trigger an exit right after. By default, no packets are sent on the connection unless there is data to send or acknowledge, so if you are simply waiting for data from the peer, there is no way to tell whether the peer has silently gone away or just isn't ready to send/receive more data yet. So you would need some poller for the SIGPIPE (or for the EOF from recv) in a separate thread, and exit when you get it. You could use this readline function to help you get started with the checking.
/**
* Simple utility function that reads a line from a file descriptor fd,
* up to maxlen bytes -- ripped from Unix Network Programming, Stevens.
*/
int
readline(int fd, char *buf, int maxlen)
{
    int n, rc;
    char c;

    for (n = 1; n < maxlen; n++) {
        if ((rc = read(fd, &c, 1)) == 1) {
            *buf++ = c;
            if (c == '\n')
                break;
        } else if (rc == 0) {
            if (n == 1)
                return 0;   // EOF, no data read
            else
                break;      // EOF, read some data
        } else
            return -1;      // error
    }

    *buf = '\0';            // null-terminate
    return n;
}
This question has been answered before. I got out Stevens, and chapter 5 suggested interleaving the readline call with a socket read to look for EOF. I did a Google search for libreadline and select/poll and found this:
Interrupting c/c++ readline with signals
There's a way to interleave I/O so libreadline doesn't just block:
http://www.delorie.com/gnu/docs/readline/rlman_41.html
Thanks #arayq2 and #G-- for suggesting Stevens!
I am trying to implement a client-server type of communication system using the poll function in C. The flow is as follows:
Main program forks a sub-process
Child process calls the exec function to execute some_binary
Parent and child send messages to each other alternately, each message that is sent depends on the last message received.
I tried to implement this using poll, but ran into problems because the child process buffers its output, causing my poll calls to timeout. Here's my code:
#include <stdio.h>
#include <stdlib.h>
#include <poll.h>
#include <signal.h>
#include <sys/types.h>
#include <unistd.h>

int main() {
    char *buffer = (char *) malloc(1000);
    int n;
    pid_t pid;     /* pid of child process */
    int rpipe[2];  /* pipe used to read from child process */
    int wpipe[2];  /* pipe used to write to child process */

    pipe(rpipe);
    pipe(wpipe);
    pid = fork();

    if (pid == (pid_t) 0)
    {
        /* child */
        dup2(wpipe[0], STDIN_FILENO);
        dup2(rpipe[1], STDOUT_FILENO);
        close(wpipe[0]); close(rpipe[0]);
        close(wpipe[1]); close(rpipe[1]);
        if (execl("./server", "./server", (char *) NULL) == -1)
        {
            fprintf(stderr, "exec failed\n");
            return EXIT_FAILURE;
        }
        return EXIT_SUCCESS;
    }
    else
    {
        /* parent */
        /* close the other ends */
        close(wpipe[0]);
        close(rpipe[1]);

        /*
           poll to check if write is good to go.
           This poll succeeds, write goes through.
        */
        struct pollfd pfds[1];
        pfds[0].fd = wpipe[1];
        pfds[0].events = POLLIN | POLLOUT;
        int pres = poll(pfds, (nfds_t) 1, 1000);
        if (pres > 0)
        {
            if (pfds[0].revents & POLLOUT)
            {
                printf("Writing data...\n");
                write(wpipe[1], "hello\n", 6);
            }
        }

        /*
           poll to check if there's something to read.
           This poll times out because the child buffers its stdout stream.
        */
        pfds[0].fd = rpipe[0];
        pfds[0].events = POLLIN | POLLOUT;
        pres = poll(pfds, (nfds_t) 1, 1000);
        if (pres > 0)
        {
            if (pfds[0].revents & POLLIN)
            {
                printf("Reading data...\n");
                int n = read(rpipe[0], buffer, 1000);
                buffer[n] = '\0';
                printf("child says:\n%s\n", buffer);
            }
        }

        kill(pid, SIGTERM);
        return EXIT_SUCCESS;
    }
}
The server code is simply:
#include <stdio.h>
#include <stdlib.h>

int main() {
    char *buffer = (char *) malloc(1000);
    while (scanf("%s", buffer) != EOF)
    {
        printf("I received %s\n", buffer);
    }
    return 0;
}
How do I prevent poll calls from timing out because of buffering?
EDIT:
I would like the program to work even when the execed binary is external, i.e., I have no control over the code - like a unix command, e.g., cat or ls.
There seem to be two problems in your code. stdout is buffered by default, so the server should flush it explicitly:
printf("I received %s\n", buffer);
fflush(stdout);
And the main program should not register for POLLOUT when trying to read (you may want to check for POLLERR, which poll() reports in revents even when not requested):
pfds[0].fd = rpipe[0];
pfds[0].events = POLLIN | POLLERR;
With these modifications you get the expected output:
$ ./main
Writing data...
Reading data...
child says:
I received hello
Generally, you should also check the return value of poll() and repeat the call if necessary (e.g. in the case of an interrupted system call or a timeout).
You need, as I answered in a related answer to a previous question of yours, to implement an event loop; as its name implies, it loops, so in the parent process you should code:
while (1) { /* simplistic event loop! */
    int status = 0;
    if (waitpid(pid, &status, WNOHANG) == pid)
    {   /* clean up, child process has ended */
        handle_process_end(status);
        break;
    };
    struct pollfd pfd[2];
    memset(&pfd, 0, sizeof(pfd)); /* probably useless but doesn't harm */
    pfd[0].fd = rpipe[0];
    pfd[0].events = POLLIN;
    pfd[1].fd = wpipe[1];
    pfd[1].events = POLLOUT;
#define DELAY 5000 /* 5 seconds */
    if (poll(pfd, 2, DELAY) > 0) {
        if (pfd[0].revents & POLLIN) {
            /* read something from rpipe[0]; detect end of file;
               you probably need to do some buffering, because you may
               e.g. read some partial line chunk written by the child, and
               you could only handle full lines. */
        };
        if (pfd[1].revents & POLLOUT) {
            /* write something on wpipe[1] */
        };
    }
    fflush(NULL);
} /* end while(1) */
You cannot predict in which order the pipes become readable or writable, and this can happen many times. Of course, a lot of buffering (in the parent process) is involved; I leave the details to you. You have no influence on the buffering in the child process (some programs detect whether their output is a terminal with isatty).
What an event polling loop like above gives you is to avoid the deadlock situation where the child process is blocked because its stdout pipe is full, while the parent is blocked writing (to the child's stdin pipe) because the pipe is full: with an event loop, you read as soon as some data is polled readable on the input pipe (i.e. the stdout of the child process), and you write some data as soon as the output pipe is polled writable (i.e. is not full). You cannot predict in advance in which order these events "output of child is readable by parent" and "input of child is writable by parent" happen.
I recommend reading Advanced Linux Programming which has several chapters explaining these issues!
BTW, my simplistic event loop is a bit wrong: if the child process terminated while some data remains in its stdout pipe, that data never gets read. You could move the waitpid test after the poll.
Also, don't expect a single write (from the child process) into a pipe to trigger a single read in the parent process. In other words, there is no notion of message length. However, POSIX does define PIPE_BUF (the largest write a pipe performs atomically); see its write documentation. Your buffer passed to read and write should probably be at least PIPE_BUF bytes.
I repeat: you need to call poll inside your event loop, and poll will very probably be called several times (because your loop will be repeated many times!) and will report readable or writable pipe ends in an unpredictable (and non-reproducible) order! A first run of your program could report "rpipe[0] readable", you read 324 bytes from it, you repeat the event loop, poll says "wpipe[1] writable", you write 10 bytes to it, you repeat the event loop, poll tells you "rpipe[0] readable", you read 110 bytes from it, you repeat the event loop, poll again tells you "rpipe[0] readable", you read 4096 bytes from it, and so on. A second run of the same program in the same environment would give different events, like: poll says "wpipe[1] writable", you write 1000 bytes to it, you repeat the loop, poll says "rpipe[0] readable", etc.
NB: your issue is not the buffering in the child ("client") program, which we assume you cannot change. So what matters is not the data buffered inside it, but its genuine input and output (the only thing your parent process can observe; internal child buffering is irrelevant to the parent), i.e. the data the child program has actually been able to read(2) and write(2). And once it goes through a pipe(7), such data becomes poll(2)-able in the parent process (and your parent process can read or write some of it after seeing POLLIN or POLLOUT in the updated revents field after poll). BTW, if you did code the child, don't forget to call fflush at appropriate places inside it.
I am having a problem in my program that uses pipes.
What I am doing is using pipes along with fork/exec to send data to another process
What I have is something like this:
//pipes are created up here
if (fork() == 0) //child process
{
    ...
    execlp(...);
}
else
{
    ...
    fprintf(stderr, "Writing to pipe now\n");
    write(pipe, buffer, BUFFER_SIZE);
    fprintf(stderr, "Wrote to pipe!");
    ...
}
This works fine for most messages, but when the message is very large, the write into the pipe deadlocks.
I think the pipe might be full, but I do not know how to clear it. I tried using fsync but that didn't work.
Can anyone help me?
You need to close the read end of the pipe in the process doing the writing. The OS keeps data written to the pipe in the pipe's buffer until every process that has the read end open actually reads it; as long as the writer itself still holds a read end open, a write to a full pipe will block forever instead of failing with EPIPE once the real reader goes away.
I am trying to use a named pipe for communication within a process.
Here is the code
#include <stdio.h>
#include <fcntl.h>
#include <signal.h>
#include <sys/stat.h>
#include <unistd.h>

void sigint(int num)
{
    int fd = open("np", O_WRONLY);
    write(fd, "y", 1);
    close(fd);
}

int main(void)
{
    char ch[1];
    int fd;

    mkfifo("np", 0666);
    signal(SIGINT, sigint);

    fd = open("np", O_RDONLY);
    read(fd, ch, 1);
    close(fd);

    printf("%c\n", ch[0]);
    return 0;
}
What I want is for main to block till something is written to the pipe.
The problem is that the signal handler sigint() also blocks after opening the pipe. Is this supposed to happen, given that the pipe is already opened for reading earlier in main()?
You're blocking in open(): opening a fifo for reading blocks until someone opens it for writing, and opening a fifo for writing blocks until someone opens it for reading.
The signal handler runs in the same thread as your main(), so you get a deadlock: neither open can complete.
You can check what's going on by running your program under strace.
From the man page:
Opening a FIFO for reading normally blocks until some other process opens the same FIFO for writing, and vice versa.
and:
A process can open a FIFO in non-blocking mode. In this case, opening for read only will succeed even if no-one has opened on the write side yet; opening for write only will fail with ENXIO (no such device or address) unless the other end has already been opened.

Under Linux, opening a FIFO for read and write will succeed both in blocking and non-blocking mode. POSIX leaves this behaviour undefined. This can be used to open a FIFO for writing while there are no readers available. A process that uses both ends of the connection in order to communicate with itself should be very careful to avoid deadlocks.