I'm trying to understand the difference between select() and poll() better. For this I tried to implement a simple program that will open a file as write-only, add its file descriptor to the read set and then execute select() in the hope that the function will block until read permission is granted.
As this didn't work (and as far as I understood, this is intended behaviour) I tried to block access to the file using flock before the select() execution. Still, the program did not block its execution.
My sample code is as follows:
#include <stdio.h>
#include <poll.h>
#include <fcntl.h>      /* open(), O_WRONLY */
#include <sys/file.h>
#include <errno.h>
#include <sys/select.h>
int main(int argc, char **argv)
{
    printf("[+] Select minimal example\n");

    int max_number_fds = FOPEN_MAX;
    int select_return;
    int cnt_pollfds;
    struct pollfd pfds_array[max_number_fds];
    struct pollfd *pfds = pfds_array;
    fd_set fds;

    int fd_file = open("./poll_text.txt", O_WRONLY);

    struct timeval tv;
    tv.tv_sec = 10;
    tv.tv_usec = 0;

    printf("\t[+] Textfile fd: %d\n", fd_file);

    // create and set fds set
    FD_ZERO(&fds);
    FD_SET(fd_file, &fds);

    printf("[+] Locking file descriptor!\n");
    if(flock(fd_file, LOCK_EX) == -1)
    {
        int error_nr = errno;
        printf("\t[+] Errno: %d\n", error_nr);
    }

    printf("[+] Executing select()\n");
    select_return = select(fd_file + 1, &fds, NULL, NULL, &tv);
    if(select_return == -1){
        int error_nr = errno;
        printf("[+] Select Errno: %d\n", error_nr);
    }
    printf("[+] Select return: %d\n", select_return);
}
Can anybody see my error in this code? Also: I first tried to execute this code with two FDs added to the read set. When trying to lock them I had to use flock(fd_file,LOCK_SH), as I cannot exclusively lock two FDs with LOCK_EX. Is there a difference in how to lock two FDs of the same file (compared to only one FD)?
I'm also not sure why select will not block when a file that is added to the read set is opened as write-only. The program can never (without a permission change) read data from the FD, so in my understanding select should block the execution, right?
As a clarification: the "problem" I want to solve is that I have to check whether I can replace existing select() calls with poll() (existing in the sense that I will not rewrite the select() call code, but will have access to the arguments of select). To check this, I wanted to implement a test that forces select to block its execution, so I can later check whether poll acts the same way (when given similar instructions, i.e. the same FDs to check).
So my "workflow" would be: write tests for different select behaviours (i.e. block and not block), write similar tests for poll (also block, not block) and check if/how poll can be forced to do exactly what select is doing.
Thank you for any hints!
When select tells you that a file descriptor is ready for reading, this doesn't necessarily mean that you can read data. It only means that a read call will not block. A read call will also not block when it returns an EOF or error condition.
In your case I expect that read will immediately return -1 and set errno to EBADF (fd is not a valid file descriptor or is not open for reading) or maybe EINVAL (fd is attached to an object which is unsuitable for reading...)
Edit: Additional information as requested in a comment:
A read on a file descriptor can block if a physical operation is needed that will take some time, e.g. if the read buffer is empty and (new) data has to be read from the disk, if the file is connected to a terminal and the user has not yet entered any (more) data, or if the file is a socket or a pipe and a read would have to wait for (new) data to arrive...
The same applies to write: if the send buffer is full, a write will block. If the remaining space in the send buffer is smaller than your amount of data, it may write only the part that currently fits into the buffer.
If you set a file to non-blocking mode, a read or write will not block but tell you that it would block.
If you want to have a blocking situation for testing purposes, you need control over the process or hardware that provides or consumes the data. I suggest reading from a terminal (stdin) while you don't enter any data, or from a pipe where the writing process does not write any data. You can also fill the write buffer of a pipe when the reading process does not read from it.
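For example, here is a minimal sketch of such a blocking test (the pipe and the 10-second timeout are assumptions made for this example): select() watches the read end of a pipe that nothing ever writes to, so it blocks until the timeout expires.
#include <stdio.h>
#include <sys/select.h>
#include <unistd.h>

int main(void)
{
    int pipefd[2];
    if (pipe(pipefd) == -1) {
        perror("pipe");
        return 1;
    }

    fd_set readfds;
    FD_ZERO(&readfds);
    FD_SET(pipefd[0], &readfds);      /* watch the read end of the pipe */

    struct timeval tv = {10, 0};      /* give up after 10 seconds */

    /* Nothing is ever written to pipefd[1], so select() blocks for the
       full 10 seconds and then returns 0 (timeout). */
    int rc = select(pipefd[0] + 1, &readfds, NULL, NULL, &tv);
    printf("select returned %d\n", rc);
    return 0;
}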
Related
I'm sending data from one process to another through a pipe and I want to clear the pipe after reading.
Is there a function in C that can do this?
Yes. It's just the read() system call declared in <unistd.h>. You have to invoke it as many times as needed to be sure the pipe is empty.
As the documentation suggests, the read function attempts to read count bytes from an I/O channel (a pipe in your case) for which you have passed the file descriptor as the first argument, and places its content into a buffer with enough room to accommodate it.
Let's recall that the read function may return a value indicating that fewer bytes were read than requested. This is perfectly fine if there are fewer bytes to read than you expected.
Also remember that reading from a pipe blocks if there's nothing to read and the writer has not yet closed its descriptor; you will not get EOF until the counterpart closes its end, so you may get stuck while attempting to read from the pipe. If you want to avoid that possibility, I suggest the solution below, based on the poll function, to verify whether there's data to read from a file descriptor:
#include <poll.h>
#include <unistd.h>

struct pollfd pfd;

int main(void)
{
    char buf[512];

    /* your operations: create the pipe, obtain pipe_fd, ... */

    pfd.fd = pipe_fd;                 /* the read end of the pipe */
    pfd.events = POLLIN;

    /* keep reading as long as poll() reports available data */
    while (poll(&pfd, 1, 0) == 1 && (pfd.revents & POLLIN))
    {
        if (read(pipe_fd, buf, sizeof buf) <= 0)
            break;                    /* EOF or error: stop draining */
    }
    return 0;
}
Similar to the problem asked a while ago on kernel 3.x, but I'm seeing it on 4.9.37.
The named fifo is created with mkfifo -m 0666. On the read side it is opened with
int fd = open(FIFO_NAME, O_RDONLY | O_NONBLOCK);
The resulting fd is passed into a call to select(). Everything works ok, till I run echo >> <fifo-name>.
Now the fd appears in the read_fds after the select() returns. A read() on the fd will return one byte of data. So far so good.
The next time when select() is called and it returns, the fd still appears in the read_fds, but read() will always return zero meaning with no data. Effectively the read side would consume 100% of the processor capacity. This is exactly the same problem as observed by the referenced question.
Has anybody seen the same issue? And how can it be resolved or worked around properly?
I've figured out if I close the read end of the fifo, and re-open it again, it will work properly. This probably is ok because we are not sending a lot of data. Though this is not a nice or general work-around.
This is expected behaviour, because the end-of-input case causes a read() to not block; it returns 0 immediately.
If you look at man 2 select, it says clearly that a descriptor in readfds is set if a read() on that descriptor would not block (at the time of the select() call).
If you used poll(), it too would immediately return with POLLHUP in revents.
As OP notes, the correct workaround is to reopen the FIFO.
Because the Linux kernel maintains exactly one internal pipe object to represent each open FIFO (see man 7 fifo and man 7 pipe), the robust approach in Linux is to open another descriptor to the FIFO whenever an end of input is encountered (read() returning 0), and close the original. During the time when both descriptors are open, they refer to the same kernel pipe object, so there is no race window or risk of data loss.
In pseudo-C:
fifoflags = O_RDONLY | O_NONBLOCK;

fifofd = open(fifoname, fifoflags);
if (fifofd == -1) {
    /* Error checking */
}

/* ... */

/* select() readfds contains fifofd, or
   poll() returns POLLIN for fifofd: */
n = read(fifofd, buffer, sizeof buffer);
if (!n) {
    /* A writer has closed the FIFO: open a new descriptor
       before closing the old one. */
    int tempfd;

    tempfd = open(fifoname, fifoflags);
    if (tempfd == -1) {
        const int cause = errno;
        close(fifofd);
        /* Error handling */
    }
    close(fifofd);
    fifofd = tempfd;
} else {
    /* Handling for the other read() result cases */
}
The file descriptor allocation policy in Linux is such that tempfd will be the lowest-numbered free descriptor.
On my system (Core i5-7200U laptop), reopening a FIFO in this way takes less than 1.5 µs. That is, it can be done about 680,000 times a second. I do not think this reopening is a bottleneck for any sensible scenario, even on low-powered embedded Linux machines.
I don't know why I'm having a hard time finding this, but I'm looking at some linux code where we're using select() waiting on a file descriptor to report it's ready. From the man page of select:
select() and pselect() allow a program to monitor multiple file descriptors,
waiting until one or more of the file descriptors become "ready" for some
class of I/O operation
So, that's great... I call select on some descriptor, give it some time out value and start to wait for the indication to go. How does the file descriptor (or owner of the descriptor) report that it's "ready" such that the select() statement returns?
It reports that it's ready by returning.
select waits for events that are typically outside your program's control. In essence, by calling select, your program says "I have nothing to do until ..., please suspend my process".
The condition you specify is a set of events, any of which will wake you up.
For example, if you are downloading something, your loop would have to wait on new data to arrive, a timeout to occur if the transfer is stuck, or the user to interrupt, which is precisely what select does.
When you have multiple downloads, data arriving on any of the connections triggers activity in your program (you need to write the data to disk), so you'd give a list of all download connections to select in the list of file descriptors to watch for "read".
When you upload data to somewhere at the same time, you again use select to see whether the connection currently accepts data. If the other side is on dialup, it will acknowledge data only slowly, so your local send buffer is always full, and any attempt to write more data would block until buffer space is available, or fail. By passing the file descriptor we are sending to as a "write" descriptor to select, we get notified as soon as buffer space is available for sending.
The general idea is that your program becomes event-driven, i.e. it reacts to external events from a common message loop rather than performing sequential operations. You tell the kernel "this is the set of events for which I want to do something", and the kernel gives you a set of events that have occurred. It is fairly common for two events to occur simultaneously; for example, a TCP acknowledgement included in a data packet can make the same fd both readable (data is available) and writeable (acknowledged data has been removed from the send buffer), so you should be prepared to handle all of the events before calling select again.
One of the finer points is that select basically gives you a promise that one invocation of read or write will not block, without making any guarantee about the call itself. For example, if one byte of buffer space is available, you can attempt to write 10 bytes, and the kernel will come back and say "I have written 1 byte", so you should be prepared to handle this case as well. A typical approach is to have a buffer "data to be written to this fd", and as long as it is non-empty, the fd is added to the write set, and the "writeable" event is handled by attempting to write all the data currently in the buffer. If the buffer is empty afterwards, fine, if not, just wait on "writeable" again.
The "exceptional" set is seldom used -- it is used for protocols that have out-of-band data where it is possible for the data transfer to block, while other data needs to go through. If your program cannot currently accept data from a "readable" file descriptor (for example, you are downloading, and the disk is full), you do not want to include the descriptor in the "readable" set, because you cannot handle the event and select would immediately return if invoked again. If the receiver includes the fd in the "exceptional" set, and the sender asks its IP stack to send a packet with "urgent" data, the receiver is then woken up, and can decide to discard the unhandled data and resynchronize with the sender. The telnet protocol uses this, for example, for Ctrl-C handling. Unless you are designing a protocol that requires such a feature, you can easily leave this out with no harm.
Obligatory code example:
#include <sys/types.h>
#include <sys/select.h>
#include <unistd.h>
#include <stdbool.h>

static inline int max(int lhs, int rhs) {
    if(lhs > rhs)
        return lhs;
    else
        return rhs;
}

void copy(int from, int to) {
    char buffer[10];
    int readp = 0;       /* position where the next read() stores data */
    int writep = 0;      /* position where the next write() takes data from */
    bool eof = false;

    for(;;) {
        fd_set readfds, writefds;
        FD_ZERO(&readfds);
        FD_ZERO(&writefds);

        int ravail, wavail;
        if(readp < writep) {
            ravail = writep - readp - 1;
            wavail = sizeof buffer - writep;
        }
        else {
            ravail = sizeof buffer - readp;
            wavail = readp - writep;
        }

        if(!eof && ravail)
            FD_SET(from, &readfds);
        if(wavail)
            FD_SET(to, &writefds);
        else if(eof)
            break;

        int rc = select(max(from,to)+1, &readfds, &writefds, NULL, NULL);
        if(rc == -1)
            break;

        if(FD_ISSET(from, &readfds))
        {
            ssize_t nread = read(from, &buffer[readp], ravail);
            if(nread < 1)
                eof = true;              /* end-of-file or read error */
            else
                readp = readp + nread;
        }
        if(FD_ISSET(to, &writefds))
        {
            ssize_t nwritten = write(to, &buffer[writep], wavail);
            if(nwritten < 1)
                break;
            writep = writep + nwritten;
        }

        /* wrap the ring-buffer positions */
        if(readp == sizeof buffer && writep != 0)
            readp = 0;
        if(writep == sizeof buffer)
            writep = 0;
    }
}
We attempt to read if we have buffer space available and there was no end-of-file or error on the read side, and we attempt to write if we have data in the buffer; if end-of-file is reached and the buffer is empty, then we are done.
This code will clearly behave suboptimally (it's example code), but you should be able to see that it is acceptable for the kernel to do less than we asked for, on both reads and writes, in which case we just go back and say "whenever you're ready", and that we never read or write without asking whether it will block.
From the same man page:
On exit, the sets are modified in place to indicate which file descriptors actually changed status.
So use FD_ISSET() on the sets passed to select to determine which FDs have become ready.
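For illustration, a minimal sketch (watching only stdin here is an assumption) that tests the set select() modified in place:
#include <stdio.h>
#include <sys/select.h>
#include <unistd.h>

int main(void)
{
    fd_set readfds;
    FD_ZERO(&readfds);
    FD_SET(STDIN_FILENO, &readfds);

    /* Block until stdin becomes readable, then test the modified set. */
    if (select(STDIN_FILENO + 1, &readfds, NULL, NULL, NULL) > 0 &&
        FD_ISSET(STDIN_FILENO, &readfds))
        printf("stdin is ready for reading\n");

    return 0;
}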
I have created the following program, in which I wish to poll on the file descriptor of the file that I am opening in the program.
#define FILE "help"
int main()
{
int ret1;
struct pollfd fds[1];
ret1 = open(FILE, O_CREAT);
fds[0].fd = ret1;
fds[0].events = POLLIN;
while(1)
{
poll(fds,1,-1);
if (fds[0].revents & POLLIN)
printf("POLLING");
}
return 0;
}
It is going into an infinite loop. I am expecting the loop body to run only when some operation happens to the file. (It's an ASCII file.)
Please help.
poll() actually doesn't work on regular files. Since a read() on a regular file will never block, poll() will always report that you can read from the file without blocking.
This would (almost) work on character devices*, named pipes** or sockets, though, since those block when you read() from them while there is no data available. (You also need to actually read that data then, or else poll will report again and again that data is available.)
To "poll" a growing/shrinking file, see man inotify or implement your own polling using fstat() in a loop.
* block devices are a story apart; while technically a read from a harddisk can block for 10 ms or longer, this is not perceived as blocking I/O in linux.
** see also how to flush a named pipe using bash
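As an illustration of the fstat()-in-a-loop approach mentioned above, here is a minimal sketch (the file name "help" and the one-second interval are assumptions):
#include <stdio.h>
#include <fcntl.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void)
{
    int fd = open("help", O_RDONLY);
    if (fd == -1) {
        perror("open");
        return 1;
    }

    struct stat st;
    off_t last_size = -1;

    for (;;) {
        /* Check the file size once per second and report changes. */
        if (fstat(fd, &st) == 0 && st.st_size != last_size) {
            printf("size is now %lld bytes\n", (long long)st.st_size);
            last_size = st.st_size;
        }
        sleep(1);
    }
}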
No idea if this is the cause of your problems (probably not), but it is a particularly bad idea to redefine the standard identifier FILE (the stdio stream type).
Didn't your compiler complain about this?
I need my program written in pure C to stop execution when stdin is closed.
There is indefinite work done in the program's main cycle, and there is no way I can use blocking checks (like getc()) there (no data is supposed to arrive on stdin; it just stays open for an unknown time).
I intend to use described functionality in realization of network daemon hosted in inetd, xinetd or their analogs - it should emit data on stdout while connection stays opened and correctly finish work when it closes. Now my program is killed by hosting service as it won't stop after connection termination.
I wonder if fcntl() with the O_NONBLOCK flag applied to the stdin descriptor would allow me to use the read() function in non-blocking mode? Should I use select() somehow?
P.S. Data is not supposed to arrive on stdin, but it might. A way of doing a non-blocking readout would be an answer to the question.
select() does exactly what you want: signal that an operation (read, in this case) on a file descriptor (file, socket, whatever) will not block.
#include <stdio.h>
#include <sys/select.h>
#include <unistd.h>     /* for read() used below */

int is_ready(int fd) {
    fd_set fdset;
    struct timeval timeout;

    FD_ZERO(&fdset);
    FD_SET(fd, &fdset);
    timeout.tv_sec = 0;
    timeout.tv_usec = 1;
    /* int select(int nfds, fd_set *readfds, fd_set *writefds,
                  fd_set *exceptfds, struct timeval *timeout); */
    return select(fd + 1, &fdset, NULL, NULL, &timeout) == 1 ? 1 : 0;
}
You can now check a file descriptor before use, for instance in order to empty the file descriptor:
void empty_fd(int fd) {
    char buffer[1024];
    while (is_ready(fd)) {
        read(fd, buffer, sizeof(buffer));
    }
}
In your case, use fileno(stdin) to get the file descriptor of stdin:
if (is_ready(fileno(stdin))) {
/* read stuff from stdin will not block */
}
I wonder if fcntl() with O_NONBLOCK flag applied to stdin descriptor would allow me to use read() function in non-blocking mode?
Running stdin with O_NONBLOCK has advantages over select. Select says that there is some data, but not how much. There are times when you want to get all available data without blocking, regardless of how much is in the queue. Running select for each character seems like a lot of busy work... O_NONBLOCK did not work for me. It is an internal flag, not exposed in most tty drivers.
Check out ioctl(..., FIONBIO). It seems to get the job done.
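For what it's worth, a minimal sketch of that FIONBIO approach (applying it to stdin and the buffer size are assumptions made for the example):
#include <errno.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <unistd.h>

int main(void)
{
    int on = 1;
    if (ioctl(STDIN_FILENO, FIONBIO, &on) == -1) {   /* make stdin non-blocking */
        perror("ioctl(FIONBIO)");
        return 1;
    }

    char buf[128];
    ssize_t n = read(STDIN_FILENO, buf, sizeof buf);
    if (n == -1 && (errno == EAGAIN || errno == EWOULDBLOCK))
        printf("no data available right now\n");
    else if (n == 0)
        printf("stdin has been closed\n");
    else if (n > 0)
        printf("read %zd bytes\n", n);

    return 0;
}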
I'm not sure whether you can set O_NONBLOCK on stdin, but select() or poll() will definitely get the job done.
What's wrong with feof(stdin) ?
Yes, you can use select (with a zero timeout). You don't need to set the file descriptor non-blocking, though - if select tells you that the file descriptor is readable, then a read on it will definitely not block.
So, poll file descriptor 0 occasionally with select, and if it's readable, read it. If read returns 0, then that means it's been closed.
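A minimal sketch of that loop (the buffer size and the placement of the main-cycle work are assumptions):
#include <stdio.h>
#include <sys/select.h>
#include <unistd.h>

int main(void)
{
    for (;;) {
        fd_set readfds;
        struct timeval tv = {0, 0};          /* zero timeout: do not block */

        FD_ZERO(&readfds);
        FD_SET(STDIN_FILENO, &readfds);

        if (select(STDIN_FILENO + 1, &readfds, NULL, NULL, &tv) > 0 &&
            FD_ISSET(STDIN_FILENO, &readfds)) {
            char buf[256];
            if (read(STDIN_FILENO, buf, sizeof buf) == 0)
                break;                       /* read() returned 0: stdin closed */
        }

        /* ... the indefinite main-cycle work goes here ... */
    }
    return 0;
}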