FIFO pipe is always readable in select()

In C pseudo-code:

fifo = open("fifo", O_RDONLY | O_NONBLOCK);
while (1) {
    fd_set read;
    FD_ZERO(&read);
    FD_SET(fifo, &read);
    select(fifo + 1, &read, NULL, NULL, NULL);
}
The process sleeps in select() until another process writes into the FIFO. After that, however, select() always reports the FIFO as readable.
How can I avoid this behavior? That is, after the FIFO has been read once, how can I make it show up as unreadable until it receives another write?

You opened that FIFO read-only (O_RDONLY); whenever there is no writer on the FIFO, the read end sees end-of-file.
select() returns on EOF, and because the EOF condition persists, every call reports it again. This is the reason for the observed behavior.
To avoid it, open the FIFO for both reading and writing (O_RDWR). This ensures that there is always at least one writer on the FIFO, so there won't be an EOF, and as a result select() won't return unless someone actually writes to the FIFO.

The simple answer is to read until read() fails with EWOULDBLOCK (or EAGAIN), or fails with some other error.
What you are describing simply cannot happen unless the operating system (or runtime) that you are using is buggy; otherwise you must be doing something wrong. select() uses level-triggered I/O, so most likely you are not draining the descriptor completely, and select() therefore keeps indicating that there is something left to read (this does not happen with edge-triggered event notifications).
Below is a simple example that shows how to read until read() fails with EWOULDBLOCK in order to avoid leaving the descriptor in a readable state (I've compiled and tested this on OS X; there is almost no error checking, but you should get the idea):
/*
 * FIFO example using select.
 *
 * $ mkfifo /tmp/fifo
 * $ clang -Wall -o test ./test.c
 * $ ./test &
 * $ echo 'hello' > /tmp/fifo
 * $ echo 'hello world' > /tmp/fifo
 * $ killall test
 */

#include <sys/types.h>
#include <sys/select.h>
#include <errno.h>
#include <stdlib.h>
#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>

int main()
{
    int fd;
    int n;
    fd_set set;
    ssize_t bytes;
    size_t total_bytes;
    char buf[1024];

    /* O_RDWR keeps a writer on the FIFO, so we never see EOF. */
    fd = open("/tmp/fifo", O_RDWR | O_NONBLOCK);
    if (fd == -1) {
        perror("open");
        return EXIT_FAILURE;
    }

    for (;;) {
        /* select() modifies the set, so rebuild it on every iteration. */
        FD_ZERO(&set);
        FD_SET(fd, &set);

        n = select(fd + 1, &set, NULL, NULL, NULL);
        if (!n)
            continue;
        if (n == -1) {
            perror("select");
            return EXIT_FAILURE;
        }
        if (FD_ISSET(fd, &set)) {
            printf("Descriptor %d is ready.\n", fd);
            total_bytes = 0;
            /* Drain the descriptor until read() would block. */
            for (;;) {
                bytes = read(fd, buf, sizeof(buf));
                if (bytes > 0) {
                    total_bytes += (size_t)bytes;
                } else if (bytes == 0) {
                    /* EOF: cannot happen here, since we hold the write end open. */
                    break;
                } else {
                    if (errno == EWOULDBLOCK || errno == EAGAIN) {
                        /* Done reading */
                        printf("done reading (%zu bytes)\n", total_bytes);
                        break;
                    } else {
                        perror("read");
                        return EXIT_FAILURE;
                    }
                }
            }
        }
    }

    return EXIT_SUCCESS;
}
Basically, level-triggered I/O means that you get notified every time there is still something to read, even if you have been notified of it before. On the contrary, edge-triggered I/O means that you are notified only once each time new data arrives, whether you read it or not. select() is a level-triggered I/O interface.
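For contrast, here is a rough, Linux-only sketch (so not applicable to the OS X example above) of registering the same descriptor with epoll in edge-triggered mode; with EPOLLET you are woken once per new write and must drain the descriptor yourself before waiting again:

/* Hypothetical sketch: edge-triggered epoll registration for the same fd.
 * Linux-specific; assumes fd was opened with O_RDWR | O_NONBLOCK as above. */
#include <sys/epoll.h>

int register_edge_triggered(int fd)
{
    struct epoll_event ev;
    int epfd = epoll_create1(0);
    if (epfd == -1)
        return -1;

    ev.events = EPOLLIN | EPOLLET;    /* notify only when new data arrives */
    ev.data.fd = fd;
    if (epoll_ctl(epfd, EPOLL_CTL_ADD, fd, &ev) == -1)
        return -1;

    /* epoll_wait(epfd, ...) now wakes once per write; the handler must
     * still read until EAGAIN before waiting again. */
    return epfd;
}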
Hope it helps. Good Luck!

Related

Need suggestions while handling huge pipe data

I'm practicing C code with the pipe system call. It works well with small chunks of data, but as the data grows beyond the pipe capacity, a deadlock occurs.
My test system is Debian Sid, but I believe it shares common ground with other Linux distributions. This piece of code works well while the input file '/tmp/a.out' is small enough to fit within the pipe, but blocks once the file grows to about 1 MB.
#include <errno.h>
#include <unistd.h>
#include <fcntl.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <stdio.h>

#define CHUNK 2048

int main() {
    int fd = open("/tmp/a.out", O_RDONLY);
    int pin[2];
    int pout[2];
    int nread;
    char buff[CHUNK];
    int rc;

    pipe(pin);
    pipe(pout);

    pid_t pid = fork();
    if (pid == 0) {
        close(pin[1]);
        dup2(pin[0], STDIN_FILENO);
        close(pout[0]);
        dup2(pout[1], STDOUT_FILENO);
        execlp("cat", "cat", (char *)0);
    } else if (pid > 0) {
        close(pin[0]);
        close(pout[1]);
        /* I think the deadlock occurs here, but I can't figure out a way to avoid it */
        while ((nread = read(fd, buff, CHUNK)) > 0)
            write(pin[1], buff, nread);
        close(pin[1]);
        while ((nread = read(pout[0], buff, CHUNK)) > 0)
            write(STDOUT_FILENO, buff, nread);
        waitpid(pid, &rc, 0);
        exit(rc);
    } else {
        perror("fork");
        exit(errno);
    }
}
Any suggestions? I know Python's subprocess module has something like communicate() to avoid this kind of deadlock, but I don't know how to deal with it in C.
Many thanks.
The first process pipes into cat, and cat pipes back into the first process. Hence, for cat not to block while piping back, the first process must also drain that pipe. E.g.:
fcntl(pout[0], F_SETFL, fcntl(pout[0], F_GETFL) | O_NONBLOCK);
while ((nread = read(fd, buff, CHUNK)) > 0) {
    write(pin[1], buff, nread);  // TODO: check errors and partial writes here.
    while ((nread = read(pout[0], buff, CHUNK)) > 0)  // pout[0] must be set into non-blocking mode.
        write(STDOUT_FILENO, buff, nread);
}
A more robust way is to put both pin[1] and pout[0] into non-blocking mode, use select to determine whether pin[1] is ready for writing and pout[0] for reading, then write/read accordingly, handling partial reads and writes.
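To make that concrete, here is a minimal sketch of such a select-based loop; the function name pump_through_cat is made up, and error handling plus partial-write handling are still left out:

#include <sys/select.h>
#include <sys/types.h>
#include <unistd.h>

#define CHUNK 2048   /* same as in the question */

/* Hypothetical helper: pump fd into pin_w and pout_r onto stdout at the
 * same time, so neither pipe can fill up and stall the other direction.
 * Assumes pin_w and pout_r have already been put into non-blocking mode. */
void pump_through_cat(int fd, int pin_w, int pout_r)
{
    char buff[CHUNK];
    int in_open = 1;
    fd_set rfds, wfds;

    while (pout_r != -1) {
        FD_ZERO(&rfds);
        FD_ZERO(&wfds);
        FD_SET(pout_r, &rfds);
        if (in_open)
            FD_SET(pin_w, &wfds);

        int maxfd = (pout_r > pin_w) ? pout_r : pin_w;
        if (select(maxfd + 1, &rfds, &wfds, NULL, NULL) == -1)
            break;

        if (in_open && FD_ISSET(pin_w, &wfds)) {
            ssize_t n = read(fd, buff, CHUNK);
            if (n > 0) {
                write(pin_w, buff, n);   /* TODO: handle partial writes */
            } else {
                close(pin_w);            /* EOF on the input: let cat finish */
                in_open = 0;
            }
        }

        if (FD_ISSET(pout_r, &rfds)) {
            ssize_t n = read(pout_r, buff, CHUNK);
            if (n > 0) {
                write(STDOUT_FILENO, buff, n);
            } else if (n == 0) {         /* cat closed its end: all done */
                close(pout_r);
                pout_r = -1;
            }
        }
    }
}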
From your suggestions I see at least two ways to solve this problem:
1. Set non-blocking mode with fcntl and multiplex with select/poll/epoll.
2. Use concurrency, e.g. a pthread that feeds the stdin pipe.
A piece of code for the second approach is attached.
struct data {
    int from_fd;
    int to_fd;
};
and the code around the pipes should look like this:
pthread_t t;
struct data d;

d.from_fd = fd;
d.to_fd = pin[1];
pthread_create(&t, NULL, &fd_to_pipe, (void *)&d);

while ((nread = read(pout[0], buff, CHUNK)) > 0)
    write(STDOUT_FILENO, buff, nread);

waitpid(pid, &rc, 0);
pthread_join(t, NULL);
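Since the thread function itself isn't shown, here is a guess at a minimal fd_to_pipe body (an assumption, not the poster's code); closing to_fd at the end matters, because otherwise cat never sees end-of-file:

#include <unistd.h>

/* Hypothetical thread body: copy everything from from_fd into to_fd, then
 * close to_fd so that the child (cat) sees end-of-file. Uses struct data
 * and CHUNK from the code above. */
static void *fd_to_pipe(void *arg)
{
    struct data *d = arg;
    char buff[CHUNK];
    int nread;

    while ((nread = read(d->from_fd, buff, CHUNK)) > 0)
        write(d->to_fd, buff, nread);   /* TODO: handle partial writes */

    close(d->to_fd);   /* important: otherwise cat never sees EOF */
    return NULL;
}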
Thank you !

Why doesn't a non-blocking write to disk return EAGAIN or EWOULDBLOCK?

I modified a program from APUE: it first opens a file, then marks the fd as non-blocking, then keeps writing to the fd until write returns -1.
I thought that, since disk I/O is slow, once the write buffers in the OS are nearly full the write system call would return -1 with errno set to EAGAIN or EWOULDBLOCK.
But I ran the program for several minutes, and repeated it several times, and the write system call never returned -1 even once. Why?
Here's the code:
#include "apue.h"
#include <errno.h>
#include <fcntl.h>
char buf[4096];
int
main(void)
{
int nwrite;
int fd = open("a.txt", O_RDWR);
if(fd<0){
printf("fd<0\n");
return 0;
}
int i;
for(i = 0; i<sizeof(buf); i++)
buf[i] = i*2;
set_fl(fd, O_NONBLOCK); /* set nonblocking */
while (1) {
nwrite = write(fd, buf, sizeof(buf));
if (nwrite < 0) {
printf("write returned:%d, errno=%d\n", nwrite, errno);
return 0;
}
}
clr_fl(STDOUT_FILENO, O_NONBLOCK); /* clear nonblocking */
exit(0);
}
The O_NONBLOCK flag is primarily meaningful for file descriptors representing streams (e.g., pipes, sockets, and character devices), where it prevents read and write operations from blocking when there is no data waiting to be read, or buffers are too full to write anything more at the moment. It has no effect on file descriptors opened to regular files; disk I/O delays are essentially ignored by the system.
If you want to do asynchronous I/O to files, you may want to take a look at the POSIX AIO interface. Be warned that it's rather hairy and infrequently used, though.
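For illustration only (this is not a fix for the program above, just a sketch of the interface mentioned), a single write submitted through POSIX AIO looks roughly like this; on Linux you would typically link with -lrt:

/* Minimal POSIX AIO sketch: submit one asynchronous write and poll for
 * its completion. Error handling is reduced to the bare minimum. */
#include <aio.h>
#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
    static char buf[4096];
    struct aiocb cb;
    int fd = open("a.txt", O_WRONLY | O_CREAT, 0644);
    if (fd < 0) {
        perror("open");
        return 1;
    }

    memset(&cb, 0, sizeof(cb));
    cb.aio_fildes = fd;
    cb.aio_buf    = buf;
    cb.aio_nbytes = sizeof(buf);
    cb.aio_offset = 0;

    if (aio_write(&cb) == -1) {             /* queue the write; returns immediately */
        perror("aio_write");
        return 1;
    }

    while (aio_error(&cb) == EINPROGRESS)   /* busy-wait for demonstration only */
        usleep(1000);

    ssize_t n = aio_return(&cb);            /* result of the completed write */
    printf("aio_write completed, %zd bytes\n", n);

    close(fd);
    return 0;
}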

Opening a serial port on OS X hangs forever without O_NONBLOCK flag

I have a serial to USB converter (FTDI, drivers installed from http://www.ftdichip.com/Drivers/VCP.htm) connecting a serial device to a MacBook Air. It shows up on the MacBook as both /dev/cu.usbserial-A4017CQY and /dev/tty.usbserial-A4017CQY. All behaviour I describe is identical regardless of which of these two I use.
Edit: Using /dev/cu.* did solve the problem. I'm not sure why it seemed not to work when I first posted this question. Thanks to duskwuff for pointing me in the right direction, though he has his TTY names backwards: /dev/tty.* will wait for flow control, while /dev/cu.* will not.
The first problem I encountered was that the syscall to open() would block forever if I did not use the O_NONBLOCK flag. Using the flag, I get a good file descriptor, but write() does not seem to actually write (though it returns just fine claiming to have written the bytes), and read() fails with the error "Resource temporarily unavailable".
stty -af /dev/cu.usbserial-A4017CQY shows the settings just fine, but if I try to change them with a command like stty -f /dev/cu.usbserial-A4017CQY -clocal, they do appear to be changed when displayed with a successive call to stty.
If I use select() to wait for the device to become ready before reading/writing, it reports after a short time that it is ready to write, but never to read. This gels with how write() completes without complaint, while read() fails. Note that the data written never does actually make it to the device.
The entire test program I wrote to debug this is below:
#include <stdio.h>
#include <fcntl.h>
#include <termios.h>
#include <unistd.h>
#include <sys/select.h>

#define SYSCALL(A) do { ret = A; if (ret == -1) { perror(#A); return -1; } else printf("%s returned %d\n", #A, ret); } while (0)

int ret; /* necessary for SYSCALL */

int main()
{
    struct termios tio;
    char buf[256];
    int fd = open("/dev/cu.usbserial-A4017CQY",
                  O_RDWR | O_NOCTTY | O_NONBLOCK);
    fd_set rfds, wfds, xfds;
    struct timeval to;
    to.tv_sec = 5;
    to.tv_usec = 0;

    SYSCALL(tcgetattr(fd, &tio));
    cfmakeraw(&tio);
    tio.c_cflag = CS8|CREAD|CLOCAL;
    tio.c_cc[VMIN] = 1;
    tio.c_cc[VTIME] = 1;
    cfsetispeed(&tio, B115200);
    cfsetospeed(&tio, B115200);
    SYSCALL(tcsetattr(fd, TCSANOW, &tio));

    FD_ZERO(&rfds);
    FD_ZERO(&wfds);
    FD_ZERO(&xfds);
    FD_SET(fd, &rfds);
    FD_SET(fd, &wfds);
    FD_SET(fd, &xfds);

    int ret = select(fd+1, &rfds, NULL, &xfds, &to);
    if (ret == -1) perror("select");
    else if (ret > 0)
    {
        if (FD_ISSET(fd, &rfds))
            puts("Ready to read");
        if (FD_ISSET(fd, &wfds))
            puts("Ready to write");
        if (FD_ISSET(fd, &xfds))
            puts("Exception!");
    }
    else puts("Timed out!");

    SYSCALL(write(fd, "/home\n", 5));
    SYSCALL(read(fd, buf, 256));
    return 0;
}
You have a flow control issue. Either loop back (join) RTS/CTS and DTR/DSR/CD on your cable, have the other end provide the control signals, or, as @duskwuff suggests, use the device that ignores flow control.
I see that you are setting CLOCAL -- that should work, but some USB devices do not do the right thing. Your description is consistent with the device waiting for modem control signals.
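For reference, a pattern commonly used on OS X is to open the cu device with O_NONBLOCK so that open() cannot hang waiting for carrier, configure the port with CLOCAL set and hardware flow control disabled, and then put the descriptor back into blocking mode. A rough sketch (the helper name is mine; the device path is the one from the question):

/* Sketch: open without blocking on carrier detect, set CLOCAL, then
 * restore blocking semantics for subsequent read()/write() calls. */
#include <fcntl.h>
#include <stdio.h>
#include <termios.h>
#include <unistd.h>

int open_serial(const char *path)
{
    struct termios tio;
    int fd = open(path, O_RDWR | O_NOCTTY | O_NONBLOCK);
    if (fd == -1) {
        perror("open");
        return -1;
    }

    if (tcgetattr(fd, &tio) == -1) {
        perror("tcgetattr");
        close(fd);
        return -1;
    }

    cfmakeraw(&tio);
    tio.c_cflag |= CLOCAL | CREAD;   /* ignore modem control lines */
    tio.c_cflag &= ~CRTSCTS;         /* no hardware flow control */
    cfsetispeed(&tio, B115200);
    cfsetospeed(&tio, B115200);

    if (tcsetattr(fd, TCSANOW, &tio) == -1) {
        perror("tcsetattr");
        close(fd);
        return -1;
    }

    /* Port is configured; drop O_NONBLOCK so read()/write() block normally. */
    fcntl(fd, F_SETFL, fcntl(fd, F_GETFL) & ~O_NONBLOCK);
    return fd;
}

A call like open_serial("/dev/cu.usbserial-A4017CQY") then returns a descriptor that can be used with ordinary blocking read() and write().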

How to detect empty epoll set

I'm learning to use epoll, and I wrote the following example
#include <assert.h>
#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/epoll.h>
#include <unistd.h>

int main() {
    int epfd;
    struct epoll_event ev;
    struct epoll_event ret;
    char buf[200];
    int n, k, t;

    epfd = epoll_create(100);

    assert(0 ==
        fcntl(0, F_SETFL, fcntl(0, F_GETFL) | O_NONBLOCK)
    );

    ev.data.fd = 0;
    ev.events = EPOLLIN | EPOLLET;
    if (epoll_ctl(epfd, EPOLL_CTL_ADD, 0, &ev) != 0)
        perror("epoll_ctl");

    while ((n = epoll_wait(epfd, &ret, 1, -1)) > 0) {
        printf("tick!\n");
        if (ret.data.fd == 0) {
            k = 0;
            while ((t = read(0, buf, 100)) > 0) {
                k += t;
            }
            if (k == 0) {
                close(0);
                printf("stdin done\n");
            }
        }
    }
    perror("epoll");
    return 0;
}
If you try running it in a terminal it won't work properly, since fds 0, 1 and 2 all point to the same open file description, so close(0) won't remove stdin from the epoll set. You can get around this with "cat | ./a.out". Dirty trick, I know, but setting up a small example with named pipes or sockets would be more complicated.
Now everything works and the file is removed from the epoll set, but then the next epoll_wait call blocks permanently, since it is waiting on an empty set! So I would need to detect whether the epoll file descriptor (epfd) refers to an empty epoll set.
How can I get around this? (in a general manner, not just calling exit when stdin is done)
Thanks!
Basically, if you're using epoll "correctly", you should never end up with an unexpectedly empty epoll set: you should always know whether there is more to do or not. Well, that's the theory at least. Let me review it:
You are using EPOLLET here (which is, imho, the right thing in general). If you also add EPOLLONESHOT, the descriptor is disabled after it has been returned in &ret, and you re-arm it with epoll_ctl(.., EPOLL_CTL_MOD, ..) once you have handled some data (unless it was closed, of course). With plain EPOLLET the descriptor stays registered, and you simply get no further notification until new data arrives, so you must drain it before waiting again. The one-shot pattern is the easier one to reason about here: read some amount of data from descriptor 0, handle it, re-arm. For an example of how this is supposed to work, remove the inner loop and just do:
k = read(0, buf, 100);
reading a maximum of 100 bytes. The idea is that if you pipe in a file bigger than that, it should go through the whole loop several times. To make this work, if k > 0, after you handle the k bytes you re-arm descriptor 0 with epoll_ctl(.., EPOLL_CTL_MOD, ..) (calling EPOLL_CTL_ADD again would fail with EEXIST while the descriptor is still registered).
Note an annoying detail: it's possible for read() to fail with errno == EAGAIN or errno == EWOULDBLOCK even though the file or socket is not at the end. Detect that case, and then re-arm with epoll_ctl(.., EPOLL_CTL_MOD, ..) as well.
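A minimal sketch of that one-shot handling pattern, written as a standalone helper (the name handle_and_rearm and the exact structure are my own, not from the question):

/* Sketch: handle a bounded amount of input from fd, then re-arm it with
 * EPOLL_CTL_MOD so the next epoll_wait can report it again.
 * Assumes fd was added with EPOLLIN | EPOLLONESHOT and is non-blocking.
 * Returns 0 while fd is still in the set, 1 once it has been removed. */
#include <errno.h>
#include <sys/epoll.h>
#include <unistd.h>

int handle_and_rearm(int epfd, int fd)
{
    char buf[100];
    struct epoll_event ev = { .events = EPOLLIN | EPOLLONESHOT, .data.fd = fd };
    ssize_t k = read(fd, buf, sizeof(buf));

    if (k > 0 || (k == -1 && (errno == EAGAIN || errno == EWOULDBLOCK))) {
        /* handled some data (or nothing was ready): re-arm and keep going */
        epoll_ctl(epfd, EPOLL_CTL_MOD, fd, &ev);
        return 0;
    }

    /* k == 0 is real end-of-file, k == -1 with another errno is an error:
     * either way, take the descriptor out of the set explicitly. */
    epoll_ctl(epfd, EPOLL_CTL_DEL, fd, NULL);
    close(fd);
    return 1;
}

The caller can keep a count of descriptors still registered and stop calling epoll_wait once handle_and_rearm has returned 1 for all of them.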
The epoll set will be empty when you've removed everything that was added. As far as I know, you can't introspect the epoll set to find out whether there are any file descriptors present. So, it's up to you to determine when the epoll set becomes empty as outlined in Armin's answer.
Since you haven't explained what you expect from your program, I'll take a guess that you expect it exit when stdin is closed, because doing a close(0) will potentially cause file descriptor 0 to be removed from the epoll set. However, the code as listed is flawed. If you continue to wait on an epoll set that doesn't contain any file descriptors (whether removed automatically or by using EPOLL_CTL_DEL), the epoll_wait will wait forever.
The following code shows this nicely.
#include <errno.h>
#include <stdio.h>
#include <sys/epoll.h>

int main() {
    int epfd;
    int n;
    struct epoll_event ret;

    epfd = epoll_create(100);

    while ((n = epoll_wait(epfd, &ret, 1, -1)) > 0) {
        /* Never gets here. */
        printf("tick!\n");
    }
    return 0;
}
The epoll set doesn't contain any file descriptors, so epoll_wait waits forever. Similarly, if stdin in your program were connected to a file and no other file descriptor in your program referred to it, the close(0) would remove fd 0 from the set, the epoll set would become empty, and the next epoll_wait would wait forever.
In general, you should manage the file descriptors in the epoll set yourself rather than rely on close calls to remove them from the set automatically. It's up to you to decide whether to continue waiting on the epoll set after you've done the close(0).
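For instance, here is a stripped-down, level-triggered adaptation of the question's program (my own sketch, not tested against the original setup) that keeps a simple count of registered descriptors and leaves the wait loop once it drops to zero, instead of blocking on an empty set:

/* Sketch: exit the wait loop once every registered descriptor has been
 * removed, instead of blocking forever on an empty epoll set. */
#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <sys/epoll.h>
#include <unistd.h>

int main(void) {
    struct epoll_event ev = { .events = EPOLLIN, .data.fd = 0 };
    struct epoll_event ret;
    char buf[100];
    int epfd = epoll_create(100);
    int nfds = 0;                        /* how many descriptors are in the set */

    fcntl(0, F_SETFL, fcntl(0, F_GETFL) | O_NONBLOCK);
    if (epoll_ctl(epfd, EPOLL_CTL_ADD, 0, &ev) == 0)
        nfds++;

    while (nfds > 0 && epoll_wait(epfd, &ret, 1, -1) > 0) {
        ssize_t t = read(ret.data.fd, buf, sizeof(buf));
        if (t == 0) {                    /* end-of-file: deregister explicitly */
            epoll_ctl(epfd, EPOLL_CTL_DEL, ret.data.fd, NULL);
            close(ret.data.fd);
            nfds--;
        } else if (t < 0 && errno != EAGAIN && errno != EWOULDBLOCK) {
            perror("read");
            break;
        }
        /* t > 0: data handled; level-triggered epoll will report again */
    }

    printf("all descriptors removed, exiting\n");
    return 0;
}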
I'd also suggest that you change the structure of your program to call epoll_wait after the read. This guarantees that you'll pick up any data that may have arrived on stdin before your first call to epoll_wait.
Also, be careful with code like this:
k = 0;
while ((t = read(0, buf, 100)) > 0) {
    k += t;
}
if (k == 0) {
    close(0);
    printf("stdin done\n");
}
If the reads in that loop return, say, 100 followed by 0 (some data plus end-of-file delivered in the same wake-up), close(0) will not be called, and the program will loop and wait forever on epoll_wait again. It is best to check the result of each read specifically for end-of-file and for errors.

close() is not closing socket properly

I have a multi-threaded server (thread pool) that is handling a large number of requests (up to 500/sec for one node), using 20 threads. There's a listener thread that accepts incoming connections and queues them for the handler threads to process. Once the response is ready, the threads write out to the client and close the socket. All seemed to be fine until recently, when a test client program started hanging randomly after reading the response. After a lot of digging, it seems that the close() from the server does not actually disconnect the socket. I've added some debugging prints to the code with the file descriptor number, and I get this type of output:
Processing request for 21
Writing to 21
Closing 21
The return value of close() is 0, or there would be another debug statement printed. After this output with a client that hangs, lsof is showing an established connection.
SERVER 8160 root 21u IPv4 32754237 TCP localhost:9980->localhost:47530 (ESTABLISHED)
CLIENT 17747 root 12u IPv4 32754228 TCP localhost:47530->localhost:9980 (ESTABLISHED)
It's as if the server never sends the shutdown sequence to the client, and this state hangs until the client is killed, leaving the server side in a CLOSE_WAIT state:
SERVER 8160 root 21u IPv4 32754237 TCP localhost:9980->localhost:47530 (CLOSE_WAIT)
Also, if the client has a timeout specified, it will time out instead of hanging. I can also manually run
call close(21)
in the server from gdb, and the client will then disconnect. This happens maybe once in 50,000 requests, but might not happen for extended periods.
Linux version: 2.6.21.7-2.fc8xen
Centos version: 5.4 (Final)
The socket actions are as follows.
SERVER:
int client_socket;
struct sockaddr_in client_addr;
socklen_t client_len = sizeof(client_addr);

while (true) {
    client_socket = accept(incoming_socket, (struct sockaddr *)&client_addr, &client_len);
    if (client_socket == -1)
        continue;
    /* insert into queue here for threads to process */
}
Then the thread picks up the socket and builds the response.
/* get client_socket from queue */
/* processing request here */

/* now set to blocking for write; was previously set to non-blocking for reading */
int flags = fcntl(client_socket, F_GETFL);
if (flags < 0)
    abort();
if (fcntl(client_socket, F_SETFL, flags & ~O_NONBLOCK) < 0)
    abort();

server_write(client_socket, response_buf, response_length);
server_close(client_socket);
server_write and server_close.
void server_write(int fd, char const *buf, ssize_t len) {
    printf("Writing to %d\n", fd);
    while (len > 0) {
        ssize_t n = write(fd, buf, len);
        if (n <= 0)
            return; // I don't really care what error happened, we'll just drop the connection
        len -= n;
        buf += n;
    }
}

void server_close(int fd) {
    for (uint32_t i = 0; i < 10; i++) {
        int n = close(fd);
        if (!n) { // closed successfully
            return;
        }
        usleep(100);
    }
    printf("Close failed for %d\n", fd);
}
CLIENT:
Client side is using libcurl v 7.27.0
CURL *curl = curl_easy_init();
CURLcode res;
curl_easy_setopt( curl, CURLOPT_URL, url);
curl_easy_setopt( curl, CURLOPT_WRITEFUNCTION, write_callback );
curl_easy_setopt( curl, CURLOPT_WRITEDATA, write_tag );
res = curl_easy_perform(curl);
Nothing fancy, just a basic curl connection. The client hangs in transfer.c (in libcurl) because the socket is not perceived as being closed; it's waiting for more data from the server.
Things I've tried so far:
Shutdown before close
shutdown(fd, SHUT_WR);
char buf[64];
while(read(fd, buf, 64) > 0);
/* then close */
Setting SO_LINGER to close forcibly in 1 second
struct linger l;
l.l_onoff = 1;
l.l_linger = 1;
if (setsockopt(client_socket, SOL_SOCKET, SO_LINGER, &l, sizeof(l)) == -1)
    abort();
These have made no difference. Any ideas would be greatly appreciated.
EDIT -- This ended up being a thread-safety issue inside a queue library causing the socket to be handled inappropriately by multiple threads.
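For what it's worth, the kind of fix that edit points at usually boils down to making the connection queue itself thread-safe, so that exactly one worker ever owns (and later closes) a given socket. A rough sketch, with made-up names and a fixed-size ring buffer, might look like this:

/* Hypothetical sketch of a mutex-protected connection queue: push from the
 * listener thread, pop from exactly one worker thread per socket. */
#include <pthread.h>

#define QUEUE_MAX 1024

static int queue[QUEUE_MAX];
static int head, tail, count;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t nonempty = PTHREAD_COND_INITIALIZER;

void queue_push(int fd) {
    pthread_mutex_lock(&lock);
    queue[tail] = fd;
    tail = (tail + 1) % QUEUE_MAX;
    count++;                          /* assumes the caller never overfills */
    pthread_cond_signal(&nonempty);
    pthread_mutex_unlock(&lock);
}

int queue_pop(void) {
    pthread_mutex_lock(&lock);
    while (count == 0)
        pthread_cond_wait(&nonempty, &lock);
    int fd = queue[head];
    head = (head + 1) % QUEUE_MAX;
    count--;
    pthread_mutex_unlock(&lock);
    return fd;                        /* this thread now owns fd exclusively */
}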
Here is some code I've used on many Unix-like systems (e.g. SunOS 4, SGI IRIX, HPUX 10.20, CentOS 5, Cygwin) to close a socket:
int getSO_ERROR(int fd) {
    int err = 1;
    socklen_t len = sizeof err;
    if (-1 == getsockopt(fd, SOL_SOCKET, SO_ERROR, (char *)&err, &len))
        FatalError("getSO_ERROR");
    if (err)
        errno = err;                  // set errno to the socket SO_ERROR
    return err;
}

void closeSocket(int fd) {            // *not* the Windows closesocket()
    if (fd >= 0) {
        getSO_ERROR(fd);                              // first clear any errors, which can cause close to fail
        if (shutdown(fd, SHUT_RDWR) < 0)              // secondly, terminate the 'reliable' delivery
            if (errno != ENOTCONN && errno != EINVAL) // SGI causes EINVAL
                Perror("shutdown");
        if (close(fd) < 0)                            // finally call close()
            Perror("close");
    }
}
But the above does not guarantee that any buffered writes are sent.
Graceful close: It took me about 10 years to figure out how to close a socket. But for another 10 years I just lazily called usleep(20000) for a slight delay to 'ensure' that the write buffer was flushed before the close. This obviously is not very clever, because:
The delay was too long most of the time.
The delay was too short some of the time--maybe!
A signal such as SIGCHLD could occur and end usleep() (but I usually called usleep() twice to handle this case -- a hack).
There was no indication whether this works. But this is perhaps not important if a) hard resets are perfectly ok, and/or b) you have control over both sides of the link.
But doing a proper flush is surprisingly hard. Using SO_LINGER is apparently not the way to go; see for example:
http://msdn.microsoft.com/en-us/library/ms740481%28v=vs.85%29.aspx
https://www.google.ca/#q=the-ultimate-so_linger-page
And SIOCOUTQ appears to be Linux-specific.
Note shutdown(fd, SHUT_WR) doesn't stop writing, contrary to its name, and maybe contrary to man 2 shutdown.
This code flushSocketBeforeClose() waits until a read of zero bytes, or until the timer expires. The function haveInput() is a simple wrapper for select(2), and is set to block for up to 1/100th of a second.
bool haveInput(int fd, double timeout) {
    int status;
    fd_set fds;
    struct timeval tv;

    FD_ZERO(&fds);
    FD_SET(fd, &fds);
    tv.tv_sec  = (long)timeout;                            // cast needed for C++
    tv.tv_usec = (long)((timeout - tv.tv_sec) * 1000000);  // 'suseconds_t'

    while (1) {
        if (!(status = select(fd + 1, &fds, 0, 0, &tv)))
            return FALSE;
        else if (status > 0 && FD_ISSET(fd, &fds))
            return TRUE;
        else if (status > 0)
            FatalError("I am confused");
        else if (errno != EINTR)
            FatalError("select");    // tbd EBADF: man page "an error has occurred"
    }
}

bool flushSocketBeforeClose(int fd, double timeout) {
    const double start = getWallTimeEpoch();
    char discard[99];

    ASSERT(SHUT_WR == 1);
    if (shutdown(fd, 1) != -1)
        while (getWallTimeEpoch() < start + timeout)
            while (haveInput(fd, 0.01))    // can block for 0.01 secs
                if (!read(fd, discard, sizeof discard))
                    return TRUE;           // success!
    return FALSE;
}
Example of use:
if (!flushSocketBeforeClose(fd, 2.0))    // can block for 2s
    printf("Warning: Cannot gracefully close socket\n");
closeSocket(fd);
In the above, my getWallTimeEpoch() is similar to time(), and Perror() is a wrapper for perror().
Edit: Some comments:
My first admission is a bit embarrassing. The OP and Nemo challenged the need to clear the internal so_error before close, but I cannot now find any reference for this. The system in question was HPUX 10.20. After a failed connect(), just calling close() did not release the file descriptor, because the system wished to deliver an outstanding error to me. But I, like most people, never bothered to check the return value of close. So I eventually ran out of file descriptors (ulimit -n), which finally got my attention.
(very minor point) One commentator objected to the hard-coded numerical arguments to shutdown(), rather than e.g. SHUT_WR for 1. The simplest answer is that Windows uses different #defines/enums e.g. SD_SEND. And many other writers (e.g. Beej) use constants, as do many legacy systems.
Also, I always, always, set FD_CLOEXEC on all my sockets, since in my applications I never want them passed to a child and, more importantly, I don't want a hung child to impact me.
Sample code to set CLOEXEC:
static void setFD_CLOEXEC(int fd) {
    int status = fcntl(fd, F_GETFD, 0);
    if (status >= 0)
        status = fcntl(fd, F_SETFD, status | FD_CLOEXEC);
    if (status < 0)
        Perror("Error getting/setting socket FD_CLOEXEC flags");
}
Great answer from Joseph Quinsey. I have some comments on the haveInput function. I wonder how likely it is that select returns an fd you did not include in your set; that would be a major OS bug, IMHO. It's the kind of thing I would check if I wrote unit tests for select itself, not in an ordinary app.
if (!(status = select(fd + 1, &fds, 0, 0, &tv)))
    return FALSE;
else if (status > 0 && FD_ISSET(fd, &fds))
    return TRUE;
else if (status > 0)
    FatalError("I am confused");    // <--- fd unknown to function
My other comment pertains to the handling of EINTR. In theory, you could get stuck in an infinite loop if select kept returning EINTR, as this error lets the loop start over. Given the very short timeout (0.01), it appears highly unlikely to happen. However, I think the appropriate way of dealing with this would be to return errors to the caller (flushSocketBeforeClose). The caller can keep calling haveInput as long as its timeout hasn't expired, and declare failure for other errors.
ADDITION #1
flushSocketBeforeClose will not exit quickly if read returns an error; it will keep looping until the timeout expires. You can't rely on the select inside haveInput to anticipate all errors. read has errors of its own (e.g. EIO).
while (haveInput(fd, 0.01))
    if (!read(fd, discard, sizeof discard))    // <-- -1 does not end the loop
        return TRUE;
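One way to act on that observation (a sketch only, reusing the answer's helpers haveInput(), getWallTimeEpoch(), and its bool/TRUE/FALSE definitions, and deliberately given a different name) is to stop as soon as read() reports end-of-file or a real error:

#include <errno.h>
#include <sys/socket.h>
#include <unistd.h>

/* Sketch: like flushSocketBeforeClose(), but bail out early when read()
 * reports a real error instead of spinning until the timeout expires. */
bool flushSocketBeforeClose2(int fd, double timeout) {
    const double start = getWallTimeEpoch();
    char discard[99];

    if (shutdown(fd, SHUT_WR) == -1)
        return FALSE;

    while (getWallTimeEpoch() < start + timeout) {
        if (!haveInput(fd, 0.01))       // can block for 0.01 secs
            continue;
        ssize_t n = read(fd, discard, sizeof discard);
        if (n == 0)
            return TRUE;                // peer finished sending: clean close
        if (n < 0 && errno != EINTR && errno != EAGAIN)
            return FALSE;               // real error: give up early
    }
    return FALSE;                       // timed out
}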
This sounds to me like a bug in your Linux distribution.
The GNU C library documentation says:
When you have finished using a socket, you can simply close its file
descriptor with close
Nothing about clearing any error flags or waiting for the data to be flushed or any such thing.
Your code is fine; your O/S has a bug.
Include:
#include <unistd.h>
This should help solve the close() problem.
