I read the man pages, and my understanding is that if write() fails and sets errno to EAGAIN or EINTR, I may perform the write() again, so I came up with the following code:
ret = 0;
while (ret != count) {
    write_count = write(connFD, (char *)buf + ret, count - ret);
    while (write_count < 0) {
        switch (errno) {
        case EINTR:
        case EAGAIN:
            write_count = write(connFD, (char *)buf + ret, count - ret);
            break;
        default:
            printf("\n The value of ret is : %d\n", ret);
            printf("\n The error number is : %d\n", errno);
            ASSERT(0);
        }
    }
    ret += write_count;
}
I am performing read() and write() on sockets, and handling read() similarly to the above. I am using Linux with the gcc compiler.
You have a bit of a "don't repeat yourself" problem there - there's no need for two separate calls to write, nor for two nested loops.
My normal loop would look something like this:
for (int n = 0; n < count; ) {
    int ret = write(fd, (char *)buf + n, count - n);
    if (ret < 0) {
        if (errno == EINTR || errno == EAGAIN)
            continue; // try again
        perror("write");
        break;
    } else {
        n += ret;
    }
}
// if (n < count) here, some error occurred
EINTR and EAGAIN handling should often be slightly different. EAGAIN is always some kind of transient error representing the state of the socket buffer (or perhaps, more precisely, that your operation may block).
Once you've hit an EAGAIN you'd likely want to sleep a bit or return control to an event loop (assuming you're using one).
With EINTR the situation is a bit different. If your application is receiving signals non-stop, then it may be an issue in your application or environment, and for that reason I tend to have some kind of internal eintr_max counter so I am not stuck in the theoretical situation where I just continue infinitely looping on EINTR.
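As a rough sketch of that counter idea (the function name, eintr_max, and its value are arbitrary choices of mine, not anything standard):
#include <errno.h>
#include <unistd.h>

/* Sketch: a write loop that gives up after too many consecutive EINTRs. */
ssize_t write_with_eintr_cap(int fd, const void *buf, size_t count) {
    size_t done = 0;
    int eintr_count = 0;
    const int eintr_max = 100;
    while (done < count) {
        ssize_t ret = write(fd, (const char *)buf + done, count - done);
        if (ret < 0) {
            if (errno == EINTR && ++eintr_count < eintr_max)
                continue;                      /* retry, but not forever */
            return done ? (ssize_t)done : -1;  /* EAGAIN, real error, or too many EINTRs */
        }
        eintr_count = 0;                       /* progress resets the counter */
        done += ret;
    }
    return (ssize_t)done;
}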
Alnitak's answer (sufficient for most cases) should also be saving errno somewhere, as it may be clobbered by perror() (although it may have been omitted for brevity).
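For example, a minimal sketch of saving it (saved_errno is just an illustrative name):
int saved_errno = errno;    /* capture immediately after the failed call */
perror("write");            /* perror() may itself clobber errno */
if (saved_errno == EPIPE) {
    /* react based on the saved copy, not on errno */
}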
I would prefer to poll the descriptor in case of EAGAIN instead of just busy looping and burning up CPU for no good reason. This is kind of a "blocking wrapper" for a non-blocking write I use:
#include <errno.h>
#include <poll.h>
#include <unistd.h>

/* Blocking wrapper around write() on a non-blocking descriptor:
   on EAGAIN, poll() for writability instead of busy-looping. */
ssize_t write_blocking(int fd, const char *buffer, ssize_t to_write) {
    ssize_t written = 0;
    while (written < to_write) {
        ssize_t result;
        if ((result = write(fd, buffer, to_write - written)) < 0) {
            if (errno == EAGAIN) {
                struct pollfd pfd = { .fd = fd, .events = POLLOUT };
                if (poll(&pfd, 1, -1) <= 0 && errno != EAGAIN) {
                    break;
                }
                continue;
            }
            return written ? written : result;
        }
        written += result;
        buffer += result;
    }
    return written;
}
Note that I'm not actually checking the results of poll other than the return value; I figure the following write will fail if there is a permanent error on the descriptor.
You may wish to include EINTR as a retryable error as well by simply adding it to the conditions with EAGAIN, but I prefer it to actually interrupt I/O.
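If you do want EINTR handled the same way, the change is just to widen the retry test in the wrapper above (a sketch of the variation, not how I actually run it):
if (errno == EAGAIN || errno == EINTR) {
    struct pollfd pfd = { .fd = fd, .events = POLLOUT };
    if (poll(&pfd, 1, -1) <= 0 && errno != EAGAIN && errno != EINTR) {
        break;
    }
    continue;
}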
Yes, there are cleaner ways to use write(): the class of write functions taking a FILE* as an argument. That is, most importantly, fprintf() and fwrite(). Internally, these library functions use the write() syscall to do their job, and they handle stuff like EAGAIN and EINTR.
If you only have a file descriptor, you can always wrap it into a FILE* by means of fdopen(), so you can use it with the functions above.
However, there is one pitfall: FILE* streams are usually buffered. This can be a problem if you are communicating with some other program and are waiting for its response. This may deadlock both programs even though there is no logical error, simply because fprintf() decided to defer the corresponding write() a bit. You can switch the buffering off, or fflush() output streams whenever you actually need the write() calls to be performed.
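For example, a minimal sketch (reusing the connFD descriptor from the question; error checking omitted):
FILE *f = fdopen(connFD, "r+");   /* wrap the existing socket descriptor */
setvbuf(f, NULL, _IONBF, 0);      /* option 1: disable buffering entirely */
fprintf(f, "PING\n");
fflush(f);                        /* option 2: keep buffering, flush at protocol boundaries */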
So, this is a weird case that I see sometimes and am not able to figure out a reason for.
We have a C program that reads from a regular file. And there are other processes which write into the same file. The application is based on the fact that the writes are atomic in Linux for write size up to 4096 bytes.
The file is NOT opened with non blocking flag, so my assumption is that reads would be blocking.
But sometimes during startup, we see "Resource temporarily unavailable" set in errno, and the size returned by read() is not -1 but some partial read size.
An error message would look something like:
2018-08-07T06:40:52.991141Z, Invalid message size, log_s.bin, fd 670, Resource temporarily unavailable, read size 285, expected size 525
My questions are:
Why are we getting EAGAIN on blocking file read?
Why is the return value not -1?
This happens only during the initial time when it is started. It works fine thereafter. What are some edge cases that can get us into such a situation?
Why are we getting EAGAIN on a blocking file read?
You aren't (see below).
Why is the return value not -1?
Because the operation did not fail.
The value of errno only carries a sane value if the call to read() failed. A call to read() failed if and only if -1 is returned.
From the Linux man-page for read():
RETURN VALUE
On success, the number of bytes read is returned (zero indicates end of file), and the file position is advanced by this number. It is not an error if this number is smaller than the number of bytes requested; [...]
On error, -1 is returned, and errno is set appropriately.
A common pattern for read() would be:
char buffer[BUFFER_MAX];
char * p = buffer;
size_t to_read = ... /* not larger than BUFFER_MAX! */

while (to_read > 0)
{
    ssize_t result = read(..., p, to_read);
    if (-1 == result)
    {
        if (EAGAIN == errno || EWOULDBLOCK == errno)
        {
            continue;
        }
        if (EINTR == errno)
        {
            continue; /* or break depending on application design. */
        }
        perror("read() failed");
        exit(EXIT_FAILURE);
    }
    else if (0 < result)
    {
        to_read -= (size_t) result;
        p += (size_t) result;
    }
    else if (0 == result) /* end of file / connection shut down for reading */
    {
        break;
    }
    else
    {
        fprintf(stderr, "read() returned the unexpected value of %zd. You probably hit a (kernel) bug ... :-/\n", result);
        exit(EXIT_FAILURE);
    }
}

if (0 < to_read)
{
    fprintf(stderr, "Encountered early end of stream. %zu bytes not read.\n", to_read);
}
This issue is more or less related to embedding Perl in C (perlapio - interoperability with STDIO), which I think I have solved for the Windows environment. I will post a complete solution if this new issue is solved too.
In the linked question, StoryTeller gave me the hint to use PerlIO_findFILE(), which solved the immediate problem, but the same code on Linux behaves strangely.
Perl's dup2() seems to behave differently on Win32, where dup2() is a macro for win32_dup2(), which, as far as I understand, simply uses the dup2() from io.h.
On Win32, Perl's version returns zero on success and non-zero on error, but on Linux the default ANSI dup2() is used, which instead returns the new file descriptor, so I have to check errno to see whether everything went fine.
If a call to PerlIO_findFILE() sets errno to "illegal seek" (errno 29, ESPIPE), then after dup, dup2, pipe etc. errno is still set to "illegal seek", and any further checks on errno still see the same error.
(In practice everything has worked for me because there was no actual error. Also, checking errno this way is not thread safe, since between the syscall and the check another thread may reset errno.)
Note that I have
#define PERLIO_NOT_STDIO 0
in effect, and I'm using Perl 5.14.1.
Am I doing something really wrong here?
Here's a simplified code snippet:
stdOutFILE = PerlIO_findFILE(PerlIO_stderr()); // convert Perl's stdout to a stdio FILE handle
fdStdOutOriginal = fileno(stdOutFILE);         // get descriptor
if (fdStdOutOriginal >= 0) {
    relocatedStdOut = dup(fdStdOutOriginal);   // relocate stdOut for external writing
    if (relocatedStdOut >= 0)
    {
        if (pipe(fdPipeStdOut) == 0)           // create pipe for forwarding to stderr
        {
            // this has to be done on win32:
            // if (dup2(fdPipeStdOut[1], fdStdOutOriginal) == 0) // hang pipe on stdOut
            dup2(fdPipeStdOut[1], fdStdOutOriginal);
            if (errno == 0) {
                // do some funny stuff
            } else {
                // report error
            }
        }
    }
}
errno is meaningless unless a C library call or a system call reports an error, so it can't be used to determine if an error occurred. Notably, these calls aren't required to (and usually don't) reset errno on success. It's not even safe to clear errno before the call, because the call may set errno even if no error occurred.
As best as I can tell, Perl's emulation of dup2 returns the same value as the POSIX one (-1 on error, newfd on success).
#ifndef HAS_DUP2
int
dup2(int oldfd, int newfd)
{
#if defined(HAS_FCNTL) && defined(F_DUPFD)
    if (oldfd == newfd)
        return oldfd;
    PerlLIO_close(newfd);
    return fcntl(oldfd, F_DUPFD, newfd);
#else
#define DUP2_MAX_FDS 256
    int fdtmp[DUP2_MAX_FDS];
    I32 fdx = 0;
    int fd;

    if (oldfd == newfd)
        return oldfd;
    PerlLIO_close(newfd);
    /* good enough for low fd's... */
    while ((fd = PerlLIO_dup(oldfd)) != newfd && fd >= 0) {
        if (fdx >= DUP2_MAX_FDS) {
            PerlLIO_close(fd);
            fd = -1;
            break;
        }
        fdtmp[fdx++] = fd;
    }
    while (fdx > 0)
        PerlLIO_close(fdtmp[--fdx]);
    return fd;
#endif
}
#endif
(From 5.24.1)
This means that we can detect an error in a platform-independent fashion, despite your claim to the contrary. As such, the proper usage is
if (dup2(fdPipeStdOut[1], fdStdOutOriginal) >= 0) {
    // do some funny stuff
} else {
    // report error
}
I have a multi-threaded server (thread pool) that is handling a large number of requests (up to 500/sec for one node), using 20 threads. There's a listener thread that accepts incoming connections and queues them for the handler threads to process. Once the response is ready, the threads write out to the client and close the socket. All seemed to be fine until recently, when a test client program started hanging randomly after reading the response. After a lot of digging, it seems that the close() from the server does not actually disconnect the socket. I've added some debugging prints to the code with the file descriptor number, and I get this type of output:
Processing request for 21
Writing to 21
Closing 21
The return value of close() is 0; otherwise another debug statement would be printed. After this output with a client that hangs, lsof shows an established connection.
SERVER 8160 root 21u IPv4 32754237 TCP localhost:9980->localhost:47530 (ESTABLISHED)
CLIENT 17747 root 12u IPv4 32754228 TCP localhost:47530->localhost:9980 (ESTABLISHED)
It's as if the server never sends the shutdown sequence to the client, and this state hangs until the client is killed, leaving the server side in a CLOSE_WAIT state:
SERVER 8160 root 21u IPv4 32754237 TCP localhost:9980->localhost:47530 (CLOSE_WAIT)
Also, if the client has a timeout specified, it will time out instead of hanging. I can also manually run
call close(21)
in the server from gdb, and the client will then disconnect. This happens maybe once in 50,000 requests, but might not happen for extended periods.
Linux version: 2.6.21.7-2.fc8xen
Centos version: 5.4 (Final)
The socket actions are as follows.
SERVER:
int client_socket;
struct sockaddr_in client_addr;
socklen_t client_len = sizeof(client_addr);
while (true) {
    client_socket = accept(incoming_socket, (struct sockaddr *)&client_addr, &client_len);
    if (client_socket == -1)
        continue;
    /* insert into queue here for threads to process */
}
Then the thread picks up the socket and builds the response.
/* get client_socket from queue */
/* processing request here */
/* now set to blocking for the write; it was previously set to non-blocking for reading */
int flags = fcntl(client_socket, F_GETFL);
if (flags < 0)
    abort();
if (fcntl(client_socket, F_SETFL, flags & ~O_NONBLOCK) < 0)
    abort();
server_write(client_socket, response_buf, response_length);
server_close(client_socket);
Here are server_write and server_close:
void server_write(int fd, char const *buf, ssize_t len) {
    printf("Writing to %d\n", fd);
    while (len > 0) {
        ssize_t n = write(fd, buf, len);
        if (n <= 0)
            return; // I don't really care what error happened; we'll just drop the connection
        len -= n;
        buf += n;
    }
}
void server_close(int fd) {
    for (uint32_t i = 0; i < 10; i++) {
        int n = close(fd);
        if (!n) { // closed successfully
            return;
        }
        usleep(100);
    }
    printf("Close failed for %d\n", fd);
}
CLIENT:
Client side is using libcurl v 7.27.0
CURL *curl = curl_easy_init();
CURLcode res;
curl_easy_setopt( curl, CURLOPT_URL, url);
curl_easy_setopt( curl, CURLOPT_WRITEFUNCTION, write_callback );
curl_easy_setopt( curl, CURLOPT_WRITEDATA, write_tag );
res = curl_easy_perform(curl);
Nothing fancy, just a basic curl connection. The client hangs in transfer.c (in libcurl) because the socket is not perceived as being closed. It's waiting for more data from the server.
Things I've tried so far:
Shutdown before close
shutdown(fd, SHUT_WR);
char buf[64];
while(read(fd, buf, 64) > 0);
/* then close */
Setting SO_LINGER to close forcibly in 1 second
struct linger l;
l.l_onoff = 1;
l.l_linger = 1;
if (setsockopt(client_socket, SOL_SOCKET, SO_LINGER, &l, sizeof(l)) == -1)
abort();
These have made no difference. Any ideas would be greatly appreciated.
EDIT -- This ended up being a thread-safety issue inside a queue library causing the socket to be handled inappropriately by multiple threads.
Here is some code I've used on many Unix-like systems (e.g SunOS 4, SGI IRIX, HPUX 10.20, CentOS 5, Cygwin) to close a socket:
int getSO_ERROR(int fd) {
    int err = 1;
    socklen_t len = sizeof err;
    if (-1 == getsockopt(fd, SOL_SOCKET, SO_ERROR, (char *)&err, &len))
        FatalError("getSO_ERROR");
    if (err)
        errno = err;              // set errno to the socket SO_ERROR
    return err;
}

void closeSocket(int fd) {        // *not* the Windows closesocket()
    if (fd >= 0) {
        getSO_ERROR(fd);          // first clear any errors, which can cause close to fail
        if (shutdown(fd, SHUT_RDWR) < 0)              // secondly, terminate the 'reliable' delivery
            if (errno != ENOTCONN && errno != EINVAL) // SGI causes EINVAL
                Perror("shutdown");
        if (close(fd) < 0)        // finally call close()
            Perror("close");
    }
}
But the above does not guarantee that any buffered writes are sent.
Graceful close: It took me about 10 years to figure out how to close a socket. But for another 10 years I just lazily called usleep(20000) for a slight delay to 'ensure' that the write buffer was flushed before the close. This obviously is not very clever, because:
The delay was too long most of the time.
The delay was too short some of the time--maybe!
A signal such as SIGCHLD could occur and cut usleep() short (but I usually called usleep() twice to handle this case, which is a hack).
There was no indication whether this works. But this is perhaps not important if a) hard resets are perfectly ok, and/or b) you have control over both sides of the link.
But doing a proper flush is surprisingly hard. Using SO_LINGER is apparently not the way to go; see for example:
http://msdn.microsoft.com/en-us/library/ms740481%28v=vs.85%29.aspx
https://www.google.ca/#q=the-ultimate-so_linger-page
And SIOCOUTQ appears to be Linux-specific.
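For what it's worth, a Linux-only drain based on SIOCOUTQ might look like this sketch (my own, untested names; the ioctl reports the number of bytes still sitting in the socket send queue):
#include <sys/ioctl.h>
#include <linux/sockios.h>   /* SIOCOUTQ: Linux-specific */
#include <unistd.h>

/* Wait (bounded) for the kernel send queue to drain before close(). */
static int waitForOutqDrain(int fd, int max_tries) {
    int pending;
    while (max_tries-- > 0) {
        if (ioctl(fd, SIOCOUTQ, &pending) < 0)
            return -1;       /* not a socket, or not Linux */
        if (pending == 0)
            return 0;        /* all queued data sent (for TCP, also ACKed) */
        usleep(1000);        /* back off briefly and re-check */
    }
    return -1;               /* gave up with data still queued */
}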
Note that shutdown(fd, SHUT_WR) only half-closes the connection: it does not stop the peer from writing, so the socket can (and, for a graceful close, must) still be read afterwards.
This code flushSocketBeforeClose() waits until a read of zero bytes, or until the timer expires. The function haveInput() is a simple wrapper for select(2), and is set to block for up to 1/100th of a second.
bool haveInput(int fd, double timeout) {
    int status;
    fd_set fds;
    struct timeval tv;
    FD_ZERO(&fds);
    FD_SET(fd, &fds);
    tv.tv_sec = (long)timeout;                             // cast needed for C++
    tv.tv_usec = (long)((timeout - tv.tv_sec) * 1000000);  // 'suseconds_t'
    while (1) {
        if (!(status = select(fd + 1, &fds, 0, 0, &tv)))
            return FALSE;
        else if (status > 0 && FD_ISSET(fd, &fds))
            return TRUE;
        else if (status > 0)
            FatalError("I am confused");
        else if (errno != EINTR)
            FatalError("select");  // tbd EBADF: man page "an error has occurred"
    }
}
bool flushSocketBeforeClose(int fd, double timeout) {
    const double start = getWallTimeEpoch();
    char discard[99];
    ASSERT(SHUT_WR == 1);
    if (shutdown(fd, 1) != -1)
        while (getWallTimeEpoch() < start + timeout)
            while (haveInput(fd, 0.01))  // can block for 0.01 secs
                if (!read(fd, discard, sizeof discard))
                    return TRUE;         // success!
    return FALSE;
}
Example of use:
if (!flushSocketBeforeClose(fd, 2.0))  // can block for 2s
    printf("Warning: Cannot gracefully close socket\n");
closeSocket(fd);
In the above, my getWallTimeEpoch() is similar to time(), and Perror() is a wrapper for perror().
Edit: Some comments:
My first admission is a bit embarrassing. The OP and Nemo challenged the need to clear the internal so_error before close, but I cannot now find any reference for this. The system in question was HPUX 10.20. After a failed connect(), just calling close() did not release the file descriptor, because the system wished to deliver an outstanding error to me. But I, like most people, never bothered to check the return value of close. So I eventually ran out of file descriptors (ulimit -n), which finally got my attention.
(very minor point) One commentator objected to the hard-coded numerical arguments to shutdown(), rather than e.g. SHUT_WR for 1. The simplest answer is that Windows uses different #defines/enums e.g. SD_SEND. And many other writers (e.g. Beej) use constants, as do many legacy systems.
Also, I always, always, set FD_CLOEXEC on all my sockets, since in my applications I never want them passed to a child and, more importantly, I don't want a hung child to impact me.
Sample code to set CLOEXEC:
static void setFD_CLOEXEC(int fd) {
    int status = fcntl(fd, F_GETFD, 0);
    if (status >= 0)
        status = fcntl(fd, F_SETFD, status | FD_CLOEXEC);
    if (status < 0)
        Perror("Error getting/setting socket FD_CLOEXEC flags");
}
Great answer from Joseph Quinsey. I have comments on the haveInput function. I wonder how likely it is that select returns an fd you did not include in your set. This would be a major OS bug, IMHO. That's the kind of thing I would check if I were writing unit tests for the select function, not in an ordinary app.
if (!(status = select(fd + 1, &fds, 0, 0, &tv)))
    return FALSE;
else if (status > 0 && FD_ISSET(fd, &fds))
    return TRUE;
else if (status > 0)
    FatalError("I am confused");  // <--- fd unknown to function
My other comment pertains to the handling of EINTR. In theory, you could get stuck in an infinite loop if select kept returning EINTR, as this error lets the loop start over. Given the very short timeout (0.01), it appears highly unlikely to happen. However, I think the appropriate way of dealing with this would be to return errors to the caller (flushSocketBeforeClose). The caller can keep calling haveInput as long as its timeout hasn't expired, and declare failure for other errors.
ADDITION #1
flushSocketBeforeClose will not exit quickly in case of read returning an error. It will keep looping until the timeout expires. You can't rely on the select inside haveInput to anticipate all errors. read has errors of its own (ex: EIO).
while (haveInput(fd, 0.01))
    if (!read(fd, discard, sizeof discard))  // <-- -1 does not end the loop
        return TRUE;
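Something along these lines, sketched against the original names, would bail out on a read error instead:
while (getWallTimeEpoch() < start + timeout)
    while (haveInput(fd, 0.01)) {          /* can block for 0.01 secs */
        ssize_t n = read(fd, discard, sizeof discard);
        if (n == 0)
            return TRUE;                   /* orderly EOF: the flush succeeded */
        if (n < 0)
            return FALSE;                  /* read error (e.g. EIO): stop looping */
    }
return FALSE;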
This sounds to me like a bug in your Linux distribution.
The GNU C library documentation says:
When you have finished using a socket, you can simply close its file
descriptor with close
Nothing about clearing any error flags or waiting for the data to be flushed or any such thing.
Your code is fine; your O/S has a bug.
Include:
#include <unistd.h>
This should help solve the close() problem.
In the man page for the system call write(2) -
ssize_t write(int fd, const void *buf, size_t count);
it says the following:
Return Value
On success, the number of bytes written is returned (zero indicates nothing was written). On error, -1 is returned, and errno is set appropriately. If count is zero and the file descriptor refers to a regular file, 0 may be returned, or an error could be detected. For a special file, the results are not portable.
I would interpret this to mean that returning 0 simply means that nothing was written, for whatever arbitrary reason.
However, Stevens in UNP treats a return value of 0 as a fatal error when dealing with a file descriptor that is a TCP socket (this is wrapped by another function which calls exit(1) on a short count):
ssize_t                        /* Write "n" bytes to a descriptor. */
writen(int fd, const void *vptr, size_t n)
{
    size_t nleft;
    ssize_t nwritten;
    const char *ptr;

    ptr = vptr;
    nleft = n;
    while (nleft > 0) {
        if ((nwritten = write(fd, ptr, nleft)) <= 0) {
            if (nwritten < 0 && errno == EINTR)
                nwritten = 0;  /* and call write() again */
            else
                return (-1);   /* error */
        }
        nleft -= nwritten;
        ptr += nwritten;
    }
    return (n);
}
He only treats 0 as a legitimate return value if errno indicates that the call to write was interrupted by the process receiving a signal.
Why?
Stevens probably does this to catch old implementations of write() that behaved differently. For instance, the Single UNIX Specification says (http://www.opengroup.org/onlinepubs/000095399/functions/write.html):
Where this volume of IEEE Std 1003.1-2001 requires -1 to be returned and errno set to [EAGAIN], most historical implementations return zero
This will ensure that the code does not spin indefinitely, even if the file descriptor is not a TCP socket or unexpected non-blocking flags are in effect. On some systems, certain legacy non-blocking modes (e.g. O_NDELAY) cause write() to return 0 (without setting errno) if no data can be written without blocking, at least for certain types of file descriptors. (The POSIX standard O_NONBLOCK uses an error return for this case.) And some of the non-blocking modes on some systems apply to the underlying object (e.g. socket, fifo) rather than the file descriptor, and so could even have been enabled by another process having an open file descriptor for the same object. The code protects itself from spinning in such a situation by simply treating it as an error, since it is not intended for use with non-blocking modes.
Also, just to be somewhat pedantic here: if you are not writing to a socket, I would check to make sure that the buffer length ("count" in the first example) is actually being calculated correctly. In the Stevens example, you wouldn't even execute the write() call if the buffer length were 0.
As your man page says, the return value of 0 is "not portable" for special files. Sockets are special files, so the result could mean something different for them.
Usually for sockets, a value of 0 bytes from read() or write() is an indication that the socket has closed, and after receiving 0, subsequent calls will return -1 with an error code.
UPDATE: I updated the code and problem description to reflect my changes.
I know now that I'm trying a socket operation on a non-socket, or that my fd_set is not valid, since:
select returns -1 and
WSAGetLastError() returns 10038.
But I can't seem to figure out what the problem is. The platform is Windows. I have not posted the WSAStartup part.
int loop = 0;
int forceExit = 0;
char buff[1024];
FILE *output;

int main()
{
    fd_set fd;
    output = _popen("tail -f test.txt", "r");
    while (forceExit == 0)
    {
        FD_ZERO(&fd);
        FD_SET(_fileno(output), &fd);
        int returncode = select(_fileno(output) + 1, &fd, NULL, NULL, NULL);
        if (returncode == 0)
        {
            printf("timed out");
        }
        else if (returncode < 0)
        {
            printf("returncode: %d\n", returncode);
            printf("Last Error: %d\n", WSAGetLastError());
        }
        else
        {
            if (FD_ISSET(_fileno(output), &fd))
            {
                if (fgets(buff, sizeof(buff), output) != NULL)
                {
                    printf("Output: %s\n", buff);
                }
            }
            else
            {
                printf(".");
            }
        }
        Sleep(500);
    }
    return 0;
}
The outcome now is, of course, the printout of the return code and the last error.
You have some data ready to be read, but you are not actually reading anything. When you poll the descriptor next time, the data will still be there. Drain the pipe before you continue to poll.
As far as I can tell, Windows anonymous pipes cannot be used with readiness calls like select (Winsock's select works only on sockets). So, while your _popen and select code looks good independently, you can't join the two together.
Here's a similar thread elsewhere.
It's possible that calling SetNamedPipeHandleState with the PIPE_NOWAIT flag might work for you, but MSDN is more than a little cryptic on the subject.
So, I think you need to look at other ways of achieving this. I'd suggest having the reading in a separate thread, and use normal blocking I/O.
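A minimal sketch of that approach, assuming the same tail -f command (the thread function name, the stop flag, and the buffer size are my own choices):
#include <windows.h>
#include <process.h>    /* _beginthreadex */
#include <stdio.h>

static volatile LONG stopRequested = 0;

/* Reader thread: plain blocking fgets() on the _popen() stream. */
static unsigned __stdcall readerThread(void *arg) {
    FILE *stream = (FILE *)arg;
    char line[1024];
    while (!stopRequested && fgets(line, sizeof line, stream) != NULL)
        printf("Output: %s", line);
    return 0;
}

int main(void) {
    FILE *output = _popen("tail -f test.txt", "r");
    if (output == NULL)
        return 1;
    HANDLE h = (HANDLE)_beginthreadex(NULL, 0, readerThread, output, 0, NULL);
    /* ... the main thread is free to do other work here ... */
    WaitForSingleObject(h, INFINITE);
    CloseHandle(h);
    _pclose(output);
    return 0;
}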
First of all, as you and others have pointed out, select() is only valid for sockets under Windows. select() does not work on streams, which is what _popen() returns; error 10038 (WSAENOTSOCK) identifies this clearly.
I don't get what the purpose of your example is. If you simply want to spawn a process and collect its stdout, just do this (which comes directly from the MSDN _popen page):
int main(void)
{
    char psBuffer[128];
    FILE *pPipe;

    if ((pPipe = _popen("tail -f test.txt", "rt")) == NULL)
        exit(1);

    /* Read pipe until end of file, or an error occurs. */
    while (fgets(psBuffer, 128, pPipe))
    {
        printf("%s", psBuffer);
    }

    /* Close pipe and print return value of pPipe. */
    if (feof(pPipe))
    {
        printf("\nProcess returned %d\n", _pclose(pPipe));
    }
    else
    {
        printf("Error: Failed to read the pipe to the end.\n");
    }
}
That's it. No select required.
And I'm not sure how threads would help you here; they would just complicate your problem.
The first thing I notice is that you are calling FD_ISSET on your exceptfds in each conditional. I think you want something like this:
if (FD_ISSET(filePointer, &fd))
{
    printf("i have data\n");
}
else ....
The except field in select() is typically used to report errors or out-of-band data on a socket. When one of the descriptors in your exception set is flagged, it doesn't necessarily mean an error, but rather some "message" (i.e. out-of-band data). I suspect that for your application you can probably get by without putting your file descriptor in an exception set. If you truly want to check for errors, you need to check the return value of select and do something if it returns -1 (or SOCKET_ERROR on Windows). I'm not sure of your platform, so I can't be more specific about the return code.
select()'s first argument is the highest-numbered file descriptor in your set, plus 1 (i.e. _fileno(output) + 1):
select(_fileno(output) + 1, &fd, NULL, &exceptfds, NULL);
The first FD_ISSET(...) should be on the fd_set fd.
if (FD_ISSET(filePointer, &fd))
Your data stream has data, then you need to read that data stream. Use fgets(...) or similar to read from the data source.
char buf[1024];
...
fgets(buf, sizeof(buf), output);
The first argument to select needs to be the highest-numbered file descriptor in any of the three sets, plus 1:
int select(int nfds, fd_set *readfds, fd_set *writefds,
fd_set *exceptfds, struct timeval *timeout);
Also:
if (FD_ISSET(filePointer, &exceptfds))
{
    printf("i have data\n");
}
Should be:
if (FD_ISSET(filePointer, &fd))
{
    printf("i have data\n");
}
You should check the return code from select().
You also need to reset the fdsets each time you call select().
You don't need timeout since you're not using it.
Edit:
Apparently on Windows, nfds is ignored, but should probably be set correctly, just so the code is more portable.
If you want to use a timeout, you need to pass it into the select call as the last argument:
// Reset fd, exceptfds, and timeout before each select()...
int result = select(maxFDPlusOne, &fd, NULL, &exceptfds, &timeout);
if (result == 0)
{
    // timeout
}
else if (result < 0)
{
    // error
}
else
{
    // something happened
    if (FD_ISSET(filePointer, &fd))
    {
        // Need to read the data, otherwise you'll get notified each time.
    }
}
Since select doesn't work here, I used threads instead, specifically _beginthread/_beginthreadex.