C - select() seems to block for longer than timeout

I am writing a data acquisition program that needs to:
wait for serial data with select(),
read serial data (RS232 at 115200 baud),
timestamp it (clock_gettime()),
read an ADC on SPI,
interpret it,
send new data over another tty device
loop and repeat
The ADC is irrelevant for now.
At the end of the loop I use select() again with a 0 timeout to poll and see whether data is already available; if it is, it means I have overrun. That is, I expect the loop to finish before more data arrives, and the select() at the start of the loop to block and pick the data up as soon as it comes in.
The data should arrive every 5 ms; my first select() timeout is calculated as (5.5 ms - loop time), which should be about 4 ms.
I get no timeouts, but many overruns.
Examining the timestamps reveals that select() blocks for longer than the timeout (but still returns > 0).
It looks like select() gets the data before the timeout but returns late.
This might happen 20 times in 1000 repeats.
What might be the cause? How do I fix it?
EDIT:
Here is a cut-down version of the code (I do much more error checking than this!):
#include <stdio.h>       /* printf(), perror(), snprintf() */
#include <stdint.h>      /* uint8_t, uint16_t */
#include <stdlib.h>      /* malloc() */
#include <string.h>      /* strnlen() */
#include <time.h>        /* clock_gettime() */
#include <pthread.h>     /* pthread_setschedparam() */
#include <sched.h>       /* SCHED_FIFO, struct sched_param */
#include <sys/select.h>  /* select(), fd_set, struct timeval */
#include <unistd.h>      /* write() */
#include <bcm2835.h>     /* for bcm2835_init(), bcm2835_close() */
int main(int argc, char **argv){
int err = 0;
/* Set real time priority SCHED_FIFO */
struct sched_param sp;
sp.sched_priority = 30;
if ( pthread_setschedparam(pthread_self(), SCHED_FIFO, &sp) ){
perror("pthread_setschedparam():");
err = 1;
}
/* 5ms between samples on /dev/ttyUSB0 */
int interval = 5;
/* Setup tty devices with termios, both totally uncooked, 8 bit, odd parity, 1 stop bit, 115200baud */
int fd_wc=setup_serial("/dev/ttyAMA0");
int fd_sc=setup_serial("/dev/ttyUSB0");
/* Setup GPIO for SPI, SPI mode, clock is ~1MHz which equates to more than 50ksps */
bcm2835_init();
setup_mcp3201spi();
int collecting = 1;
struct timespec starttime;
struct timespec time;
struct timespec ftime;
ftime.tv_nsec = 0;
fd_set readfds;
int countfd;
struct timeval interval_timeout;
struct timeval notime;
uint16_t p1;
float w1;
uint8_t *datap = malloc(8);
int data_size;
char output[25];
clock_gettime(CLOCK_MONOTONIC, &starttime);
while ( !err && collecting ){
/* Set timeout to (5*1.2)ms - (looptime)ms, or 0 if looptime was longer than (5*1.2)ms */
interval_timeout.tv_sec = 0;
interval_timeout.tv_usec = interval * 1200 - ftime.tv_nsec / 1000;
interval_timeout.tv_usec = (interval_timeout.tv_usec < 0)? 0 : interval_timeout.tv_usec;
FD_ZERO(&readfds);
FD_SET(fd_wc, &readfds);
FD_SET(0, &readfds); /* so that we can quit, code not included */
if ( (countfd=select(fd_wc+1, &readfds, NULL, NULL, &interval_timeout))<0 ){
perror("select()");
err = 1;
} else if (countfd == 0){
printf("Timeout on select()\n");
fflush(stdout);
err = 1;
} else if (FD_ISSET(fd_wc, &readfds)){
/* timestamp for when data is just available */
clock_gettime(CLOCK_MONOTONIC, &time);
if (starttime.tv_nsec > time.tv_nsec){
time.tv_nsec = 1000000000 + time.tv_nsec - starttime.tv_nsec;
time.tv_sec = time.tv_sec - starttime.tv_sec - 1;
} else {
time.tv_nsec = time.tv_nsec - starttime.tv_nsec;
time.tv_sec = time.tv_sec - starttime.tv_sec;
}
/* get ADC value, which is sampled fast so corresponds to timestamp */
p1 = getADCvalue();
/* receive_frame, receiving is slower so do it after getting ADC value. It is timestamped anyway */
/* This function consists of a loop that gets data from serial 1 byte at a time until a 'frame' is collected. */
/* it uses select() with a very short timeout (enough for 1 byte at baudrate) just to check comms are still going */
/* It never times out and behaves well */
/* The interval_timeout is passed because it is used as a timeout for responding an ACK to the device */
/* That select also never times out */
ireceive_frame(&datap, fd_wc, &data_size, interval_timeout.tv_sec, interval_timeout.tv_usec);
/* do stuff with it */
/* This takes most of the time in the loop, about 1.3ms at 115200 baud */
snprintf(output, 24, "%ld.%04ld,%d,%.2f\n", (long)time.tv_sec, time.tv_nsec/100000, p1, w1);
write(fd_sc, output, strnlen(output, 23));
/* Check how long the loop took (minus the polling select() that follows */
clock_gettime(CLOCK_MONOTONIC, &ftime);
if ((time.tv_nsec+starttime.tv_nsec) > ftime.tv_nsec){
ftime.tv_nsec = 1000000000 + ftime.tv_nsec - time.tv_nsec - starttime.tv_nsec;
ftime.tv_sec = ftime.tv_sec - time.tv_sec - starttime.tv_sec - 1;
} else {
ftime.tv_nsec = ftime.tv_nsec - time.tv_nsec - starttime.tv_nsec;
ftime.tv_sec = ftime.tv_sec - time.tv_sec - starttime.tv_sec;
}
/* Poll with 0 timeout to check that data hasn't arrived before we're ready yet */
FD_ZERO(&readfds);
FD_SET(fd_wc, &readfds);
notime.tv_sec = 0;
notime.tv_usec = 0;
if ( !err && ( (countfd=select(fd_wc+1, &readfds, NULL, NULL, &notime)) < 0 )){
perror("select()");
err = 1;
} else if (countfd > 0){
printf("OVERRUN!\n");
snprintf(output, 25, ",,,%ld.%04ld\n\n", (long)ftime.tv_sec, ftime.tv_nsec/100000);
write(fd_sc, output, strnlen(output, 24));
}
}
}
return 0;
}
The timestamps I see on the serial stream that I output are fairly regular (a deviation is usually caught up by the next loop). A snippet of the output:
6.1810,0,225.25
6.1867,0,225.25
6.1922,0,225.25
6.2063,0,225.25
,,,0.0010
Here, up to 6.1922 s everything is OK. The next sample is 6.2063 - 14.1 ms after the last - but it didn't time out, nor did the previous loop from 6.1922 to 6.2063 catch the overrun with the polling select(). My conclusion is that the last loop was within the sampling time and select() took about 10 ms too long to return without timing out.
The ,,,0.0010 indicates the loop time (ftime) of the following loop - I should really be checking what the loop time was when it went wrong. I'll try that tomorrow.

The timeout passed to select is a rough lower bound — select is allowed to delay your process for slightly more than that. In particular, your process will be delayed if it is preempted by a different process (a context switch), or by interrupt handling in the kernel.
Here is what the Linux manual page has to say on the subject:
Note that the timeout interval will be rounded up to the system clock
granularity, and kernel scheduling delays mean that the blocking
interval may overrun by a small amount.
And here's the POSIX standard:
Implementations may
also place limitations on the granularity of timeout intervals. If the
requested timeout interval requires a finer granularity than the
implementation supports, the actual timeout interval shall be
rounded up to the next supported value.
Avoiding that is difficult on a general purpose system. You will get reasonable results, especially on a multi-core system, by locking your process in memory (mlockall) and setting your process to a real-time priority (use sched_setscheduler with SCHED_FIFO, and remember to sleep often enough to give other processes a chance to run).
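As a rough sketch of that setup (the priority value is arbitrary, and both calls normally require root or CAP_SYS_NICE):
#include <sched.h>
#include <stdio.h>
#include <sys/mman.h>

static int go_realtime(void)
{
    struct sched_param sp = { .sched_priority = 30 }; /* example priority */

    /* Lock current and future pages so page faults don't add latency */
    if (mlockall(MCL_CURRENT | MCL_FUTURE) == -1) {
        perror("mlockall()");
        return -1;
    }
    /* SCHED_FIFO: run until we block (e.g. in select()) or a higher-priority
       real-time task preempts us */
    if (sched_setscheduler(0, SCHED_FIFO, &sp) == -1) {
        perror("sched_setscheduler()");
        return -1;
    }
    return 0;
}
Your loop already blocks in select() on every pass, which is what gives lower-priority processes a chance to run.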
A much more difficult approach is to use a real-time microcontroller that is dedicated to running the real-time code. Some people claim to reliably sample at 20MHz on fairly cheap hardware using that technique.

If values for struct timeval are set to zero, then select will not block, but if timeout argument is a NULL pointer, it will...
If the timeout argument is not a NULL pointer, it points to an object of type struct timeval that specifies a maximum interval to
wait for the selection to complete. If the timeout argument points to
an object of type struct timeval whose members are 0, select() does
not block. If the timeout argument is a NULL pointer, select()
blocks until an event causes one of the masks to be returned with
a valid (non-zero) value or until a signal occurs that needs to be
delivered. If the time limit expires before any event occurs that
would cause one of the masks to be set to a non-zero value, select()
completes successfully and returns 0.
Read more here
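To make the distinction concrete, a minimal sketch (fd stands for any open descriptor):
fd_set rfds;
struct timeval zero = { 0, 0 };

FD_ZERO(&rfds);
FD_SET(fd, &rfds);
/* Zero timeout: pure poll, returns immediately (0 if nothing is ready) */
int polled = select(fd + 1, &rfds, NULL, NULL, &zero);

FD_ZERO(&rfds);
FD_SET(fd, &rfds);
/* NULL timeout: blocks until fd is readable or a signal is delivered */
int blocked = select(fd + 1, &rfds, NULL, NULL, NULL);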
EDIT to address comments, and add new information:
A couple of noteworthy points.
First - in the comments, there is a suggestion to add sleep() to your worker loop. This is a good suggestion. The reasons stated here, although dealing with thread entry points, still apply, since you are instantiating a continuous loop.
Second - Linux select() is a system call with an interesting implementation history, and as such its behaviour varies from implementation to implementation, some of which may contribute to the unexpected behaviour you are seeing. I am not sure which of the major lineages of Linux Arch Linux comes from, but the man7.org page for select() includes the following two segments, which per your description appear to describe conditions that could possibly contribute to the delays you are experiencing.
Bad checksum:
Under Linux, select() may report a socket file descriptor as "ready
for reading", while nevertheless a subsequent read blocks. This could
for example happen when data has arrived but upon examination has wrong
checksum and is discarded.
Race condition: (introduces and discusses pselect())
...Suppose the signal handler sets a global flag and returns. Then a test
of this global flag followed by a call of select() could hang indefinitely
if the signal arrived just after the test but just before the call...
Given the description of your observations, and depending on how your version of Linux is implemented, either one of these implementation features may be a possible contributor.
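For the race-condition case, the usual remedy (sketched here with a hypothetical got_signal flag set by a handler for SIGINT) is to keep the signal blocked while testing the flag, and let pselect() unblock it atomically only while waiting:
#include <signal.h>
#include <sys/select.h>

extern volatile sig_atomic_t got_signal; /* hypothetical: set by the signal handler */

sigset_t blocked, orig;
sigemptyset(&blocked);
sigaddset(&blocked, SIGINT);
sigprocmask(SIG_BLOCK, &blocked, &orig); /* the signal cannot be delivered here */

fd_set rfds;
FD_ZERO(&rfds);
FD_SET(fd_wc, &rfds);

if (!got_signal) {
    /* pselect() installs 'orig' only for the duration of the wait,
       so a signal arriving after the test still interrupts the call */
    int n = pselect(fd_wc + 1, &rfds, NULL, NULL, NULL, &orig);
    /* ... handle n ... */
}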

Related

How to use timerfd properly?

I use timerfd with zmq.
How can I use timerfd_create and timerfd_set to wait one second for the timer (https://man7.org/linux/man-pages/man2/timerfd_create.2.html)?
I have looked through the link but I still do not get how I can initialize a timer that waits one second per tick with create and set. This is exactly my task:
We start a timer with timerfd_create(), which ticks once per second. When setting a timer with timer_set_(..), a
counter is simply incremented, and it is decremented with every tick. When the counter reaches 0, the timer
has expired.
In this project we have a function timer_set_(), where the timer is set with the functions timerfd_create() and timerfd_settime(). I hope you can help me.
This is my progress (part of my code):
struct itimerspec timerValue;
g_items[n].socket = nullptr;
g_items[n].events = ZMQ_POLLIN;
g_items[n].fd = timerfd_create(CLOCK_REALTIME, 0);
if(g_items[n].fd == -1 ){
printf("timerfd_create() failed: errno=%d\n", errno);
return -1;
}
timerValue.it_value.tv_sec = 1;
timerValue.it_value.tv_nsec = 0;
timerValue.it_interval.tv_sec = 1;
timerValue.it_interval.tv_nsec = 0;
timerfd_settime(g_items[n].fd, 0, &timerValue, NULL);
The question appears to be about setting the timer's timeouts correctly.
With the settings
timerValue.it_value.tv_sec = 1;
timerValue.it_value.tv_nsec = 0;
timerValue.it_interval.tv_sec = 1;
timerValue.it_interval.tv_nsec = 0;
You are correctly setting the initial timeout to 1 s (field timerValue.it_value). But you are also setting a periodic interval of 1 s, and you didn't mention wanting a periodic timer.
About the timeouts
This behavior is described by the following passage of the manual:
int timerfd_create(int clockid, int flags);
new_value.it_value specifies the initial expiration of the timer, in seconds and nanoseconds. Setting either field of new_value.it_value to a nonzero value arms the timer. Setting both fields of new_value.it_value to zero disarms the timer.
Setting one or both fields of new_value.it_interval to nonzero values specifies the period, in seconds and nanoseconds, for repeated timer expirations after the initial expiration. If both fields of new_value.it_interval are zero, the timer expires just once, at the time specified by new_value.it_value.
The emphasis on the last paragraph is mine, as it shows what to do in order to have a single-shot timer.
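So, for a single-shot timer that fires once after one second, you would zero it_interval, for example:
struct itimerspec oneShot;
oneShot.it_value.tv_sec = 1;     /* first (and only) expiration after 1 s */
oneShot.it_value.tv_nsec = 0;
oneShot.it_interval.tv_sec = 0;  /* no period: the timer expires just once */
oneShot.it_interval.tv_nsec = 0;
timerfd_settime(g_items[n].fd, 0, &oneShot, NULL);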
The benefits of timerfd. How to detect timer expiration?
The main advantage provided by timerfd is that the timer is associated with a file descriptor, and this means that it
may be monitored by select(2), poll(2), and epoll(7).
The information contained in the other answer about read() is valid as well: even when using functions such as select(), a read() is still required in order to consume the data in the file descriptor.
A complete example
In the following demonstration program, an initial timeout of 4 seconds is set; after that, a periodic interval of 5 seconds applies.
The good old select() is used in order to wait for timer expiration, and read() is used to consume data (that is the number of expired timeouts; we will ignore it).
#include <stdio.h>
#include <sys/timerfd.h>
#include <sys/select.h>
#include <time.h>
#include <unistd.h> /* read() */
int main()
{
int tfd = timerfd_create(CLOCK_REALTIME, 0);
printf("Starting at (%d)...\n", (int)time(NULL));
if(tfd > 0)
{
char dummybuf[8];
struct itimerspec spec =
{
{ 5, 0 }, // Set to {0, 0} if you need a one-shot timer
{ 4, 0 }
};
timerfd_settime(tfd, 0, &spec, NULL);
/* Wait */
fd_set rfds;
int retval;
/* Watch timefd file descriptor */
FD_ZERO(&rfds);
FD_SET(0, &rfds);
FD_SET(tfd, &rfds);
/* Let's wait for initial timer expiration */
retval = select(tfd+1, &rfds, NULL, NULL, NULL); /* Last parameter = NULL --> wait forever */
printf("Expired at %d! (%d) (%d)\n", (int)time(NULL), retval, (int)read(tfd, dummybuf, 8) );
/* Let's wait (twice) for periodic timer expiration */
retval = select(tfd+1, &rfds, NULL, NULL, NULL);
printf("Expired at %d! (%d) (%d)\n", (int)time(NULL), retval, (int)read(tfd, dummybuf, 8) );
retval = select(tfd+1, &rfds, NULL, NULL, NULL);
printf("Expired at %d! (%d) (%d)\n", (int)time(NULL), retval, (int)read(tfd, dummybuf, 8) );
}
return 0;
}
And here is the output. Every row also contains a timestamp, so that the actual elapsed time can be checked:
Starting at (1596547762)...
Expired at 1596547766! (1) (8)
Expired at 1596547771! (1) (8)
Expired at 1596547776! (1) (8)
Please note:
We just performed 3 reads, as a test
The intervals are 4 s + 5 s + 5 s (initial timeout + two interval timeouts)
8 bytes are returned by each read(). We ignored them, but they contain the number of expired timeouts
With timerfds, the idea is that a read on the fd will return the number of times the timer has expired.
From the timerfd_settime(2) man page:
Operating on a timer file descriptor
The file descriptor returned by timerfd_create() supports the following operations:
read(2)
If the timer has already expired one or more times since its settings were last modified using
timerfd_settime(), or since the last successful read(2), then the buffer given to read(2) returns
an unsigned 8-byte integer (uint64_t) containing the number of expirations that have occurred.
If no timer expirations have occurred at the time of the read(2), then the call either blocks
until the next timer expiration, or fails with the error EAGAIN if the file descriptor has been
made nonblocking (via the use of the fcntl(2) F_SETFL operation to set the O_NONBLOCK flag).
So, basically, you create an unsigned 8 byte integer (uint64_t on Linux), and pass that to your read call.
uint64_t buf;
int expired = read( g_items[n].fd, &buf, sizeof(uint64_t));
if( expired < 0 ) perror("read");
Something like that, if you want to block until you get an expiry.
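If you would rather not block, here is a sketch of the non-blocking variant hinted at in the man page excerpt above (you could equally pass TFD_NONBLOCK to timerfd_create()):
#include <errno.h>
#include <fcntl.h>
#include <stdint.h>
#include <unistd.h>

/* Make the timer fd non-blocking once, after creating it */
fcntl(g_items[n].fd, F_SETFL, fcntl(g_items[n].fd, F_GETFL) | O_NONBLOCK);

uint64_t expirations;
ssize_t r = read(g_items[n].fd, &expirations, sizeof(expirations));
if (r == (ssize_t)sizeof(expirations)) {
    /* 'expirations' = number of times the timer fired since the last read */
} else if (r == -1 && errno == EAGAIN) {
    /* Timer has not expired yet; try again later */
}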

Accuracy of clock_gettime() in a context switch scenario

I'm trying to 'roughly' calculate the time of a thread context switch in a Linux system. I've written a program that uses pipes and multi-threading to achieve this. When running the program the calculated time is clearly wrong (see output below). I am unsure whether this is because I'm using the wrong clock_id for this procedure, or perhaps my implementation is flawed.
I have used sched_setaffinity() so that the program only runs on core 0. I've tried to leave as much fluff out of the code as possible, so as to only measure the time of a context switch: the thread only writes a single character to the pipe and the parent does a 0-byte read.
I have a parent tread that creates one child thread with a one-way pipe between them to pass data, the child thread runs a simple function to write to a pipe.
void* thread_1_function()
{
write(fd2[1],"",sizeof(""));
return NULL;
}
while the parent thread creates the child thread, starts the time counter and then calls a read on the pipe that the child thread writes to.
int main(int argc, char *argv[])
{
//time struct declaration
struct timespec start,end;
//sets program to only use core 0
cpu_set_t cpu_set;
CPU_ZERO(&cpu_set);
CPU_SET(0,&cpu_set);
if((sched_setaffinity(0, sizeof(cpu_set_t), &cpu_set) < 1))
{
int nproc = sysconf(_SC_NPROCESSORS_ONLN);
int k;
printf("Processor used: ");
for(k = 0; k < nproc; ++k)
{
printf("%d ", CPU_ISSET(k, &cpu_set));
}
printf("\n");
if(pipe(fd1) == -1)
{
printf("fd1 pipe error");
return 1;
}
//fail on file descriptor 2 fail
if(pipe(fd2) == -1)
{
printf("fd2 pipe error");
return 1;
}
pthread_t thread_1;
pthread_create(&thread_1, NULL, &thread_1_function, NULL);
pthread_join(thread_1,NULL);
int i;
uint64_t sum = 0;
for(i = 0; i < iterations; ++i)
{
//initalize clock start
clock_gettime(CLOCK_MONOTONIC, &start);
//wait for child thread to write to pipe
read(fd2[0],input,0);
//record clock end
clock_gettime(CLOCK_MONOTONIC, &end);
write(fd1[1],"",sizeof(""));
uint64_t diff;
diff = billion * (end.tv_sec - start.tv_sec) + end.tv_nsec - start.tv_nsec;
diff = diff;
sum += diff;
}
The results I get while running this are typically like this:
3000
3000
4000
2000
12000
3000
5000
and so forth. When I inspect the times returned in the start and end timespec structs, I see that tv_nsec seems to be a 'rounded' number as well:
start.tv_nsec: 714885000, end.tv_nsec: 714888000
Would this be caused by CLOCK_MONOTONIC not being precise enough for what I'm attempting to measure, or some other problem that I'm overlooking?
I see that tv_nsec seems to be a 'rounded' number as well:
2626, 714885000, 2626, 714888000
Would this be caused by CLOCK_MONOTONIC not being precise enough for
what I'm attempting to measure, or some other problem that I'm
overlooking?
Yes, that's a possibility. Every clock supported by the system has a fixed resolution. struct timespec is capable of supporting clocks with nanosecond resolution, but that does not mean that you can expect every clock to actually have such resolution. It looks like your CLOCK_MONOTONIC might have a resolution of 1 microsecond (1000 nanoseconds), but you can check that via the clock_getres() function.
If it is available to you, then you might try CLOCK_PROCESS_CPUTIME_ID. It is possible that that would have higher resolution than CLOCK_MONOTONIC for you, but do note that single-microsecond resolution is pretty precise -- that's on the order of one tick per 3000 CPU cycles on a modern machine.
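A quick, self-contained way to check both clocks' resolutions with clock_getres() might look like this:
#include <stdio.h>
#include <time.h>

int main(void)
{
    struct timespec res;

    clock_getres(CLOCK_MONOTONIC, &res);
    printf("CLOCK_MONOTONIC resolution: %ld s %ld ns\n", (long)res.tv_sec, res.tv_nsec);

    clock_getres(CLOCK_PROCESS_CPUTIME_ID, &res);
    printf("CLOCK_PROCESS_CPUTIME_ID resolution: %ld s %ld ns\n", (long)res.tv_sec, res.tv_nsec);

    return 0;
}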
Even so, I see several possible problems with your approach:
Although you set your process to have affinity for a single CPU, that does not prevent the system from scheduling other processes on that CPU, too. Thus, unless you've taken additional measures, you can't be certain -- it's not even likely -- that every context switch away from one of your program's threads is to the other thread.
You start your second thread and then immediately join it. There is no more context switching between your threads after that, because your second thread no longer exists after being successfully joined.
read() with a count of 0 may or may not check for errors, and it certainly does not transfer any data. It is totally unclear to me why you identify the time for that call with the time for a context switch.
If a context switch does occur in the space you're timing, then at least two need to occur there -- away from your program and back to it. Also, you're measuring the time consumed by whatever else runs in the other context as well, not just the switch time. The 1000-nanosecond steps may thus reflect time slices, rather than switching time.
Your main thread is writing null characters to the write end of a pipe, but there does not appear to be anything reading them. If indeed there isn't then this will eventually fill up the pipe's buffer and block. The purpose is lost on me.
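For what it's worth, points 2 and 5 above could be addressed with a ping-pong arrangement in which the second thread stays alive and echoes every byte back, so each timed interval covers a full round trip (at least two switches, plus whatever else the scheduler runs). A rough sketch, not the OP's program:
#include <pthread.h>
#include <stdint.h>
#include <stdio.h>
#include <time.h>
#include <unistd.h>

#define ITERATIONS 1000

static int ping[2], pong[2]; /* parent -> child and child -> parent pipes */

static void *echo_thread(void *arg)
{
    (void)arg;
    char c;
    for (int i = 0; i < ITERATIONS; ++i) {
        read(ping[0], &c, 1);  /* wait for the parent's byte */
        write(pong[1], &c, 1); /* echo it straight back */
    }
    return NULL;
}

int main(void)
{
    struct timespec start, end;
    uint64_t sum = 0;
    char c = 'x';
    pthread_t t;

    pipe(ping);
    pipe(pong);
    pthread_create(&t, NULL, echo_thread, NULL);

    for (int i = 0; i < ITERATIONS; ++i) {
        clock_gettime(CLOCK_MONOTONIC, &start);
        write(ping[1], &c, 1); /* wake the echo thread ... */
        read(pong[0], &c, 1);  /* ... and wait for its reply */
        clock_gettime(CLOCK_MONOTONIC, &end);
        int64_t diff = (int64_t)(end.tv_sec - start.tv_sec) * 1000000000
                     + (end.tv_nsec - start.tv_nsec);
        sum += diff;
    }
    pthread_join(t, NULL);

    /* Each sample includes at least two context switches plus pipe overhead */
    printf("average round trip: %llu ns\n", (unsigned long long)(sum / ITERATIONS));
    return 0;
}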

Implementing a WatchDog timer

I need to implement a timer that checks for conditions every 35 seconds. My program uses IPC schemes to communicate information back and forth between client and server processes. The problem is that I am calling msgrcv() in a loop, and it blocks the loop until a message arrives, which is no good because I need the timer to always be checking whether a client has stopped sending messages (if it only checks when it receives a message, it is useless...).
The problem may seem unclear, but the basics of what I need is a way to implement a Watchdog timer that will check a condition every 35 seconds.
I currently have this code:
time_t start = time(NULL);
//Enter main processing loop
while(running)
{
size_t size = sizeof(StatusMessage) - sizeof(long);
if(msgrcv(messageID, &statusMessage, size, 0, 0) != -1)
{
printf("(SERVER) Message Data (ID #%ld) = %d : %s\n", statusMessage.machineID, statusMessage.status_code, statusMessage.status);
masterList->msgQueueID = messageID;
int numDCs = ++masterList->numberOfDCs;
masterList->dc[numDCs].dcProcessID = (pid_t) statusMessage.machineID;
masterList->dc[numDCs].lastTimeHeardFrom = 1000;
printf("%d\n", masterList->numberOfDCs);
}
printf("%.2f\n", (double) (time(NULL) - start));
}
The only problem is, as I stated before, that the code to check how much time has passed won't be reached if no message comes in, as the msgrcv() call will block the process.
I hope I am making sense, and that someone will be able to assist me in my problem.
You may want to try msgctl(msqid, IPC_STAT, msqdsbuf), where msqdsbuf is a struct msqid_ds *. If the call is successful, the current number of messages on the queue can be found in msqdsbuf->msg_qnum. The caller needs read permission on the queue, which I think you have here.
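A sketch of that idea folded into your loop (messageID, statusMessage, StatusMessage, and running are the names from your code; the short sleep is just to keep the loop from spinning):
#include <sys/msg.h>
#include <time.h>
#include <unistd.h>

time_t last_check = time(NULL);
size_t size = sizeof(StatusMessage) - sizeof(long);

while (running) {
    struct msqid_ds qinfo;

    /* Non-blocking peek at the queue: msg_qnum is the number of queued messages */
    if (msgctl(messageID, IPC_STAT, &qinfo) == -1) {
        perror("msgctl(IPC_STAT)");
        break;
    }
    if (qinfo.msg_qnum > 0) {
        /* Something is waiting, so this msgrcv() returns immediately */
        if (msgrcv(messageID, &statusMessage, size, 0, 0) != -1) {
            /* ... handle the message exactly as in the original loop ... */
        }
    } else {
        usleep(10000); /* 10 ms nap so the loop doesn't spin */
    }

    if (time(NULL) - last_check >= 35) {
        last_check = time(NULL);
        /* watchdog: check lastTimeHeardFrom for every DC and flag the silent ones */
    }
}
An alternative with the same effect is to call msgrcv() with the IPC_NOWAIT flag and treat a return of -1 with errno == ENOMSG as "nothing yet".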

select for multiple non-blocking connections

I have a single-threaded program. It sends a message to four destinations every five seconds. I don't want connect() to block. So I am writing my program like this:
int j, rc, non_blocking=1, sockets[4], max_fd=0;
struct sockaddr server=get_server_addr();
fd_set fdset;
const struct timeval conn_timeout = { 2, 0 }; /* 2 seconds */
for (j=0; j<4; ++j)
{
sockets[j]=socket( AF_INET, SOCK_STREAM, 0 );
ioctl(sockets[j], FIONBIO, (char *)&non_blocking);
connect(sockets[j], &server, sizeof (server));
}
/* prepare fd_set */
FD_ZERO ( &fdset );
for (j=0;j<4;++j)
{
if (sockets[j] != -1 )
{
FD_SET ( sockets[j], &fdset );
if ( sockets[j] > max_fd )
{
max_fd = sockets[j];
}
}
}
rc=select(max_fd + 1, NULL, &fdset, NULL, &conn_timeout );
if(rc > 0)
{
for (j=0;j<4;++j)
{
if(sockets[j]!=-1 && FD_ISSET(sockets[j],&fdset))
{
/* send() */
}
}
}
/* close all valid sockets */
However, it seems select() returns immediately after ONE file descriptor is ready instead of blocking for conn_timeout (2 seconds). So in this case how can I achieve my targets?
The program continues if all sockets are ready.
The program can block there for 2 seconds if any one of the sockets is not ready.
Yeah, select was designed on the assumption that you would want to service each socket as soon as it became ready.
If I understand what you're trying to do, then the simplest way to accomplish it will be to remove each socket from the fdset as it becomes ready. If there are any sockets left in the set, use gettimeofday to adjust the timeout downward, and call select again. When the set is empty, all four sockets are usable and you can proceed.
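A sketch of that loop, reusing sockets[], max_fd, j, rc, and conn_timeout from the question (timeradd/timersub/timercmp are glibc/BSD convenience macros; clock_gettime() could replace gettimeofday()):
#include <sys/time.h>

struct timeval deadline, now, remaining;
int ready[4] = { 0, 0, 0, 0 };
int pending = 4;

gettimeofday(&now, NULL);
timeradd(&now, &conn_timeout, &deadline); /* absolute time at which we give up */

while (pending > 0) {
    gettimeofday(&now, NULL);
    if (timercmp(&now, &deadline, >=))
        break;                              /* the 2-second budget is spent */
    timersub(&deadline, &now, &remaining);  /* what is left for this pass */

    fd_set wfds;
    FD_ZERO(&wfds);
    for (j = 0; j < 4; ++j)
        if (sockets[j] != -1 && !ready[j])
            FD_SET(sockets[j], &wfds);

    rc = select(max_fd + 1, NULL, &wfds, NULL, &remaining);
    if (rc <= 0)
        break;                              /* timeout or error */

    for (j = 0; j < 4; ++j)
        if (sockets[j] != -1 && !ready[j] && FD_ISSET(sockets[j], &wfds)) {
            ready[j] = 1;                   /* connect finished: check SO_ERROR before send() */
            --pending;
        }
}
/* Here: every socket with ready[j] == 1 became writable within the 2 seconds */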
There are three basic approaches:
If you want to stay strictly portable you need to iterate:
calculate end time from current time and timeout of your choice
Cycle:
-- Create fdset with those fds not yet ready
-- calculate max time to wait
-- select()
-- remember those fds that are now ready
-- break if end time reached or all fds ready
End cycle
Now you have knowledge of the ready fds and the elapsed time
If you want to stay portable, but can use threads:
start n threads
select on one fd per thread
join all threads
If you do not need to be portable: Most OSes have a facility for such a situation, e.g. Windows/.NET has WaitAll (together with async send and an event)
I don't see the connection between your stated targets and your stated problem. You are correct in saying that select() blocks until at least one socket is ready, but according to target #2 above that is exactly what you want. There's nothing in your stated targets about blocking until all four sockets are ready at the same time.
You should also note that sockets are almost always ready for writing, unless the send buffer is full, which means the receiver's receive buffer is full, which means the receiver is slower than the sender. So using select() alone as the underlying write timer isn't a good idea.

Time remaining on a select() call

I'm using select() on a Linux/ARM platform to see if a udp socket has received a packet. I'd like to know how much time was remaining in the select call if it returns before the timeout (having detected a packet).
Something along the lines of:
int wait_fd(int fd, int msec)
{
struct timeval tv;
fd_set rws;
tv.tv_sec = msec / 1000ul;
tv.tv_usec = (msec % 1000ul) * 1000ul;
FD_ZERO( & rws);
FD_SET(fd, & rws);
(void)select(fd + 1, & rws, NULL, NULL, & tv);
if (FD_ISSET(fd, &rws)) { /* There is data */
msec = (tv.tv_sec * 1000) + (tv.tv_usec / 1000);
return(msec?msec:1);
} else { /* There is no data */
return(0);
}
}
The safest thing is to ignore the ambiguous definition of select() and time it yourself.
Just get the time before and after the select and subtract that from the interval you wanted.
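Applied to the wait_fd() from the question, that might look like the following sketch (reusing fd, rws, tv, and msec):
struct timespec before, after;
long elapsed_ms, remaining_ms;

clock_gettime(CLOCK_MONOTONIC, &before);
int rc = select(fd + 1, &rws, NULL, NULL, &tv);
clock_gettime(CLOCK_MONOTONIC, &after);

elapsed_ms = (after.tv_sec - before.tv_sec) * 1000
           + (after.tv_nsec - before.tv_nsec) / 1000000;
remaining_ms = msec - elapsed_ms;
if (remaining_ms < 0)
    remaining_ms = 0; /* select() ran to (or past) the full timeout */

if (rc > 0 && FD_ISSET(fd, &rws)) {
    /* data arrived with remaining_ms of the budget to spare */
}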
If I recall correctly, the select() function treats the timeout as an I/O parameter, and when select() returns, the time remaining is returned in the timeout variable.
Otherwise, you will have to record the current time before calling, and again after, and take the difference between the two.
From "man select" on OSX:
Timeout is not changed by select(), and may be reused on subsequent calls, however it
is good style to re-initialize it before each invocation of select().
You'll need to call gettimeofday before calling select, and then gettimeofday on exit.
[Edit] It seems that linux is slightly different:
(ii) The select function may update the timeout parameter to indicate
how much time was left. The pselect function does not change
this parameter.
On Linux, the function select modifies timeout to reflect the amount of
time not slept; most other implementations do not do this. This causes
problems both when Linux code which reads timeout is ported to other
operating systems, and when code is ported to Linux that reuses a
struct timeval for multiple selects in a loop without reinitializing
it. Consider timeout to be undefined after select returns.
Linux select() updates the timeout argument to reflect the time not slept, i.e. the time remaining.
Note that this is not portable across other systems (hence the warning in the OS X manual quoted above) but does work with Linux.
Gilad
Do not use select(); try your code with an fd larger than 1024 and see what you get.
