Are there any well-behaved POSIX interval timers?

Inspired by the last leap second, I've been exploring timing (specifically, interval timers) using POSIX calls.
POSIX provides several ways to set up timers, but they're all problematic:
sleep and nanosleep—these are annoying to restart after they're interrupted by a signal, and they introduce clock skew. You can avoid some, but not all, of this skew with extra work, but these functions use the realtime clock, so even that isn't without pitfalls.
setitimer or the more modern timer_settime—these are designed to be interval timers, but they're per-process, which is a problem if you need multiple active timers. They also can't be used synchronously, but that's less of a big deal.
clock_gettime and clock_nanosleep seem like the right answer when used with CLOCK_MONOTONIC. clock_nanosleep supports absolute timeouts, so you can just sleep, increment the timeout, and repeat. It's easy to restart after an interruption that way, too. Unfortunately, these functions might as well be Linux-specific: there's no support for them on Mac OS X or FreeBSD.
pthread_cond_timedwait is available on the Mac and, combined with gettimeofday, makes for a kludgy workaround, but it can only use the realtime clock there, so it's subject to misbehavior when the system clock is set or a leap second happens.
Is there an API I'm missing? Is there a reasonably portable way to create well-behaved interval timers on UNIX-like systems, or does this sum up the state of things today?
By well-behaved and reasonably portable, I mean:
Not prone to clock skew (minus, of course, the system clock's own skew)
Resilient to the system clock being set or a leap second occurring
Able to support multiple timers in the same process
Available on at least Linux, Mac OS X, and FreeBSD
A note on leap seconds (in response to R..'s answer):
POSIX days are exactly 86,400 seconds long, but real-world days are occasionally longer or shorter. How the system resolves this discrepancy is implementation-defined, but it's common for the leap second to share the same UNIX timestamp as the preceding second. See also: Leap Seconds and What To Do With Them.
The Linux kernel leap second bug was a result of failing to do housekeeping after setting the clock back a second: https://lkml.org/lkml/2012/7/1/203. Even without that bug, the clock would have jumped backwards by one second.

kqueue and kevent can be used for this. OS X 10.6 and FreeBSD 8.1 added support for EVFILT_USER, which we can use to wake up the event loop from another thread.
Note that if you use this to implement your own condition and timedwait, you do not need locks in order to avoid race conditions, contrary to this excellent answer, because you cannot "miss" an event on the queue.
Sources:
FreeBSD man page
OS X man page
kqueue tutorial
libevent source code
Example Code
Compile with clang -o test -std=c99 test.c
#include <sys/types.h>
#include <sys/event.h>
#include <sys/time.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <pthread.h>

// arbitrary number used for the identifier property
const int NOTIFY_IDENT = 1337;

static int kq;

static void diep(const char *s) {
    perror(s);
    exit(EXIT_FAILURE);
}

static void *run_thread(void *arg) {
    struct kevent kev;
    struct kevent out_kev;
    memset(&kev, 0, sizeof(kev));
    kev.ident = NOTIFY_IDENT;
    kev.filter = EVFILT_USER;
    kev.flags = EV_ADD | EV_CLEAR;

    struct timespec timeout;
    timeout.tv_sec = 3;
    timeout.tv_nsec = 0;

    fprintf(stderr, "thread sleep\n");
    // registers the user event and waits for it (or the 3 s timeout) in one call
    if (kevent(kq, &kev, 1, &out_kev, 1, &timeout) == -1)
        diep("kevent: waiting");
    fprintf(stderr, "thread wakeup\n");
    return NULL;
}

int main(int argc, char **argv) {
    // create a new kernel event queue
    kq = kqueue();
    if (kq == -1)
        diep("kqueue()");

    fprintf(stderr, "spawn thread\n");
    pthread_t thread;
    if (pthread_create(&thread, NULL, run_thread, NULL))
        diep("pthread_create");

    if (argc > 1) {
        fprintf(stderr, "sleep for 1 second\n");
        sleep(1);
        fprintf(stderr, "wake up thread\n");
        struct kevent kev;
        struct timespec timeout = { 0, 0 };
        memset(&kev, 0, sizeof(kev));
        kev.ident = NOTIFY_IDENT;
        kev.filter = EVFILT_USER;
        kev.fflags = NOTE_TRIGGER;
        if (kevent(kq, &kev, 1, NULL, 0, &timeout) == -1)
            diep("kevent: triggering");
    } else {
        fprintf(stderr, "not waking up thread, pass --wakeup to wake up thread\n");
    }

    pthread_join(thread, NULL);
    close(kq);
    return EXIT_SUCCESS;
}
Output
$ time ./test
spawn thread
not waking up thread, pass --wakeup to wake up thread
thread sleep
thread wakeup
real 0m3.010s
user 0m0.001s
sys 0m0.002s
$ time ./test --wakeup
spawn thread
sleep for 1 second
thread sleep
wake up thread
thread wakeup
real 0m1.010s
user 0m0.002s
sys 0m0.002s
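For the interval-timer case itself, kqueue also has EVFILT_TIMER, which fires periodically without any skew bookkeeping on your part. A minimal sketch (the identifier and the 20 ms period are arbitrary choices, and error handling is mostly elided):
#include <sys/types.h>
#include <sys/event.h>
#include <sys/time.h>
#include <stdio.h>

int main(void) {
    int kq = kqueue();
    struct kevent kev, out;
    // for EVFILT_TIMER, data is the timer period, in milliseconds by default
    EV_SET(&kev, 1, EVFILT_TIMER, EV_ADD | EV_ENABLE, 0, 20, NULL);
    if (kq == -1 || kevent(kq, &kev, 1, NULL, 0, NULL) == -1)
        return 1;
    for (int i = 0; i < 5; ++i) {
        if (kevent(kq, NULL, 0, &out, 1, NULL) == 1)  // blocks until the timer fires
            fprintf(stderr, "tick (expirations since last check: %ld)\n", (long)out.data);
    }
    return 0;
}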

POSIX timers (timer_create) do not require signals; you can also arrange for the timer expiration to be delivered in a thread via the SIGEV_THREAD notification type. Unfortunately glibc's implementation actually creates a new thread for each expiration (which both has a lot of overhead and destroys any hope of realtime-quality robustness) despite the fact that the standard allows reuse of the same thread for each expiration.
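For reference, a minimal sketch of the SIGEV_THREAD variant (the 20 ms period is arbitrary; link with -lrt on older glibc), bearing in mind the thread-per-expiration caveat above:
#include <signal.h>
#include <string.h>
#include <time.h>
#include <unistd.h>

static void expired(union sigval sv) {
    // runs on a thread provided by the implementation;
    // glibc starts a fresh thread for every expiration
    (void)sv;
    write(STDERR_FILENO, "tick\n", 5);
}

int main(void) {
    struct sigevent sev;
    memset(&sev, 0, sizeof(sev));
    sev.sigev_notify = SIGEV_THREAD;
    sev.sigev_notify_function = expired;

    timer_t tid;
    if (timer_create(CLOCK_MONOTONIC, &sev, &tid) == -1)
        return 1;

    struct itimerspec its;
    memset(&its, 0, sizeof(its));
    its.it_value.tv_nsec = 20 * 1000000;     // first expiration after 20 ms
    its.it_interval.tv_nsec = 20 * 1000000;  // then every 20 ms
    timer_settime(tid, 0, &its, NULL);

    sleep(1);  // let it tick for a while
    return 0;
}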
Short of that, I would just recommend making your own thread that uses clock_nanosleep with TIMER_ABSTIME and CLOCK_MONOTONIC for an interval timer. Since you mentioned that some broken systems might lack these interfaces, you could simply have a drop-in implementation (based e.g. on pthread_cond_timedwait) on such systems, and accept that it might be lower-quality due to the lack of a monotonic clock; that's just a fundamental limitation of using a low-quality implementation like MacOSX.
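A minimal sketch of that thread's loop (on_tick and the period are placeholders; since the absolute target is only ever incremented, neither signal interruptions nor the loop body's own runtime skew the schedule):
#include <errno.h>
#include <time.h>

// hypothetical callback invoked on each tick
extern void on_tick(void);

void interval_loop(long period_ns)   // period_ns must be < 1000000000
{
    struct timespec next;
    clock_gettime(CLOCK_MONOTONIC, &next);
    for (;;) {
        next.tv_nsec += period_ns;
        if (next.tv_nsec >= 1000000000L) {  // normalize the timespec
            next.tv_nsec -= 1000000000L;
            next.tv_sec += 1;
        }
        // EINTR just means "try again": the absolute target is unchanged
        while (clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME, &next, NULL) == EINTR)
            ;
        on_tick();
    }
}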
As for your concern about leap seconds, if ntpd or similar is making your realtime clock jump backwards when a leap second occurs, that's a serious bug in ntpd. POSIX time (seconds since the epoch) is in units of calendar seconds (exactly 1/86400 of a day) per the standard, not SI seconds, and thus the only place leap second logic belongs on a POSIX system (if anywhere) is in mktime/gmtime/localtime when they convert between time_t and broken-down time. I haven't been following the bugs that hit this time, but they seem to have resulted from system software doing a lot of stupid and wrong stuff, not from any fundamental issue.

You can look at the question here about clock_gettime emulation, which I also supplied an answer for and which helped me as well. I've recently added a simple timer to a little repository I keep for Mac OS X timing that partially emulates POSIX calls. A simple test runs the timer at 2000 Hz. The repo is called PosixMachTiming. Try it out.
PosixMachTiming is based on Mach. Some of the timing-related Mach API seems to have disappeared from Apple's pages and been deprecated, but there are still bits of source code floating around. It looks like AbsoluteTime units and the kernel abstractions found here are the new way of doing things. Anyway, the PosixMachTiming repo still works for me.
Overview of PosixMachTiming
clock_gettime is emulated for CLOCK_REALTIME by Mach function calls that tap into the system realtime clock, dubbed CALENDAR_CLOCK.
clock_gettime is emulated for CLOCK_MONOTONIC using a global variable (extern mach_port_t clock_port). This clock is initialized when the computer turns on (or perhaps when it wakes; I'm not sure). In any case, it's the same port that mach_absolute_time() queries.
clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME, ...) is emulated by calling nanosleep on the difference between the current time and the absolute monotonic target time.
itimer_start() and itimer_step() are based on calling clock_nanosleep for a target absolute monotonic time. At each iteration the target time (not the current time) is incremented by the time-step, so clock skew is not an issue.
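The clock_nanosleep emulation boils down to something like this sketch (not the repo's literal code; monotonic_gettime is a hypothetical wrapper over mach_absolute_time()):
#include <time.h>

// hypothetical wrapper over mach_absolute_time() that yields a timespec
extern void monotonic_gettime(struct timespec *ts);

// sketch of emulating clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME, ...)
// with only a relative nanosleep available
static int abs_nanosleep(const struct timespec *target)
{
    struct timespec now, delta;
    monotonic_gettime(&now);
    delta.tv_sec = target->tv_sec - now.tv_sec;
    delta.tv_nsec = target->tv_nsec - now.tv_nsec;
    if (delta.tv_nsec < 0) {       // borrow from the seconds field
        delta.tv_nsec += 1000000000L;
        delta.tv_sec -= 1;
    }
    if (delta.tv_sec < 0)          // target is already in the past
        return 0;
    return nanosleep(&delta, NULL);
}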
Note that this does not satisfy your requirement to be able to support multiple timers in the same process.

There's also the newer CLOCK_TAI, which isn't subject to the leap-second corrections that the realtime clock is.
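CLOCK_TAI is Linux-specific (kernel 3.10+, glibc 2.21+) and is read like any other clock; note that it only differs from CLOCK_REALTIME once the kernel's TAI offset has been set, e.g. by ntpd:
#include <stdio.h>
#include <time.h>

int main(void)
{
    struct timespec ts;
    if (clock_gettime(CLOCK_TAI, &ts) == 0)
        printf("TAI: %lld.%09ld\n", (long long)ts.tv_sec, ts.tv_nsec);
    return 0;
}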

You can use timer_create() or timerfd_create().
Examples are given in their man pages.
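For example, a minimal timerfd sketch (Linux-only; the 20 ms period is arbitrary): the timer becomes a file descriptor you can read or poll, and each read returns the number of expirations since the last read:
#include <sys/timerfd.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
    int fd = timerfd_create(CLOCK_MONOTONIC, 0);
    if (fd == -1)
        return 1;

    struct itimerspec its;
    memset(&its, 0, sizeof(its));
    its.it_value.tv_nsec = 20 * 1000000;     // first expiration after 20 ms
    its.it_interval.tv_nsec = 20 * 1000000;  // then every 20 ms
    timerfd_settime(fd, 0, &its, NULL);

    for (int i = 0; i < 5; ++i) {
        uint64_t expirations;
        // blocks until the timer fires; a value > 1 means we missed ticks
        if (read(fd, &expirations, sizeof(expirations)) == sizeof(expirations))
            printf("tick x%llu\n", (unsigned long long)expirations);
    }
    close(fd);
    return 0;
}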

Related

Linux timer interval

I want to run a timer with an interval of 5 ms. I created a Linux timer and, when sigalrm_handler is called, I check the time elapsed since the previous call. I'm getting times like 4163, 4422, 4266, 4443, 4470, 4503, 4288 microseconds, when I want intervals of about 5000 microseconds with the least possible error. I don't know why the interval is not constant, but varies and is much lower than it should be.
Here is my code:
static int time_count;
static int counter;
struct itimerval timer = {0};

void sigalrm_handler(int signum)
{
    Serial.print("SIGALRM received, time: ");
    Serial.println(micros() - time_count);
    time_count = micros();
}

void setup() {
    Serial.begin(9600);
    timer.it_value.tv_sec = 1;
    timer.it_interval.tv_usec = 5000;
    signal(SIGALRM, &sigalrm_handler);
    setitimer(ITIMER_REAL, &timer, NULL);
    time_count = micros();
}
I want to run a timer with an interval of 5 ms.
You probably cannot get that period reliably, because it is smaller than what reasonable PC hardware can handle.
As a rule of thumb, 50 Hz (or perhaps 100 Hz) is probably the highest reliable frequency you could get. And it is not a matter of software, but of hardware.
Think of your typical processor cache (a few megabytes): you could need a few milliseconds to fill it. Or think of the time to handle a page fault; it probably takes more than a millisecond.
And the Intel Edison is not a particularly fast processor. I wouldn't be surprised if converting a number to a string and displaying that string on some screen took about a millisecond (but I leave you to check that). This could explain your figures.
Regarding software, see also time(7) (or consider perhaps some busy waiting approach inside the kernel; I don't recommend that).
Also look at /proc/interrupts (see proc(5)) by running cat /proc/interrupts in a shell several times. You'll probably see that the kernel gets interrupted no more often than once every one or a few milliseconds.
BTW, your signal handler calls non-async-signal-safe functions (so it invokes undefined behavior). Read signal(7) & signal-safety(7).
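The usual fix is to do nothing in the handler beyond setting a flag (or writing a byte to a self-pipe) and to do the timing and printing in the main loop; a minimal sketch of the flag variant:
#include <signal.h>

static volatile sig_atomic_t ticked;

static void sigalrm_handler(int signum)
{
    (void)signum;
    ticked = 1;   // async-signal-safe: only set a flag
}

// the main loop then does the unsafe work outside the handler:
//   if (ticked) { ticked = 0; /* read the clock, print, ... */ }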
So it looks like your entire approach is wrong.
Maybe you want some RTOS, at least if you need some hard real-time (and then, you might consider upgrading your hardware to something faster and more costly).

Is it suitable to use usleep as a timer in C

I am writing a C program on Linux. I have a main thread that continuously updates the values of two variables, and another thread that writes those values to a file every 20 milliseconds. I have used usleep to achieve this time interval. Sample code is below.
int main()
{
    ...
    pthread_create(...write_file..); /* start another thread running write_file */
    while (variable1)
    {
        /* update the values of the variables */
    }
    return 0;
}

void write_file()
{
    ...
    fp = fopen("sample.txt", "a");
    while (variable2)
    {
        fprintf(fp, " %d \n", somevariable);
        usleep(20 * 1000);
    }
    fclose(fp);
}
Is it suitable to use the usleep function to achieve a 20 millisecond interval, or should I use some other method, like a timer?
Is usleep accurate enough? Does this sleep function affect the main thread in any way?
Using the sleep() family of functions often results in imprecise timing, especially when the process has many CPU-consuming threads and the required intervals are relatively small, like 20 ms. So you shouldn't assume that a *sleep() call blocks execution for exactly the specified time. In the situation described above, the actual sleep duration may be twice the specified one or more (assuming the kernel is not a real-time one). As a result, you should implement some kind of compensation logic that adjusts the sleep duration for subsequent calls.
A more precise (but of course not ideal) approach is to use POSIX timers. See timer_create(). The most precise timers are the ones that use SIGEV_SIGNAL or SIGEV_THREAD_ID notification (the latter is Linux-only). As the signal number you can use one of the real-time signals (SIGRTMIN to SIGRTMAX), but be aware that pthread implementations often use a few of these signals internally, so choose the actual number carefully. And doing anything in signal handler context requires extra attention, because not every library function may be used safely there. You can find the safe list here.
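A sketch of that approach; blocking the signal and collecting it synchronously with sigwaitinfo sidesteps the handler-safety issues entirely (SIGRTMIN and the 20 ms period are arbitrary choices, and older glibc needs -lrt):
#include <signal.h>
#include <stdio.h>
#include <string.h>
#include <time.h>

int main(void)
{
    // block the signal so we can pick it up synchronously instead of in a handler
    sigset_t set;
    sigemptyset(&set);
    sigaddset(&set, SIGRTMIN);
    sigprocmask(SIG_BLOCK, &set, NULL);

    struct sigevent sev;
    memset(&sev, 0, sizeof(sev));
    sev.sigev_notify = SIGEV_SIGNAL;
    sev.sigev_signo = SIGRTMIN;

    timer_t tid;
    if (timer_create(CLOCK_MONOTONIC, &sev, &tid) == -1)
        return 1;

    struct itimerspec its;
    memset(&its, 0, sizeof(its));
    its.it_value.tv_nsec = 20 * 1000000;
    its.it_interval.tv_nsec = 20 * 1000000;  // arbitrary 20 ms period
    timer_settime(tid, 0, &its, NULL);

    for (int i = 0; i < 5; ++i) {
        siginfo_t si;
        if (sigwaitinfo(&set, &si) == SIGRTMIN)   // blocks until the timer fires
            fprintf(stderr, "tick (overruns: %d)\n", timer_getoverrun(tid));
    }
    return 0;
}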
P.S. Also note that select() called with empty sets is a fairly portable way to sleep with subsecond precision.
Sleeping: sleep() and usleep()
Now, let me start with the easier timing calls. For delays of multiple seconds, your best bet is probably to use sleep(). For delays of at least tens of milliseconds (about 10 ms seems to be the minimum delay), usleep() should work. These functions give the CPU to other processes (``sleep''), so CPU time isn't wasted. See the manual pages sleep(3) and usleep(3) for details.
For delays of under about 50 milliseconds (depending on the speed of your processor and machine, and the system load), giving up the CPU takes too much time, because the Linux scheduler (for the x86 architecture) usually takes at least about 10-30 milliseconds before it returns control to your process. Due to this, in small delays, usleep(3) usually delays somewhat more than the amount that you specify in the parameters, and at least about 10 ms.
nanosleep()
In the 2.0.x series of Linux kernels, there is a new system call, nanosleep() (see the nanosleep(2) manual page), that allows you to sleep or delay for short times (a few microseconds or more).
For delays <= 2 ms, if (and only if) your process is set to soft real time scheduling (using sched_setscheduler()), nanosleep() uses a busy loop; otherwise it sleeps, just like usleep().
The busy loop uses udelay() (an internal kernel function used by many kernel drivers), and the length of the loop is calculated using the BogoMips value (the speed of this kind of busy loop is one of the things that BogoMips measures accurately). See /usr/include/asm/delay.h for details on how it works.
Source: http://tldp.org/HOWTO/IO-Port-Programming-4.html
Try using nanosleep() instead of usleep(); it should be more accurate for a 20 ms interval.
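A sketch of a 20 ms nanosleep that restarts after a signal interruption using the remainder argument:
#include <errno.h>
#include <time.h>

static void sleep_20ms(void)
{
    struct timespec req = { 0, 20 * 1000000 }, rem;
    // if a signal interrupts the sleep, resume with the remaining time
    while (nanosleep(&req, &rem) == -1 && errno == EINTR)
        req = rem;
}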

Measuring context switch time for threads

I want to calculate the context switch time, and I am thinking of using a mutex and condition variables to signal between 2 threads so that only one thread runs at a time. I can use CLOCK_MONOTONIC to measure the entire execution time and CLOCK_THREAD_CPUTIME_ID to measure how long each thread runs.
Then the context switch time is the (total_time - thread_1_time - thread_2_time).
To get a more accurate result, I can just loop over it and take the average.
Is this a correct way to approximate the context switch time? I can't think of anything that might go wrong, but I am getting answers that are under 1 nanosecond.
I forgot to mention that the more times I loop and take the average, the smaller the results I get.
Edit
here is a snippet of the code that I have
typedef struct
{
    struct timespec start;
    struct timespec end;
} thread_time;

...

// each thread function looks similar to this
void* thread_1_func(void* time)
{
    thread_time* thread_time = (thread_time*) time;
    clock_gettime(CLOCK_THREAD_CPUTIME_ID, &(thread_time->start));
    for (x = 0; x < loop; ++x)
    {
        // where it switches to another thread
    }
    clock_gettime(CLOCK_THREAD_CPUTIME_ID, &(thread_time->end));
    return NULL;
}

void* thread_2_func(void* time)
{
    // similar to the above
}

int main()
{
    ...
    pthread_t thread_1;
    pthread_t thread_2;
    thread_time thread_1_time;
    thread_time thread_2_time;
    struct timespec start, end;

    // stamp the start time
    clock_gettime(CLOCK_MONOTONIC, &start);

    // create two threads with the time structs as the arguments
    pthread_create(&thread_1, NULL, &thread_1_func, (void*) &thread_1_time);
    pthread_create(&thread_2, NULL, &thread_2_func, (void*) &thread_2_time);

    // wait for the two threads to terminate
    pthread_join(thread_1, NULL);
    pthread_join(thread_2, NULL);

    // stamp the end time
    clock_gettime(CLOCK_MONOTONIC, &end);

    // then calculate the difference between the total execution time
    // and the CPU time of the two threads
}
First of all, using CLOCK_THREAD_CPUTIME_ID is probably very wrong; that clock gives the time spent in that thread, in user mode, but the context switch does not happen in user mode. You'd want to use another clock. Also, on multiprocessor systems the clocks can give different values from one processor to another! Thus I suggest you use CLOCK_REALTIME or CLOCK_MONOTONIC instead. However, be warned that even if you read either of these twice in rapid succession, the timestamps will usually already be tens of nanoseconds apart.
As for context switches - there are many kinds of context switches. The fastest approach is to switch from one thread to another entirely in software. This just means that you push the old registers onto the stack, set the task-switched flag so that the SSE/FP registers will be saved lazily, save the stack pointer, load the new stack pointer and return from that function - since the other thread had done the same, the return from that function happens in the other thread.
This thread-to-thread switch is quite fast; its overhead is about the same as for any system call. Switching from one process to another is much slower: the user-space page tables must be switched by loading the CR3 register, which flushes the TLB (the cache that maps virtual addresses to physical ones) and causes misses afterwards.
However, the <1 ns context switch/system call overhead does not really seem plausible - it is very probable that there is either hyperthreading or 2 CPU cores here, so I suggest you set the CPU affinity on the process so that Linux only ever runs it on, say, the first CPU core:
#include <sched.h>

cpu_set_t mask;
int result;

CPU_ZERO(&mask);
CPU_SET(0, &mask);
result = sched_setaffinity(0, sizeof(mask), &mask);
Then you should be pretty sure that the time you're measuring comes from a real context switch. Also, to measure the time for switching the floating point/SSE state (this happens lazily), you should have some floating point variables and do calculations on them prior to the context switch, then add, say, .1 to some volatile floating point variable after the context switch to see if it has an effect on the switching time.
This is not straightforward, but as usual someone has already done a lot of work on this. (I'm not including the source here because I cannot see any license mentioned.)
https://github.com/tsuna/contextswitch/blob/master/timetctxsw.c
If you copy that file to a Linux machine as context_switch_time.c, you can compile and run it using this:
gcc -D_GNU_SOURCE -Wall -O3 -std=c11 context_switch_time.c -lpthread
./a.out
I got the following result on a small VM
2000000 thread context switches in 2178645536ns (1089.3ns/ctxsw)
This question has come up before... for Linux you can find some material here.
Write a C program to measure time spent in context switch in Linux OS
Note that while the user was running the test in the above link, they were also hammering the machine with games and compiling, which is why the context switches were taking a long time. Some more info here...
how can you measure the time spent in a context switch under java platform

How to make a thread sleep/block for nanoseconds (or at least milliseconds)?

How can I block my thread (or maybe process) for nanoseconds, or at least for milliseconds?
Please note that I can't use sleep, because its argument is always in seconds.
nanosleep or clock_nanosleep is the function you should be using (the latter allows you to specify absolute time rather than relative time, and use the monotonic clock or other clocks rather than just the realtime clock, which might run backwards if an operator resets it).
Be aware however that you'll rarely get better than several microseconds in terms of the resolution, and it always rounds up the duration of sleep, rather than rounding down. (Rounding down would generally be impossible anyway since, on most machines, entering and exiting kernelspace takes more than a microsecond.)
Also, if possible I would suggest using a call that blocks waiting for an event rather than sleeping for tiny intervals then polling. For instance, pthread_cond_wait, pthread_cond_timedwait, sem_wait, sem_timedwait, select, read, etc. depending on what task your thread is performing and how it synchronizes with other threads and/or communicates with the outside world.
One relatively portable way is to use select() or pselect() with no file descriptors:
// named nsleep to avoid clashing with the standard sleep()
void nsleep(unsigned long nsec) {
    struct timespec delay = { nsec / 1000000000, nsec % 1000000000 };
    pselect(0, NULL, NULL, NULL, &delay, NULL);
}
Try usleep(). Yes, this won't give you nanosecond precision, but microseconds will work, and therefore milliseconds too.
With old user-space (many-to-one) threading implementations, the behaviour of the sleep family was not guaranteed: all the threads could end up sleeping, since the kernel was not aware of the individual threads. That calls for a solution the pthread library can handle rather than the kernel.
A safer and cleaner solution is to use pthread_cond_timedwait:
#include <stdio.h>
#include <pthread.h>
#include <sys/time.h>

pthread_mutex_t fakeMutex = PTHREAD_MUTEX_INITIALIZER;
pthread_cond_t fakeCond = PTHREAD_COND_INITIALIZER;

void mywait(int timeInSec)
{
    struct timespec timeToWait;
    struct timeval now;
    int rt;

    gettimeofday(&now, NULL);
    timeToWait.tv_sec = now.tv_sec + timeInSec;
    timeToWait.tv_nsec = now.tv_usec * 1000;

    pthread_mutex_lock(&fakeMutex);
    rt = pthread_cond_timedwait(&fakeCond, &fakeMutex, &timeToWait);
    pthread_mutex_unlock(&fakeMutex);
    printf("\nDone\n");
}

void* fun(void* arg)
{
    printf("\nIn thread\n");
    mywait(5);
    return NULL;
}

int main()
{
    pthread_t thread;
    void *ret;

    pthread_create(&thread, NULL, fun, NULL);
    pthread_join(thread, &ret);
    return 0;
}
For pthread_cond_timedwait, you need to specify an absolute time to wait until; here it is computed from the current time.
Now, through the use of mywait(), only the thread calling it will sleep, and not the other pthreads.
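Note that the timeout above is measured against the realtime clock (via gettimeofday), so it inherits the clock-setting problems discussed in the main question. Where pthread_condattr_setclock is available (Linux and FreeBSD, but not Mac OS X), you can bind the condition variable to CLOCK_MONOTONIC instead; a minimal sketch:
#include <pthread.h>
#include <time.h>

pthread_cond_t monoCond;
pthread_mutex_t monoMutex = PTHREAD_MUTEX_INITIALIZER;

void init_monotonic_cond(void)
{
    pthread_condattr_t attr;
    pthread_condattr_init(&attr);
    pthread_condattr_setclock(&attr, CLOCK_MONOTONIC);  // timedwait now measures against the monotonic clock
    pthread_cond_init(&monoCond, &attr);
    pthread_condattr_destroy(&attr);
}

void mywait_monotonic(int timeInSec)
{
    struct timespec timeToWait;
    clock_gettime(CLOCK_MONOTONIC, &timeToWait);  // absolute time on the same clock
    timeToWait.tv_sec += timeInSec;
    pthread_mutex_lock(&monoMutex);
    pthread_cond_timedwait(&monoCond, &monoMutex, &timeToWait);
    pthread_mutex_unlock(&monoMutex);
}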
nanosleep allows you to specify the sleep duration down to the nanosecond. However, the actual resolution of your sleep is likely to be much coarser due to kernel/CPU limitations.
Accurate nanosecond resolution is going to be impossible on a general-purpose Linux OS, because Linux distributions generally aren't (hard) real-time OSes. If you really need that fine-grained control over timing, consider using such an operating system.
Wikipedia has a list of some real-time operating systems here: http://en.wikipedia.org/wiki/RTOS (note that it doesn't say if they are soft or hard real time, so you'll have to do some research).
On an embedded system with access to multiple hardware timers, create a high-speed clock for your nanosecond or microsecond waits. Create a macro to enable and disable it, and handle your high-resolution processing in the timer interrupt service routine.
If wasting power and busywaiting is not an issue, perform some no-op instructions - but verify that the compiler does not optimize your no-ops out. Try using volatile types.

Loops/timers in C

How does one create a timer in C?
I want a piece of code to continuously fetch data from a GPS parser's output.
Are there good libraries for this, or should it be self-written?
Simplest method available:
#include <pthread.h>
#include <unistd.h>

void *do_smth_periodically(void *data)
{
    int interval = *(int *)data;
    for (;;) {
        do_smth();
        usleep(interval);
    }
}

int main()
{
    pthread_t thread;
    int interval = 5000;
    pthread_create(&thread, NULL, do_smth_periodically, &interval);
    ...
}
On POSIX systems you can create (and catch) an alarm. alarm is simple, but it is set in whole seconds. If you need finer resolution than seconds, use setitimer:
struct itimerval tv;
tv.it_interval.tv_sec = 0;
tv.it_interval.tv_usec = 100000; // when timer expires, reset to 100ms
tv.it_value.tv_sec = 0;
tv.it_value.tv_usec = 100000; // 100 ms == 100000 us
setitimer(ITIMER_REAL, &tv, NULL);
Then catch the timer at a regular interval by installing a SIGALRM handler with sigaction.
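For instance, a sketch of the handler installation to pair with the setitimer call above; keeping the handler to a flag assignment keeps it async-signal-safe:
#include <signal.h>
#include <string.h>

static volatile sig_atomic_t fired;

static void on_alarm(int signum)
{
    (void)signum;
    fired = 1;
}

void install_handler(void)
{
    struct sigaction sa;
    memset(&sa, 0, sizeof(sa));
    sa.sa_handler = on_alarm;
    sigemptyset(&sa.sa_mask);
    sa.sa_flags = SA_RESTART;  // restart interrupted syscalls instead of failing with EINTR
    sigaction(SIGALRM, &sa, NULL);
}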
One doesn't "create a timer in C". There is nothing about timing or scheduling in the C standard, so how that is accomplished is left up to the Operating System.
This is probably a reasonable question for a C noob, as many languages do support things like this. Ada does, and I believe the next version of C++ will probably do so (Boost has support for it now). I'm pretty sure Java can do it too.
On linux, probably the best way would be to use pthreads. In particular, you need to call pthread_create() and pass it the address of your routine, which presumably contains a loop with a sleep() (or usleep()) call at the bottom.
Note that if you want to do something that approximates real-time scheduling, just doing a dumb usleep() isn't good enough because it won't account for the execution time of the loop itself. For those applications you will need to set up a periodic timer and wait on that.
SDL provides a cross platform timer in C.
http://www.libsdl.org/cgi/docwiki.cgi/SDL_AddTimer
If you're using Windows, you can use SetTimer; otherwise you can build a timer out of timeGetTime and _beginthreadex along with a queue of timers with callbacks.
The question about a timer is quite unspecific, though there are two functions that come to mind that may help you:
sleep() This function will cause execution to stop for a specified number of seconds. You can also use usleep and nanosleep if you want to specify the sleep time more precisely.
gettimeofday() Using this function you can measure the time that elapses between two points.
See manpages for further explanation :)
If the gps data is coming from some hardware device, like over a serial port, then one thing that you may consider is changing the architecture around so that the parser kicks off the code that you are trying to run when more data is available.
It could do this through a callback function or it could send an event - the actual implementation would depend on what you have available.
