I'm writing code that tries to detect when a signal changes from 0 to 1 as fast as possible (real-time application). I have the following two functions:
void *SensorSignalReader (void *arg)
{
char buffer[30];
struct timeval tv;
time_t curtime;
srand(time(NULL));
while (1) {
int t = rand() % 10 + 1; // wait up to 1 sec in 10ths
usleep(t*100000);
int r = rand() % N;
signalArray[r] ^= 1;
if (signalArray[r]) {
changedSignal = r;
gettimeofday(&tv, NULL);
timeStamp[r] = tv;
curtime = tv.tv_sec;
strftime(buffer,30,"%d-%m-%Y %T.",localtime(&curtime));
printf("Changed %5d at Time %s%ld\n",r,buffer,tv.tv_usec);
}
}
}
void *ChangeDetector (void *arg)
{
char buffer[30];
struct timeval tv;
time_t curtime;
int index;
while (1) {
while (changedSignal == -1) {} // issues with O3
gettimeofday(&tv, NULL);
index = changedSignal;
changedSignal = -1;
curtime = tv.tv_sec;
if(timeStamp[index].tv_usec>tv.tv_usec){
tv.tv_usec += 1000000;
tv.tv_sec--;
}
strftime(buffer,30,"%d-%m-%Y %T.",localtime(&curtime));
printf("Detcted %5d at Time %s%ld after %ld.%06ld sec\n---\n",index,buffer,tv.tv_usec,
tv.tv_sec - timeStamp[index].tv_sec,
tv.tv_usec - timeStamp[index].tv_usec);
}
}
I have 2 pthreads running at all times, one for each function.
When I compile normally (gcc -lpthread), this works as intended: SensorSignalReader changes changedSignal and ChangeDetector detects it as the while loop breaks. When I compile with the -O3 flag, though, it seems as if the variable changedSignal never changes: the while loop in ChangeDetector runs forever while signals are being changed constantly. If I put a printf("%d\n",changedSignal); inside it, it prints -1 all the time. There is something done by -O3 that I do not understand. What is it?
It's very likely your program is experiencing undefined behaviour and you just got lucky when you didn't have optimisations switched on.
changedSignal appears to be a shared resource so you need to use atomic operations or some form of locking to ensure that threads won't simultaneously access it.
You can use the pthread functions for locking or gcc's builtin functions for atomic operations.
Edit: As pointed out by Olaf, it looks like you're trying to implement a producer-consumer pattern. You might want to try implementing this by using condition variables instead of trying to reinvent it.
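For example, here is a minimal condition-variable sketch of that pattern, reusing the question's changedSignal variable. It is an illustration rather than a drop-in fix, and error handling is omitted:
#include <pthread.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  changed = PTHREAD_COND_INITIALIZER;
static int changedSignal = -1;          /* shared state, now protected by lock */

/* Producer side: publish the changed index and wake the detector. */
void publish_change(int r)
{
    pthread_mutex_lock(&lock);
    changedSignal = r;
    pthread_cond_signal(&changed);
    pthread_mutex_unlock(&lock);
}

/* Consumer side: sleep until a change is published, then consume it. */
int wait_for_change(void)
{
    pthread_mutex_lock(&lock);
    while (changedSignal == -1)
        pthread_cond_wait(&changed, &lock);
    int index = changedSignal;
    changedSignal = -1;
    pthread_mutex_unlock(&lock);
    return index;
}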
I have a character that should "eat" for 200 microseconds, "sleep" for 200 microseconds, and repeat, until they die, which happens if they haven't eaten for time_to_die microseconds.
In the code snippet in function main indicated below, the struct time_to_die has a member tv_usec configured for 1000 microseconds and I expect it to loop forever.
After some time, one execution of the function busy_wait takes around 5 times more than it is supposed to (enough to kill the character), and the character dies. I want to know why and how to fix it.
#include <sys/time.h>
#include <stdio.h>
#include <stdlib.h>
#include <stdbool.h>
struct timeval time_add_microseconds(struct timeval time, long long microseconds)
{
time.tv_usec += microseconds;
while (time.tv_usec >= 1000000)
{
time.tv_sec += 1;
time.tv_usec -= 1000000;
}
return (time);
}
short time_compare(struct timeval time_one, struct timeval time_two)
{
if (time_one.tv_sec != time_two.tv_sec)
{
if (time_one.tv_sec > time_two.tv_sec)
return (1);
else
return (-1);
}
else
{
if (time_one.tv_usec > time_two.tv_usec)
return (1);
else if (time_one.tv_usec == time_two.tv_usec)
return (0);
else
return (-1);
}
}
// Wait until interval in microseconds has passed or until death_time is reached.
void busy_wait(int interval, struct timeval last_eaten_time, struct timeval time_to_die)
{
struct timeval time;
struct timeval end_time;
struct timeval death_time;
gettimeofday(&time, NULL);
end_time = time_add_microseconds(time, interval);
death_time = time_add_microseconds(last_eaten_time, time_to_die.tv_sec * 1000000ULL + time_to_die.tv_usec);
while (time_compare(time, end_time) == -1)
{
gettimeofday(&time, NULL);
if (time_compare(time, death_time) >= 0)
{
printf("%llu died\n", time.tv_sec * 1000000ULL + time.tv_usec);
exit(1);
}
}
}
int main(void)
{
struct timeval time;
struct timeval time_to_die = { .tv_sec = 0, .tv_usec = 1000};
struct timeval last_eaten_time = { .tv_sec = 0, .tv_usec = 0 };
while (true)
{
gettimeofday(&time, NULL);
printf("%llu eating\n", time.tv_sec * 1000000ULL + time.tv_usec);
last_eaten_time = time;
busy_wait(200, last_eaten_time, time_to_die);
gettimeofday(&time, NULL);
printf("%llu sleeping\n", time.tv_sec * 1000000ULL + time.tv_usec);
busy_wait(200, last_eaten_time, time_to_die);
}
}
Note: Other than the system functions I already used in my code, I'm only allowed to use usleep, write, and malloc and free.
Thank you for your time.
after some time, one execution of the function busy_wait takes around 5 times more than it is supposed to (enough to kill the character), and the character dies. I want to know why and how to fix it.
There are multiple possibilities. Many of them revolve around the fact that there is more going on in your computer while the program runs than just the program running. Unless you're running on a realtime operating system, the bottom line is that you can't fix some of the things that could cause such behavior.
For example, your program shares the CPU with the system itself and with all the other processes running on it. That may be more processes than you think: right now, there are over 400 live processes on my 6-core workstation. When there are more processes demanding CPU time than there are CPUs to run them on, the system will split the available time among the contending processes, preemptively suspending processes when their turns expire.
If your program happens to be preempted during a busy wait, then chances are good that substantially more than 200 μs of wall time will elapse before it is next scheduled any time on a CPU. Time slice size is usually measured in milliseconds, and on a general-purpose OS, there is no upper (or lower) bound on the time between the elapse of one and the commencement of the same program's next one.
As I noted in comments, you are using gettimeofday to measure elapsed time, yet that is not on your list of allowed system functions. One possible resolution of that inconsistency is that you're not meant to perform measurements of elapsed time, but rather to assume or simulate them. For example, usleep() is on the list, so perhaps you're meant to usleep() instead of busy-waiting, and assume that the sleep time is exactly what was requested. Or perhaps you're meant to just adjust an internal time counter instead of actually pausing execution at all.
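A rough sketch of that last idea (the names here are invented for illustration): keep a simulated clock and advance it by exactly the requested interval, regardless of how long the process was actually off the CPU.
#include <unistd.h>

static long long sim_time_us;   /* simulated wall clock, in microseconds */

static void simulated_wait(long long interval_us)
{
    usleep(interval_us);        /* still pause for roughly the requested time... */
    sim_time_us += interval_us; /* ...but account for exactly that much */
}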
Why
Ultimately: because an interrupt or trap is delivered to the CPU core executing your program, which transfers control to the operating system.
Some common causes:
The operating system is running its process scheduling using a hardware timer which fires at regular intervals, i.e. the OS is running some kind of fair scheduler and it has to check whether your process's time is up for now.
Some device in your system needs to be serviced. E.g. a packet arrived over the network, your sound card's output buffer is running low and must be refilled, etc.
Your program voluntarily makes a request to the operating system that transfers control to it. Basically: anytime you make a syscall, the kernel may have to wait for I/O, or it may decide that it's time to schedule a different process, or both. In your case, the calls to printf will at some point result in a write(2) syscall that will end up performing some I/O.
What to do
Cause 3 can be avoided by ensuring that no syscalls are made, i.e. never trapping into the OS.
Causes 1 and 2 are very difficult to completely get rid of. You're essentially looking for a real-time operating system (RTOS). An OS like Linux can approximate that by placing processes in different scheduling domains (SCHED_FIFO/SCHED_RR). If you're willing to switch to a kernel that is tailored towards real-time applications, you can get even further. You can also look into topics like "CPU isolation".
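For instance, a minimal sketch of requesting the SCHED_FIFO policy on Linux (the priority value 50 is an arbitrary example, and this normally requires root or CAP_SYS_NICE):
#include <sched.h>
#include <stdio.h>

int make_realtime(void)
{
    struct sched_param sp = { .sched_priority = 50 };
    /* 0 means "the calling process"; SCHED_FIFO runs it ahead of normal tasks */
    if (sched_setscheduler(0, SCHED_FIFO, &sp) == -1) {
        perror("sched_setscheduler");
        return -1;
    }
    return 0;
}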
Just to illustrate the printf (but also gettimeofday) timings I mentioned in comments, I tried two things.
#include <sys/time.h>
#include <stdio.h>
#include <stdlib.h>
#include <stdbool.h>
int main(void)
{
struct timeval time;
long long histo[5000];
for(int i=0; i<5000; i++){
printf("%d\n", i); /* one line of output per iteration, so the terminal is part of what is measured */
gettimeofday(&time, NULL);
histo[i]=time.tv_sec * 1000000ULL + time.tv_usec;
}
long long min=1000000000;
long long max=0;
for(int i=1; i<5000; i++){
long long dt=histo[i]-histo[i-1];
if(dt<min) min=dt;
if(dt>max) max=dt;
if(dt>800) printf("%d %lld\n", i, dt);
}
printf("avg: %f min=%lld max=%lld\n", (histo[4999]-histo[0])/5000.0, min, max);
}
So all it does here is loop over 5000 printf/gettimeofday iterations, and then measure (after the loop) the mean, min and max time per iteration.
On my X11 terminal (Sakura), the average is 8 μs per loop, with a min of 1 μs and a max of 3790 μs! (Other measurements I made show that this ~3000 μs outlier is also the only one over 200 μs. In other words, it never goes over 200 μs. Except when it does, "bigly".)
So, on average, everything goes well. But once in a while, a printf takes almost 4 ms. That is not enough for a human user to notice unless it happens several times in a row, but it is way more than needed to make your code fail.
On my console (non-X11) terminal (an 80x25 terminal that may or may not use my graphics card's text mode, I was never sure), the mean is 272 μs, the min 193 μs, and the max 1100 μs. Which, in retrospect, is not surprising: this terminal is slow, but simpler, so less prone to "surprises".
But it fails faster, because the probability of going over 200 μs is very high, even if it is not a lot over: more than half of the loops take more than 200 μs.
I also tried measurements on a loop without printf.
#include <sys/time.h>
#include <stdio.h>
#include <stdlib.h>
#include <stdbool.h>
int main(void)
{
struct timeval time;
long long old=0;
long long ntot=0;
long long nov10=0;
long long nov100=0;
long long nov1000=0;
for(int i=0;;i++){
gettimeofday(&time, NULL);
long long t=time.tv_sec * 1000000ULL + time.tv_usec;
if(old){
long long dt=t-old;
ntot++;
if(dt>10){
nov10++;
if(dt>100){
nov100++;
if(dt>1000) nov1000++;
}
}
}
if(i%10000==0){
printf("tot=%lld >10=%lld >100=%lld >1000=%lld\n", ntot, nov10, nov100, nov1000);
old=0;
}else{
old=t;
}
}
}
So, it measures something that I could pompously call a "logarithmic histogram" of timings.
This time the results are independent of the terminal (I reset old to 0 each time I print something, so that those iterations don't count).
Result
tot=650054988 >10=130125 >100=2109 >1000=2
So, sure, 99.98% of the time, gettimeofday takes less than 10 μs.
But around 3 times per million calls (which, in your code, means every few seconds), it takes more than 100 μs. And twice in my experiment, it took more than 1000 μs. Just gettimeofday, not printf.
Obviously, it's not gettimeofday itself that took 1 ms. Simply, something more important occurred on my system, and my process had to wait 1 ms to get some CPU time from the scheduler.
And bear in mind that this is on my computer, where your code runs fine (well, those measurements show that it would have failed eventually if I had let it run as long as I let the measurements run).
On your computer, those numbers (the two >1000 events) are certainly much higher, so it fails very quickly.
Preemptive multitasking OSes are simply not made to guarantee execution times in microseconds. You have to use a real-time OS for that (RT-Linux, for example, if it still exists; I haven't used it since 2002).
As pointed out in the other answers, there is no way to make this code work as I expected without a major change in its design, within my constraints. So I changed my code to not depend on gettimeofday for determining whether a philosopher died, or for the time value to print. Instead, I just add 200 μs to time every time my character eats/sleeps. This does feel like a cheap trick, because while at the start I display the correct system wall time, my time variable will drift away from the system time more and more as the program runs, but I guess this is what was wanted from me.
#include <sys/time.h>
#include <stdio.h>
#include <stdlib.h>
#include <stdbool.h>
struct timeval time_add_microseconds(struct timeval time, long long microseconds)
{
time.tv_usec += microseconds;
while (time.tv_usec >= 1000000)
{
time.tv_sec += 1;
time.tv_usec -= 1000000;
}
return (time);
}
short time_compare(struct timeval time_one, struct timeval time_two)
{
if (time_one.tv_sec != time_two.tv_sec)
{
if (time_one.tv_sec > time_two.tv_sec)
return (1);
else
return (-1);
}
else
{
if (time_one.tv_usec > time_two.tv_usec)
return (1);
else if (time_one.tv_usec == time_two.tv_usec)
return (0);
else
return (-1);
}
}
bool is_destined_to_die(int interval, struct timeval current_time, struct timeval last_eaten_time, struct timeval time_to_die)
{
current_time = time_add_microseconds(current_time, interval);
if ((current_time.tv_sec * 1000000ULL + current_time.tv_usec) - (last_eaten_time.tv_sec * 1000000ULL + last_eaten_time.tv_usec) >= time_to_die.tv_sec * 1000000ULL + time_to_die.tv_usec)
return (true);
else
return (false);
}
// Wait until interval in microseconds has passed or until death_time is reached.
void busy_wait(int interval, struct timeval current_time, struct timeval last_eaten_time, struct timeval time_to_die)
{
struct timeval time;
struct timeval end_time;
struct timeval death_time;
gettimeofday(&time, NULL);
if (is_destined_to_die(interval, current_time, last_eaten_time, time_to_die))
{
death_time = time_add_microseconds(last_eaten_time, time_to_die.tv_sec * 1000000 + time_to_die.tv_usec);
while (time_compare(time, death_time) == -1)
gettimeofday(&time, NULL);
printf("%llu died\n", time.tv_sec * 1000000ULL + time.tv_usec);
exit(1);
}
end_time = time_add_microseconds(time, interval);
while (time_compare(time, end_time) == -1)
gettimeofday(&time, NULL);
}
int main(void)
{
struct timeval time;
struct timeval time_to_die = { .tv_sec = 0, .tv_usec = 1000};
struct timeval last_eaten_time = { .tv_sec = 0, .tv_usec = 0 };
gettimeofday(&time, NULL);
while (true)
{
printf("%llu eating\n", time.tv_sec * 1000000ULL + time.tv_usec);
last_eaten_time = time;
busy_wait(200, time, last_eaten_time, time_to_die);
time = time_add_microseconds(time, 200);
printf("%llu sleeping\n", time.tv_sec * 1000000ULL + time.tv_usec);
busy_wait(200, time, last_eaten_time, time_to_die);
time = time_add_microseconds(time, 200);
}
}
I'm writing an ANSI C application that will be built on both Linux and Windows.
I built the pthread 2.9.1 library for Windows and all works fine.
The problem is that I can't find the function: pthread_sleep()
I also looked around for that function but it seems that it doesn't exist.
Without that function my code will have to call Sleep() on Windows and sleep() on Linux but that's exactly what I don't want.
Thanks,
Enrico Migliore
The problem is that I can't find the function: pthread_sleep()
No, you wouldn't, since pthreads does not provide such a function. And why would it? There is nothing specific to the pthreads API about making a thread sleep, and genuine POSIX platforms all have a variety of other mechanisms, including sleep(), to make a thread sleep. Not that making threads of a multithreaded program sleep (as opposed to block) is very often a good or reasonable thing to do, anyway.
Without that function my code will have to call Sleep() on Windows and sleep() on Linux but that's exactly what I don't want.
I'm inclined to think that you don't really want to call a platform-agnostic alternative either. But the traditional approach to handling such issues of using various platform-specific functions to accomplish a common goal is to use a conditionally-defined macro or a wrapper function with a conditionally-defined implementation to hide the platform-specific bits. For example,
#if defined(_MSC_VER)
// Sleep() expects an argument in milliseconds
#define SLEEP(time) Sleep((time) * 1000)
#else
// sleep() expects an argument in seconds
#define SLEEP(time) sleep(time)
#endif
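Call sites then stay platform-agnostic; for example:
SLEEP(2);   /* pauses for roughly two seconds on either platform */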
For your particular case, however, there is also the possibility of calling a function with a user-specifiable timeout that you expect always to expire. Pthreads does provide functions you could use for that, such as pthread_cond_timedwait(). You could write a sleep function with that, without any conditional compilation. For example,
int my_sleep(long milliseconds) {
static pthread_mutex_t mutex = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t cv = PTHREAD_COND_INITIALIZER;
int rval = 0;
if (milliseconds > 0) {
struct timespec time;
rval = timespec_get(&time, TIME_UTC);
// ... handle any error ...
time.tv_nsec += (milliseconds % 1000) * 1000000;
time.tv_sec += milliseconds / 1000 + time.tv_nsec / 1000000000;
time.tv_nsec %= 1000000000;
rval = pthread_mutex_lock(&mutex);
if (rval != 0) {
// handle error ...
} else {
// The loop handles spurious wakeups
do {
rval = pthread_cond_timedwait(&cv, &mutex, &time); // expects ETIMEDOUT
} while (rval == 0);
if (rval != ETIMEDOUT) {
// handle error ...
}
rval = pthread_mutex_unlock(&mutex); // THIS MUST NOT BE SKIPPED
// ... handle any error ...
}
}
return rval;
}
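Usage is then simply, for example:
int err = my_sleep(250);   /* returns 0 after roughly 250 ms, or an error code */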
I am working on code with multiple threads, and I want to print the time it took to complete the task I assigned to the i-th thread. That is, I want to print the time each thread took to finish the doSomeThing function.
int main(int argc, char *argv[]){
// ...
i=0;
while (i < NumberOfThreads){
check = pthread_create(&(id_arr[i]), NULL, &doSomeThing, &data);
i++;
}
// ...
}
void* doSomeThing(void *arg){
// ...
}
If I add gettimeofday(&thread_start, NULL) before the pthread_create and then add gettimeofday(&thread_end, NULL) after the pthread_create, will I actually be measuring the time each thread took, or just the time main took? And if I put the gettimeofday calls inside the doSomeThing function, wouldn't they create race conditions?
If you have any idea on how to measure the time per thread please let me know, thank you.
You can certainly use gettimeofday inside the thread function itself. Using local (stack) variables is completely thread-safe - every thread runs on its own stack (by definition).
void* doSomeThing(void *arg){
struct timeval t0, t1, dt;
gettimeofday(&t0, NULL);
// do work
gettimeofday(&t1, NULL);
timersub(&t1, &t0, &dt);
fprintf(stderr, "doSomeThing (thread %ld) took %d.%06d sec\n",
(long)pthread_self(), dt.tv_sec, dt.tv_usec);
}
If you put the same code around pthread_create(), you would only see the amount of time it took for the thread to be created, not executed. If pthread_create blocked until the thread completed, there would be no point in ever using threads!
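If what you actually want is for main to observe how long each worker lived, one option (a sketch, not part of the answer above, assuming the same headers and variables as in the question: <sys/time.h>, <pthread.h>, doSomeThing, data) is to time from just before pthread_create() to just after pthread_join(); that includes creation and scheduling overhead, not only the body of doSomeThing():
struct timeval t0, t1, dt;
pthread_t tid;

gettimeofday(&t0, NULL);
pthread_create(&tid, NULL, &doSomeThing, &data);
pthread_join(tid, NULL);              /* wait for the thread to finish */
gettimeofday(&t1, NULL);
timersub(&t1, &t0, &dt);
printf("thread lifetime: %ld.%06ld sec\n", (long)dt.tv_sec, (long)dt.tv_usec);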
gettimeofday() measures elapsed time. If you want to measure CPU time, try this:
#define _GNU_SOURCE /* for RUSAGE_THREAD */
#include <stdio.h>
#include <sys/time.h>
#include <sys/resource.h>
#include <stdint.h>
uint64_t time_used ( ) {
struct rusage ru;
struct timeval t;
getrusage(RUSAGE_THREAD,&ru);
t = ru.ru_utime;
return (uint64_t) t.tv_sec*1000 + t.tv_usec/1000;
}
...
uint64_t t1, t2;
t1 = time_used();
... do some work ...
t2 = time_used();
printf("used %d milliseconds\n",(unsigned)(t2-t1));
You will have to do that inside the thread. This is an example; search for time_used.
Jonathon Reinhart uses timersub(), which simplifies things. I merge that in here:
void time_used ( struct timeval *t ) {
struct rusage ru;
getrusage(RUSAGE_THREAD,&ru);
*t = ru.ru_utime;
}
...
struct timeval t0, t1, dt;
time_used(&t0);
... do some work ...
time_used(&t1);
timersub(&t1, &t0, &dt);
printf("used %d.%06d seconds\n", dt.tv_sec, dt.tv_usec);
I have a state machine implementation in a library which runs on Linux. The main loop of the program is to simply wait until enough time has passed to require the next execution of the state machine.
At the moment I have a loop which is similar to the following pseudo-code:
while( 1 )
{
while( StateTicks() > 0 )
StateMachine();
Pause( 10ms );
}
Where StateTicks may return a tick every 50ms or so. The shorter I make the Pause() the more CPU time I use in the program.
Is there a better way to test for a period of time passing, perhaps based on Signals? I'd rather halt execution until StateTicks() is > 0 rather than have the Pause() call at all.
Under the hood of the state machine implementation, StateTicks uses clock_gettime(PFT_CLOCK ...), which works well. I'm keen to keep that timekeeping, because if a StateMachine() call takes longer than a state machine tick, this implementation will catch up.
Pause uses nanosleep to achieve a reasonably accurate pause time.
Perhaps this is already the best way, but it doesn't seem particularly graceful.
Create a periodic timer using timer_create(), and have it call sem_post() on a "timer tick semaphore".
To avoid losing ticks, I recommend using a real-time signal, perhaps SIGRTMIN+0 or SIGRTMAX-0. sem_post() is async-signal-safe, so you can safely use it in a signal handler.
Your state machine simply waits on the semaphore; no other timekeeping needed. If you take too long to process a tick, the following sem_wait() will not block, but return immediately. Essentially, the semaphore counts "lost" ticks.
Example code (untested!):
#define _POSIX_C_SOURCE 200809L
#include <semaphore.h>
#include <signal.h>
#include <errno.h>
#include <time.h>
#define TICK_SIGNAL (SIGRTMIN+0)
static timer_t tick_timer;
static sem_t tick_semaphore;
static void tick_handler(int signum, siginfo_t *info, void *context)
{
if (info && info->si_code == SI_TIMER) {
const int saved_errno = errno;
sem_post((sem_t *)info->si_value.sival_ptr);
errno = saved_errno;
}
}
static int tick_setup(const struct timespec interval)
{
struct sigaction act;
struct sigevent evt;
struct itimerspec spec;
if (sem_init(&tick_semaphore, 0, 0))
return errno;
sigemptyset(&act.sa_mask);
act.sa_sigaction = tick_handler;
act.sa_flags = SA_SIGINFO;
if (sigaction(TICK_SIGNAL, &act, NULL))
return errno;
evt.sigev_notify = SIGEV_SIGNAL;
evt.sigev_signo = TICK_SIGNAL;
evt.sigev_value.sival_ptr = &tick_semaphore;
if (timer_create(CLOCK_MONOTONIC, &evt, &tick_timer))
return errno;
spec.it_interval = interval;
spec.it_value = interval;
if (timer_settime(tick_timer, 0, &spec, NULL))
return errno;
return 0;
}
with the tick loop being simply
if (tick_setup(some_interval))
/* failed, see errno; abort */
while (!sem_wait(&tick_semaphore)) {
/* process tick */
}
If you support more than one concurrent state, the one signal handler suffices. Your state typically would include
timer_t timer;
sem_t semaphore;
struct timespec interval;
and the only tricky thing is to make sure there is no pending timer signal when you destroy the state that the signal would access.
Because signal delivery will interrupt any blocking I/O in the thread used for the signal delivery, you might wish to set up a special thread in your library to handle the timer tick realtime signals, with the realtime signal blocked in all other threads. You can mark your library initialization function __attribute__((constructor)), so that it is automatically executed prior to main().
Optimally, you should use the same thread that does the tick processing for the signal delivery. Otherwise there will be some small jitter or latency in the tick processing, if the signal was delivered using a different CPU core than the one that is running the tick processing.
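A sketch of the blocking part, reusing the TICK_SIGNAL macro from the code above (call this early, for example from the constructor function mentioned; the dedicated tick thread would use SIG_UNBLOCK on the same set):
#include <pthread.h>
#include <signal.h>

/* Block the tick signal in the calling thread; threads created afterwards
   inherit this mask, so only a dedicated tick-handling thread unblocks it. */
static void block_tick_signal(void)
{
    sigset_t set;
    sigemptyset(&set);
    sigaddset(&set, TICK_SIGNAL);   /* TICK_SIGNAL as defined in the code above */
    pthread_sigmask(SIG_BLOCK, &set, NULL);
}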
Basile Starynkevitch's answer jogged my memory about the latencies involved in waiting and signal delivery: if you use nanosleep() and clock_gettime(CLOCK_MONOTONIC, ...), you can adjust the sleep times to account for the typical latencies.
Here's a quick test program using clock_gettime(CLOCK_MONOTONIC, ...) and nanosleep():
#define _POSIX_C_SOURCE 200809L
#include <sys/select.h>
#include <time.h>
#include <stdlib.h>
#include <stdio.h>
#include <string.h>
#include <errno.h>
static const long tick_latency = 75000L; /* 0.75 ms */
static const long tick_adjust = 75000L; /* 0.75 ms */
typedef struct {
struct timespec next;
struct timespec tick;
} state;
void state_init(state *const s, const double ticks_per_sec)
{
if (ticks_per_sec > 0.0) {
const double interval = 1.0 / ticks_per_sec;
s->tick.tv_sec = (time_t)interval;
s->tick.tv_nsec = (long)(1000000000.0 * (interval - (double)s->tick.tv_sec));
if (s->tick.tv_nsec < 0L)
s->tick.tv_nsec = 0L;
else
if (s->tick.tv_nsec > 999999999L)
s->tick.tv_nsec = 999999999L;
} else {
s->tick.tv_sec = 0;
s->tick.tv_nsec = 0L;
}
clock_gettime(CLOCK_MONOTONIC, &s->next);
}
static unsigned long count;
double state_tick(state *const s)
{
struct timespec now, left;
/* Next tick. */
s->next.tv_sec += s->tick.tv_sec;
s->next.tv_nsec += s->tick.tv_nsec;
if (s->next.tv_nsec >= 1000000000L) {
s->next.tv_nsec -= 1000000000L;
s->next.tv_sec++;
}
count = 0UL;
while (1) {
/* Get current time. */
clock_gettime(CLOCK_MONOTONIC, &now);
/* Past tick time? */
if (now.tv_sec > s->next.tv_sec ||
(now.tv_sec == s->next.tv_sec &&
now.tv_nsec >= s->next.tv_nsec - tick_latency))
return (double)(now.tv_sec - s->next.tv_sec)
+ (double)(now.tv_nsec - s->next.tv_nsec) / 1000000000.0;
/* Calculate duration to wait */
left.tv_sec = s->next.tv_sec - now.tv_sec;
left.tv_nsec = s->next.tv_nsec - now.tv_nsec - tick_adjust;
if (left.tv_nsec >= 1000000000L) {
left.tv_nsec -= 1000000000L;
left.tv_sec++;
} else
if (left.tv_nsec < -1000000000L) {
left.tv_nsec += 2000000000L;
left.tv_sec += 2;
} else
if (left.tv_nsec < 0L) {
left.tv_nsec += 1000000000L;
left.tv_sec--;
}
count++;
nanosleep(&left, NULL);
}
}
int main(int argc, char *argv[])
{
double rate, jitter;
long ticks, i;
state s;
char dummy;
if (argc != 3 || !strcmp(argv[1], "-h") || !strcmp(argv[1], "--help")) {
fprintf(stderr, "\n");
fprintf(stderr, "Usage: %s [ -h | --help ]\n", argv[0]);
fprintf(stderr, " %s TICKS_PER_SEC TICKS\n", argv[0]);
fprintf(stderr, "\n");
return 1;
}
if (sscanf(argv[1], " %lf %c", &rate, &dummy) != 1 || rate <= 0.0) {
fprintf(stderr, "%s: Invalid tick rate.\n", argv[1]);
return 1;
}
if (sscanf(argv[2], " %ld %c", &ticks, &dummy) != 1 || ticks < 1L) {
fprintf(stderr, "%s: Invalid tick count.\n", argv[2]);
return 1;
}
state_init(&s, rate);
for (i = 0L; i < ticks; i++) {
jitter = state_tick(&s);
if (jitter > 0.0)
printf("Tick %9ld: Delayed %9.6f ms, %lu sleeps\n", i+1L, +1000.0 * jitter, count);
else
if (jitter < 0.0)
printf("Tick %9ld: Premature %9.6f ms, %lu sleeps\n", i+1L, -1000.0 * jitter, count);
else
printf("Tick %9ld: Exactly on time, %lu sleeps\n", i+1L, count);
fflush(stdout);
}
return 0;
}
Above, tick_latency is the number of nanoseconds by which you're willing to accept a tick early, and tick_adjust is the number of nanoseconds you subtract from each sleep duration.
The best values for those are highly configuration-specific, and I haven't got a robust method for estimating them. Hardcoding them (to 0.75 ms as above) does not sound too good to me either; perhaps using command-line options or environment variables to let users control them, defaulting to zero, would be better.
Anyway, compiling the above as
gcc -O2 test.c -lrt -o test
and running a 500-tick test at 50Hz tick rate,
./test 50 500 | sort -k 4
shows that on my machine, the ticks are accepted within 0.051 ms (51 µs) of the desired moment. Even reducing the priority does not seem to affect it much. A test using 5000 ticks at 5kHz rate (0.2ms per tick),
nice -n 19 ./test 5000 5000 | sort -k 4
yields similar results -- although I did not bother to check what happens if the machine load changes during a run.
In other words, preliminary tests on a single machine indicates it might be a viable option, so you might wish to test the scheme on different machines and under different loads. It is much more precise than I expected on my own machine (Ubuntu 3.11.0-24-generic on x86_64, running on an AMD Athlon II X4 640 CPU).
This approach has the interesting property that you can easily use a single thread to maintain multiple states, even if they use different tick rates. You only need to check which state has the next tick (earliest ->next time), nanosleep() if that occurs in the future, and process the tick, advancing that state to the next tick.
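A sketch of that selection step, assuming an array of the state structs shown above (an illustration, not tested code):
/* Return the state whose next tick is earliest. */
state *earliest_state(state *states, int n)
{
    state *best = &states[0];
    for (int i = 1; i < n; i++)
        if (states[i].next.tv_sec < best->next.tv_sec ||
            (states[i].next.tv_sec == best->next.tv_sec &&
             states[i].next.tv_nsec < best->next.tv_nsec))
            best = &states[i];
    return best;
}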
Questions?
In addition to Nominal Animal's answer:
If the Pause time is several milliseconds, you might use poll(2) or perhaps nanosleep(2) (you might compute the remaining time to sleep, e.g. using clock_gettime(2) with CLOCK_REALTIME ...)
If you care about the fact that StateMachine may take several milliseconds (or a large fraction of a millisecond) and you want exactly a 10 millisecond period, consider perhaps using a poll based event loop which uses the Linux specific timerfd_create(2)
See also time(7) and related answers to questions about poll etc.
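For illustration, here is a minimal timerfd-based 10 ms tick source that a poll-style event loop could watch (error handling omitted):
#include <sys/timerfd.h>

int make_tick_fd(void)
{
    int fd = timerfd_create(CLOCK_MONOTONIC, TFD_CLOEXEC);
    struct itimerspec its = {
        .it_value    = { .tv_sec = 0, .tv_nsec = 10 * 1000 * 1000 },
        .it_interval = { .tv_sec = 0, .tv_nsec = 10 * 1000 * 1000 },
    };
    timerfd_settime(fd, 0, &its, NULL);
    /* poll() the fd for POLLIN, then read() a uint64_t expiration count */
    return fd;
}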
I am working on the following code. The program should be able to handle SIGINT with sigaction. So far, it's almost done, but I have come along two problems. The first problem is the program should print "Shutting down" and exit with status 1 if it receives 3 signals within 3 seconds.
The second problem is that I am using gettimeofday and struct timeval to get the arrival times of the signals in seconds, but I failed here as well. When I tried it out, I got stuck in an infinite loop, even though I pressed Ctrl+C 3 times within 3 seconds. Also, the resulting seconds are quite big numbers.
I hope someone can help me solve these two problems. Here's the code:
int timeBegin = 0;
void sig_handler(int signo) {
(void) signo;
struct timeval t;
gettimeofday(&t, NULL);
int timeEnd = t.tv_sec + t.tv_usec;
printf("Received Signal\n");
int result = timeEnd - timeBegin;
if(check if under 3 seconds) { // How to deal with these two problems?
printf("Shutting down\n");
exit(1);
}
timeBegin = timeEnd // EDIT: setting the time new, each time when a signal arrives. Is that somehow helpful?
}
int main() {
struct sigaction act;
act.sa_handler = &sig_handler;
sigaction(SIGINT, &act, NULL);
for( ;; ) {
sleep(1);
}
return 0;
}
int timeEnd = t.tv_sec + t.tv_usec;
That won't work because tv_sec and tv_usec are in different units (seconds versus microseconds). If you want microsecond accuracy, you'll have to store the value in a larger type (e.g. int64_t) and convert the seconds to microseconds.
if(check if under 3 seconds) { // How to deal with these two problems?
Well, what have you tried? You have several signals arriving at different times, you need to keep some state about them to know if all arrived within 3 seconds of each other.
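As a rough sketch of one way to do that (an illustration, not a drop-in solution; a real handler should also stick to async-signal-safe calls rather than printf/exit), remember the last three arrival times in microseconds and compare the newest against the oldest:
#include <sys/time.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

static int64_t arrivals[3];   /* ring buffer of the last three arrival times */
static int     count;

void sig_handler(int signo)
{
    (void)signo;
    struct timeval t;
    gettimeofday(&t, NULL);
    int64_t now_us = (int64_t)t.tv_sec * 1000000 + t.tv_usec;

    arrivals[count % 3] = now_us;
    count++;
    /* After storing, arrivals[count % 3] holds the oldest of the last three. */
    if (count >= 3 && now_us - arrivals[count % 3] <= 3000000LL) {
        printf("Shutting down\n");
        exit(1);
    }
}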