Inconsistent busy waiting time in C

I have a character that should "eat" for 200 microseconds, "sleep" for 200 microseconds, and repeat, until they die, which happens if they haven't eaten for time_to_die microseconds.
In the code snippet below, the struct timeval named time_to_die in main has its tv_usec member set to 1000 microseconds, so I expect the program to loop forever.
After some time, one execution of the function busy_wait takes around 5 times more than it is supposed to (enough to kill the character), and the character dies. I want to know why and how to fix it.
#include <sys/time.h>
#include <stdio.h>
#include <stdlib.h>
#include <stdbool.h>
struct timeval time_add_microseconds(struct timeval time, long long microseconds)
{
time.tv_usec += microseconds;
while (time.tv_usec >= 1000000)
{
time.tv_sec += 1;
time.tv_usec -= 1000000;
}
return (time);
}
short time_compare(struct timeval time_one, struct timeval time_two)
{
if (time_one.tv_sec != time_two.tv_sec)
{
if (time_one.tv_sec > time_two.tv_sec)
return (1);
else
return (-1);
}
else
{
if (time_one.tv_usec > time_two.tv_usec)
return (1);
else if (time_one.tv_usec == time_two.tv_usec)
return (0);
else
return (-1);
}
}
// Wait until interval in microseconds has passed or until death_time is reached.
void busy_wait(int interval, struct timeval last_eaten_time, struct timeval time_to_die)
{
struct timeval time;
struct timeval end_time;
struct timeval death_time;
gettimeofday(&time, NULL);
end_time = time_add_microseconds(time, interval);
death_time = time_add_microseconds(last_eaten_time, time_to_die.tv_sec * 1000000ULL + time_to_die.tv_usec);
while (time_compare(time, end_time) == -1)
{
gettimeofday(&time, NULL);
if (time_compare(time, death_time) >= 0)
{
printf("%llu died\n", time.tv_sec * 1000000ULL + time.tv_usec);
exit(1);
}
}
}
int main(void)
{
struct timeval time;
struct timeval time_to_die = { .tv_sec = 0, .tv_usec = 1000};
struct timeval last_eaten_time = { .tv_sec = 0, .tv_usec = 0 };
while (true)
{
gettimeofday(&time, NULL);
printf("%llu eating\n", time.tv_sec * 1000000ULL + time.tv_usec);
last_eaten_time = time;
busy_wait(200, last_eaten_time, time_to_die);
gettimeofday(&time, NULL);
printf("%llu sleeping\n", time.tv_sec * 1000000ULL + time.tv_usec);
busy_wait(200, last_eaten_time, time_to_die);
}
}
Note: Other than the system functions I already used in my code, I'm only allowed to use usleep, write, and malloc and free.
Thank you for your time.

after some time, one execution of the function busy_wait takes around 5 times more than it is supposed to (enough to kill the character), and the character dies. I want to know why and how to fix it.
There are multiple possibilities. Many of them revolve around the fact that there is more going on in your computer while the program runs than just the program running. Unless you're running on a realtime operating system, the bottom line is that you can't fix some of the things that could cause such behavior.
For example, your program shares the CPU with the system itself and with all the other processes running on it. That may be more processes than you think: right now, there are over 400 live processes on my 6-core workstation. When there are more processes demanding CPU time than there are CPUs to run them on, the system will split the available time among the contending processes, preemptively suspending processes when their turns expire.
If your program happens to be preempted during a busy wait, then chances are good that substantially more than 200 μs of wall time will elapse before it is next scheduled any time on a CPU. Time slice size is usually measured in milliseconds, and on a general-purpose OS, there is no upper (or lower) bound on the gap between the end of one of your program's time slices and the start of its next one.
As I observed in comments, you are using gettimeofday to measure elapsed time, yet that is not on your list of allowed system functions. One possible resolution of that inconsistency is that you're not meant to perform measurements of elapsed time, but rather to assume / simulate. For example, usleep() is on the list, so perhaps you're meant to usleep() instead of busy wait, and assume that the sleep time is exactly what was requested. Or perhaps you're meant to just adjust an internal time counter instead of actually pausing execution at all.
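For illustration only (this may or may not be what your assignment expects), here is a minimal sketch of that "simulate the time" idea: sleep with usleep() and advance a simulated clock by exactly the requested amount, using the 200 µs and 1000 µs values from your question, so scheduling jitter can never "kill" the character.
#include <unistd.h>   /* usleep() */
#include <stdio.h>
#include <stdbool.h>

int main(void)
{
    long long sim_time_us = 0;          /* simulated clock, in microseconds */
    const long long time_to_die = 1000; /* microseconds without eating */
    long long last_eaten = 0;

    while (true)
    {
        printf("%lld eating\n", sim_time_us);
        last_eaten = sim_time_us;
        usleep(200);            /* real pause, duration not guaranteed */
        sim_time_us += 200;     /* but the simulated clock advances exactly 200 us */

        printf("%lld sleeping\n", sim_time_us);
        usleep(200);
        sim_time_us += 200;

        if (sim_time_us - last_eaten >= time_to_die)
        {
            printf("%lld died\n", sim_time_us);
            return 1;
        }
    }
}
With the simulated clock, only 400 µs ever "passes" between meals, so the death condition is never reached, regardless of how the real process is scheduled.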

Why
Ultimately: because an interrupt or trap is delivered to the CPU core executing your program, which transfers control to the operating system.
Some common causes:
The operating system runs its process scheduling using a hardware timer which fires at regular intervals, i.e. the OS is running some kind of fair scheduler and has to check whether your process's time is up for now.
Some device in your system needs to be serviced. E.g. a packet arrived over the network, your sound card's output buffer is running low and must be refilled, etc.
Your program voluntarily makes a request to the operating system that transfers control to it. Basically: anytime you make a syscall, the kernel may have to wait for I/O, or it may decide that it's time to schedule a different process, or both. In your case, the calls to printf will at some point result in a write(2) syscall that will end up performing some I/O.
What to do
Cause 3 can be avoided by ensuring that no syscalls are made, i.e. never trapping into the OS.
Causes 1 and 2 are very difficult to get rid of completely. You're essentially looking for a real-time operating system (RTOS). An OS like Linux can approximate that by placing processes in different scheduling domains (SCHED_FIFO/SCHED_RR). If you're willing to switch to a kernel that is tailored towards real-time applications, you can get even further. You can also look into topics like "CPU isolation".
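For illustration only (your stated constraints don't allow it, and it requires privileges such as CAP_SYS_NICE), this is roughly how a Linux process requests a real-time scheduling policy; it reduces, but does not eliminate, the interruptions discussed above.
#include <sched.h>
#include <stdio.h>

int main(void)
{
    struct sched_param sp = { .sched_priority = 50 };  /* 1..99 for SCHED_FIFO */

    /* Request the real-time FIFO policy for the calling process (pid 0 = self). */
    if (sched_setscheduler(0, SCHED_FIFO, &sp) == -1)
        perror("sched_setscheduler");
    else
        printf("now running under SCHED_FIFO\n");
    return 0;
}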

Just to illustrate the printf timings, but also the gettimeofday timings, that I mentioned in comments, I tried two things.
#include <sys/time.h>
#include <stdio.h>
#include <stdlib.h>
#include <stdbool.h>
int main(void)
{
struct timeval time;
long long histo[5000];
for(int i=0; i<5000; i++){
gettimeofday(&time, NULL);
histo[i]=time.tv_sec * 1000000ULL + time.tv_usec;
printf("%lld\n", histo[i]); /* printf inside the measured loop, so the terminal's output time is included */
}
long long min=1000000000;
long long max=0;
for(int i=1; i<5000; i++){
long long dt=histo[i]-histo[i-1];
if(dt<min) min=dt;
if(dt>max) max=dt;
if(dt>800) printf("%d %lld\n", i, dt);
}
printf("avg: %f min=%lld max=%lld\n", (histo[4999]-histo[0])/5000.0, min, max);
}
So all it does here is loop over 5000 printf/gettimeofday iterations, and then measure (after the loop) the mean, min and max interval.
On my X11 terminal (Sakura), the average is 8 μs per loop, with a min of 1 μs and a max of 3790 μs! (Other measurements I made show that this 3000-or-so μs outlier is also the only interval over 200 μs. In other words, it never goes over 200 μs. Except when it does, "bigly".)
So, on average, everything goes well. But once in a while, a printf takes almost 4 ms (not enough, if it doesn't happen several times in a row, for a human user to even notice it, but way more than enough to make your code fail).
On my console (no X11) terminal (an 80x25 terminal, which may or may not use the text mode of my graphics card, I was never sure), the mean is 272 μs, the min 193 μs, and the max 1100 μs. Which (in retrospect) is not surprising: this terminal is slow, but simpler, so less prone to "surprises".
But, well, it fails faster, because the probability of going over 200 μs is very high, even if not by much: more than half of the loops take more than 200 μs.
I also tried measurements on a loop without printf.
#include <sys/time.h>
#include <stdio.h>
#include <stdlib.h>
#include <stdbool.h>
int main(void)
{
struct timeval time;
long long old=0;
long long ntot=0;
long long nov10=0;
long long nov100=0;
long long nov1000=0;
for(int i=0;;i++){
gettimeofday(&time, NULL);
long long t=time.tv_sec * 1000000ULL + time.tv_usec;
if(old){
long long dt=t-old;
ntot++;
if(dt>10){
nov10++;
if(dt>100){
nov100++;
if(dt>1000) nov1000++;
}
}
}
if(i%10000==0){
printf("tot=%lld >10=%lld >100=%lld >1000=%lld\n", ntot, nov10, nov100, nov1000);
old=0;
}else{
old=t;
}
}
}
So, it measures something that I could pompously call a "logarithmic histogram" of timings.
This time, it is independent of the terminal (I reset old to 0 each time I print something, so that those iterations don't count).
Result
tot=650054988 >10=130125 >100=2109 >1000=2
So, sure, 99.98% of the time, gettimeofday takes less than 10 μs.
But around 3 times per million calls (and in your code, a million calls means only a few seconds), it takes more than 100 μs. And twice in my experiment, it took more than 1000 μs. Just gettimeofday, not printf.
Obviously, it's not gettimeofday itself that took 1 ms. Simply, something more important occurred on my system, and my process had to wait 1 ms to get some CPU time from the scheduler.
And bear in mind that this is on my computer. And on my computer, your code runs fine (well, those measurements show that it would have failed eventually if I had let it run as long as I let those measurements run).
On your computer, those numbers (the 2 counts >1000) are certainly much higher, so it fails very quickly.
Preemptive multitasking OSes are simply not made to guarantee execution times in microseconds. You have to use a real-time OS for that (RT-Linux, for example, if it still exists; I haven't used it since 2002).

As pointed out in the other answers, there is no way to make this code work as I expected without a major change in its design, given my constraints. So I changed my code to not depend on gettimeofday for determining whether a philosopher died, or for the time value to print. Instead, I just add 200 μs to time every time my character eats or sleeps. This does feel like a cheap trick, because while I display the correct system wall time at the start, my time variable will drift away from the system time more and more as the program runs, but I guess this is what was wanted from me.
#include <sys/time.h>
#include <stdio.h>
#include <stdlib.h>
#include <stdbool.h>
struct timeval time_add_microseconds(struct timeval time, long long microseconds)
{
time.tv_usec += microseconds;
while (time.tv_usec >= 1000000)
{
time.tv_sec += 1;
time.tv_usec -= 1000000;
}
return (time);
}
short time_compare(struct timeval time_one, struct timeval time_two)
{
if (time_one.tv_sec != time_two.tv_sec)
{
if (time_one.tv_sec > time_two.tv_sec)
return (1);
else
return (-1);
}
else
{
if (time_one.tv_usec > time_two.tv_usec)
return (1);
else if (time_one.tv_usec == time_two.tv_usec)
return (0);
else
return (-1);
}
}
bool is_destined_to_die(int interval, struct timeval current_time, struct timeval last_eaten_time, struct timeval time_to_die)
{
current_time = time_add_microseconds(current_time, interval);
if ((current_time.tv_sec * 1000000ULL + current_time.tv_usec) - (last_eaten_time.tv_sec * 1000000ULL + last_eaten_time.tv_usec) >= time_to_die.tv_sec * 1000000ULL + time_to_die.tv_usec)
return (true);
else
return (false);
}
// Wait until interval in microseconds has passed or until death_time is reached.
void busy_wait(int interval, struct timeval current_time, struct timeval last_eaten_time, struct timeval time_to_die)
{
struct timeval time;
struct timeval end_time;
struct timeval death_time;
gettimeofday(&time, NULL);
if (is_destined_to_die(interval, current_time, last_eaten_time, time_to_die))
{
death_time = time_add_microseconds(last_eaten_time, time_to_die.tv_sec * 1000000ULL + time_to_die.tv_usec);
while (time_compare(time, death_time) == -1)
gettimeofday(&time, NULL);
printf("%llu died\n", time.tv_sec * 1000000ULL + time.tv_usec);
exit(1);
}
end_time = time_add_microseconds(time, interval);
while (time_compare(time, end_time) == -1)
gettimeofday(&time, NULL);
}
int main(void)
{
struct timeval time;
struct timeval time_to_die = { .tv_sec = 0, .tv_usec = 1000};
struct timeval last_eaten_time = { .tv_sec = 0, .tv_usec = 0 };
gettimeofday(&time, NULL);
while (true)
{
printf("%llu eating\n", time.tv_sec * 1000000ULL + time.tv_usec);
last_eaten_time = time;
busy_wait(200, time, last_eaten_time, time_to_die);
time = time_add_microseconds(time, 200);
printf("%llu sleeping\n", time.tv_sec * 1000000ULL + time.tv_usec);
busy_wait(200, time, last_eaten_time, time_to_die);
time = time_add_microseconds(time, 200);
}
}

Related

What does this "alarm" error mean?

I am trying to get the memory consumed by an algorithm, so I have created a group of functions that would stop the execution in periods of 10 milliseconds to let me read the memory using the getrusage() function. The idea is to set a timer that will raise an alarm signal to the process which will be received by a handler medir_memoria().
However, the program stops in the middle with this message:
[1] 3267 alarm ./memory_test
The code for reading the memory is:
#include "../include/rastreador_memoria.h"
#if defined(__linux__) || defined(__APPLE__) || (defined(__unix__) && !defined(_WIN32))
#include <stdio.h>
#include <stdlib.h>
#include <sys/time.h>
#include <signal.h>
#include <sys/resource.h>
static long max_data_size;
static long max_stack_size;
void medir_memoria (int sig)
{
struct rusage info_memoria;
if (getrusage(RUSAGE_SELF, &info_memoria) < 0)
{
perror("Not reading memory");
}
max_data_size = (info_memoria.ru_idrss > max_data_size) ? info_memoria.ru_idrss : max_data_size;
max_stack_size = (info_memoria.ru_isrss > max_stack_size) ? info_memoria.ru_isrss : max_stack_size;
signal(SIGALRM, medir_memoria);
}
void rastrear_memoria ()
{
struct itimerval t;
t.it_interval.tv_sec = 0;
t.it_interval.tv_usec = 10;
t.it_value.tv_sec = 0;
t.it_value.tv_usec = 10;
max_data_size = 0;
max_stack_size = 0;
setitimer(ITIMER_REAL, &t,0);
signal(SIGALRM, medir_memoria);
}
void detener_rastreo ()
{
signal(SIGALRM, SIG_DFL);
printf("Data: %ld\nStack: %ld\n", max_data_size, max_stack_size);
}
#else
#endif
The main() function calls all of them in this order:
1. rastrear_memoria()
2. the function of the algorithm I am testing
3. detener_rastreo()
How can I solve this? What does that alarm message mean?
First, setting an itimer to ring every 10 µs is optimistic, since ten microseconds is really a small interval of time. Try with 500 µs (or perhaps even 20 milliseconds, i.e. 20000 µs) instead of 10 µs first.
stop the execution in periods of 10 milliseconds
You have coded for a period of 10 microseconds, not milliseconds!
Then, you should exchange the two lines and code:
signal(SIGALRM, medir_memoria);
setitimer(ITIMER_REAL, &t,0);
so that a signal handler is set before the first itimer rings.
I guess your first itimer tick fires before the signal handler is installed. Read carefully signal(7) and time(7). The default handling of SIGALRM is process termination.
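For illustration, here is a sketch of the corrected setup order, reusing the handler and globals from your question (using sigaction() rather than signal(), and a 20 ms period rather than 10 µs):
void rastrear_memoria (void)
{
    struct sigaction sa;
    struct itimerval t;

    /* 1. Install the handler first, so the first tick cannot terminate the process. */
    sigemptyset(&sa.sa_mask);
    sa.sa_handler = medir_memoria;
    sa.sa_flags = SA_RESTART;
    sigaction(SIGALRM, &sa, NULL);

    max_data_size = 0;
    max_stack_size = 0;

    /* 2. Only then arm the timer: 20 ms, i.e. 20000 us, not 10 us. */
    t.it_interval.tv_sec = 0;
    t.it_interval.tv_usec = 20000;
    t.it_value = t.it_interval;
    setitimer(ITIMER_REAL, &t, NULL);
}
With sigaction() the handler stays installed, so the signal(SIGALRM, medir_memoria) call at the end of medir_memoria becomes unnecessary.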
BTW, a better way to measure the time used by some function is clock_gettime(2) or clock(3). Thanks to vdso(7) tricks, clock_gettime is able to get some clock in less than 50 nanoseconds on my i5-4690S desktop computer.
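For interval measurements, a minimal sketch with clock_gettime and CLOCK_MONOTONIC (which is not affected by clock adjustments; older glibc may need -lrt) might look like this:
#include <stdio.h>
#include <time.h>

/* Sketch: measure elapsed wall-clock time of a piece of code with clock_gettime(). */
int main(void)
{
    struct timespec t0, t1;

    clock_gettime(CLOCK_MONOTONIC, &t0);
    /* ... code being measured ... */
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double elapsed = (double)(t1.tv_sec - t0.tv_sec)
                   + (double)(t1.tv_nsec - t0.tv_nsec) / 1e9;
    printf("elapsed: %.9f s\n", elapsed);
    return 0;
}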
trying to get the memory consumed
You could consider using proc(5) e.g. opening, reading, and closing quickly /proc/self/status or /proc/self/statm etc....
(I guess you are on Linux)
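For example, a rough sketch (assuming Linux) that reads the first two fields of /proc/self/statm, which are the total program size and the resident set size in pages; note that stdio is not async-signal-safe, so this should be called outside any signal handler:
#include <stdio.h>
#include <unistd.h>

/* Sketch: read total program size and resident set size (in pages) from /proc/self/statm. */
static long resident_pages(void)
{
    long size = 0, resident = 0;
    FILE *f = fopen("/proc/self/statm", "r");

    if (!f)
        return -1;
    if (fscanf(f, "%ld %ld", &size, &resident) != 2)
        resident = -1;
    fclose(f);
    return resident;
}

int main(void)
{
    printf("resident: %ld pages of %ld bytes\n",
           resident_pages(), sysconf(_SC_PAGESIZE));
    return 0;
}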
BTW, your measurements will disappoint you: notice that quite often free(3) doesn't release memory to the kernel (through munmap(2)...) but simply marks and manages that zone so it can be reused by future malloc(3) calls. You might consider mallinfo(3) or malloc_info(3), but notice that they are not async-signal-safe, so they cannot be called from inside a signal handler.
(I tend to believe that your approach is deeply flawed)

Measuring time with time.h?

I'm trying to measure the time it takes to run a command using my own command interpreter, but is the time correct? When I run a command, it reports a time much longer than expected:
miniShell>> pwd
/home/dac/.clion11/system/cmake/generated/c0a6fa89/c0a6fa89/Debug
Execution time 1828 ms
I'm using gettimeofday, as can be seen from the code. Is it wrong somewhere, and should it be changed so that the timing looks reasonable?
If I make a minimal example, then it looks and runs like this:
#include <sys/stat.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/time.h>
int main(int argc, char *argv[]) {
long time;
struct timeval time_start;
struct timeval time_end;
gettimeofday(&time_start, NULL);
printf("run program>> ");
gettimeofday(&time_end, NULL);
time = (time_end.tv_sec-time_start.tv_sec)*1000000 + time_end.tv_usec-time_start.tv_usec;
printf("Execution time %ld ms\n", time); /*Print out the execution time*/
return (0);
}
Then I run it
/home/dac/.clion11/system/cmake/generated/c0a6fa89/c0a6fa89/Debug/oslab
run program>> Execution time 14 ms
Process finished with exit code 0
The above 14 ms seems reasonable, why is the time so long for my command?
The tv_usec in struct timeval is a time in microseconds, not milliseconds.
You compute the time incorrectly. tv_usec, where the u stands for the Greek lowercase letter μ ("mu"), holds a number of microseconds. Fix the formula this way:
gettimeofday(&time_end, NULL);
time = (((time_end.tv_sec - time_start.tv_sec) * 1000000LL) +
time_end.tv_usec - time_start.tv_usec) / 1000;
printf("Execution time %ld ms\n", time); /* Print out the execution time*/
It is preferable to make the computation in 64 bits to avoid overflows if long is 32 bits and the elapsed time can exceed about 36 minutes.
If you want to preserve the maximum precision, keep the computation in microseconds and print the number of milliseconds with a decimal point:
gettimeofday(&time_end, NULL);
time = (time_end.tv_sec - time_start.tv_sec) * 1000000 +
time_end.tv_usec - time_start.tv_usec;
printf("Execution time %ld.%03ld ms\n", time / 1000, time % 1000);

Multithreading and O3 compilation in C

I'm writing code that tries to detect when a signal changes from 0 to 1 as fast as possible (real-time application). I have the following two functions:
void *SensorSignalReader (void *arg)
{
char buffer[30];
struct timeval tv;
time_t curtime;
srand(time(NULL));
while (1) {
int t = rand() % 10 + 1; // wait up to 1 sec in 10ths
usleep(t*100000);
int r = rand() % N;
signalArray[r] ^= 1;
if (signalArray[r]) {
changedSignal = r;
gettimeofday(&tv, NULL);
timeStamp[r] = tv;
curtime = tv.tv_sec;
strftime(buffer,30,"%d-%m-%Y %T.",localtime(&curtime));
printf("Changed %5d at Time %s%ld\n",r,buffer,tv.tv_usec);
}
}
}
void *ChangeDetector (void *arg)
{
char buffer[30];
struct timeval tv;
time_t curtime;
int index;
while (1) {
while (changedSignal == -1) {} // issues with O3
gettimeofday(&tv, NULL);
index = changedSignal;
changedSignal = -1;
curtime = tv.tv_sec;
if(timeStamp[index].tv_usec>tv.tv_usec){
tv.tv_usec += 1000000;
tv.tv_sec--;
}
strftime(buffer,30,"%d-%m-%Y %T.",localtime(&curtime));
printf("Detcted %5d at Time %s%ld after %ld.%06ld sec\n---\n",index,buffer,tv.tv_usec,
tv.tv_sec - timeStamp[index].tv_sec,
tv.tv_usec - timeStamp[index].tv_usec);
}
}
I have 2 pthreads running at all times, one for each function.
When I compile normally (gcc -lpthread), this works as intended: SensorSignalReader changes changedSignal and ChangeDetector detects it as the while loop breaks. When I compile with the -O3 flag, though, it seems like the variable changedSignal never changes. The while loop in ChangeDetector runs forever while signals are being changed constantly. If I put a printf("%d\n", changedSignal); inside there, it prints -1 all the time. There is something done by -O3 that I do not understand. What is it?
It's very likely your program is experiencing undefined behaviour and you just got lucky when you didn't have optimisations switched on.
changedSignal appears to be a shared resource so you need to use atomic operations or some form of locking to ensure that threads won't simultaneously access it.
You can use the pthread functions for locking or gcc's builtin functions for atomic operations.
Edit: As pointed out by Olaf, it looks like you're trying to implement a producer-consumer pattern. You might want to try implementing this by using condition variables instead of trying to reinvent it.
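For example, here is a minimal sketch of the flag as a C11 atomic (one possible direction, not the only fix; a condition variable avoids the busy spin entirely):
#include <stdatomic.h>

/* Shared flag: -1 means "no change pending". */
static atomic_int changedSignal = -1;

/* Producer (SensorSignalReader) publishes the changed index:
 *     atomic_store(&changedSignal, r);
 * Consumer (ChangeDetector) takes the index and resets the flag in one step: */
static int take_changed_signal(void)
{
    int index;
    do {
        index = atomic_exchange(&changedSignal, -1);  /* returns previous value, stores -1 */
    } while (index == -1);                            /* still a busy spin, but now well-defined */
    return index;
}
With the atomic type, the compiler can no longer assume the value never changes under -O3, and the read-and-reset is race-free.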

Best way to efficiently pause execution in C

I have a state machine implementation in a library which runs on Linux. The main loop of the program is to simply wait until enough time has passed to require the next execution of the state machine.
At the moment I have a loop which is similar to the following pseudo-code:
while( 1 )
{
while( StateTicks() > 0 )
StateMachine();
Pause( 10ms );
}
Where StateTicks may return a tick every 50ms or so. The shorter I make the Pause() the more CPU time I use in the program.
Is there a better way to test for a period of time passing, perhaps based on Signals? I'd rather halt execution until StateTicks() is > 0 rather than have the Pause() call at all.
Under the hood of the state machine implementation, StateTicks uses clock_gettime(PFT_CLOCK ...), which works well. I'm keen to keep that timekeeping because if a StateMachine() call takes longer than a state machine tick, this implementation will catch up.
Pause uses nanosleep to achieve a reasonably accurate pause time.
Perhaps this is already the best way, but it doesn't seem particularly graceful.
Create a periodic timer using timer_create(), and have it call sem_post() on a "timer tick semaphore".
To avoid losing ticks, I recommend using a real-time signal, perhaps SIGRTMIN+0 or SIGRTMAX-0. sem_post() is async-signal-safe, so you can safely use it in a signal handler.
Your state machine simply waits on the semaphore; no other timekeeping needed. If you take too long to process a tick, the following sem_wait() will not block, but return immediately. Essentially, the semaphore counts "lost" ticks.
Example code (untested!):
#define _POSIX_C_SOURCE 200809L
#include <semaphore.h>
#include <signal.h>
#include <errno.h>
#include <time.h>
#define TICK_SIGNAL (SIGRTMIN+0)
static timer_t tick_timer;
static sem_t tick_semaphore;
static void tick_handler(int signum, siginfo_t *info, void *context)
{
if (info && info->si_code == SI_TIMER) {
const int saved_errno = errno;
sem_post((sem_t *)info->si_value.sival_ptr);
errno = saved_errno;
}
}
static int tick_setup(const struct timespec interval)
{
struct sigaction act;
struct sigevent evt;
struct itimerspec spec;
if (sem_init(&tick_semaphore, 0, 0))
return errno;
sigemptyset(&act.sa_mask);
act.sa_sigaction = tick_handler;
act.sa_flags = SA_SIGINFO; /* three-argument handler; needed so si_value reaches the handler */
if (sigaction(TICK_SIGNAL, &act, NULL))
return errno;
evt.sigev_notify = SIGEV_SIGNAL;
evt.sigev_signo = TICK_SIGNAL;
evt.sigev_value.sival_ptr = &tick_semaphore;
if (timer_create(CLOCK_MONOTONIC, &evt, &tick_timer))
return errno;
spec.it_interval = interval;
spec.it_value = interval;
if (timer_settime(tick_timer, 0, &spec, NULL))
return errno;
return 0;
}
with the tick loop being simply
if (tick_setup(some_interval))
/* failed, see errno; abort */
while (!sem_wait(&tick_semaphore)) {
/* process tick */
}
If you support more than one concurrent state, the one signal handler suffices. Your state typically would include
timer_t timer;
sem_t semaphore;
struct timespec interval;
and the only tricky thing is to make sure that, when destroying a state, there is no pending timer signal that would still access it.
Because signal delivery will interrupt any blocking I/O in the thread used for the signal delivery, you might wish to set up a special thread in your library to handle the timer tick realtime signals, with the realtime signal blocked in all other threads. You can mark your library initialization function __attribute__((constructor)), so that it is automatically executed prior to main().
Optimally, you should use the same thread that does the tick processing for the signal delivery. Otherwise there will be some small jitter or latency in the tick processing, if the signal was delivered using a different CPU core than the one that is running the tick processing.
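As a rough sketch of that idea (reusing TICK_SIGNAL and tick_semaphore from the example above, and additionally needing <pthread.h>), the dedicated thread can receive the signal synchronously with sigwaitinfo() instead of a handler:
/* Sketch: block TICK_SIGNAL in every thread (do this before creating any threads),
 * and let one dedicated thread receive it synchronously instead of via a handler. */
static void *tick_signal_thread(void *arg)
{
    sigset_t set;
    siginfo_t info;
    (void)arg;

    sigemptyset(&set);
    sigaddset(&set, TICK_SIGNAL);
    while (sigwaitinfo(&set, &info) == TICK_SIGNAL)
        if (info.si_code == SI_TIMER)
            sem_post((sem_t *)info.si_value.sival_ptr);
    return NULL;
}

/* In the library initialization (before other threads exist):
 *     sigset_t set;
 *     sigemptyset(&set);
 *     sigaddset(&set, TICK_SIGNAL);
 *     pthread_sigmask(SIG_BLOCK, &set, NULL);
 *     pthread_create(&tid, NULL, tick_signal_thread, NULL);
 * With the signal blocked everywhere and consumed via sigwaitinfo(), the
 * sigaction() handler in tick_setup() is no longer needed. */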
Basile Starynkevitch's answer jogged my memory about the latencies involved in waiting and signal delivery: If you use nanosleep() and clock_gettime(CLOCK_MONOTONIC,), you can adjust the sleep times to account for the typical latencies.
Here's a quick test program using clock_gettime(CLOCK_MONOTONIC,) and nanosleep():
#define _POSIX_C_SOURCE 200809L
#include <sys/select.h>
#include <time.h>
#include <stdlib.h>
#include <stdio.h>
#include <string.h>
#include <errno.h>
static const long tick_latency = 75000L; /* 0.75 ms */
static const long tick_adjust = 75000L; /* 0.75 ms */
typedef struct {
struct timespec next;
struct timespec tick;
} state;
void state_init(state *const s, const double ticks_per_sec)
{
if (ticks_per_sec > 0.0) {
const double interval = 1.0 / ticks_per_sec;
s->tick.tv_sec = (time_t)interval;
s->tick.tv_nsec = (long)(1000000000.0 * (interval - (double)s->tick.tv_sec));
if (s->tick.tv_nsec < 0L)
s->tick.tv_nsec = 0L;
else
if (s->tick.tv_nsec > 999999999L)
s->tick.tv_nsec = 999999999L;
} else {
s->tick.tv_sec = 0;
s->tick.tv_nsec = 0L;
}
clock_gettime(CLOCK_MONOTONIC, &s->next);
}
static unsigned long count;
double state_tick(state *const s)
{
struct timespec now, left;
/* Next tick. */
s->next.tv_sec += s->tick.tv_sec;
s->next.tv_nsec += s->tick.tv_nsec;
if (s->next.tv_nsec >= 1000000000L) {
s->next.tv_nsec -= 1000000000L;
s->next.tv_sec++;
}
count = 0UL;
while (1) {
/* Get current time. */
clock_gettime(CLOCK_MONOTONIC, &now);
/* Past tick time? */
if (now.tv_sec > s->next.tv_sec ||
(now.tv_sec == s->next.tv_sec &&
now.tv_nsec >= s->next.tv_nsec - tick_latency))
return (double)(now.tv_sec - s->next.tv_sec)
+ (double)(now.tv_nsec - s->next.tv_nsec) / 1000000000.0;
/* Calculate duration to wait */
left.tv_sec = s->next.tv_sec - now.tv_sec;
left.tv_nsec = s->next.tv_nsec - now.tv_nsec - tick_adjust;
if (left.tv_nsec >= 1000000000L) {
left.tv_nsec -= 1000000000L;
left.tv_sec++;
} else
if (left.tv_nsec < -1000000000L) {
left.tv_nsec += 2000000000L;
left.tv_sec += 2;
} else
if (left.tv_nsec < 0L) {
left.tv_nsec += 1000000000L;
left.tv_sec--;
}
count++;
nanosleep(&left, NULL);
}
}
int main(int argc, char *argv[])
{
double rate, jitter;
long ticks, i;
state s;
char dummy;
if (argc != 3 || !strcmp(argv[1], "-h") || !strcmp(argv[1], "--help")) {
fprintf(stderr, "\n");
fprintf(stderr, "Usage: %s [ -h | --help ]\n", argv[0]);
fprintf(stderr, " %s TICKS_PER_SEC TICKS\n", argv[0]);
fprintf(stderr, "\n");
return 1;
}
if (sscanf(argv[1], " %lf %c", &rate, &dummy) != 1 || rate <= 0.0) {
fprintf(stderr, "%s: Invalid tick rate.\n", argv[1]);
return 1;
}
if (sscanf(argv[2], " %ld %c", &ticks, &dummy) != 1 || ticks < 1L) {
fprintf(stderr, "%s: Invalid tick count.\n", argv[2]);
return 1;
}
state_init(&s, rate);
for (i = 0L; i < ticks; i++) {
jitter = state_tick(&s);
if (jitter > 0.0)
printf("Tick %9ld: Delayed %9.6f ms, %lu sleeps\n", i+1L, +1000.0 * jitter, count);
else
if (jitter < 0.0)
printf("Tick %9ld: Premature %9.6f ms, %lu sleeps\n", i+1L, -1000.0 * jitter, count);
else
printf("Tick %9ld: Exactly on time, %lu sleeps\n", i+1L, count);
fflush(stdout);
}
return 0;
}
Above, tick_latency is the number of nanoseconds you're willing to accept a "tick" in advance, and tick_adjust is the number of nanoseconds you subtract from each sleep duration.
The best values for those are highly configuration-specific, and I haven't got a robust method for estimating them. Hardcoding them (to 0.75 ms as above) does not sound too good to me either; perhaps using command-line options or environment variables to let users control them, defaulting to zero, would be better.
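For instance, here is a small sketch of reading such overrides from environment variables; the variable names are invented for this example, and the two globals above would have to lose their const:
#include <stdlib.h>

/* Hypothetical environment overrides; the variable names are invented for this sketch. */
static long env_ns(const char *name, long fallback)
{
    const char *value = getenv(name);
    char *end;
    long ns;

    if (!value || !*value)
        return fallback;
    ns = strtol(value, &end, 10);
    if (*end != '\0' || ns < 0L)
        return fallback;
    return ns;
}

/* e.g. early in main() (after dropping the const from the two globals above):
 *     tick_latency = env_ns("TICK_LATENCY_NS", 0L);
 *     tick_adjust  = env_ns("TICK_ADJUST_NS",  0L);
 */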
Anyway, compiling the above as
gcc -O2 test.c -lrt -o test
and running a 500-tick test at 50Hz tick rate,
./test 50 500 | sort -k 4
shows that on my machine, the ticks are accepted within 0.051 ms (51 µs) of the desired moment. Even reducing the priority does not seem to affect it much. A test using 5000 ticks at 5kHz rate (0.2ms per tick),
nice -n 19 ./test 5000 5000 | sort -k 4
yields similar results -- although I did not bother to check what happens if the machine load changes during a run.
In other words, preliminary tests on a single machine indicates it might be a viable option, so you might wish to test the scheme on different machines and under different loads. It is much more precise than I expected on my own machine (Ubuntu 3.11.0-24-generic on x86_64, running on an AMD Athlon II X4 640 CPU).
This approach has the interesting property that you can easily use a single thread to maintain multiple states, even if they use different tick rates. You only need to check which state has the next tick (earliest ->next time), nanosleep() if that occurs in the future, and process the tick, advancing that state to the next tick.
Questions?
In addition to Nominal Animal's answer:
If the Pause time is several milliseconds, you might use poll(2) or perhaps nanosleep(2) (you might compute the remaining time to sleep, e.g. using clock_gettime(2) with CLOCK_REALTIME ...)
If you care about the fact that StateMachine may take several milliseconds (or a large fraction of a millisecond) and you want exactly a 10 millisecond period, consider perhaps using a poll-based event loop which uses the Linux-specific timerfd_create(2).
See also time(7), and this, that answers (to question about poll etc...)
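A minimal sketch of that timerfd approach (Linux-specific, error handling omitted), which plugs naturally into a poll() loop:
#include <sys/timerfd.h>
#include <poll.h>
#include <stdint.h>
#include <stdio.h>
#include <time.h>
#include <unistd.h>

int main(void)
{
    /* A 10 ms periodic timer delivered through a file descriptor. */
    int fd = timerfd_create(CLOCK_MONOTONIC, 0);
    struct itimerspec its = {
        .it_interval = { .tv_sec = 0, .tv_nsec = 10 * 1000 * 1000 },
        .it_value    = { .tv_sec = 0, .tv_nsec = 10 * 1000 * 1000 },
    };
    timerfd_settime(fd, 0, &its, NULL);

    struct pollfd pfd = { .fd = fd, .events = POLLIN };
    for (int i = 0; i < 10; i++) {
        uint64_t expirations;
        poll(&pfd, 1, -1);                           /* sleeps until the next tick */
        read(fd, &expirations, sizeof expirations);  /* > 1 means ticks were missed */
        printf("tick (%llu expirations)\n", (unsigned long long)expirations);
        /* StateMachine() would run here */
    }
    close(fd);
    return 0;
}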

Implement time delay in C

I don't know exactly how to word a search for this.. so I haven't had any luck finding anything.. :S
I need to implement a time delay in C.
for example I want to do some stuff, then wait say 1 minute, then continue on doing stuff.
Did that make sense? Can anyone help me out?
In standard C (C99), you can use time() to do this, something like:
#include <time.h>
:
void waitFor (unsigned int secs) {
unsigned int retTime = time(0) + secs; // Get finishing time.
while (time(0) < retTime); // Loop until it arrives.
}
By the way, this assumes time() returns a 1-second resolution value. I don't think that's mandated by the standard so you may have to adjust for it.
In order to clarify, this is the only way I'm aware of to do this with ISO C99 (and the question is tagged with nothing more than "C" which usually means portable solutions are desirable although, of course, vendor-specific solutions may still be given).
By all means, if you're on a platform that provides a more efficient way, use it. As several comments have indicated, there may be specific problems with a tight loop like this, with regard to CPU usage and battery life.
Any decent time-slicing OS would be able to drop the dynamic priority of a task that continuously uses its full time slice but the battery power may be more problematic.
However C specifies nothing about the OS details in a hosted environment, and this answer is for ISO C and ISO C alone (so no use of sleep, select, Win32 API calls or anything like that).
And keep in mind that POSIX sleep can be interrupted by signals. If you are going to go down that path, you need to do something like:
int finishing = 0; // set finishing in signal handler
// if you want to really stop.
void sleepWrapper (unsigned int secs) {
unsigned int left = secs;
while ((left > 0) && (!finishing)) // Don't continue if signal has
left = sleep (left); // indicated exit needed.
}
Here is how you can do it on most desktop systems:
#ifdef _WIN32
#include <windows.h>
#else
#include <unistd.h>
#endif
void wait( int seconds )
{ // Pretty crossplatform, both ALL POSIX compliant systems AND Windows
#ifdef _WIN32
Sleep( 1000 * seconds );
#else
sleep( seconds );
#endif
}
int
main( int argc, char **argv)
{
int running = 3;
while( running )
{ // do something
--running;
wait( 3 );
}
return 0; // OK
}
Here is how you can do it on a microcomputer / processor w/o timer:
int wait_loop0 = 10000;
int wait_loop1 = 6000;
// for microprocessor without timer, if it has a timer refer to vendor documentation and use it instead.
void
wait( int seconds )
{ // this function needs to be finetuned for the specific microprocessor
int i, j, k;
for(i = 0; i < seconds; i++)
{
for(j = 0; j < wait_loop0; j++)
{
for(k = 0; k < wait_loop1; k++)
{ // waste function, volatile makes sure it is not being optimized out by compiler
int volatile t = 120 * j * i + k;
t = t + 5;
}
}
}
}
int
main( int argc, char **argv)
{
int running = 3;
while( running )
{ // do something
--running;
wait( 3 );
}
return 0; // OK
}
The wait-loop variables must be fine-tuned; those values worked reasonably closely on my computer, but CPU frequency scaling makes this very imprecise on a modern desktop system, so don't use this approach there unless you are on bare metal with no timer available.
Check sleep(3) man page or MSDN for Sleep
Although many implementations have the time function return the current time in seconds, there is no guarantee that every implementation will do so (e.g. some may return milliseconds rather than seconds). As such, a more portable solution is to use the difftime function.
difftime is guaranteed by the C standard to return the difference in time in seconds between two time_t values. As such we can write a portable time delay function which will run on all compliant implementations of the C standard.
#include <time.h>
void delay(double dly){
/* save start time */
const time_t start = time(NULL);
time_t current;
do{
/* get current time */
time(&current);
/* break loop when the requested number of seconds have elapsed */
}while(difftime(current, start) < dly);
}
One caveat with the time and difftime functions is that the C standard never specifies a granularity. Most implementations have a granularity of one second. While this is all right for delays lasting several seconds, our delay function may wait too long for delays lasting under one second.
There is a portable standard C alternative: the clock function.
The clock function returns the implementation’s best approximation to the processor time used by the program since the beginning of an implementation-defined era related only to the program invocation. To determine the time in seconds, the value returned by the clock function should be divided by the value of the macro CLOCKS_PER_SEC.
The clock function solution is quite similar to our time function solution:
#include <time.h>
void delay(double dly){
/* save start clock tick */
const clock_t start = clock();
clock_t current;
do{
/* get current clock tick */
current = clock();
/* break loop when the requested number of seconds have elapsed */
}while((double)(current-start)/CLOCKS_PER_SEC < dly);
}
There is a caveat in this case similar to that of time and difftime: the granularity of the clock function is left to the implementation. For example, machines with 32-bit values for clock_t with a resolution in microseconds may end up wrapping the value returned by clock after 2147 seconds (about 36 minutes).
As such, consider using the time and difftime implementation of the delay function for delays lasting at least one second, and the clock implementation for delays lasting under one second.
A final word of caution: clock returns processor time rather than calendar time; clock may not correspond with the actual elapsed time (e.g. if the process sleeps).
For delays as large as one minute, sleep() is a nice choice.
If someday, you want to pause on delays smaller than one second, you may want to consider poll() with a timeout.
Both are POSIX.
There are no sleep() functions in the pre-C11 C Standard Library, but POSIX does provide a few options.
The POSIX function sleep() (unistd.h) takes an unsigned int argument for the number of seconds desired to sleep. Although this is not a Standard Library function, it is widely available, and glibc appears to support it even when compiling with stricter settings like --std=c11.
The POSIX function nanosleep() (time.h) takes two pointers to timespec structures as arguments, and provides finer control over the sleep duration. The first argument specifies the delay duration. If the second argument is not a null pointer, it holds the time remaining if the call is interrupted by a signal handler.
Programs that use the nanosleep() function may need to include a feature test macro in order to compile. The following code sample will not compile on my linux system without a feature test macro when I use a typical compiler invocation of gcc -std=c11 -Wall -Wextra -Wpedantic.
POSIX once had a usleep() function (unistd.h) that took a useconds_t argument to specify sleep duration in microseconds. This function also required a feature test macro when used with strict compiler settings. Alas, usleep() was made obsolete with POSIX.1-2001 and should no longer be used. It is recommended that nanosleep() be used now instead of usleep().
#define _POSIX_C_SOURCE 199309L // feature test macro for nanosleep()
#include <stdio.h>
#include <unistd.h> // for sleep()
#include <time.h> // for nanosleep()
int main(void)
{
// use unsigned sleep(unsigned seconds)
puts("Wait 5 sec...");
sleep(5);
// use int nanosleep(const struct timespec *req, struct timespec *rem);
puts("Wait 2.5 sec...");
struct timespec ts = { .tv_sec = 2, // seconds to wait
.tv_nsec = 5e8 }; // additional nanoseconds
nanosleep(&ts, NULL);
puts("Bye");
return 0;
}
Addendum:
C11 does have the header threads.h providing thrd_sleep(), which works identically to nanosleep(). GCC did not support threads.h until 2018, with the release of glibc 2.28. It has been difficult in general to find implementations with support for threads.h (Clang did not support it for a long time, but I'm not sure about the current state of affairs there). You will have to use this option with care.
Try sleep(int number_of_seconds)
sleep(int) works as a good delay. For a minute:
//Doing some stuff...
sleep(60); //Freeze for A minute
//Continue doing stuff...
Is it timer?
For WIN32 try http://msdn.microsoft.com/en-us/library/ms687012%28VS.85%29.aspx
You can simply call the delay() function (where your compiler provides one). So if you want to delay the process by 3 seconds, call delay(3000)...
If you are certain you want to wait and never get interrupted, then use sleep in POSIX or Sleep in Windows. In POSIX, sleep takes a time in seconds, so if you want a shorter time there are varieties like usleep(), which uses microseconds. Sleep in Windows takes milliseconds; it is rare to need finer granularity than that.
It may be that you wish to wait a period of time but want to allow interrupts, maybe in the case of an emergency. sleep can be interrupted by signals but there is a better way of doing it in this case.
What I have actually found in practice is that what you do is wait for an event or a condition variable with a timeout.
In Windows your call is WaitForSingleObject. In POSIX it is pthread_cond_timedwait.
In Windows you can also use WaitForSingleObjectEx and then you can actually "interrupt" your thread with any queued task by calling QueueUserAPC. WaitForSingleObject(Ex) will return a code determining why it exited, so you will know when it returns a "TIMEDOUT" status that it did indeed timeout. You set the Event it is waiting for when you want it to terminate.
With pthread_cond_timedwait you can signal or broadcast the condition variable. (If several threads are waiting on the same one, you will need to broadcast to wake them all up.) Each time it loops, it should check the condition. Your thread can get the current time and see if the deadline has passed, or it can look to see if some condition has been met, to determine what to do. If you have some kind of queue, you can check it. (Your thread will automatically hold the mutex it used to wait on the condition variable, so when it checks the condition it has sole access to it.)
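For the POSIX side, here is a rough sketch of such a timed wait (error handling omitted; it assumes a shared stop flag protected by the mutex, set and signalled or broadcast by another thread):
#include <errno.h>
#include <pthread.h>
#include <stdbool.h>
#include <time.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  cond = PTHREAD_COND_INITIALIZER;
static bool stop = false;   /* another thread sets this under the mutex and signals/broadcasts cond */

/* Wait up to 'seconds' seconds, but wake early if 'stop' becomes true.
 * Returns true if interrupted by the stop request, false if the timeout elapsed. */
static bool wait_or_stop(unsigned int seconds)
{
    struct timespec deadline;
    int rc = 0;
    bool interrupted;

    clock_gettime(CLOCK_REALTIME, &deadline);   /* timedwait uses CLOCK_REALTIME by default */
    deadline.tv_sec += seconds;

    pthread_mutex_lock(&lock);
    while (!stop && rc != ETIMEDOUT)
        rc = pthread_cond_timedwait(&cond, &lock, &deadline);
    interrupted = stop;
    pthread_mutex_unlock(&lock);
    return interrupted;
}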
// Provides ANSI C method of delaying x milliseconds
#include <stdio.h>
#include <stdlib.h>
#include <time.h>
void delayMillis(unsigned long ms) {
clock_t start_ticks = clock();
unsigned long millis_ticks = CLOCKS_PER_SEC/1000;
while (clock()-start_ticks < ms*millis_ticks) {
}
}
/*
* Example output:
*
* CLOCKS_PER_SEC:[1000000]
*
* Test Delay of 800 ms....
*
* start[2054], end[802058],
* elapsedSec:[0.802058]
*/
int testDelayMillis() {
printf("CLOCKS_PER_SEC:[%lu]\n\n", (unsigned long)CLOCKS_PER_SEC);
clock_t start_t, end_t;
start_t = clock();
printf("Test Delay of 800 ms....\n");
delayMillis(800);
end_t = clock();
double elapsedSec = end_t/(double)CLOCKS_PER_SEC;
printf("\nstart[%lu], end[%lu], \nelapsedSec:[%f]\n", (unsigned long)start_t, (unsigned long)end_t, elapsedSec);
return 0;
}
int main() {
testDelayMillis();
}
C11 has a function specifically for this:
#include <threads.h>
#include <time.h>
#include <stdio.h>
void sleep(time_t seconds) {
struct timespec time;
time.tv_sec = seconds;
time.tv_nsec = 0;
while (thrd_sleep(&time, &time)) {}
}
int main() {
puts("Sleeping for 5 seconds...");
sleep(5);
puts("Done!");
return 0;
}
Note that this is only available starting in glibc 2.28.
For C compiled with GCC on Windows:
#include <windows.h>
then use Sleep(); // Sleep() with a capital S, not sleep() with a lowercase s.
// Sleep(1000) sleeps for 1 second (the argument is in milliseconds).
Clang supports sleep() as well; sleep(1) gives a 1-second delay/wait.
For short delays (say, some microseconds) on Linux OS, you can use "usleep":
// C Program to perform short delays
#include <unistd.h>
#include <stdio.h>
int main(){
printf("Hello!\n");
usleep(1000000); // For a 1-second delay
printf("Bye!\n);
return 0;
system("timeout /t 60"); // waits 60s. this is only for windows vista,7,8
system("ping -n 60 127.0.0.1 >nul"); // waits 60s. for all windows
Write this code :
#include <stdio.h>

void delay(int x)
{ volatile int i, j; /* volatile so the compiler does not optimize the empty loops away */
for(i=0;i<x;i++){for(j=0;j<200000;j++){}}
}
int main()
{
while(1) {
delay(500);
printf("Host name");
printf("\n");}
return 0;
}
}

Resources