To preface, I am on a Unix (linux) system using gcc.
What I am stuck on is how to accurately implement a way to run a section of code for a certain amount of time.
Here is an example of something I have been working with:
struct timeb start, check;
int64_t duration = 10000;
int64_t elapsed = 0;
ftime(&start);
while ( elapsed < duration ) {
    // do a set of tasks
    ftime(&check);
    elapsed += ((check.time - start.time) * 1000) + (check.millitm - start.millitm);
}
I was thinking this would carry on for 10000 ms, or 10 seconds, but it didn't; it finished almost instantly. I was basing this off other questions such as How to get the time elapsed in C in milliseconds? (Windows). But then I thought that if, upon the first call of ftime, the struct holds time = 1, millitm = 999 and on the second call time = 2, millitm = 01, it would calculate the elapsed time as being 1002 milliseconds. Is there something I am missing?
Also the suggestions in the various stackoverflow questions, ftime() and gettimeofday(), are listed as deprecated or legacy.
I believe I could convert the start time into milliseconds, and the check time into milliseconds, then subtract start from check. But milliseconds since the epoch requires 42 bits and I'm trying to keep everything in the loop as efficient as possible.
What approach could I take towards this?
The code calculates the elapsed time incorrectly.
// elapsed += ((check.time - start.time) * 1000) + (check.millitm - start.millitm);
elapsed = ((check.time - start.time) * (int64_t)1000) + (check.millitm - start.millitm);
There is some concern about check.millitm - start.millitm. Given that millitm is an unsigned short (see the declaration below), it can be expected to be promoted to int before the subtraction occurs, so the difference will be in the range [-1000 ... 1000].
struct timeb {
time_t time;
unsigned short millitm;
short timezone;
short dstflag;
};
IMO, more robust code would handle the ms conversion in a separate helper function. This matches OP's "I believe I could convert the start time into milliseconds, and the check time into milliseconds, then subtract start from check."
int64_t timeb_to_ms(struct timeb *t) {
    return (int64_t)t->time * 1000 + t->millitm;
}

struct timeb start;
ftime(&start);
int64_t start_ms = timeb_to_ms(&start);
int64_t duration = 10000 /* ms */;
int64_t elapsed = 0;

while (elapsed < duration) {
    // do a set of tasks
    struct timeb check;
    ftime(&check);
    elapsed = timeb_to_ms(&check) - start_ms;
}
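Since the question notes that ftime() and gettimeofday() are listed as obsolescent, the same loop can also be written against clock_gettime() with CLOCK_MONOTONIC, which is immune to wall-clock adjustments. A rough sketch (not part of the answer above; the helper name timespec_to_ms is made up for illustration):

#include <stdint.h>
#include <time.h>

int64_t timespec_to_ms(const struct timespec *t) {
    return (int64_t)t->tv_sec * 1000 + t->tv_nsec / 1000000;
}

void run_for_10s(void) {
    struct timespec start, check;
    clock_gettime(CLOCK_MONOTONIC, &start);
    int64_t start_ms = timespec_to_ms(&start);
    int64_t duration = 10000 /* ms */;
    int64_t elapsed = 0;

    while (elapsed < duration) {
        // do a set of tasks
        clock_gettime(CLOCK_MONOTONIC, &check);
        elapsed = timespec_to_ms(&check) - start_ms;
    }
}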
If you want efficiency, let the system send you a signal when a timer expires.
Traditionally, you can set a timer with a resolution in seconds with the alarm(2) syscall.
The system then sends you a SIGALRM when the timer expires. The default disposition of that signal is to terminate.
If you handle the signal, you can longjmp(2) from the handler to another place.
I don't think it gets much more efficient than SIGALRM + longjmp (with an asynchronous timer, your code basically runs undisturbed without having to do any extra checks or calls).
Below is an example for you:
#define _GNU_SOURCE /* for sysv_signal() */
#include <unistd.h>
#include <stdio.h>
#include <signal.h>
#include <setjmp.h>

static jmp_buf jmpbuf;

void hndlr(int sig);
void loop(void);

int main(void)
{
    /* sysv_signal handlers get reset after a signal is caught and handled */
    if (SIG_ERR == sysv_signal(SIGALRM, hndlr)) {
        perror("couldn't set SIGALRM handler");
        return 1;
    }
    /* the handler will jump you back here */
    setjmp(jmpbuf);
    alarm(3 /* seconds */); /* alarm() cannot fail */
    loop();
    return 0;
}

void hndlr(int sig)
{
    (void)sig;
    puts("Caught SIGALRM");
    puts("RESET");
    longjmp(jmpbuf, 1);
}

void loop(void)
{
    int i;
    for (i = 0; ; i++) {
        // print every 100-millionth iteration
        if (0 == i % 100000000) {
            printf("%d\n", i);
        }
    }
}
If alarm(2) isn't enough, you can use timer_create(2) as EOF suggests.
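For reference, a minimal sketch of the timer_create() route (not from the original answer; it assumes a one-shot timer delivering SIGALRM after 10000 ms, and may need -lrt on older glibc):

#include <signal.h>
#include <stdio.h>
#include <string.h>
#include <time.h>

static volatile sig_atomic_t expired = 0;

static void on_alarm(int sig) { (void)sig; expired = 1; }

int main(void) {
    struct sigaction sa;
    memset(&sa, 0, sizeof sa);
    sa.sa_handler = on_alarm;
    sigaction(SIGALRM, &sa, NULL);

    struct sigevent sev;
    memset(&sev, 0, sizeof sev);
    sev.sigev_notify = SIGEV_SIGNAL;
    sev.sigev_signo = SIGALRM;

    timer_t timerid;
    if (timer_create(CLOCK_MONOTONIC, &sev, &timerid) == -1) {
        perror("timer_create");
        return 1;
    }

    struct itimerspec its;
    memset(&its, 0, sizeof its);
    its.it_value.tv_sec = 10;   /* one-shot, fires after 10000 ms */
    timer_settime(timerid, 0, &its, NULL);

    while (!expired) {
        /* do a set of tasks */
    }
    puts("timer expired");
    return 0;
}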
Related
What is the method to use a timer in C? I need to wait up to 500 ms for a job. Please mention any good way to do this. I used sleep(3), but that method does not do any work during that time; I have something that keeps trying to get input until the time runs out.
Here's a solution I used (it needs #include <time.h>):
int msec = 0, trigger = 10; /* 10ms */
int iterations = 0;
clock_t before = clock();

do {
    /*
     * Do something to busy the CPU just here while you drink a coffee
     * Be sure this code will not take more than `trigger` ms
     */
    clock_t difference = clock() - before;
    msec = difference * 1000 / CLOCKS_PER_SEC;
    iterations++;
} while ( msec < trigger );

printf("Time taken %d seconds %d milliseconds (%d iterations)\n",
       msec/1000, msec%1000, iterations);
You can use the clock_t type and the clock() function from time.h.
Store the start time in a clock_t by calling clock(), and check the elapsed time by comparing the difference between the stored time and the current time.
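A minimal sketch of that idea (note that clock() measures CPU time, not wall-clock time, so it only tracks elapsed time while the loop keeps the CPU busy):

#include <time.h>

double elapsed_ms_since(clock_t start) {
    return (double)(clock() - start) * 1000.0 / CLOCKS_PER_SEC;
}

/* usage:
 *   clock_t start = clock();
 *   while (elapsed_ms_since(start) < 500.0) { ... do work ... }
 */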
Yes, you need a loop. If you already have a main loop (most GUI event-driven stuff does) you can probably stick your timer into that. Use:
#include <time.h>
time_t my_t, fire_t;
Then (for times over 1 second), initialize your timer by reading the current time:
my_t = time(NULL);
Add the number of seconds your timer should wait and store the result in fire_t. A time_t is essentially an integer count of seconds, so you may need to cast it.
Inside your loop do another
my_t = time(NULL);
If (my_t > fire_t), consider the timer fired and do the stuff you want there. That will probably include resetting it with another fire_t = time(NULL) + seconds_to_wait for next time.
A time_t is a somewhat antiquated unix method of storing time as the number of seconds since midnight 1/1/1970 but it has many advantages. For times less than 1 second you need to use gettimeofday() (microseconds) or clock_gettime() (nanoseconds) and deal with a struct timeval or struct timespec which is a time_t and the microseconds or nanoseconds since that 1 second mark. Making a timer works the same way except when you add your time to wait you need to remember to manually do the carry (into the time_t) if the resulting microseconds or nanoseconds value goes over 1 second. Yes, it's messy. See man 2 time, man gettimeofday, man clock_gettime.
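As a sketch of that manual carry (an illustration, not from the answer; the function name is made up):

#include <time.h>

/* Return a fire time `ms_to_wait` milliseconds from now, carrying the
 * nanoseconds into tv_sec by hand when they overflow one second. */
struct timespec make_fire_time(long ms_to_wait) {
    struct timespec t;
    clock_gettime(CLOCK_MONOTONIC, &t);
    t.tv_sec  += ms_to_wait / 1000;
    t.tv_nsec += (ms_to_wait % 1000) * 1000000L;
    if (t.tv_nsec >= 1000000000L) {   /* manual carry */
        t.tv_sec  += 1;
        t.tv_nsec -= 1000000000L;
    }
    return t;
}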
sleep(), usleep(), nanosleep() have a hidden benefit. You see it as pausing your program, but what they really do is release the CPU for that amount of time. Repeatedly polling by reading the time and comparing to the done time (are we there yet?) will burn a lot of CPU cycles which may slow down other programs running on the same machine (and use more electricity/battery). It's better to sleep() most of the time then start checking the time.
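A rough sketch of that "sleep most of the time, then check" pattern (illustrative only; it assumes a deadline expressed in milliseconds of CLOCK_MONOTONIC time):

#include <stdint.h>
#include <time.h>

void wait_until_ms(int64_t deadline_ms) {
    for (;;) {
        struct timespec now;
        clock_gettime(CLOCK_MONOTONIC, &now);
        int64_t remaining = deadline_ms
            - ((int64_t)now.tv_sec * 1000 + now.tv_nsec / 1000000);
        if (remaining <= 0)
            break;                        /* deadline reached */
        if (remaining > 2) {
            /* release the CPU for most of the remaining time */
            struct timespec ts = { .tv_sec = (remaining - 1) / 1000,
                                   .tv_nsec = ((remaining - 1) % 1000) * 1000000L };
            nanosleep(&ts, NULL);
        }
        /* last couple of milliseconds: just loop and re-check */
    }
}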
If you're trying to sleep and do work at the same time you need threads.
Maybe these examples will help you:
#include <stdio.h>
#include <time.h>
#include <stdlib.h>

/*
    Implementation of a simple timeout
    Input: number of milliseconds to wait
    Usage:
        setTimeout(1000)  - timeout of 1 second
        setTimeout(10100) - timeout of 10 seconds and 100 milliseconds
*/
void setTimeout(int milliseconds)
{
    // if milliseconds is less than or equal to 0,
    // simply return from the function without raising an error
    if (milliseconds <= 0) {
        fprintf(stderr, "Count of milliseconds for timeout is less than or equal to 0\n");
        return;
    }

    // current time in milliseconds
    int milliseconds_since = clock() * 1000 / CLOCKS_PER_SEC;

    // time in milliseconds at which this timeout should return
    int end = milliseconds_since + milliseconds;

    // busy-wait until the end time is reached
    do {
        milliseconds_since = clock() * 1000 / CLOCKS_PER_SEC;
    } while (milliseconds_since <= end);
}

int main()
{
    // input from the user for the delay in seconds
    int delay;
    printf("Enter delay: ");
    scanf("%d", &delay);

    // count down before launching the rocket, while the delay is greater than 0
    do {
        // erase the previous line and display the remaining delay
        printf("\033[ATime left for rocket launch: %d\n", delay);
        // wait one second between updates
        setTimeout(1000);
        // decrease the delay by 1
        delay--;
    } while (delay >= 0);

    // a string for displaying the rocket
    char rocket[] = "-->";
    // a buffer for the full trace of the rocket plus the rocket itself
    char *rocket_trace = (char *) malloc(100 * sizeof(char));

    // display the trace of the rocket from start to end
    int i;
    char passed_way[100] = "";
    for (i = 0; i <= 50; i++) {
        setTimeout(25);
        sprintf(rocket_trace, "%s%s", passed_way, rocket);
        passed_way[i] = ' ';
        printf("\033[A");
        printf("| %s\n", rocket_trace);
    }
    free(rocket_trace);

    // erase the line and write a new one
    printf("\033[A");
    printf("\033[2K");
    puts("Good luck!");

    return 0;
}
Compile the file, run it, and delete it afterwards (my preference):
$ gcc timeout.c -o timeout && ./timeout && rm timeout
Try running it yourself to see the result.
Notes:
Testing environment
$ uname -a
Linux wlysenko-Aspire 3.13.0-37-generic #64-Ubuntu SMP Mon Sep 22 21:28:38 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
$ gcc --version
gcc (Ubuntu 4.8.5-2ubuntu1~14.04.1) 4.8.5
Copyright (C) 2015 Free Software Foundation, Inc.
This is free software; see the source for copying conditions. There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
I'm relatively new to C programming and I'm working on a project which needs very accurate timing; therefore I tried to write something to create a timestamp with millisecond precision.
It seems to work, but my question is whether this is the right way to do it, or whether there is a much easier way. Here is my code:
#include <stdio.h>
#include <time.h>

void wait(int milliseconds)
{
    clock_t start = clock();
    while (1)
        if (clock() - start >= milliseconds)
            break;
}

int main()
{
    time_t now;
    clock_t milli;
    int waitMillSec = 2800, seconds, milliseconds = 0;
    struct tm *ptm;

    now = time(NULL);
    ptm = gmtime(&now);
    printf("time before: %d:%d:%d:%d\n", ptm->tm_hour, ptm->tm_min, ptm->tm_sec, milliseconds);

    /* wait until next full second */
    while (now == time(NULL));

    milli = clock();
    /* DO SOMETHING HERE */
    /* for testing, wait a user-defined period */
    wait(waitMillSec);
    milli = clock() - milli;

    /* create timestamp with milliseconds precision */
    seconds = milli / CLOCKS_PER_SEC;
    milliseconds = milli % CLOCKS_PER_SEC;
    now = now + seconds;
    ptm = gmtime(&now);
    printf("time after: %d:%d:%d:%d\n", ptm->tm_hour, ptm->tm_min, ptm->tm_sec, milliseconds);

    return 0;
}
The following code seems likely to provide millisecond granularity:
#include <windows.h>
#include <stdio.h>
int main(void) {
    SYSTEMTIME t;
    GetSystemTime(&t); // or GetLocalTime(&t)
    printf("The system time is: %02d:%02d:%02d.%03d\n",
           t.wHour, t.wMinute, t.wSecond, t.wMilliseconds);
    return 0;
}
This is based on http://msdn.microsoft.com/en-us/library/windows/desktop/ms724950%28v=vs.85%29.aspx. The above code snippet was tested with CYGWIN on Windows 7.
For Windows 8, there is GetSystemTimePreciseAsFileTime, which "retrieves the current system date and time with the highest possible level of precision (<1us)."
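As a sketch of how that call might be used (Windows 8 or later; FILETIME counts 100-nanosecond intervals since January 1, 1601, so dividing by 10 gives microseconds):

#include <windows.h>
#include <stdio.h>

int main(void) {
    FILETIME ft;
    ULARGE_INTEGER t;
    GetSystemTimePreciseAsFileTime(&ft);
    t.LowPart = ft.dwLowDateTime;
    t.HighPart = ft.dwHighDateTime;
    printf("Microseconds since 1601-01-01: %llu\n",
           (unsigned long long)(t.QuadPart / 10));
    return 0;
}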
Your original approach would probably be ok 99.99% of the time (ignoring one minor bug, described below). Your approach is:
Wait for the next second to start, by repeatedly calling time() until the value changes.
Save that value from time().
Save the value from clock().
Calculate all subsequent times using the current value of clock() and the two saved values.
Your minor bug was that you had the first two steps reversed.
But even with this fixed, this is not guaranteed to work 100%, because there is no atomicity. Two problems:
Your code loops time() until you are into the next second. But how far are you into it? It could be 1/2 a second, or even several seconds (e.g. if you are running a debugger with a breakpoint).
Then you call clock(). But this saved value has to 'match' the saved value of time(). If these two calls are almost instantaneous, as they usually are, then this is fine. But Windows (and Linux) time-slice, and so there is no guarantee.
Another issue is the granularity of clock. If CLOCKS_PER_SEC is 1000, as seems to be the case on your system, then of course the best you can do is 1 msec. But it can be worse than that: on Windows the effective tick is typically around 15 msecs. You could improve this by replacing clock with QueryPerformanceCounter(), as in the answer to timespec equivalent for windows, but this may be otiose, given the first two problems.
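For completeness, a sketch of timing an interval with QueryPerformanceCounter(), which is not tied to the scheduler tick (illustrative, not part of the original answer):

#include <windows.h>
#include <stdio.h>

int main(void) {
    LARGE_INTEGER freq, t0, t1;
    QueryPerformanceFrequency(&freq);
    QueryPerformanceCounter(&t0);
    /* ... work to be timed ... */
    QueryPerformanceCounter(&t1);
    double ms = (double)(t1.QuadPart - t0.QuadPart) * 1000.0 / (double)freq.QuadPart;
    printf("Elapsed: %.3f ms\n", ms);
    return 0;
}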
Clock periods are not at all guaranteed to be in milliseconds. You need to explicitly convert the output of clock() to milliseconds.
clock_t t1 = clock();
// do something
clock_t t2 = clock();
long millis = (t2 - t1) * (1000.0 / CLOCKS_PER_SEC);
Since you are on Windows, why don't you just use Sleep()?
I want to measure the time from the start to the end of the function in a loop. This difference will be used to set the number of iterations of the inner while-loop, which does some work that isn't important here.
I want to time the function like this:
#include <wiringPi.h>
#include <time.h>
#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>
#include <unistd.h>
#define BILLION 1E9
float hz = 1000;
long int nsPerTick = BILLION/hz;
double unprocessed = 1;
struct timespec now;
struct timespec last;
clock_gettime(CLOCK_REALTIME, &last);
[ ... ]
while (1)
{
    clock_gettime(CLOCK_REALTIME, &now);
    double diff = (last.tv_nsec - now.tv_nsec);
    unprocessed = unprocessed + (diff / nsPerTick);
    clock_gettime(CLOCK_REALTIME, &last);
    while (unprocessed >= 1) {
        unprocessed--;
        DO SOME RANDOM MAGIC;
    }
}
The difference between the timers is always negative. I was told this was where the error was:
if ((last.tv_nsec - now.tv_nsec) < 0) {
    double diff = 1000000000 + last.tv_nsec - now.tv_nsec;
}
else {
    double diff = (last.tv_nsec - now.tv_nsec);
}
But still, my diff variable is always negative, like "-1095043244" (though the time spent in the function is of course positive).
What's wrong?
Your first issue is that you have last.tv_nsec - now.tv_nsec, which is the wrong way round.
last.tv_nsec is in the past (let's say it's set to 1), and now.tv_nsec will always be later (for example, 8ns later, so it's 9). In that case, last.tv_nsec - now.tv_nsec == 1 - 9 == -8.
The other issue is that tv_nsec isn't the time in nanoseconds: for that, you'd need to multiply the time in seconds by a billion and add that. So to get the difference in ns between now and last, you want:
((now.tv_sec - last.tv_sec) * ONE_BILLION) + (now.tv_nsec - last.tv_nsec)
(N.B. I'm still a little surprised that although now.tv_nsec and last.tv_nsec are both less than a billion, subtracting one from the other gives a value less than -1000000000, so there may yet be something I'm missing here.)
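Wrapped up as a helper, the formula above might look like this (a sketch; the function name is made up):

#include <stdint.h>
#include <time.h>

#define ONE_BILLION 1000000000LL

/* Signed difference now - last in nanoseconds, using both fields. */
int64_t timespec_diff_ns(const struct timespec *now, const struct timespec *last) {
    return (int64_t)(now->tv_sec - last->tv_sec) * ONE_BILLION
         + (now->tv_nsec - last->tv_nsec);
}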
I was just investigating timing on the Pi, with a similar approach and similar problems. My thoughts are:
You don't have to use double. In fact, you also don't need nanoseconds, as the clock on the Pi has 1 microsecond accuracy anyway (it's the way Broadcom did it). I suggest you use gettimeofday() to get microseconds instead of nanoseconds. Then the computation is easy, it's just:
(1000 * 1000 * number of seconds) + number of microseconds
which you can simply calculate as unsigned int.
I've implemented the convenient API for this:
typedef struct
{
struct timeval startTimeVal;
} TIMER_usecCtx_t;
void TIMER_usecStart(TIMER_usecCtx_t* ctx)
{
gettimeofday(&ctx->startTimeVal, NULL);
}
unsigned int TIMER_usecElapsedUs(TIMER_usecCtx_t* ctx)
{
unsigned int rv;
/* get current time */
struct timeval nowTimeVal;
gettimeofday(&nowTimeVal, NULL);
/* compute diff */
rv = 1000000 * (nowTimeVal.tv_sec - ctx->startTimeVal.tv_sec) + nowTimeVal.tv_usec - ctx->startTimeVal.tv_usec;
return rv;
}
And the usage is:
TIMER_usecCtx_t timer;
TIMER_usecStart(&timer);
while (1)
{
    if (TIMER_usecElapsedUs(&timer) > yourDelayInMicroseconds)
    {
        doSomethingHere();
        TIMER_usecStart(&timer);
    }
}
Also notice the gettime() calls on Pi take almost 1 [us] to complete. So, if you need to call gettime() a lot and need more accuracy, go for some more advanced methods of getting time... I've explained more about it in this short article about Pi get-time calls
Well, I don't know C, but if it's a timing issue on a Raspberry Pi it might have something to do with the lack of an RTC (real time clock) on the chip.
You should not be storing last.tv_nsec - now.tv_nsec in a double.
If you look at the documentation of time.h, you can see that tv_nsec is stored as a long. So you will need something along the lines of:
long diff = end.tv_nsec - begin.tv_nsec;
With that being said, only comparing the nanoseconds can go wrong. You also need to look at the number of seconds. So, to convert everything to seconds, you can use this:
long nanosec_diff = end.tv_nsec - begin.tv_nsec;
time_t sec_diff = end.tv_sec - begin.tv_sec; // need <sys/types.h> for time_t
double diff_in_seconds = sec_diff + nanosec_diff / 1000000000.0;
Also, make sure you are always subtracting the start time from the end time (or else your time will still be negative).
And there you go!
gcc (GCC) 4.6.0 20110419 (Red Hat 4.6.0-5)
I am trying to get the start and end times and the difference between them.
The function I have is for creating an API for our existing hardware.
The API wait_events takes one argument, a timeout in milliseconds. So I am trying to get the start time before the while loop, using time() to get the number of seconds. Then, after each iteration of the loop, I get the time difference and compare that difference with the timeout.
Many thanks for any suggestions,
/* Wait for an event up to a specified time out.
 * If an event occurs before the time out return 0
 * If the wait times out before an event occurs return -1 */
int wait_events(int timeout_ms)
{
    time_t start = 0;
    time_t end = 0;
    double time_diff = 0;
    /* convert to seconds */
    int timeout = timeout_ms / 100;

    /* Get the initial time */
    start = time(NULL);

    while (TRUE) {
        if (open_device_flag == TRUE) {
            device_evt.event_id = EVENT_DEV_OPEN;
            return TRUE;
        }

        /* Get the end time after each iteration */
        end = time(NULL);

        /* Get the difference between times */
        time_diff = difftime(start, end);
        if (time_diff > timeout) {
            /* timed out before getting an event */
            return FALSE;
        }
    }
}
The calling function will look like this:
int main(void)
{
    #define TIMEOUT 500 /* 1/2 sec */

    while (TRUE) {
        if (wait_events(TIMEOUT) != 0) {
            /* Process incoming event */
            printf("Event fired\n");
        }
        else {
            printf("Event timed out\n");
        }
    }

    return 0;
}
=============== EDIT with updated results ==================
1) With no sleep -> 99.7% - 100% CPU
2) Setting usleep(10) -> 25% CPU
3) Setting usleep(100) -> 13% CPU
4) Setting usleep(1000) -> 2.6% CPU
5) Setting usleep(10000) -> 0.3 - 0.7% CPU
You're overcomplicating it - simplified:
time_t start = time(NULL);
for (;;) {
    // try something
    if (time(NULL) > start + 5) {
        printf("5s timeout!\n");
        break;
    }
}
time_t is in general just an int or long int (depending on your platform) counting the number of seconds since January 1st, 1970.
Side note:
int timeout = timeout_ms / 1000;
One second consists of 1000 milliseconds.
Edit - another note:
You'll most likely have to ensure that the other thread(s) and/or event handling can happen, so include some kind of thread inactivity (using sleep(), nanosleep() or whatever).
Without calling a Sleep() function, this is a really bad design: your loop will use 100% of the CPU. Even if you are using threads, your other threads won't have much time to run, as this thread will use many CPU cycles.
You should design something like that:
while (true) {
    Sleep(100); // let's say you want a precision of 100 ms
    // Do the compare time stuff here
}
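On Linux the same idea might be sketched with nanosleep() instead of the Windows Sleep() call (illustrative only, assuming <time.h> is included):

for (;;) {
    struct timespec ts = { .tv_sec = 0, .tv_nsec = 100 * 1000000L }; /* 100 ms */
    nanosleep(&ts, NULL);   /* give the CPU back for ~100 ms */
    // Do the compare time stuff here
}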
If you need precise timing and are using different threads/processes, use Mutexes (semaphores with an increment/decrement of 1) or Critical Sections to make sure the time comparison in your function is not interrupted by another process/thread of your own.
I believe your Red Hat is a System V derivative, so you can sync using IPC.
I want to calculate the time in milliseconds taken by the execution of some part of my program. I've been looking online, but there's not much info on this topic. Any of you know how to do this?
Best way to answer is with an example:
#include <sys/time.h>
#include <stdlib.h>
#include <stdio.h>
#include <math.h>
#include <time.h>   /* for localtime() and strftime() */

/* Return 1 if the difference is negative, otherwise 0. */
int timeval_subtract(struct timeval *result, struct timeval *t2, struct timeval *t1)
{
    long int diff = (t2->tv_usec + 1000000 * t2->tv_sec) - (t1->tv_usec + 1000000 * t1->tv_sec);
    result->tv_sec = diff / 1000000;
    result->tv_usec = diff % 1000000;

    return (diff < 0);
}

void timeval_print(struct timeval *tv)
{
    char buffer[30];
    time_t curtime;

    printf("%ld.%06ld", tv->tv_sec, tv->tv_usec);
    curtime = tv->tv_sec;
    strftime(buffer, 30, "%m-%d-%Y %T", localtime(&curtime));
    printf(" = %s.%06ld\n", buffer, tv->tv_usec);
}

int main()
{
    struct timeval tvBegin, tvEnd, tvDiff;

    // begin
    gettimeofday(&tvBegin, NULL);
    timeval_print(&tvBegin);

    // lengthy operation
    int i, j;
    for (i = 0; i < 999999L; ++i) {
        j = sqrt(i);
    }

    // end
    gettimeofday(&tvEnd, NULL);
    timeval_print(&tvEnd);

    // diff
    timeval_subtract(&tvDiff, &tvEnd, &tvBegin);
    printf("%ld.%06ld\n", tvDiff.tv_sec, tvDiff.tv_usec);

    return 0;
}
Another option (at least on some UNIX systems) is clock_gettime() and related functions. These allow access to various realtime clocks, and you can select one of the higher-resolution ones and throw away the resolution you don't need.
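For example, the available resolution can be inspected with clock_getres() before picking a clock (a small sketch, not part of the original answer):

#include <stdio.h>
#include <time.h>

int main(void) {
    struct timespec res;
    if (clock_getres(CLOCK_REALTIME, &res) == 0)
        printf("CLOCK_REALTIME resolution:  %ld ns\n", res.tv_nsec);
    if (clock_getres(CLOCK_MONOTONIC, &res) == 0)
        printf("CLOCK_MONOTONIC resolution: %ld ns\n", res.tv_nsec);
    return 0;
}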
The gettimeofday function returns the time with microsecond precision (if the platform can support that, of course):
The gettimeofday() function shall obtain the current time, expressed as seconds and microseconds since the Epoch, and store it in the timeval structure pointed to by tp. The resolution of the system clock is unspecified.
C libraries have a function to let you get the system time. You can calculate elapsed time after you capture the start and stop times.
The function is called gettimeofday() and you can look at the man page to find out what to include and how to use it.
On Windows, you can just do this:
DWORD dwTickCount = GetTickCount();
// Perform some things.
printf("Code took: %lums\n", GetTickCount() - dwTickCount);
Not the most general/elegant solution, but nice and quick when you need it.