Self-correcting periodic timer using gettimeofday() - timer

I have a loop which runs every X usecs, which consists of doing some I/O then sleeping for the remainder of the X usecs. To (roughly) calculate the sleep time, all I'm doing is taking a timestamp before and after the I/O and subtracting the difference from X. Here is the function I'm using for the timestamp:
long long getus ()
{
    struct timeval time;
    gettimeofday(&time, NULL);
    return (long long) (time.tv_sec + time.tv_usec);
}
As you can imagine, this starts to drift pretty fast and the actual time between I/O bursts is usually quite a few ms longer than X.
To try and make it a little more accurate, I thought maybe if I keep a record of the previous starting timestamp, every time I start a new cycle I can calculate how long the previous cycle took (the time between this starting timestamp and the previous one). Then, I know how much longer than X it was, and I can modify my sleep for this cycle to compensate.
Here is how I'm trying to implement it:
long long start, finish, offset, previous, remaining_usecs;
long long delaytime_us = 1000000;

/* Initialise previous timestamp as 1000000us ago */
previous = getus() - delaytime_us;

while(1)
{
    /* starting timestamp */
    start = getus();

    /* here is where I would do some I/O */

    /* calculate how much to compensate */
    offset = (start - previous) - delaytime_us;
    printf("(%lld - %lld) - %lld = %lld\n",
           start, previous, delaytime_us, offset);

    previous = start;

    finish = getus();

    /* calculate to our best ability how long we spent on I/O.
     * We'll try and compensate for its inaccuracy next time around! */
    remaining_usecs = (delaytime_us - (finish - start)) - offset;
    printf("start=%lld,finish=%lld,offset=%lld,previous=%lld\nsleeping for %lld\n",
           start, finish, offset, previous, remaining_usecs);

    usleep(remaining_usecs);
}
It appears to work on the first iteration of the loop; however, after that things get messed up.
Here's the output for 5 iterations of the loop:
(1412452353 - 1411452348) - 1000000 = 5
start=1412452353,finish=1412458706,offset=5,previous=1412452353
sleeping for 993642
(1412454788 - 1412452353) - 1000000 = -997565
start=1412454788,finish=1412460652,offset=-997565,previous=1412454788
sleeping for 1991701
(1412454622 - 1412454788) - 1000000 = -1000166
start=1412454622,finish=1412460562,offset=-1000166,previous=1412454622
sleeping for 1994226
(1412457040 - 1412454622) - 1000000 = -997582
start=1412457040,finish=1412465861,offset=-997582,previous=1412457040
sleeping for 1988761
(1412457623 - 1412457040) - 1000000 = -999417
start=1412457623,finish=1412463533,offset=-999417,previous=1412457623
sleeping for 1993507
The first line of output shows how the previous cycle time was calculated. It appears that the first two timestamps are basically 1000000us apart (1412452353 - 1411452348 = 1000005). After that, though, the spacing between starting timestamps, along with the offset, stops looking reasonable.
Does anyone know what I'm doing wrong here?
EDIT: I would also welcome suggestions for better ways to get an accurate timer and still be able to sleep during the delay!

After some more research I've discovered two things wrong here:
Firstly, I'm calculating the timestamp wrong. getus() should return like this:
return (long long) time.tv_sec * 1000000 + time.tv_usec;
And secondly, I should be storing the timestamp in an unsigned long long or uint64_t.
So getus() should look like this:
uint64_t getus ()
{
    struct timeval time;
    gettimeofday(&time, NULL);
    return (uint64_t) time.tv_sec * 1000000 + time.tv_usec;
}
I won't actually be able to test this until tomorrow, so I will report back.
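In the meantime, one suggestion for the EDIT above: rather than computing a sleep remainder each cycle, you can sleep until an absolute deadline, which avoids cumulative drift by construction. This is only a sketch of that idea using clock_nanosleep() with TIMER_ABSTIME on Linux (not the poster's code; the one-second period and the I/O placeholder are assumptions):

#define _POSIX_C_SOURCE 200112L
#include <stdio.h>
#include <time.h>

#define PERIOD_NS 1000000000LL   /* one second, like delaytime_us above */

int main(void)
{
    struct timespec next;
    clock_gettime(CLOCK_MONOTONIC, &next);

    while (1) {
        /* do some I/O here */

        /* advance the deadline by exactly one period... */
        next.tv_sec  += PERIOD_NS / 1000000000LL;
        next.tv_nsec += PERIOD_NS % 1000000000LL;
        if (next.tv_nsec >= 1000000000L) {
            next.tv_nsec -= 1000000000L;
            next.tv_sec  += 1;
        }

        /* ...and sleep until that absolute point in time. Any overshoot
         * in one cycle is absorbed automatically by the next one. */
        clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME, &next, NULL);
    }
    return 0;
}

Because each deadline is advanced from the previous deadline rather than from "now", a late cycle simply gets a shorter sleep the next time around, which is exactly the compensation the code above tries to do by hand.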

Related

timestamp in c with milliseconds precision

I'm relatively new to C programming and I'm working on a project which needs to be very time-accurate; therefore I tried to write something to create a timestamp with millisecond precision.
It seems to work, but my question is whether this is the right way to do it, or whether there is a much easier way. Here is my code:
#include <stdio.h>
#include <time.h>

void wait(int milliseconds)
{
    clock_t start = clock();
    while(1)
        if(clock() - start >= milliseconds)
            break;
}

int main()
{
    time_t now;
    clock_t milli;
    int waitMillSec = 2800, seconds, milliseconds = 0;
    struct tm * ptm;

    now = time(NULL);
    ptm = gmtime(&now);
    printf("time before: %d:%d:%d:%d\n", ptm->tm_hour, ptm->tm_min, ptm->tm_sec, milliseconds);

    /* wait until next full second */
    while(now == time(NULL));
    milli = clock();

    /* DO SOMETHING HERE */
    /* for testing, wait a user-defined period */
    wait(waitMillSec);

    milli = clock() - milli;

    /* create timestamp with milliseconds precision */
    seconds = milli / CLOCKS_PER_SEC;
    milliseconds = milli % CLOCKS_PER_SEC;
    now = now + seconds;
    ptm = gmtime(&now);
    printf("time after: %d:%d:%d:%d\n", ptm->tm_hour, ptm->tm_min, ptm->tm_sec, milliseconds);

    return 0;
}
The following code seems likely to provide millisecond granularity:
#include <windows.h>
#include <stdio.h>
int main(void) {
    SYSTEMTIME t;
    GetSystemTime(&t); // or GetLocalTime(&t)
    printf("The system time is: %02d:%02d:%02d.%03d\n",
           t.wHour, t.wMinute, t.wSecond, t.wMilliseconds);
    return 0;
}
This is based on http://msdn.microsoft.com/en-us/library/windows/desktop/ms724950%28v=vs.85%29.aspx. The above code snippet was tested with CYGWIN on Windows 7.
For Windows 8, there is GetSystemTimePreciseAsFileTime, which "retrieves the current system date and time with the highest possible level of precision (<1us)."
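For completeness, here is a minimal sketch of how that call might be used (untested here; it assumes a Windows 8 or later SDK, and FileTimeToSystemTime is used only to format the result):

#include <windows.h>
#include <stdio.h>

int main(void) {
    FILETIME ft;
    SYSTEMTIME st;

    // Windows 8+; returns the time in 100-nanosecond units since 1601-01-01 (UTC).
    GetSystemTimePreciseAsFileTime(&ft);
    FileTimeToSystemTime(&ft, &st);
    printf("Precise system time: %02d:%02d:%02d.%03d\n",
           st.wHour, st.wMinute, st.wSecond, st.wMilliseconds);

    // For measuring intervals, the raw 64-bit value is often more convenient.
    ULONGLONG ticks = ((ULONGLONG)ft.dwHighDateTime << 32) | ft.dwLowDateTime;
    printf("Raw value: %llu (100 ns units)\n", ticks);
    return 0;
}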
Your original approach would probably be ok 99.99% of the time (ignoring one minor bug, described below). Your approach is:
Wait for the next second to start, by repeatedly calling time() until the value changes.
Save that value from time().
Save the value from clock().
Calculate all subsequent times using the current value of clock() and the two saved values.
Your minor bug was that you had the first two steps reversed.
But even with this fixed, this is not guaranteed to work 100%, because there is no atomicity. Two problems:
Your code loops time() until you are into the next second. But how far are you into it? It could be 1/2 a second, or even several seconds (e.g. if you are running a debugger with a breakpoint).
Then you call clock(). But this saved value has to 'match' the saved value of time(). If these two calls are almost instantaneous, as they usually are, then this is fine. But Windows (and Linux) time-slice, and so there is no guarantee.
Another issue is the granularity of clock. If CLOCKS_PER_SEC is 1000, as seems to be the case on your system, then of course the best you can do is 1 msec. But it can be worse than that: on Unix systems it is typically 15 msecs. You could improve this by replacing clock with QueryPerformanceCounter(), as in the answer to timespec equivalent for windows, but this may be otiose, given the first two problems.
Clock periods are not at all guaranteed to be in milliseconds. You need to explicitly convert the output of clock() to milliseconds.
clock_t t1, t2;

t1 = clock();
// do something
t2 = clock();
long millis = (t2 - t1) * (1000.0 / CLOCKS_PER_SEC);
Since you are on Windows, why don't you just use Sleep()?

Measuring time in millisecond precision

My program is going to race different sorting algorithms against each other, both in time and space. I've got space covered, but measuring time is giving me some trouble. Here is the code that runs the sorts:
void test(short* n, short len) {
    short i, j, a[1024];
    for(i = 0; i < 2; i++) {     // Loop over each sort algo
        memused = 0;             // Initialize memory marker
        for(j = 0; j < len; j++) // Copy scrambled list into fresh array
            a[j] = n[j];         // (Sorting algos are in-place)
        // ***Point A***
        switch(i) {              // Pick sorting algo
        case 0:
            selectionSort(a, len);
            break;
        case 1:
            quicksort(a, len);
            break;
        }
        // ***Point B***
        spc[i][len] = memused;   // Record how much mem was used
    }
}
(I removed some of the sorting algos for simplicity)
Now, I need to measure how much time the sorting algo takes. The most obvious way to do this is to record the time at point (a) and then subtract that from the time at point (b). But none of the C time functions are good enough:
time() gives me time in seconds, but the algos are faster than that, so I need something more accurate.
clock() gives me CPU ticks since the program started, but seems to round to the nearest 10,000; still not small enough
The time shell command works well enough, except that I need to run over 1,000 tests per algorithm, and I need the individual time for each one.
I have no idea what getrusage() returns, but it's also too long.
What I need is a time unit that is smaller (significantly smaller, if possible) than the run time of the sorting functions, which is about 2 ms. So my question is: where can I get that?
gettimeofday() has microseconds resolution and is easy to use.
A pair of useful timer functions is:
#include <stdio.h>
#include <sys/time.h>

static struct timeval tm1;

static inline void start()
{
    gettimeofday(&tm1, NULL);
}

static inline void stop()
{
    struct timeval tm2;
    gettimeofday(&tm2, NULL);

    unsigned long long t = 1000 * (tm2.tv_sec - tm1.tv_sec) + (tm2.tv_usec - tm1.tv_usec) / 1000;
    printf("%llu ms\n", t);
}
For measuring time, use clock_gettime with CLOCK_MONOTONIC (or CLOCK_MONOTONIC_RAW if it is available). Where possible, avoid using gettimeofday. It is specifically deprecated in favor of clock_gettime, and the time returned from it is subject to adjustments from time servers, which can throw off your measurements.
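A minimal timing pattern with clock_gettime might look like the following (a sketch only; on older glibc you may need to link with -lrt):

#define _POSIX_C_SOURCE 199309L
#include <stdio.h>
#include <time.h>

int main(void)
{
    struct timespec t0, t1;

    clock_gettime(CLOCK_MONOTONIC, &t0);
    /* code under test goes here, e.g. quicksort(a, len) */
    clock_gettime(CLOCK_MONOTONIC, &t1);

    long long ns = (long long)(t1.tv_sec - t0.tv_sec) * 1000000000LL
                 + (t1.tv_nsec - t0.tv_nsec);
    printf("elapsed: %lld ns (%.3f ms)\n", ns, ns / 1e6);
    return 0;
}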
You can get the total user + kernel time (or choose just one) using getrusage as follows:
#include <sys/time.h>
#include <sys/resource.h>

double get_process_time() {
    struct rusage usage;
    if( 0 == getrusage(RUSAGE_SELF, &usage) ) {
        return (double)(usage.ru_utime.tv_sec + usage.ru_stime.tv_sec) +
               (double)(usage.ru_utime.tv_usec + usage.ru_stime.tv_usec) / 1.0e6;
    }
    return 0;
}
I elected to create a double containing fractional seconds...
double t_begin, t_end;
t_begin = get_process_time();
// Do some operation...
t_end = get_process_time();
printf( "Elapsed time: %.6f seconds\n", t_end - t_begin );
The Time Stamp Counter could be helpful here:
static unsigned long long rdtsctime() {
    unsigned int eax, edx;
    unsigned long long val;
    __asm__ __volatile__("rdtsc" : "=a"(eax), "=d"(edx));
    val = edx;
    val = val << 32;
    val += eax;
    return val;
}
Though there are some caveats to this. The timestamps for different processor cores may be different, and changing clock speeds (due to power saving features and the like) can cause erroneous results.

clock_gettime on Raspberry Pi with C

I want to measure the time from the start to the end of a function inside a loop. The difference will be used to set the number of iterations of the inner while-loop, which does some work that isn't important here.
I want to time the function like this:
#include <wiringPi.h>
#include <time.h>
#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>
#include <unistd.h>

#define BILLION 1E9

float hz = 1000;
long int nsPerTick = BILLION / hz;
double unprocessed = 1;

struct timespec now;
struct timespec last;

clock_gettime(CLOCK_REALTIME, &last);

[ ... ]

while (1)
{
    clock_gettime(CLOCK_REALTIME, &now);
    double diff = (last.tv_nsec - now.tv_nsec);
    unprocessed = unprocessed + (diff / nsPerTick);
    clock_gettime(CLOCK_REALTIME, &last);
    while (unprocessed >= 1) {
        unprocessed--;
        DO SOME RANDOM MAGIC;
    }
}
The difference between the two timer readings is always negative. I was told this was where the error was:
if ((last.tv_nsec - now.tv_nsec) < 0) {
    double diff = 1000000000 + last.tv_nsec - now.tv_nsec;
}
else {
    double diff = (last.tv_nsec - now.tv_nsec);
}
But still, my diff variable is always negative, for example "-1095043244" (even though the time spent in the function is of course positive).
What's wrong?
Your first issue is that you have last.tv_nsec - now.tv_nsec, which is the wrong way round.
last.tv_nsec is in the past (let's say it's set to 1), and now.tv_nsec will always be later (for example, 8ns later, so it's 9). In that case, last.tv_nsec - now.tv_nsec == 1 - 9 == -8.
The other issue is that tv_nsec isn't the time in nanoseconds: for that, you'd need to multiply the time in seconds by a billion and add that. So to get the difference in ns between now and last, you want:
((now.tv_sec - last.tv_sec) * ONE_BILLION) + (now.tv_nsec - last.tv_nsec)
(N.B. I'm still a little surprised that although now.tv_nsec and last.tv_nsec are both less than a billion, subtracting one from the other gives a value less than -1000000000, so there may yet be something I'm missing here.)
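Put together, a small helper along those lines (a sketch, not the question's code) would be:

#include <time.h>

#define ONE_BILLION 1000000000LL

/* Nanoseconds elapsed from 'last' to 'now'; positive as long as 'now'
 * was sampled after 'last' on the same clock. */
static long long timespec_diff_ns(const struct timespec *last,
                                  const struct timespec *now)
{
    return (now->tv_sec - last->tv_sec) * ONE_BILLION
         + (now->tv_nsec - last->tv_nsec);
}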
I was just investigating timing on the Pi, with a similar approach and similar problems. My thoughts are:
You don't have to use double. In fact you also don't need nanoseconds, as the clock on the Pi has 1 microsecond accuracy anyway (that's how Broadcom did it). I suggest you use gettimeofday() to get microseconds instead of nanoseconds. Then the computation is easy; it's just:
(1000 * 1000 * number of seconds) + number of micros
which you can simply calculate as unsigned int.
I've implemented a convenient API for this:
#include <sys/time.h>

typedef struct
{
    struct timeval startTimeVal;
} TIMER_usecCtx_t;

void TIMER_usecStart(TIMER_usecCtx_t* ctx)
{
    gettimeofday(&ctx->startTimeVal, NULL);
}

unsigned int TIMER_usecElapsedUs(TIMER_usecCtx_t* ctx)
{
    unsigned int rv;

    /* get current time */
    struct timeval nowTimeVal;
    gettimeofday(&nowTimeVal, NULL);

    /* compute diff */
    rv = 1000000 * (nowTimeVal.tv_sec - ctx->startTimeVal.tv_sec) + nowTimeVal.tv_usec - ctx->startTimeVal.tv_usec;

    return rv;
}
And the usage is:
TIMER_usecCtx_t timer;
TIMER_usecStart(&timer);
while (1)
{
    if (TIMER_usecElapsedUs(&timer) > yourDelayInMicroseconds)
    {
        doSomethingHere();
        TIMER_usecStart(&timer);
    }
}
Also notice that the get-time calls on the Pi take almost 1 us to complete. So, if you need to call them a lot and need more accuracy, go for a more advanced method of getting the time; I've explained more about this in a short article about Pi get-time calls.
Well, I don't know C, but if it's a timing issue on a Raspberry Pi it might have something to do with the lack of an RTC (real time clock) on the chip.
You should not be storing last.tv_nsec - now.tv_nsec in a double.
If you look at the documentation of time.h, you can see that tv_nsec is stored as a long. So you will need something along the lines of:
long diff = end.tv_nsec - begin.tv_nsec;
With that being said, comparing only the nanoseconds can go wrong. You also need to look at the number of seconds. So to convert everything to seconds, you can use this:
long nanosec_diff = end.tv_nsec - begin.tv_nsec;
time_t sec_diff = end.tv_sec - begin.tv_sec; // need <sys/types.h> for time_t
double diff_in_seconds = sec_diff + nanosec_diff / 1000000000.0;
Also, make sure you are always subtracting the start time from the end time (or else your time will still be negative).
And there you go!

C GetTickCount (windows function) to Time (nanoseconds)

I'm testing some code provided by a colleague, and I need to measure the execution time of a routine that performs a context switch (between threads).
What's the best way to measure the time? I know high-resolution timers are available, such as
QueryPerformanceCounter
QueryPerformanceFrequency
but how can I convert the values from those timers to milliseconds or nanoseconds?
LARGE_INTEGER lFreq, lStart;
LARGE_INTEGER lEnd;
double d;

QueryPerformanceFrequency(&lFreq);
QueryPerformanceCounter(&lStart);
/* do something ... */
QueryPerformanceCounter(&lEnd);
d = ((double)lEnd.QuadPart - (double)lStart.QuadPart) / (double)lFreq.QuadPart;
d is the time interval in seconds.
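To get milliseconds or nanoseconds instead, you can stay in integer arithmetic and scale the tick difference before dividing by the frequency. A sketch (the variable names are mine, not from the answer above):

#include <windows.h>
#include <stdio.h>

int main(void)
{
    LARGE_INTEGER freq, t1, t2;

    QueryPerformanceFrequency(&freq);
    QueryPerformanceCounter(&t1);
    /* do something ... */
    QueryPerformanceCounter(&t2);

    /* Multiply before dividing so short intervals keep their precision.
     * (For intervals of many minutes the multiplication by 1e9 can
     * overflow 64 bits; split into seconds plus a remainder in that case.) */
    long long elapsed_ns = (t2.QuadPart - t1.QuadPart) * 1000000000LL / freq.QuadPart;
    long long elapsed_ms = (t2.QuadPart - t1.QuadPart) * 1000LL / freq.QuadPart;

    printf("%lld ns (%lld ms)\n", elapsed_ns, elapsed_ms);
    return 0;
}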
Since the operation I'm executing takes on the order of 500 ns and the timers don't have that precision, what I did was this:
I saved the current time with GetTickCount() (which has a precision of roughly 12 ms) and ran the routine N_TIMES (the number of times the routine executes), which keeps running until I press something on the console.
Then I take the time again and divide the difference by N_TIMES, something like this:
static int counter;

void routine()
{
    // Operations here..
    counter++;
}

int main() {
    DWORD start = GetTickCount();
    HANDLE handle = CreateThread(....., routine, ...);
    ResumeThread(handle);
    getchar();
    WaitForSingleObject(handle, INFINITE);
    double elapsed = ((GetTickCount() - start) * 1000000.0) / counter;
    printf("Nanos: %f", elapsed);
    return 0;
}
:)

UTC time stamp on Windows

I have a buffer with a UTC timestamp in C, and I broadcast that buffer every ten seconds. The problem is that the time difference between two packets is not consistent: after 5 to 10 iterations the time difference becomes 9, then 11, and then 10 again. Kindly help me sort out this problem.
I am using <time.h> for UTC time.
If your time stamp has only 1 second resolution then there will always be +/- 1 uncertainty in the least significant digit (i.e. +/- 1 second in this case).
Clarification: if you only have a resolution of 1 second then your time values are quantized. The real time, t, represented by such a quantized value has a range of t..t+0.9999. If you take the difference of two such times, t0 and t1, then the maximum error in t1-t0 is -0.999..+0.999, which when quantized is +/-1 second. So in your case you would expect to see difference values in the range 9..11 seconds.
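To make that concrete, here is a small simulation (mine, not part of the original answer) of what 1-second timestamps report when the true send times wobble by a few milliseconds around a 10 second period:

#include <stdio.h>
#include <math.h>

int main(void)
{
    /* True (fractional) send times of consecutive packet pairs. */
    struct { double t0, t1; } pairs[] = {
        {  999.990, 1009.980 },  /* true interval  9.990 s */
        { 1000.010, 1009.995 },  /* true interval  9.985 s */
        { 1009.995, 1020.005 },  /* true interval 10.010 s */
    };

    for (int i = 0; i < 3; i++) {
        /* A 1-second clock only reports the whole-second part. */
        long reported = (long)floor(pairs[i].t1) - (long)floor(pairs[i].t0);
        printf("true %.3f s -> reported %ld s\n",
               pairs[i].t1 - pairs[i].t0, reported);
    }
    return 0;
}

Even though the true intervals differ by only about 25 ms, the reported differences come out as 10, 9 and 11 seconds.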
A thread that sleeps for X milliseconds is not guaranteed to sleep for precisely that many milliseconds. I am assuming that you have a statement that goes something like:
while(1) {
    ...
    sleep(10); // Sleep for 10 seconds.
    // fetch timestamp and send
}
You will get a more accurate gauge of time if you sleep for shorter periods (say 20 milliseconds) in a loop checking until the time has expired. When you sleep for 10 seconds, your thread gets moved further out of the immediate scheduling priority of the underlying OS.
You might also take into account that the time taken to send the timestamps may vary, depending on network conditions, etc. If you do a sleep(10) -> send -> sleep(10) type of loop, the time taken to send will be added onto the next sleep(10) in real terms.
Try something like this (forgive me, my C is a little rusty):
bool expired = false;
double last, current;
double t1, t2;
double difference = 0;

while(1) {
    ...
    last = (double)clock();
    while(!expired) {
        usleep(20000); // sleep for 20 milliseconds
        current = (double)clock();
        if(((current - last) / (double)CLOCKS_PER_SEC) >= (10.0 - difference))
            expired = true;
    }

    t1 = (double)clock();
    // Set and send the timestamp.
    t2 = (double)clock();

    //
    // Calculate how long it took to send the stamps,
    // and take that away from the next sleep cycle.
    //
    difference = (t2 - t1) / (double)CLOCKS_PER_SEC;
    expired = false;
}
If you are not bothered about using the standard C library, you could look at using the high resolution timer functionality of windows such as QueryPerformanceFrequency/QueryPerformanceCounter functions.
LARGE_INTEGER freq;
LARGE_INTEGER t2, t1;

//
// Get the resolution of the timer.
//
QueryPerformanceFrequency(&freq);

// Start Task.
QueryPerformanceCounter(&t1);

... Do something ....

QueryPerformanceCounter(&t2);

// Very accurate duration in seconds.
double duration = (double)(t2.QuadPart - t1.QuadPart) / (double)freq.QuadPart;
