clock_gettime() still not monotonic - alternatives? - c

As has been known for a while (see, e.g., this old question, and the bug reports that pop up when you google this), clock_gettime() doesn't appear to report time back monotonically. To rule out any silly error I might have overlooked, here is the relevant code (excerpt from a larger program):
#include <time.h>
long nano_1, nano_2;
double delta;
struct timespec tspec, *tspec_ptr = &tspec;
clock_gettime(CLOCK_MONOTONIC_RAW, tspec_ptr);
nano_1 = tspec.tv_nsec;
sort_selection(sorted_ptr, n);
clock_gettime(CLOCK_MONOTONIC_RAW, tspec_ptr);
nano_2 = tspec.tv_nsec;
delta = (nano_2 - nano_1)/1000000.0;
printf("\nSelection sort took %g micro seconds.\n", (double) delta);
Sorting small arrays (about 1,000 elements) reports plausible times. When I sort larger ones (10,000+) using 3 sort algorithms, 1 or 2 of the 3 report a negative sort time. I tried all the clock types mentioned in the man page, not only CLOCK_MONOTONIC_RAW - no change.
(1) Anything I overlooked in my code?
(2) Is there an alternative to clock_gettime() that measures time in increments more accurate than seconds? I don't need nanoseconds, but seconds is too coarse to really help.
System:
- Ubuntu 12.04.
- kernel 3.2.0-30
- gcc 4.6.3.
- libc version 2.15
- compiled with -lrt

This has nothing to do with the mythology of clock_gettime's monotonic clock not actually being monotonic (which probably has a basis in reality, but which was never well documented and probably fixed a long time ago). It's just a bug in your program. tv_nsec is the nanoseconds portion of a time value that's stored as two fields:
tv_sec - whole seconds
tv_nsec - nanoseconds in the range 0 to 999999999
Of course tv_nsec is going to jump backwards from 999999999 to 0 when tv_sec increments. To compute differences of timespec structs, you need to take 1000000000 times the difference in seconds and add that to the difference in nanoseconds. Of course this could quickly overflow if you don't convert to a 64-bit type first.
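A minimal sketch of that computation, with the widening cast done before the multiply (the helper name timespec_diff_ns is mine, not from the question's code):
#include <stdint.h>
#include <time.h>

/* Difference of two timespec values in nanoseconds, widened to 64 bits
   before multiplying so the seconds term cannot overflow. */
static int64_t timespec_diff_ns(struct timespec start, struct timespec end)
{
    return (int64_t)(end.tv_sec - start.tv_sec) * 1000000000LL
         + (int64_t)(end.tv_nsec - start.tv_nsec);
}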

Based on a bit of reading around (including the link I provided above, and How to measure the ACTUAL execution time of a C program under Linux?) it seems that getrusage() or clock() should both provide you with a "working" timer that measures the time spent by your calculation only. It does puzzle me that your other function doesn't always give a >= 0 interval, I must say.
For use on getrusage, see http://linux.die.net/man/2/getrusage
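A minimal sketch of the getrusage approach, assuming you only care about user CPU time (ru_utime is a struct timeval, so the granularity is microseconds at best):
#include <stdio.h>
#include <sys/time.h>
#include <sys/resource.h>

int main(void)
{
    struct rusage before, after;

    getrusage(RUSAGE_SELF, &before);
    /* ... code to be measured ... */
    getrusage(RUSAGE_SELF, &after);

    double secs = (after.ru_utime.tv_sec  - before.ru_utime.tv_sec)
                + (after.ru_utime.tv_usec - before.ru_utime.tv_usec) / 1e6;
    printf("user CPU time: %f s\n", secs);
    return 0;
}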

Related

struct timespec not working in MSVC 2015 for x64 build [duplicate]

On POSIX it is possible to use timespec to calculate accurate time lengths (like seconds and milliseconds). Unfortunately I need to migrate to Windows with the Visual Studio compiler. The VS time.h library doesn't declare timespec, so I'm looking for other options. As far as I could find, it is possible to use clock and time_t, although I couldn't check how precise clock is at counting milliseconds.
What do you do/use for calculating elapsed time for an operation (if possible using the standard C++ library)?
The function GetTickCount is usually used for that.
Also a similar thread: C++ timing, milliseconds since last whole second
It depends on what sort of accuracy you want; my understanding is that clock and time_t are not accurate to the millisecond level. Similarly, GetTickCount() is commonly used (MS docs say accurate to 10-15 ms) but not sufficiently accurate for many purposes.
I use QueryPerformanceFrequency and QueryPerformanceCounter for accurate timing measurements for performance.
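The usual pattern looks roughly like this (a sketch, not drop-in code): QueryPerformanceFrequency gives the tick rate, QueryPerformanceCounter the current tick count.
#include <stdio.h>
#include <windows.h>

int main(void)
{
    LARGE_INTEGER freq, start, end;

    QueryPerformanceFrequency(&freq);   /* ticks per second */
    QueryPerformanceCounter(&start);
    /* ... code to time ... */
    QueryPerformanceCounter(&end);

    double ms = (double)(end.QuadPart - start.QuadPart) * 1000.0 / (double)freq.QuadPart;
    printf("elapsed: %f ms\n", ms);
    return 0;
}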

Accurate Time Keeping in C with High Resolution

First off, I know there are a lot of similar questions and I have done a lot of digging, so please refrain from immediate hostility (in my experience the people on this site are pretty hostile if they believe a question has already been asked and answered) until you hear me out. If the answer is out there, I haven't found it, and I don't want to hijack another person's question.
That being said, I am working in C on a Linux-based microcomputer. I have been using it to track and control motor RPM, which obviously requires good timekeeping. I was originally using calculations with the processor clock to track time on the order of milliseconds, but for a variety of reasons that are probably woefully apparent this was problematic. I then switched over to using time.h and specifically the difftime() function. This was a good solution which allows me to accurately track and control the motor's RPM with little to no issue. However, I now want to plot that data. This again was not overly problematic, except that the plot looks terrible because my time scale cannot go any lower than seconds.
The best solution I could find would be to use sys/time.h and gettimeofday() which can give time since the epoch in greater resolution. However the issue is, as far as I can tell, that there is no difftime() type function for this that will maintain the higher time resolution. Why is this an issue? Because difftime() returns a double value that can easily be used to calculate RPM from a rotary encoder rotation count (rotations/(sec/60)) whereas there doesn't seem to be a way to do this with gettimeofday() as one uses time_t structs and the other uses timeval structs.
So is there a way to accurately return time differences between two times (as determined by real time elapsed since the epoch) with a better resolution than seconds? Or alternatively does anyone know of a better approach to accurately gauging elapsed time to calculate RPM? Thank you.
Convert the result of gettimeofday to a double:
struct timeval now;
gettimeofday(&now, NULL);
double dsecs = now.tv_sec + (now.tv_usec / 1000000.0);
Then your difftime is just a subtraction of two of these dsecs
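For example, a rough sketch of that subtraction applied to the RPM calculation from the question (the pulse count and counts_per_rev values are made-up placeholders; real code should also guard against a zero elapsed time):
#include <stdio.h>
#include <sys/time.h>

int main(void)
{
    struct timeval t0, t1;

    gettimeofday(&t0, NULL);
    /* ... count encoder pulses here ... */
    gettimeofday(&t1, NULL);

    double d0 = t0.tv_sec + t0.tv_usec / 1000000.0;
    double d1 = t1.tv_sec + t1.tv_usec / 1000000.0;
    double elapsed = d1 - d0;           /* seconds, with microsecond resolution */

    double counts = 1200.0;             /* example pulse count */
    double counts_per_rev = 600.0;      /* hypothetical encoder resolution */
    double rpm = (counts / counts_per_rev) / (elapsed / 60.0);

    printf("elapsed %.6f s, %.1f RPM\n", elapsed, rpm);
    return 0;
}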

How to measure cpu time and wall clock time?

I saw many topics about this, even on stackoverflow, for example:
How can I measure CPU time and wall clock time on both Linux/Windows?
I want to measure both CPU and wall time. Although the person who answered the question in the topic I posted recommends using gettimeofday to measure wall time, I read that it's better to use clock_gettime instead. So, I wrote the code below (is it OK, does it really measure wall time, not CPU time? I'm asking because I found a webpage, http://nadeausoftware.com/articles/2012/03/c_c_tip_how_measure_cpu_time_benchmarking#clockgettme , where it says that clock_gettime measures CPU time...). What's the truth, and which one should I use to measure wall time?
Another question is about CPU time. I found the answer that clock is great for it, so I wrote a sample code for that too. But it's not what I really want; for my code it shows 0 seconds of CPU time. Is it possible to measure CPU time more precisely (in seconds)? Thanks for any help (for now, I'm interested only in Linux solutions).
Here's my code:
#include <time.h>
#include <stdio.h> /* printf */
#include <math.h> /* sqrt */
#include <stdlib.h>
int main()
{
    int i;
    double sum = 0.0;

    // measure elapsed wall time
    struct timespec now, tmstart;
    clock_gettime(CLOCK_REALTIME, &tmstart);
    for(i=0; i<1024; i++){
        sum += log((double)i);
    }
    clock_gettime(CLOCK_REALTIME, &now);
    double seconds = (double)((now.tv_sec+now.tv_nsec*1e-9) - (double)(tmstart.tv_sec+tmstart.tv_nsec*1e-9));
    printf("wall time %fs\n", seconds);

    // measure cpu time
    double start = (double)clock() / (double) CLOCKS_PER_SEC;
    for(i=0; i<1024; i++){
        sum += log((double)i);
    }
    double end = (double)clock() / (double) CLOCKS_PER_SEC;
    printf("cpu time %fs\n", end - start);

    return 0;
}
Compile it like this:
gcc test.c -o test -lrt -lm
and it shows me:
wall time 0.000424s
cpu time 0.000000s
I know I can make more iterations, but that's not the point here ;)
IMPORTANT:
printf("CLOCKS_PER_SEC is %ld\n", CLOCKS_PER_SEC);
shows
CLOCKS_PER_SEC is 1000000
According to my manual page on clock it says
POSIX requires that CLOCKS_PER_SEC equals 1000000 independent of the actual resolution.
When increasing the number of iterations on my computer, the measured CPU time starts to show up at 100,000 iterations. From the returned figures it seems the resolution is actually 10 milliseconds.
Beware that when you optimize your code, the whole loop may disappear because sum is a dead value. There is also nothing to stop the compiler from moving the clock statements across the loop, as there are no real dependencies with the code in between.
Let me elaborate a bit more on micro-measurements of code performance. The naive and tempting way to measure performance is indeed to add clock statements as you have done. However, since time is not a concept or side effect in C, compilers can often move these clock calls at will. To remedy this it is tempting to give the clock calls side effects by, for example, having them access volatile variables. However, this still doesn't prohibit the compiler from moving code that is free of side effects over the calls - think, for example, of accesses to regular local variables. Worse, by making the clock calls look very scary to the compiler, you will actually inhibit optimizations, so the mere act of measuring the performance affects that performance in a negative and undesirable way.
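As a rough illustration (gcc-specific, and only a partial remedy for the reasons above): using the result keeps the loop live, and an empty asm statement with a memory clobber discourages the compiler from hoisting work across the clock calls.
#include <math.h>
#include <stdio.h>
#include <time.h>

int main(void)
{
    struct timespec t0, t1;
    double sum = 0.0;
    int i;

    clock_gettime(CLOCK_MONOTONIC, &t0);
    __asm__ __volatile__("" ::: "memory");   /* compiler barrier (gcc extension) */
    for (i = 1; i < 1024; i++)
        sum += log((double)i);
    __asm__ __volatile__("" ::: "memory");
    clock_gettime(CLOCK_MONOTONIC, &t1);

    /* Printing sum makes it a live value, so the loop cannot be removed as dead code. */
    printf("sum = %f, wall time %f s\n", sum,
           (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9);
    return 0;
}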
If you use profiling, as already mentioned by someone, you can get a pretty good assessment of the performance of even optimized code, although the overall time of course is increased.
Another good way to measure performance is just asking the compiler to report the number of cycles some code will take. For a lot of architectures the compiler has a very accurate estimate of this. However most notably for a Pentium architecture it doesn't because the hardware does a lot of scheduling that is hard to predict.
Although it is not standard practice, I think compilers should support a pragma that marks a function to be measured. The compiler could then include high-precision, non-intrusive measuring points in the prologue and epilogue of the function and prohibit any inlining of it. Depending on the architecture, it could choose a high-precision clock to measure time, preferably with support from the OS to measure only the time of the current process.

Computing time on Linux: granularity and precision

**********************Original edit**********************
I am using different kinds of clocks to get the time on Linux systems:
rdtsc, gettimeofday, clock_gettime
and already read various questions like these:
What's the best timing resolution can i get on Linux
How is the microsecond time of linux gettimeofday() obtained and what is its accuracy?
How do I measure a time interval in C?
faster equivalent of gettimeofday
Granularity in time function
Why is clock_gettime so erratic?
But I am a little confused:
What is the difference between granularity, resolution, precision, and accuracy?
Granularity (or resolution or precision) and accuracy are not the same things (if I am right ...)
For example, while using clock_gettime the precision is 10 ms, as I get with:
struct timespec res;
clock_getres(CLOCK_REALTIME, &res);
and the granularity (which is defined as ticks per second) is 100 Hz (or 10 ms), as I get when executing:
long ticks_per_sec = sysconf(_SC_CLK_TCK);
Accuracy is in nanoseconds, as the code below suggests:
struct timespec gettime_now;
clock_gettime(CLOCK_REALTIME, &gettime_now);
time_difference = gettime_now.tv_nsec - start_time;
In the link below, I saw that this is the Linux global definition of granularity and it's better not to change it:
http://wwwagss.informatik.uni-kl.de/Projekte/Squirrel/da/node5.html#fig:clock:hw
So my question is whether the remarks above are right, and also:
a) Can we see what the granularity of rdtsc and gettimeofday is (with a command)?
b) Can we change them (in any way)?
**********************Edit number 2**********************
I have tested some new clocks and would like to share the information:
a) On the page below, David Terei made a fine program that compares various clocks and their performance:
https://github.com/dterei/Scraps/tree/master/c/time
b) I have also tested omp_get_wtime, as suggested by Raxman, and I found nanosecond precision, but not really better than clock_gettime (as they did on this website):
http://msdn.microsoft.com/en-us/library/t3282fe5.aspx
I think it's a Windows-oriented time function.
Better results are given by clock_gettime using CLOCK_MONOTONIC than using CLOCK_REALTIME. That's normal, because the first measures monotonic elapsed time (it never jumps backwards), while the other measures the system's real (wall-clock) time, which can be adjusted.
c) I also found the Intel function ippGetCpuClocks, but I haven't tested it because it's mandatory to register first:
http://software.intel.com/en-us/articles/ipp-downloads-registration-and-licensing/
... or you may use a trial version
Precision is the amount of information, i.e. the number of significant digits you report. (E.g. I am 2 m, 1.8 m, 1.83 m, and 1.8322 m tall. All those measurements are accurate, but increasingly precise.)
Accuracy is the relation between the reported information and the truth. (E.g. "I'm 1.70 m tall" is more precise than "1.8 m", but not actually accurate.)
Granularity or resolution are about the smallest time interval that the timer can measure. For example, if you have 1 ms granularity, there's little point reporting the result with nanosecond precision, since it cannot possibly be accurate to that level of precision.
On Linux, the available timers with increasing granularity are:
clock() from <time.h> (20 ms or 10 ms resolution?)
gettimeofday() from Posix <sys/time.h> (microseconds)
clock_gettime() on Posix (nanoseconds?)
In C++, the <chrono> header offers a certain amount of abstraction around this, and std::high_resolution_clock attempts to give you the best possible clock.
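To see what a particular system advertises for the clocks in the list above, you can query them directly; a minimal sketch with clock_getres (which clocks are available depends on the kernel):
#include <stdio.h>
#include <time.h>

int main(void)
{
    struct timespec res;

    if (clock_getres(CLOCK_REALTIME, &res) == 0)
        printf("CLOCK_REALTIME  resolution: %ld ns\n", res.tv_nsec + 1000000000L * (long)res.tv_sec);
    if (clock_getres(CLOCK_MONOTONIC, &res) == 0)
        printf("CLOCK_MONOTONIC resolution: %ld ns\n", res.tv_nsec + 1000000000L * (long)res.tv_sec);

    printf("CLOCKS_PER_SEC: %ld\n", (long)CLOCKS_PER_SEC);
    return 0;
}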

cross platform high res timer in C?

I'm trying to learn C by porting one of my apps. I'm looking for a high-resolution timer that works on Linux, Mac and Windows.
I'm trying to stick with things that are ANSI C99. I'm using MinGW and gcc, so any GNU libs should also be fine?
I looked at time.h, but everything I read warns that clock() (CPU ticks) isn't reliable across platforms, and that you can't get "real time" (instead of CPU time) at better than 1-second resolution.
Are there any libs like Boost for C?
Try gettimeofday from sys/time.h. Although this is not a hi-res timer, it's more accurate than the time() function.
Could you explain what you mean by clock() not being "reliable"? The function returns a value of type clock_t giving how many ticks have passed since the start of the program. The number of ticks per second is defined by the macro CLOCKS_PER_SEC, which is 1000 on Windows (so each tick corresponds to a millisecond) and 1000000 on POSIX systems. Hopefully this resolution is sufficient.
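A minimal sketch of the pattern being described (note that on POSIX systems clock() reports processor time, not wall time):
#include <stdio.h>
#include <time.h>

int main(void)
{
    clock_t start = clock();
    /* ... work to be timed ... */
    clock_t end = clock();

    printf("%f seconds of processor time\n", (double)(end - start) / CLOCKS_PER_SEC);
    return 0;
}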
