I know C++ has <chrono>, and std::chrono::high_resolution_clock::now() returns the current time as a std::chrono::high_resolution_clock::time_point. However, I am using plain C, so I cannot call that function, and the time() function only gives second-level resolution.
My goal is to measure the running time of a multi-threaded program. The clock() function sums up the CPU time of all threads, so it cannot be used to measure elapsed running time.
The following snippet will give the exact system time:
#include <time.h>

time_t now;
time(&now);                              /* current calendar time, 1-second resolution */
struct tm* now_tm = localtime(&now);     /* broken-down local time */
char dte[80];
strftime(dte, sizeof dte, "%Y-%m-%d %I:%M:%S", now_tm);
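If sub-second output is needed as well, one possibility (a minimal sketch using the POSIX gettimeofday rather than plain ISO C) is to format the seconds as above and append a millisecond part yourself:

#include <stdio.h>
#include <sys/time.h>   /* POSIX gettimeofday */
#include <time.h>

int main(void)
{
    struct timeval tv;
    gettimeofday(&tv, NULL);                  /* seconds + microseconds */

    struct tm* now_tm = localtime(&tv.tv_sec);
    char dte[80];
    strftime(dte, sizeof dte, "%Y-%m-%d %I:%M:%S", now_tm);

    /* Append milliseconds derived from the microsecond field. */
    printf("%s.%03ld\n", dte, (long)(tv.tv_usec / 1000));
    return 0;
}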
Unfortunately, the higher-resolution clocks tend to differ a bit from platform to platform. It also depends what type of clock you're interested in (such as real time, CPU time, or monotonic time).
Based on your use case, you probably want monotonic time…
If you're on a POSIX operating system, clock_gettime is usually the way to go, but even then which clocks are available can vary a lot. For example, macOS doesn't support CLOCK_MONOTONIC, but it does have other APIs you can use to get the monotonic time.
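As a minimal sketch (assuming a Linux/POSIX system where CLOCK_MONOTONIC is available; on older glibc you may also need to link with -lrt), measuring the wall-clock running time of a piece of work looks roughly like this:

#include <stdio.h>
#include <time.h>

/* Difference end - start in seconds, combining both timespec fields. */
static double elapsed_seconds(struct timespec start, struct timespec end)
{
    return (double)(end.tv_sec - start.tv_sec)
         + (double)(end.tv_nsec - start.tv_nsec) / 1e9;
}

int main(void)
{
    struct timespec start, end;

    clock_gettime(CLOCK_MONOTONIC, &start);
    /* ... run the multi-threaded work to be timed here ... */
    clock_gettime(CLOCK_MONOTONIC, &end);

    printf("wall-clock time: %.6f s\n", elapsed_seconds(start, end));
    return 0;
}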
If you're on Windows you'll probably want to use QueryPerformanceCounter.
If you want something to abstract away the differences, I put something together a while back.
Quoting man 3 ftime:
This function is obsolete. Don't use it. If the time in seconds suffices, time(2) can be used; gettimeofday(2) gives microseconds;
clock_gettime(2) gives nanoseconds but is not as widely available.
Why should it not be used? What are the perils?
I understand that time(2), gettimeofday(2) and clock_gettime(2) can be used instead of ftime(3), but ftime(3) reports exactly milliseconds, which I find convenient, since milliseconds is exactly the precision I need.
Such advice is intended to help you make your program portable and avoid various pitfalls. While the ftime function likely won't be removed from systems that have it, new systems your software gets ported to might not have it, and you may run into problems, e.g. if the system model of time zone evolves to something not conveniently expressible in the format of ftime's structure.
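If millisecond timestamps are all you need, a minimal sketch (assuming POSIX clock_gettime is available) that replaces ftime could look like this:

#include <stdio.h>
#include <time.h>

/* Current wall-clock time in milliseconds since the Epoch,
   built from clock_gettime instead of the obsolete ftime. */
static long long now_ms(void)
{
    struct timespec ts;
    clock_gettime(CLOCK_REALTIME, &ts);
    return (long long)ts.tv_sec * 1000 + ts.tv_nsec / 1000000;
}

int main(void)
{
    printf("%lld ms since the Epoch\n", now_ms());
    return 0;
}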
On POSIX it is possible to use struct timespec to measure accurate time intervals (seconds and milliseconds). Unfortunately I need to migrate to Windows with the Visual Studio compiler. The VS time.h header doesn't declare timespec, so I'm looking for other options. As far as I could find, it is possible to use clock and time_t, although I couldn't check how precisely clock counts milliseconds.
What do you do/use to measure the elapsed time of an operation (if possible using the standard C++ library)?
The function GetTickCount is usually used for that.
Also a similar thread: C++ timing, milliseconds since last whole second
It depends on what sort of accuracy you want; my understanding is that clock and time_t are not accurate to the millisecond level. Similarly, GetTickCount() is commonly used (MS docs say it is accurate to 10-15 ms) but is not sufficiently accurate for many purposes.
I use QueryPerformanceFrequency and QueryPerformanceCounter for accurate timing measurements for performance.
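As a minimal sketch (assuming Windows and <windows.h>), the usual pattern is to read the counter before and after the work and divide by the frequency:

#include <stdio.h>
#include <windows.h>

int main(void)
{
    LARGE_INTEGER freq, start, end;

    QueryPerformanceFrequency(&freq);   /* counter ticks per second, fixed at boot */
    QueryPerformanceCounter(&start);

    Sleep(100);                         /* ... code to be timed ... */

    QueryPerformanceCounter(&end);

    double seconds = (double)(end.QuadPart - start.QuadPart) / (double)freq.QuadPart;
    printf("elapsed: %.6f s\n", seconds);
    return 0;
}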
**********************Original edit**********************
I am using different kind of clocks to get the time on Linux systems:
rdtsc, gettimeofday, clock_gettime
and already read various questions like these:
What's the best timing resolution can i get on Linux
How is the microsecond time of linux gettimeofday() obtained and what is its accuracy?
How do I measure a time interval in C?
faster equivalent of gettimeofday
Granularity in time function
Why is clock_gettime so erratic?
But I am a little confused:
What is the difference between granularity, resolution, precision, and accuracy?
Granularity (or resolution or precision) and accuracy are not the same things (if I am right ...)
For example, when using clock_gettime the precision is 10 ms, as I get with:
struct timespec res;
clock_getres(CLOCK_REALTIME, &res);
and the granularity (which is defined as ticks per second) is 100 Hz (or 10 ms), as I get when executing:
long ticks_per_sec = sysconf(_SC_CLK_TCK);
Accuracy is in nanoseconds, as the following code suggests:
struct timespec gettime_now;
clock_gettime(CLOCK_REALTIME, &gettime_now);
time_difference = gettime_now.tv_nsec - start_time;
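Note that subtracting only the tv_nsec fields works only while both timestamps fall in the same second; a sketch of the full difference combines both fields:

#include <time.h>

/* Full difference end - start in nanoseconds. */
static long long timespec_diff_ns(struct timespec start, struct timespec end)
{
    return (long long)(end.tv_sec - start.tv_sec) * 1000000000LL
         + (end.tv_nsec - start.tv_nsec);
}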
In the link below, I saw that this is the Linux global definition of granularity and it's better not to change it:
http://wwwagss.informatik.uni-kl.de/Projekte/Squirrel/da/node5.html#fig:clock:hw
So my question is whether the remarks above are right, and also:
a) Can we see what is the granularity of rdtsc and gettimeofday (with a command)?
b) Can we change them (in any way)?
**********************Edit number 2**********************
I have tested some new clocks and I would like to share what I found:
a) On the page below, David Terei wrote a fine program that compares various clocks and their performance:
https://github.com/dterei/Scraps/tree/master/c/time
b) I have also tested omp_get_wtime, as Raxman suggested, and I found nanosecond precision, but not really better than clock_gettime (as they did on this website):
http://msdn.microsoft.com/en-us/library/t3282fe5.aspx
I think it's a Windows-oriented time function.
Better results are obtained with clock_gettime using CLOCK_MONOTONIC than with CLOCK_REALTIME. That makes sense, because CLOCK_MONOTONIC measures elapsed time from a fixed starting point and is not affected by adjustments to the system clock, whereas CLOCK_REALTIME reports the wall-clock (real) time, which can jump.
c) I also found the Intel function ippGetCpuClocks, but I haven't tested it because registration is mandatory first:
http://software.intel.com/en-us/articles/ipp-downloads-registration-and-licensing/
... or you may use a trial version
Precision is the amount of information, i.e. the number of significant digits you report. (E.g. I am 2 m, 1.8 m, 1.83 m, and 1.8322 m tall. All those measurements are accurate, but increasingly precise.)
Accuracy is the relation between the reported information and the truth. (E.g. "I'm 1.70 m tall" is more precise than "1.8 m", but not actually accurate.)
Granularity or resolution are about the smallest time interval that the timer can measure. For example, if you have 1 ms granularity, there's little point reporting the result with nanosecond precision, since it cannot possibly be accurate to that level of precision.
On Linux, the available timers with increasing granularity are:
clock() from <time.h> (20 ms or 10 ms resolution?)
gettimeofday() from Posix <sys/time.h> (microseconds)
clock_gettime() on Posix (nanoseconds?)
In C++, the <chrono> header offers a certain amount of abstraction around this, and std::chrono::high_resolution_clock attempts to give you the best possible clock.
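To see what granularity the clocks listed above actually report on a given system, a small sketch (assuming Linux, where these clock IDs are available) could query clock_getres directly:

#include <stdio.h>
#include <time.h>

static void print_res(const char *name, clockid_t id)
{
    struct timespec res;
    if (clock_getres(id, &res) == 0)
        printf("%-16s %ld s %ld ns\n", name, (long)res.tv_sec, res.tv_nsec);
}

int main(void)
{
    /* Reported granularity of the common POSIX clocks on this system. */
    print_res("CLOCK_REALTIME", CLOCK_REALTIME);
    print_res("CLOCK_MONOTONIC", CLOCK_MONOTONIC);
    printf("CLOCKS_PER_SEC   %ld\n", (long)CLOCKS_PER_SEC);
    return 0;
}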
I'm trying to learn C by porting one of my apps. I'm looking for a high-resolution timer that works on Linux, Mac, and Windows.
I'm trying to stick with things that are ANSI C99. I'm using MinGW and gcc, so any GNU libs should also be fine.
I looked at time.h, but everything I read warns that clock (CPU ticks) isn't reliable across platforms, and that you can't get "real time" (as opposed to CPU time) at better than 1-second resolution.
Are there any libs like Boost for C?
Try gettimeofday from sys/time.h. Although this is not a true high-resolution timer, it's more accurate than the time function.
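A minimal sketch of timing a piece of code with it (note that gettimeofday is POSIX rather than ANSI C99, so it is available on Linux and macOS, and typically on MinGW as well, but not from the standard alone):

#include <stdio.h>
#include <sys/time.h>

int main(void)
{
    struct timeval start, end;

    gettimeofday(&start, NULL);
    /* ... work to be timed ... */
    gettimeofday(&end, NULL);

    long long usec = (long long)(end.tv_sec - start.tv_sec) * 1000000LL
                   + (end.tv_usec - start.tv_usec);
    printf("elapsed: %lld us\n", usec);
    return 0;
}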
Could you explain what you mean by clock() not being "reliable"? The function returns a clock_t value giving the processor time the program has used so far. The number of ticks per second is defined by the macro CLOCKS_PER_SEC, which POSIX fixes at 1,000,000 (on Windows it is 1000), so each tick corresponds to a microsecond (or millisecond) of CPU time. Hopefully this resolution is sufficient.
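A minimal sketch of timing with clock() and CLOCKS_PER_SEC (keeping in mind that it measures CPU time consumed by the process, not wall-clock time):

#include <stdio.h>
#include <time.h>

int main(void)
{
    clock_t start = clock();

    /* ... work to be timed (a dummy loop here) ... */
    volatile unsigned long x = 0;
    for (unsigned long i = 0; i < 10000000UL; ++i)
        x += i;

    clock_t end = clock();
    printf("CPU time: %.3f s\n", (double)(end - start) / CLOCKS_PER_SEC);
    return 0;
}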
What is the most efficient way of getting the current time/date/day/year in the C language? I have to execute this many times, so I need a really efficient way.
I am on FreeBSD.
Thanks in advance.
/* ctime example */
#include <stdio.h>
#include <time.h>
int main ()
{
    time_t rawtime;
    time ( &rawtime );
    printf ( "The current local time is: %s", ctime (&rawtime) );
    return 0;
}
You can use ctime, if you need it as a string.
Standard C provides only one way to get the time - time() - which can be converted to a time/date/year with localtime() or gmtime(). So trivially, that must be the most efficient way.
Any other methods are operating-system specific, and you haven't told us what operating system you're using.
It really depends on what you mean by "many" :-)
I think you'll probably find that using the ISO standard time() and localtime() functions will be more than fast enough. For example, on my "Intel(R) Core(TM)2 Duo CPU E6850 @ 3.00GHz", using unoptimised code, I can call time() ten million times in 1.045 seconds, and a time()/localtime() combination half a million times in 0.98 seconds. Whether that's fast enough for your needs, only you can decide, but I'm hard-pressed to come up with a use case that needs more grunt than that.
The time() function gives you the number of seconds since the epoch, while localtime() both converts it to local time (from UTC) and splits it into a more usable form, the struct tm structure.
#include <time.h>
time_t t = time (NULL);
struct tm* lt = localtime (&t);
// Use lt->tm_year, lt->tm_mday, and so forth.
Any attempt to cache the date/time and use other ways of finding out a delta to apply to it, such as with clock(), will almost invariably:
be slower; and
suffer from the fact you won't pick up external time changes.
The simplest is
#include <stdio.h>
#include <time.h>
//...
time_t current_time = time (NULL);
struct tm* local_time = localtime (&current_time);
printf ("the time is %s\n", asctime (local_time));
You can use the gettimeofday() function to get the time in seconds and microseconds. It is (I think) very fast (there is a similar function in the Linux kernel, do_gettimeofday()), and you can then convert the result to your required format (possibly using the functions mentioned above for the conversion).
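A minimal sketch of that (the specific formatting of the fields is just an assumption about what "required format" means here):

#include <stdio.h>
#include <sys/time.h>
#include <time.h>

int main(void)
{
    struct timeval tv;
    gettimeofday(&tv, NULL);                /* seconds + microseconds */

    struct tm* lt = localtime(&tv.tv_sec);  /* split into date/time fields */
    printf("%04d-%02d-%02d %02d:%02d:%02d.%06ld\n",
           lt->tm_year + 1900, lt->tm_mon + 1, lt->tm_mday,
           lt->tm_hour, lt->tm_min, lt->tm_sec, (long)tv.tv_usec);
    return 0;
}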
I hope this helps.
Just about the only way (that's standard, anyway) is to call time followed by localtime or gmtime.
Well, in general, directly accessing the OS's API to get the time is probably the most efficient, but not very portable.
The C time functions are OK.
But it really depends on your platform.
Assuming a one second resolution is enough, the most efficient way on FreeBSD (or any POSIX system) is likely
Install a one second interval timer with setitimer (ITIMER_REAL, ...)
When it triggers SIGALRM, update a static variable holding the current time
Use the value in the static variable whenever you need the time
Even if signals get lost due to system overload this will correct itself the next time the process is scheduled.
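A minimal sketch of that scheme (assuming a volatile time_t can be read and written atomically on the target platform, which is common but not guaranteed):

#include <signal.h>
#include <stdio.h>
#include <sys/time.h>
#include <time.h>
#include <unistd.h>

/* Cached wall-clock time, refreshed once per second from SIGALRM. */
static volatile time_t cached_time;

static void on_alarm(int sig)
{
    (void)sig;
    cached_time = time(NULL);   /* time() is async-signal-safe */
}

int main(void)
{
    struct sigaction sa;
    sa.sa_handler = on_alarm;
    sigemptyset(&sa.sa_mask);
    sa.sa_flags = SA_RESTART;
    sigaction(SIGALRM, &sa, NULL);

    cached_time = time(NULL);   /* initial value before the first tick */

    struct itimerval it;
    it.it_interval.tv_sec = 1;  it.it_interval.tv_usec = 0;  /* repeat every second */
    it.it_value.tv_sec    = 1;  it.it_value.tv_usec    = 0;  /* first expiry in 1 s */
    setitimer(ITIMER_REAL, &it, NULL);

    /* From here on, reading cached_time is just a memory load. */
    for (int i = 0; i < 5; ++i) {
        printf("cached time: %ld\n", (long)cached_time);
        sleep(2);
    }
    return 0;
}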