How to get the current time in milliseconds from C in Linux?

How do I get the current time on Linux in milliseconds?

This can be achieved using the POSIX clock_gettime function.
In the current version of POSIX, gettimeofday is marked obsolete. This means it may be removed from a future version of the specification. Application writers are encouraged to use the clock_gettime function instead of gettimeofday.
Here is an example of how to use clock_gettime:
#define _POSIX_C_SOURCE 200809L
#include <inttypes.h>
#include <math.h>
#include <stdio.h>
#include <time.h>
void print_current_time_with_ms (void)
{
long ms; // Milliseconds
time_t s; // Seconds
struct timespec spec;
clock_gettime(CLOCK_REALTIME, &spec);
s = spec.tv_sec;
ms = round(spec.tv_nsec / 1.0e6); // Convert nanoseconds to milliseconds
if (ms > 999) {
s++;
ms = 0;
}
printf("Current time: %"PRIdMAX".%03ld seconds since the Epoch\n",
(intmax_t)s, ms);
}
If your goal is to measure elapsed time, and your system supports the "monotonic clock" option, then you should consider using CLOCK_MONOTONIC instead of CLOCK_REALTIME.
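For illustration, here is a minimal sketch of measuring an elapsed interval in milliseconds with CLOCK_MONOTONIC (elapsed_ms is just a hypothetical helper, assuming the monotonic clock is available on your system):
long elapsed_ms(const struct timespec *start, const struct timespec *end)
{
    return (end->tv_sec - start->tv_sec) * 1000
         + (end->tv_nsec - start->tv_nsec) / 1000000;
}
/* Usage:
 *   struct timespec t0, t1;
 *   clock_gettime(CLOCK_MONOTONIC, &t0);
 *   // ... work to be timed ...
 *   clock_gettime(CLOCK_MONOTONIC, &t1);
 *   printf("%ld ms\n", elapsed_ms(&t0, &t1));
 */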

You have to do something like this:
struct timeval tv;
gettimeofday(&tv, NULL);
double time_in_mill =
    (tv.tv_sec) * 1000.0 + (tv.tv_usec) / 1000.0; // convert tv_sec & tv_usec to milliseconds (floating-point constants avoid integer overflow on 32-bit time_t)

Following is a utility function to get the current timestamp in milliseconds:
#include <sys/time.h>
long long current_timestamp() {
struct timeval te;
gettimeofday(&te, NULL); // get current time
long long milliseconds = te.tv_sec*1000LL + te.tv_usec/1000; // calculate milliseconds
// printf("milliseconds: %lld\n", milliseconds);
return milliseconds;
}
About the timezone:
gettimeofday() lets you specify a timezone via its second argument. I pass NULL, which ignores the timezone, but you can specify one if needed.
Update - timezone
Since the numeric representation of time is neither relevant to nor affected by the timezone itself, setting the tz parameter of gettimeofday() is not necessary; it won't make any difference.
And, according to the man page of gettimeofday(), the use of the timezone structure is obsolete, so the tz argument should normally be specified as NULL; for details please check the man page.

Use gettimeofday() to get the time in seconds and microseconds. Combining and rounding to milliseconds is left as an exercise.
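For the record, a minimal sketch of that combining-and-rounding step (now_ms is a hypothetical helper name):
#include <sys/time.h>
#include <stdint.h>
int64_t now_ms(void)
{
    struct timeval tv;
    gettimeofday(&tv, NULL);
    /* round microseconds to the nearest millisecond */
    return (int64_t)tv.tv_sec * 1000 + (tv.tv_usec + 500) / 1000;
}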

C11 timespec_get
It returns up to nanoseconds, rounded to the resolution of the implementation.
It is already implemented in Ubuntu 15.10. The API looks the same as the POSIX clock_gettime.
#include <time.h>
struct timespec ts;
timespec_get(&ts, TIME_UTC);
struct timespec {
time_t tv_sec; /* seconds */
long tv_nsec; /* nanoseconds */
};
More details here: https://stackoverflow.com/a/36095407/895245

This version does not need the math library and checks the return value of clock_gettime().
#include <time.h>
#include <stdlib.h>
#include <stdint.h>
/**
 * @return milliseconds
 */
uint64_t get_now_time() {
    struct timespec spec;
    if (clock_gettime(CLOCK_MONOTONIC, &spec) == -1) {
        abort();
    }
    return (uint64_t)spec.tv_sec * 1000 + spec.tv_nsec / 1000000;
}

Derived from Dan Moulding's POSIX answer, this should work:
#include <time.h>
#include <math.h>
long millis(){
struct timespec _t;
clock_gettime(CLOCK_REALTIME, &_t);
return _t.tv_sec*1000 + lround(_t.tv_nsec/1e6);
}
Also as pointed out by David Guyon: compile with -lm

Jirka Justra's answer returns a long, which is only 32 bits on many platforms. The number of milliseconds since Unix time 0 in 1970 requires more bits, so the type
should be long long or unsigned long long, which is usually 64 bits. Also, as Kevin Thibedeau commented, rounding can be done without converting to floating point or using math.h.
#include <time.h>
long long millis () {
struct timespec t ;
clock_gettime ( CLOCK_REALTIME , & t ) ;
return t.tv_sec * 1000 + ( t.tv_nsec + 500000 ) / 1000000 ;
}
If you are trying to measure a duration of less than about 50 days, 32 bits is enough (2^32 ms is about 49.7 days). int is at least 32 bits on most computers, so in that case the type can be unsigned int.

Related

Getting the localtime in milliseconds

I need the fastest way to get the local time (thus considering the current timezone) with at least millisecond precision; if it is possible to get tenths of milliseconds, that would be even better.
I would like to avoid gettimeofday(), as it is now an obsolete function.
So it seems that I'll need to use clock_gettime(CLOCK_REALTIME, ...) and adjust the result to the current timezone, but how? And where is the best place to do that: before storing the timestamp obtained from clock_gettime, or when converting it to a Gregorian calendar date in the current timezone?
EDIT: My original sample combining clock_gettime and localtime - is there a better way to do this?
#include <time.h>
#include <stdio.h>
int main() {
struct timespec ts;
clock_gettime(CLOCK_REALTIME, &ts);
struct tm* ptm;
ptm = localtime(&(ts.tv_sec));
// Tenths of milliseconds (4 decimal digits)
int tenths_ms = ts.tv_nsec / (100000L);
printf("%04d-%02d-%02d %02d:%02d:%02d.%04d\n",
1900 + ptm->tm_year, ptm->tm_mon + 1, ptm->tm_mday,
ptm->tm_hour, ptm->tm_min, ptm->tm_sec, tenths_ms);
}
I don't think there's a better way than with clock_gettime() and localtime(). However you need to round the returned nanoseconds correctly and consider the case when the time is rounded up to the next second. To format the time you can use strftime() instead of formatting the tm struct manually:
#include <time.h>
#include <stdio.h>
int main(void) {
struct timespec ts;
long msec;
int err = clock_gettime(CLOCK_REALTIME, &ts);
if (err) {
perror("clock_gettime");
return 1;
}
// round nanoseconds to milliseconds
if (ts.tv_nsec >= 999500000) {
ts.tv_sec++;
msec = 0;
} else {
msec = (ts.tv_nsec + 500000) / 1000000;
}
struct tm* ptm = localtime(&ts.tv_sec);
if (ptm == NULL) {
perror("localtime");
return 1;
}
char time_str[sizeof("1900-01-01 23:59:59")];
time_str[strftime(time_str, sizeof(time_str),
"%Y-%m-%d %H:%M:%S", ptm)] = '\0';
printf("%s.%03li\n", time_str, msec);
}
Yes, this can be achieved using the clock_gettime() function. In the current version of POSIX, gettimeofday() is marked obsolete. This means it may be removed from a future version of the specification. Application writers are encouraged to use the clock_gettime() function instead of gettimeofday().
Long story short, here is an example of how to use clock_gettime():
#define _POSIX_C_SOURCE 200809L
#include <inttypes.h>
#include <math.h>
#include <stdio.h>
#include <time.h>
void print_current_time_in_ms (void)
{
long ms; // Milliseconds
time_t s; // Seconds
struct timespec spec;
clock_gettime(CLOCK_REALTIME, &spec);
s = spec.tv_sec;
ms = round(spec.tv_nsec / 1.0e6); // Convert nanoseconds to milliseconds
if (ms > 999) { // round() can yield 1000; carry into the seconds field
s++;
ms = 0;
}
printf("Current time: %"PRIdMAX".%03ld seconds since the Epoch\n",
(intmax_t)s, ms);
}
If your goal is to measure elapsed time, and your system supports the "monotonic clock" option, then you should consider using CLOCK_MONOTONIC instead of CLOCK_REALTIME.
One more point: remember to add the -lm flag when compiling the code, since round() comes from the math library.
To get the timezone, just do the following:
#define _GNU_SOURCE /* for tm_gmtoff and tm_zone */
#include <stdio.h>
#include <time.h>
int main(void)
{
time_t t = time(NULL);
struct tm lt = {0};
localtime_r(&t, &lt);
printf("Offset to GMT is %lds.\n", lt.tm_gmtoff);
printf("The time zone is '%s'.\n", lt.tm_zone);
return 0;
}
Note: The seconds since epoch returned by time() are measured as if in GMT (Greenwich Mean Time).

clock_gettime alternative in Mac OS X

When compiling a program I wrote on Mac OS X after installing the necessary libraries through MacPorts, I get this error:
In function 'nanotime':
error: 'CLOCK_REALTIME' undeclared (first use in this function)
error: (Each undeclared identifier is reported only once
error: for each function it appears in.)
It appears that clock_gettime is not implemented in Mac OS X. Is there an alternative means of getting the epoch time in nanoseconds? Unfortunately gettimeofday is in microseconds.
After hours of perusing different answers, blogs, and headers, I found a portable way to get the current time:
#include <time.h>
#include <sys/time.h>
#ifdef __MACH__
#include <mach/clock.h>
#include <mach/mach.h>
#endif
struct timespec ts;
#ifdef __MACH__ // OS X does not have clock_gettime, use clock_get_time
clock_serv_t cclock;
mach_timespec_t mts;
host_get_clock_service(mach_host_self(), CALENDAR_CLOCK, &cclock);
clock_get_time(cclock, &mts);
mach_port_deallocate(mach_task_self(), cclock);
ts.tv_sec = mts.tv_sec;
ts.tv_nsec = mts.tv_nsec;
#else
clock_gettime(CLOCK_REALTIME, &ts);
#endif
or check out this gist: https://gist.github.com/1087739
Hope this saves someone time. Cheers!
None of the solutions above answers the question. Either they don't give you absolute Unix time, or their accuracy is 1 microsecond. The most popular solution by jbenet is slow (~6000 ns) and does not count in nanoseconds even though its return type suggests so. Below is a test of the two solutions suggested by jbenet and Dmitri B, plus my take on this. You can run the code without changes.
The 3rd solution does count in nanoseconds and gives you absolute Unix time reasonably fast (~90 ns). So if someone finds it useful - please let us all know here :-). I will stick to the one from Dmitri B (solution #1 in the code) - it fits my needs better.
I needed a commercial-quality alternative to clock_gettime() to make pthread_…timed.. calls, and found this discussion very helpful. Thanks guys.
/*
Ratings of alternatives to clock_gettime() to use with pthread timed waits:
Solution 1 "gettimeofday":
Complexity : simple
Portability : POSIX 1
timespec : easy to convert from timeval to timespec
granularity : 1000 ns,
call : 120 ns,
Rating : the best.
Solution 2 "host_get_clock_service, clock_get_time":
Complexity : simple (error handling?)
Portability : Mac specific (is it always available?)
timespec : yes (struct timespec return)
granularity : 1000 ns (don't be fooled by timespec format)
call time : 6000 ns
Rating : the worst.
Solution 3 "mach_absolute_time + gettimeofday once":
Complexity : simple..average (requires initialisation)
Portability : Mac specific. Always available
timespec : system clock can be converted to timespec without float-math
granularity : 1 ns.
call time : 90 ns unoptimised.
Rating : not bad, but do we really need nanoseconds timeout?
References:
- OS X is UNIX System 3 [U03] certified
http://www.opengroup.org/homepage-items/c987.html
- UNIX System 3 <--> POSIX 1 <--> IEEE Std 1003.1-1988
http://en.wikipedia.org/wiki/POSIX
http://www.unix.org/version3/
- gettimeofday() is mandatory on U03,
clock_..() functions are optional on U03,
clock_..() are part of POSIX Realtime extensions
http://www.unix.org/version3/inttables.pdf
- clock_gettime() is not available on MacMini OS X
(Xcode > Preferences > Downloads > Command Line Tools = Installed)
- OS X recommends to use gettimeofday to calculate values for timespec
https://developer.apple.com/library/mac/documentation/Darwin/Reference/ManPages/man3/pthread_cond_timedwait.3.html
- timeval holds microseconds, timespec - nanoseconds
http://www.gnu.org/software/libc/manual/html_node/Elapsed-Time.html
- microtime() is used by kernel to implement gettimeofday()
http://ftp.tw.freebsd.org/pub/branches/7.0-stable/src/sys/kern/kern_time.c
- mach_absolute_time() is really fast
http://www.opensource.apple.com/source/Libc/Libc-320.1.3/i386/mach/mach_absolute_time.c
- Only 9 decimal digits have meaning when int nanoseconds are converted to double seconds
Tutorial: Performance and Time post uses .12 precision for nanoseconds
http://www.macresearch.org/tutorial_performance_and_time
Example:
Three ways to prepare absolute time 1500 milliseconds in the future to use with pthread timed functions.
Output, N = 3, stock MacMini, OSX 10.7.5, 2.3GHz i5, 2GB 1333MHz DDR3:
inittime.tv_sec = 1390659993
inittime.tv_nsec = 361539000
initclock = 76672695144136
get_abs_future_time_0() : 1390659994.861599000
get_abs_future_time_0() : 1390659994.861599000
get_abs_future_time_0() : 1390659994.861599000
get_abs_future_time_1() : 1390659994.861618000
get_abs_future_time_1() : 1390659994.861634000
get_abs_future_time_1() : 1390659994.861642000
get_abs_future_time_2() : 1390659994.861643671
get_abs_future_time_2() : 1390659994.861643877
get_abs_future_time_2() : 1390659994.861643972
*/
#include <stdio.h>
#include <stdlib.h>
#include <time.h>
#include <sys/time.h> /* gettimeofday */
#include <mach/mach_time.h> /* mach_absolute_time */
#include <mach/mach.h> /* host_get_clock_service, mach_... */
#include <mach/clock.h> /* clock_get_time */
#define BILLION 1000000000L
#define MILLION 1000000L
#define NORMALISE_TIMESPEC( ts, uint_milli ) \
do { \
ts.tv_sec += uint_milli / 1000u; \
ts.tv_nsec += (uint_milli % 1000u) * MILLION; \
ts.tv_sec += ts.tv_nsec / BILLION; \
ts.tv_nsec = ts.tv_nsec % BILLION; \
} while (0)
static mach_timebase_info_data_t timebase = { 0, 0 }; /* numer = 0, denom = 0 */
static struct timespec inittime = { 0, 0 }; /* nanoseconds since 1-Jan-1970 to init() */
static uint64_t initclock; /* ticks since boot to init() */
void init()
{
struct timeval micro; /* microseconds since 1 Jan 1970 */
if (mach_timebase_info(&timebase) != 0)
abort(); /* very unlikely error */
if (gettimeofday(&micro, NULL) != 0)
abort(); /* very unlikely error */
initclock = mach_absolute_time();
inittime.tv_sec = micro.tv_sec;
inittime.tv_nsec = micro.tv_usec * 1000;
printf("\tinittime.tv_sec = %ld\n", inittime.tv_sec);
printf("\tinittime.tv_nsec = %ld\n", inittime.tv_nsec);
printf("\tinitclock = %ld\n", (long)initclock);
}
/*
* Get absolute future time for pthread timed calls
* Solution 1: microseconds granularity
*/
struct timespec get_abs_future_time_coarse(unsigned milli)
{
struct timespec future; /* ns since 1 Jan 1970 to 1500 ms in the future */
struct timeval micro = {0, 0}; /* 1 Jan 1970 */
(void) gettimeofday(&micro, NULL);
future.tv_sec = micro.tv_sec;
future.tv_nsec = micro.tv_usec * 1000;
NORMALISE_TIMESPEC( future, milli );
return future;
}
/*
* Solution 2: via clock service
*/
struct timespec get_abs_future_time_served(unsigned milli)
{
struct timespec future;
clock_serv_t cclock;
mach_timespec_t mts;
host_get_clock_service(mach_host_self(), CALENDAR_CLOCK, &cclock);
clock_get_time(cclock, &mts);
mach_port_deallocate(mach_task_self(), cclock);
future.tv_sec = mts.tv_sec;
future.tv_nsec = mts.tv_nsec;
NORMALISE_TIMESPEC( future, milli );
return future;
}
/*
* Solution 3: nanosecond granularity
*/
struct timespec get_abs_future_time_fine(unsigned milli)
{
struct timespec future; /* ns since 1 Jan 1970 to 1500 ms in future */
uint64_t clock; /* ticks since init */
uint64_t nano; /* nanoseconds since init */
clock = mach_absolute_time() - initclock;
nano = clock * (uint64_t)timebase.numer / (uint64_t)timebase.denom;
future = inittime;
future.tv_sec += nano / BILLION;
future.tv_nsec += nano % BILLION;
NORMALISE_TIMESPEC( future, milli );
return future;
}
#define N 3
int main()
{
int i, j;
struct timespec time[3][N];
struct timespec (*get_abs_future_time[])(unsigned milli) =
{
&get_abs_future_time_coarse,
&get_abs_future_time_served,
&get_abs_future_time_fine
};
init();
for (j = 0; j < 3; j++)
for (i = 0; i < N; i++)
time[j][i] = get_abs_future_time[j](1500); /* now() + 1500 ms */
for (j = 0; j < 3; j++)
for (i = 0; i < N; i++)
printf("get_abs_future_time_%d() : %10ld.%09ld\n",
j, time[j][i].tv_sec, time[j][i].tv_nsec);
return 0;
}
In effect, it seems not to be implemented for macOS before Sierra 10.12. You may want to look at this blog entry. The main idea is in the following code snippet:
#include <mach/mach_time.h>
#define ORWL_NANO (+1.0E-9)
#define ORWL_GIGA UINT64_C(1000000000)
static double orwl_timebase = 0.0;
static uint64_t orwl_timestart = 0;
struct timespec orwl_gettime(void) {
// be more careful in a multithreaded environement
if (!orwl_timestart) {
mach_timebase_info_data_t tb = { 0 };
mach_timebase_info(&tb);
orwl_timebase = tb.numer;
orwl_timebase /= tb.denom;
orwl_timestart = mach_absolute_time();
}
struct timespec t;
double diff = (mach_absolute_time() - orwl_timestart) * orwl_timebase;
t.tv_sec = diff * ORWL_NANO;
t.tv_nsec = diff - (t.tv_sec * ORWL_GIGA);
return t;
}
#if defined(__MACH__) && !defined(CLOCK_REALTIME)
#include <sys/time.h>
#define CLOCK_REALTIME 0
// clock_gettime is not implemented on older versions of OS X (< 10.12).
// If implemented, CLOCK_REALTIME will have already been defined.
int clock_gettime(int clk_id, struct timespec* t) { /* clk_id is ignored in this fallback */
struct timeval now;
int rv = gettimeofday(&now, NULL);
if (rv) return rv;
t->tv_sec = now.tv_sec;
t->tv_nsec = now.tv_usec * 1000;
return 0;
}
#endif
Everything you need is described in Technical Q&A QA1398: Mach Absolute Time Units; basically the function you want is mach_absolute_time.
Here's a slightly earlier version of the sample code from that page that does everything using Mach calls (the current version uses AbsoluteToNanoseconds from CoreServices). In current OS X (i.e., on Snow Leopard on x86_64) the absolute time values are actually in nanoseconds and so don't actually require any conversion at all. So, if you're good and writing portable code, you'll convert, but if you're just doing something quick and dirty for yourself, you needn't bother.
FWIW, mach_absolute_time is really fast.
uint64_t GetPIDTimeInNanoseconds(void)
{
uint64_t start;
uint64_t end;
uint64_t elapsed;
uint64_t elapsedNano;
static mach_timebase_info_data_t sTimebaseInfo;
// Start the clock.
start = mach_absolute_time();
// Call getpid. This will produce inaccurate results because
// we're only making a single system call. For more accurate
// results you should call getpid multiple times and average
// the results.
(void) getpid();
// Stop the clock.
end = mach_absolute_time();
// Calculate the duration.
elapsed = end - start;
// Convert to nanoseconds.
// If this is the first time we've run, get the timebase.
// We can use denom == 0 to indicate that sTimebaseInfo is
// uninitialised because it makes no sense to have a zero
// denominator in a fraction.
if ( sTimebaseInfo.denom == 0 ) {
(void) mach_timebase_info(&sTimebaseInfo);
}
// Do the maths. We hope that the multiplication doesn't
// overflow; the price you pay for working in fixed point.
elapsedNano = elapsed * sTimebaseInfo.numer / sTimebaseInfo.denom;
printf("multiplier %u / %u\n", sTimebaseInfo.numer, sTimebaseInfo.denom);
return elapsedNano;
}
Note that macOS Sierra 10.12 now supports clock_gettime():
#include <stdio.h>
#include <time.h>
int main() {
struct timespec res;
struct timespec time;
clock_getres(CLOCK_REALTIME, &res);
clock_gettime(CLOCK_REALTIME, &time);
printf("CLOCK_REALTIME: res.tv_sec=%lu res.tv_nsec=%lu\n", res.tv_sec, res.tv_nsec);
printf("CLOCK_REALTIME: time.tv_sec=%lu time.tv_nsec=%lu\n", time.tv_sec, time.tv_nsec);
}
It does provide nanoseconds; however, the resolution is 1000, so it is (in)effectively limited to microseconds:
CLOCK_REALTIME: res.tv_sec=0 res.tv_nsec=1000
CLOCK_REALTIME: time.tv_sec=1475279260 time.tv_nsec=525627000
You will need Xcode 8 or later to be able to use this feature. Code compiled to use this feature will not run on earlier versions of Mac OS X (10.11 or earlier).
Thanks for your posts. I think you can add the following lines (note that mach_absolute_time() counts from boot, not from the Epoch, and tv_nsec should only hold the sub-second remainder):
#ifdef __MACH__
#include <mach/mach_time.h>
#define CLOCK_REALTIME 0
#define CLOCK_MONOTONIC 0
int clock_gettime(int clk_id, struct timespec *t){
    mach_timebase_info_data_t timebase;
    mach_timebase_info(&timebase);
    uint64_t time;
    time = mach_absolute_time();
    double nseconds = ((double)time * (double)timebase.numer)/((double)timebase.denom);
    double seconds = nseconds / 1e9;
    t->tv_sec = seconds;
    t->tv_nsec = nseconds - ((double)t->tv_sec * 1e9); /* keep only the fractional-second part */
    return 0;
}
#else
#include <time.h>
#endif
Let me know what you get for latency and granularity
Maristic has the best answer here to date. Let me simplify and add a remark. #include and Init():
#include <mach/mach_time.h>
double conversion_factor;
void Init() {
mach_timebase_info_data_t timebase;
mach_timebase_info(&timebase);
conversion_factor = (double)timebase.numer / (double)timebase.denom;
}
Use as:
uint64_t t1, t2;
Init();
t1 = mach_absolute_time();
/* profiled code here */
t2 = mach_absolute_time();
double duration_ns = (double)(t2 - t1) * conversion_factor;
Such a timer has a latency of 65 ns +/- 2 ns (2 GHz CPU). Use this if you need the "time evolution" of a single execution. Otherwise loop your code 10000 times and profile it even with gettimeofday(), which is portable (POSIX) and has a latency of 100 ns +/- 0.5 ns (though only 1 us granularity).
I tried the version with clock_get_time, and cached the host_get_clock_service call. It's way slower than gettimeofday; it takes several microseconds per invocation. And, what's worse, the return value has steps of 1000, i.e. it's still microsecond granularity.
I'd advise using gettimeofday and multiplying tv_usec by 1000.
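For instance, a small sketch of that conversion, filling a timespec from gettimeofday() by multiplying tv_usec by 1000 (now_as_timespec is a hypothetical helper name):
#include <sys/time.h>
#include <time.h>
struct timespec now_as_timespec(void)
{
    struct timeval tv;
    struct timespec ts;
    gettimeofday(&tv, NULL);        /* microsecond granularity */
    ts.tv_sec = tv.tv_sec;
    ts.tv_nsec = tv.tv_usec * 1000; /* promote microseconds to nanoseconds */
    return ts;
}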
Based on the open source mach_absolute_time.c we can see that the line extern mach_port_t clock_port; tells us there's a mach port already initialized for monotonic time. This clock port can be accessed directly without having to resort to calling mach_absolute_time then converting back to a struct timespec. Bypassing a call to mach_absolute_time should improve performance.
I created a small Github repo (PosixMachTiming) with the code based on the extern clock_port and a similar thread. PosixMachTiming emulates clock_gettime for CLOCK_REALTIME and CLOCK_MONOTONIC. It also emulates the function clock_nanosleep for absolute monotonic time. Please give it a try and see how the performance compares. Maybe you might want to create comparative tests or emulate other POSIX clocks/functions?
At least as far back as Mountain Lion, mach_absolute_time() returns nanoseconds rather than "absolute time" units (which used to be the number of bus cycles).
The following code on my MacBook Pro (2 GHz Core i7) showed that the time to call mach_absolute_time() averaged 39 ns over 10 runs (min 35, max 45); that is essentially the time between the returns of the two calls to mach_absolute_time(), i.e. about one invocation:
#include <stdint.h>
#include <mach/mach_time.h>
#include <iostream>
using namespace std;
int main()
{
uint64_t now, then;
uint64_t abs;
then = mach_absolute_time(); // return nanoseconds
now = mach_absolute_time();
abs = now - then;
cout << "nanoseconds = " << abs << endl;
}
void clock_get_uptime(uint64_t *result);
void clock_get_system_microtime( uint32_t *secs,
uint32_t *microsecs);
void clock_get_system_nanotime( uint32_t *secs,
uint32_t *nanosecs);
void clock_get_calendar_microtime( uint32_t *secs,
uint32_t *microsecs);
void clock_get_calendar_nanotime( uint32_t *secs,
uint32_t *nanosecs);
For macOS you can find good information on these functions on the Apple developer page:
https://developer.apple.com/library/content/documentation/Darwin/Conceptual/KernelProgramming/services/services.html
I found another portable solution.
Declare in some header file (or even in your source one):
/* If compiled on DARWIN/Apple platforms. */
#ifdef DARWIN
#define CLOCK_REALTIME 0x2d4e1588
#define CLOCK_MONOTONIC 0x0
#endif /* DARWIN */
Then add the function implementation:
#ifdef DARWIN
/*
* Below we provide an alternative for clock_gettime,
* which is not implemented in Mac OS X.
*/
static inline int clock_gettime(int clock_id, struct timespec *ts)
{
struct timeval tv;
if (clock_id != CLOCK_REALTIME)
{
errno = EINVAL;
return -1;
}
if (gettimeofday(&tv, NULL) < 0)
{
return -1;
}
ts->tv_sec = tv.tv_sec;
ts->tv_nsec = tv.tv_usec * 1000;
return 0;
}
#endif /* DARWIN */
Don't forget to include <time.h>, <sys/time.h> (for gettimeofday) and <errno.h>.

How do I measure time in C?

I want to find out for how long (approximately) some block of code executes. Something like this:
startStopwatch();
// do some calculations
stopStopwatch();
printf("%lf", timeMesuredInSeconds);
How?
You can use the clock function from time.h.
Example:
clock_t start = clock();
/*Do something*/
clock_t end = clock();
float seconds = (float)(end - start) / CLOCKS_PER_SEC;
You can use the time.h library, specifically the time and difftime functions:
/* difftime example */
#include <stdio.h>
#include <time.h>
int main ()
{
time_t start,end;
double dif;
time (&start);
// Do some calculation.
time (&end);
dif = difftime (end,start);
printf ("Your calculations took %.2lf seconds to run.\n", dif );
return 0;
}
(Example adapted from the difftime webpage linked above.)
Please note that this method can only give seconds worth of accuracy - time_t records the seconds since the UNIX epoch (Jan 1st, 1970).
Sometimes you need to measure astronomical (wall-clock) time rather than CPU time (this is especially applicable on Linux):
#include <time.h>
#include <stdio.h>
double what_time_is_it()
{
struct timespec now;
clock_gettime(CLOCK_REALTIME, &now);
return now.tv_sec + now.tv_nsec*1e-9;
}
int main() {
double time = what_time_is_it();
printf("time taken %.6lf\n", what_time_is_it() - time);
return 0;
}
The standard C library provides the time function, and it is useful if you only need to compare seconds. If you need millisecond precision, though, the most portable way is to call timespec_get. It can tell time up to nanosecond precision, if the system supports it. Calling it, however, takes a bit more effort because it involves a struct. Here's a function that just converts the struct to a simple 64-bit integer.
#include <stdio.h>
#include <inttypes.h>
#include <time.h>
int64_t millis()
{
struct timespec now;
timespec_get(&now, TIME_UTC);
return ((int64_t) now.tv_sec) * 1000 + ((int64_t) now.tv_nsec) / 1000000;
}
int main(void)
{
printf("Unix timestamp with millisecond precision: %" PRId64 "\n", millis());
}
Unlike clock, this function returns a Unix timestamp so it will correctly account for the time spent in blocking functions, such as sleep. This is a useful property for benchmarking and implementing delays that take running time into account.
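To illustrate that last point, here is a sketch of a fixed-period delay built on the millis() function above (delay_until and do_work are hypothetical names; the loop busy-waits for simplicity, whereas a real program would sleep for the remaining time):
/* Wait until period_ms milliseconds have passed since start_ms,
 * no matter how long the preceding work took. */
void delay_until(int64_t start_ms, int64_t period_ms)
{
    while (millis() - start_ms < period_ms)
        ;   /* spin; replace with a sleep in real code */
}
/* Usage:
 *   int64_t t = millis();
 *   do_work();           // takes an unknown amount of time
 *   delay_until(t, 100); // total iteration period is ~100 ms
 */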
GetTickCount().
#include <windows.h>
void MeasureIt()
{
DWORD dwStartTime = GetTickCount();
DWORD dwElapsed;
DoSomethingThatYouWantToTime();
dwElapsed = GetTickCount() - dwStartTime;
printf("It took %d.%3d seconds to complete\n", dwElapsed/1000, dwElapsed - dwElapsed/1000);
}
I would use the QueryPerformanceCounter and QueryPerformanceFrequency functions of the Windows API. Call the former before and after the block and subtract (current − old) to get the number of "ticks" between the instances. Divide this by the value obtained by the latter function to get the duration in seconds.
For the sake of completeness, there is a more precise clock counter than GetTickCount() or clock(), which give you only a 32-bit result that can overflow relatively quickly: QueryPerformanceCounter(). QueryPerformanceFrequency() gets the clock frequency, which is the divisor for the difference of two counter values, similar to CLOCKS_PER_SEC in <time.h>.
#include <stdio.h>
#include <windows.h>
int main()
{
LARGE_INTEGER tu_freq, tu_start, tu_end;
__int64 t_ns;
QueryPerformanceFrequency(&tu_freq);
QueryPerformanceCounter(&tu_start);
/* do your stuff */
QueryPerformanceCounter(&tu_end);
t_ns = 1000000000ULL * (tu_end.QuadPart - tu_start.QuadPart) / tu_freq.QuadPart;
printf("dt = %g[s]; (%llu)[ns]\n", t_ns/(double)1e+9, t_ns);
return 0;
}
If you don't need fantastic resolution, you could use GetTickCount(): http://msdn.microsoft.com/en-us/library/ms724408(VS.85).aspx
(If it's for something other than your own simple diagnostics, then note that this number can wrap around, so you'll need to handle that with a little arithmetic).
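As a sketch of that wrap handling: if both samples are kept in unsigned 32-bit DWORDs, the subtraction itself wraps modulo 2^32, so an interval shorter than ~49.7 days comes out right even if the counter rolled over in between:
#include <windows.h>
DWORD start = GetTickCount();
/* ... code to time ... */
DWORD elapsed_ms = GetTickCount() - start; /* correct across a single wrap */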
QueryPerformanceCounter is another reasonable option. (It's also described on MSDN)

Calculating elapsed time in a C program in milliseconds

I want to calculate the time in milliseconds taken by the execution of some part of my program. I've been looking online, but there's not much info on this topic. Any of you know how to do this?
Best way to answer is with an example:
#include <sys/time.h>
#include <stdlib.h>
#include <stdio.h>
#include <math.h>
/* Return 1 if the difference is negative, otherwise 0. */
int timeval_subtract(struct timeval *result, struct timeval *t2, struct timeval *t1)
{
long int diff = (t2->tv_usec + 1000000 * t2->tv_sec) - (t1->tv_usec + 1000000 * t1->tv_sec);
result->tv_sec = diff / 1000000;
result->tv_usec = diff % 1000000;
return (diff<0);
}
void timeval_print(struct timeval *tv)
{
char buffer[30];
time_t curtime;
printf("%ld.%06ld", tv->tv_sec, tv->tv_usec);
curtime = tv->tv_sec;
strftime(buffer, 30, "%m-%d-%Y %T", localtime(&curtime));
printf(" = %s.%06ld\n", buffer, tv->tv_usec);
}
int main()
{
struct timeval tvBegin, tvEnd, tvDiff;
// begin
gettimeofday(&tvBegin, NULL);
timeval_print(&tvBegin);
// lengthy operation
int i,j;
for(i=0;i<999999L;++i) {
j=sqrt(i);
}
//end
gettimeofday(&tvEnd, NULL);
timeval_print(&tvEnd);
// diff
timeval_subtract(&tvDiff, &tvEnd, &tvBegin);
printf("%ld.%06ld\n", tvDiff.tv_sec, tvDiff.tv_usec);
return 0;
}
Another option (at least on some UNIX systems) is clock_gettime and related functions. These allow access to various real-time clocks, and you can select one of the higher-resolution ones and throw away the resolution you don't need.
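For example, you can first ask how much resolution a given clock really provides with clock_getres() before deciding what to throw away (a small sketch, assuming CLOCK_MONOTONIC is available):
#include <stdio.h>
#include <time.h>
int main(void)
{
    struct timespec res;
    if (clock_getres(CLOCK_MONOTONIC, &res) == 0)
        printf("CLOCK_MONOTONIC resolution: %ld s %ld ns\n",
               (long)res.tv_sec, res.tv_nsec);
    return 0;
}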
The gettimeofday function returns the time with microsecond precision (if the platform can support that, of course):
The gettimeofday() function shall obtain the current time, expressed as seconds and microseconds since the Epoch, and store it in the timeval structure pointed to by tp. The resolution of the system clock is unspecified.
C libraries have a function to let you get the system time. You can calculate elapsed time after you capture the start and stop times.
The function is called gettimeofday() and you can look at the man page to find out what to include and how to use it.
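A minimal sketch of that capture-and-subtract pattern with gettimeofday() (error checking omitted):
#include <stdio.h>
#include <sys/time.h>
int main(void)
{
    struct timeval start, stop;
    gettimeofday(&start, NULL);
    /* ... code to measure ... */
    gettimeofday(&stop, NULL);
    long elapsed_ms = (stop.tv_sec - start.tv_sec) * 1000
                    + (stop.tv_usec - start.tv_usec) / 1000;
    printf("elapsed: %ld ms\n", elapsed_ms);
    return 0;
}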
On Windows, you can just do this:
DWORD dwTickCount = GetTickCount();
// Perform some things.
printf("Code took: %dms\n", GetTickCount() - dwTickCount);
Not the most general/elegant solution, but nice and quick when you need it.

How to measure time in milliseconds using ANSI C?

Using only ANSI C, is there any way to measure time with milliseconds precision or more? I was browsing time.h but I only found second precision functions.
There is no ANSI C function that provides better than 1 second time resolution but the POSIX function gettimeofday provides microsecond resolution. The clock function only measures the amount of time that a process has spent executing and is not accurate on many systems.
You can use this function like this:
#include <stdio.h>
#include <sys/time.h>   /* gettimeofday, timersub */
#include <unistd.h>     /* sleep */
struct timeval tval_before, tval_after, tval_result;
gettimeofday(&tval_before, NULL);
// Some code you want to time, for example:
sleep(1);
gettimeofday(&tval_after, NULL);
timersub(&tval_after, &tval_before, &tval_result);
printf("Time elapsed: %ld.%06ld\n", (long int)tval_result.tv_sec, (long int)tval_result.tv_usec);
This returns Time elapsed: 1.000870 on my machine.
#include <time.h>
clock_t uptime = clock() / (CLOCKS_PER_SEC / 1000);
I always use the clock_gettime() function, returning time from the CLOCK_MONOTONIC clock. The time returned is the amount of time, in seconds and nanoseconds, since some unspecified point in the past, such as system startup or the Epoch.
#include <stdio.h>
#include <stdint.h>
#include <time.h>
int64_t timespecDiff(struct timespec *timeA_p, struct timespec *timeB_p)
{
    return ((int64_t)timeA_p->tv_sec * 1000000000LL + timeA_p->tv_nsec) -
           ((int64_t)timeB_p->tv_sec * 1000000000LL + timeB_p->tv_nsec);
}
int main(int argc, char **argv)
{
struct timespec start, end;
clock_gettime(CLOCK_MONOTONIC, &start);
// Some code I am interested in measuring
clock_gettime(CLOCK_MONOTONIC, &end);
uint64_t timeElapsed = timespecDiff(&end, &start);
}
Implementing a portable solution
As has already been mentioned here, there is no proper ANSI solution with sufficient precision for the time measurement problem, so I want to write about the ways to get a portable and, if possible, high-resolution time measurement solution.
Monotonic clock vs. time stamps
Generally speaking there are two ways of time measurement:
monotonic clock;
current (date)time stamp.
The first one uses a monotonic clock counter (sometimes it is called a tick counter) which counts ticks with a predefined frequency, so if you have a tick value and the frequency is known, you can easily convert ticks to elapsed time. It is actually not guaranteed that a monotonic clock reflects the current system time in any way; it may also count ticks since system startup. But it guarantees that the clock always runs in an increasing fashion regardless of the system state. Usually the frequency is bound to a hardware high-resolution source, which is why it provides high accuracy (this depends on hardware, but most modern hardware has no problems with high-resolution clock sources).
The second way provides a (date)time value based on the current system clock value. It may also have a high resolution, but it has one major drawback: this kind of time value can be affected by different system time adjustments, i.e. time zone changes, daylight saving time (DST) changes, NTP server updates, system hibernation and so on. In some circumstances you can get a negative elapsed time value, which can lead to undefined behavior. Actually this kind of time source is less reliable than the first one.
So the first rule in time interval measuring is to use a monotonic clock if possible. It usually has a high precision, and it is reliable by design.
Fallback strategy
When implementing a portable solution it is worth considering a fallback strategy: use a monotonic clock if available and fall back to the time-stamp approach if there is no monotonic clock in the system.
Windows
There is a great article called Acquiring high-resolution time stamps on MSDN about time measurement on Windows which describes all the details you may need to know about software and hardware support. To acquire a high precision time stamp on Windows you should:
query a timer frequency (ticks per second) with QueryPerformanceFrequency:
LARGE_INTEGER tcounter;
LONGLONG freq = 0;
if (QueryPerformanceFrequency (&tcounter) != 0)
    freq = tcounter.QuadPart;
The timer frequency is fixed on the system boot so you need to get it only once.
query the current ticks value with QueryPerformanceCounter:
LARGE_INTEGER tcounter;
LONGLONG tick_value = 0;
if (QueryPerformanceCounter (&tcounter) != 0)
    tick_value = tcounter.QuadPart;
scale the ticks to elapsed time, i.e. to microseconds:
LONGLONG usecs = (tick_value - prev_tick_value) / (freq / 1000000);
According to Microsoft you should not have any problems with this approach on Windows XP and later versions in most cases. But you can also use two fallback solutions on Windows:
GetTickCount provides the number of milliseconds that have elapsed since the system was started. It wraps every 49.7 days, so be careful in measuring longer intervals.
GetTickCount64 is a 64-bit version of GetTickCount, but it is available starting from Windows Vista and above.
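A small usage sketch of the GetTickCount64 fallback (assuming the target is Windows Vista or later):
#include <windows.h>
ULONGLONG start_ms = GetTickCount64();  /* milliseconds since boot, 64-bit, no 49.7-day wrap */
/* ... work ... */
ULONGLONG elapsed_ms = GetTickCount64() - start_ms;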
OS X (macOS)
OS X (macOS) has its own Mach absolute time units which represent a monotonic clock. The best way to start is Apple's article Technical Q&A QA1398: Mach Absolute Time Units, which describes (with code examples) how to use the Mach-specific API to get monotonic ticks. There is also a local question about it called clock_gettime alternative in Mac OS X, which in the end may leave you a bit confused about what to do with the possible value overflow, because the counter frequency is used in the form of a numerator and denominator. So, a short example of how to get elapsed time:
get the clock frequency numerator and denominator:
#include <mach/mach_time.h>
#include <stdint.h>
static uint64_t freq_num = 0;
static uint64_t freq_denom = 0;
void init_clock_frequency ()
{
mach_timebase_info_data_t tb;
if (mach_timebase_info (&tb) == KERN_SUCCESS && tb.denom != 0) {
freq_num = (uint64_t) tb.numer;
freq_denom = (uint64_t) tb.denom;
}
}
You need to do that only once.
query the current tick value with mach_absolute_time:
uint64_t tick_value = mach_absolute_time ();
scale the ticks to elapsed time, i.e. to microseconds, using previously queried numerator and denominator:
uint64_t value_diff = tick_value - prev_tick_value;
/* To prevent overflow */
value_diff /= 1000;
value_diff *= freq_num;
value_diff /= freq_denom;
The main idea to prevent an overflow is to scale down the ticks to the desired accuracy before using the numerator and denominator. As the initial timer resolution is in nanoseconds, we divide it by 1000 to get microseconds. You can find the same approach used in Chromium's time_mac.c. If you really need nanosecond accuracy, consider reading How can I use mach_absolute_time without overflowing?.
Linux and UNIX
The clock_gettime call is your best bet on any POSIX-friendly system. It can query time from different clock sources, and the one we need is CLOCK_MONOTONIC. Not all systems which have clock_gettime support CLOCK_MONOTONIC, so the first thing you need to do is check its availability:
if _POSIX_MONOTONIC_CLOCK is defined to a value >= 0 it means that CLOCK_MONOTONIC is available;
if _POSIX_MONOTONIC_CLOCK is defined to 0 it means that you should additionally check whether it works at runtime; I suggest using sysconf:
#include <unistd.h>
#ifdef _SC_MONOTONIC_CLOCK
if (sysconf (_SC_MONOTONIC_CLOCK) > 0) {
/* A monotonic clock presents */
}
#endif
otherwise a monotonic clock is not supported and you should use a fallback strategy (see below).
Usage of clock_gettime is pretty straightforward:
get the time value:
#include <time.h>
#include <sys/time.h>
#include <stdint.h>
uint64_t get_posix_clock_time ()
{
    struct timespec ts;
    if (clock_gettime (CLOCK_MONOTONIC, &ts) == 0)
        return (uint64_t) ts.tv_sec * 1000000 + ts.tv_nsec / 1000;
    else
        return 0;
}
I've scaled down the time to microseconds here.
calculate the difference with the previous time value received the same way:
uint64_t prev_time_value, time_value;
uint64_t time_diff;
/* Initial time */
prev_time_value = get_posix_clock_time ();
/* Do some work here */
/* Final time */
time_value = get_posix_clock_time ();
/* Time difference */
time_diff = time_value - prev_time_value;
The best fallback strategy is to use the gettimeofday call: it is not monotonic, but it provides quite good resolution. The idea is the same as with clock_gettime, but to get a time value you should:
#include <time.h>
#include <sys/time.h>
#include <stdint.h>
uint64_t get_gtod_clock_time ()
{
    struct timeval tv;
    if (gettimeofday (&tv, NULL) == 0)
        return (uint64_t) tv.tv_sec * 1000000 + tv.tv_usec;
    else
        return 0;
}
Again, the time value is scaled down to microseconds.
SGI IRIX
IRIX has the clock_gettime call, but it lacks CLOCK_MONOTONIC. Instead it has its own monotonic clock source defined as CLOCK_SGI_CYCLE which you should use instead of CLOCK_MONOTONIC with clock_gettime.
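A hedged sketch of that substitution (assuming CLOCK_SGI_CYCLE is exposed as a macro by IRIX's <time.h>; the #ifdef keeps the same code compiling on other systems):
#include <time.h>
#ifdef CLOCK_SGI_CYCLE
#define MONOTONIC_CLOCK_ID CLOCK_SGI_CYCLE  /* IRIX monotonic source */
#else
#define MONOTONIC_CLOCK_ID CLOCK_MONOTONIC
#endif
struct timespec ts;
clock_gettime(MONOTONIC_CLOCK_ID, &ts);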
Solaris and HP-UX
Solaris has its own high-resolution timer interface gethrtime which returns the current timer value in nanoseconds. Though the newer versions of Solaris may have clock_gettime, you can stick to gethrtime if you need to support old Solaris versions.
Usage is simple:
#include <sys/time.h>
void time_measure_example ()
{
hrtime_t prev_time_value, time_value;
hrtime_t time_diff;
/* Initial time */
prev_time_value = gethrtime ();
/* Do some work here */
/* Final time */
time_value = gethrtime ();
/* Time difference */
time_diff = time_value - prev_time_value;
}
HP-UX lacks clock_gettime, but it supports gethrtime which you should use in the same way as on Solaris.
BeOS
BeOS also has its own high-resolution timer interface, system_time, which returns the number of microseconds that have elapsed since the computer was booted.
Example usage:
#include <kernel/OS.h>
void time_measure_example ()
{
bigtime_t prev_time_value, time_value;
bigtime_t time_diff;
/* Initial time */
prev_time_value = system_time ();
/* Do some work here */
/* Final time */
time_value = system_time ();
/* Time difference */
time_diff = time_value - prev_time_value;
}
OS/2
OS/2 has its own API to retrieve high-precision time stamps:
query a timer frequency (ticks per unit) with DosTmrQueryFreq (for GCC compiler):
#define INCL_DOSPROFILE
#define INCL_DOSERRORS
#include <os2.h>
#include <stdint.h>
ULONG freq;
DosTmrQueryFreq (&freq);
query the current ticks value with DosTmrQueryTime:
QWORD tcounter;
uint64_t time_low;
uint64_t time_high;
uint64_t timestamp;
if (DosTmrQueryTime (&tcounter) == NO_ERROR) {
    time_low = (uint64_t) tcounter.ulLo;
    time_high = (uint64_t) tcounter.ulHi;
    timestamp = (time_high << 32) | time_low;
}
scale the ticks to elapsed time, i.e. to microseconds:
uint64_t usecs = (timestamp - prev_timestamp) / (freq / 1000000);
Example implementation
You can take a look at the plibsys library which implements all the described above strategies (see ptimeprofiler*.c for details).
timespec_get from C11
Returns up to nanoseconds, rounded to the resolution of the implementation.
Looks like an ANSI ripoff from POSIX' clock_gettime.
Example: a printf is done every 100ms on Ubuntu 15.10:
#include <stdio.h>
#include <stdlib.h>
#include <time.h>
static long get_nanos(void) {
struct timespec ts;
timespec_get(&ts, TIME_UTC);
return (long)ts.tv_sec * 1000000000L + ts.tv_nsec;
}
int main(void) {
long nanos;
long last_nanos;
long start;
nanos = get_nanos();
last_nanos = nanos;
start = nanos;
while (1) {
nanos = get_nanos();
if (nanos - last_nanos > 100000000L) {
printf("current nanos: %ld\n", nanos - start);
last_nanos = nanos;
}
}
return EXIT_SUCCESS;
}
The C11 N1570 standard draft, 7.27.2.5 "The timespec_get function", says:
If base is TIME_UTC, the tv_sec member is set to the number of seconds since an implementation defined epoch, truncated to a whole value and the tv_nsec member is set to the integral number of nanoseconds, rounded to the resolution of the system clock. (321)
321) Although a struct timespec object describes times with nanosecond resolution, the available resolution is system dependent and may even be greater than 1 second.
C++11 also got std::chrono::high_resolution_clock: C++ Cross-Platform High-Resolution Timer
glibc 2.21 implementation
Can be found under sysdeps/posix/timespec_get.c as:
int
timespec_get (struct timespec *ts, int base)
{
switch (base)
{
case TIME_UTC:
if (__clock_gettime (CLOCK_REALTIME, ts) < 0)
return 0;
break;
default:
return 0;
}
return base;
}
so clearly:
only TIME_UTC is currently supported
it forwards to __clock_gettime (CLOCK_REALTIME, ts), which is a POSIX API: http://pubs.opengroup.org/onlinepubs/9699919799/functions/clock_getres.html
Linux x86-64 has a clock_gettime system call.
Note that this is not a fail-proof micro-benchmarking method because:
man clock_gettime says that this measure may have discontinuities if you change some system time setting while your program runs. This should be a rare event of course, and you might be able to ignore it.
this measures wall time, so if the scheduler decides to forget about your task, it will appear to run for longer.
For those reasons getrusage() might be a better POSIX benchmarking tool, despite its lower maximum precision of one microsecond.
More information at: Measure time in Linux - time vs clock vs getrusage vs clock_gettime vs gettimeofday vs timespec_get?
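A small sketch of the getrusage() approach mentioned above, reading the process's user CPU time with (at best) microsecond precision:
#include <stdio.h>
#include <sys/resource.h>
int main(void)
{
    struct rusage usage;
    if (getrusage(RUSAGE_SELF, &usage) == 0)
        printf("user CPU time: %ld.%06ld s\n",
               (long)usage.ru_utime.tv_sec, (long)usage.ru_utime.tv_usec);
    return 0;
}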
The best precision you can possibly get is through the use of the x86-only rdtsc instruction, which can provide clock-level resolution (one must of course take into account the cost of the rdtsc call itself, which can be measured easily on application startup).
The main catch here is measuring the number of clocks per second, which shouldn't be too hard.
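A minimal sketch using the compiler intrinsic (assuming GCC or Clang on x86, which expose __rdtsc() in <x86intrin.h>); keep in mind the result is in CPU cycles, not seconds, so it still has to be divided by the measured cycles-per-second:
#include <stdio.h>
#include <stdint.h>
#include <x86intrin.h>  /* __rdtsc() on GCC/Clang */
int main(void)
{
    uint64_t start = __rdtsc();
    /* ... code to measure ... */
    uint64_t cycles = __rdtsc() - start;
    printf("elapsed cycles: %llu\n", (unsigned long long)cycles);
    return 0;
}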
The accepted answer is good enough, but my solution is simpler. I just tested it on Linux, using gcc (Ubuntu 7.2.0-8ubuntu3.2) 7.2.0.
It also uses gettimeofday; tv_sec is the seconds part, and tv_usec is in microseconds, not milliseconds.
#include <stdio.h>
#include <sys/time.h>   /* gettimeofday */
#include <unistd.h>     /* sleep */
long currentTimeMillis() {
struct timeval time;
gettimeofday(&time, NULL);
return time.tv_sec * 1000 + time.tv_usec / 1000;
}
int main() {
printf("%ld\n", currentTimeMillis());
// wait 1 second
sleep(1);
printf("%ld\n", currentTimeMillis());
return 0;
}
It prints:
1522139691342
1522139692342, exactly one second later.
As of ANSI/ISO C11 or later, you can use timespec_get() to obtain millisecond, microsecond, or nanosecond timestamps, like this:
#include <time.h>
#include <stdint.h>  /* uint64_t */
/// Convert seconds to milliseconds
#define SEC_TO_MS(sec) ((sec)*1000)
/// Convert seconds to microseconds
#define SEC_TO_US(sec) ((sec)*1000000)
/// Convert seconds to nanoseconds
#define SEC_TO_NS(sec) ((sec)*1000000000)
/// Convert nanoseconds to seconds
#define NS_TO_SEC(ns) ((ns)/1000000000)
/// Convert nanoseconds to milliseconds
#define NS_TO_MS(ns) ((ns)/1000000)
/// Convert nanoseconds to microseconds
#define NS_TO_US(ns) ((ns)/1000)
/// Get a time stamp in milliseconds.
uint64_t millis()
{
struct timespec ts;
timespec_get(&ts, TIME_UTC);
uint64_t ms = SEC_TO_MS((uint64_t)ts.tv_sec) + NS_TO_MS((uint64_t)ts.tv_nsec);
return ms;
}
/// Get a time stamp in microseconds.
uint64_t micros()
{
struct timespec ts;
timespec_get(&ts, TIME_UTC);
uint64_t us = SEC_TO_US((uint64_t)ts.tv_sec) + NS_TO_US((uint64_t)ts.tv_nsec);
return us;
}
/// Get a time stamp in nanoseconds.
uint64_t nanos()
{
struct timespec ts;
timespec_get(&ts, TIME_UTC);
uint64_t ns = SEC_TO_NS((uint64_t)ts.tv_sec) + (uint64_t)ts.tv_nsec;
return ns;
}
// NB: for all 3 timestamp functions above: gcc defines the type of the internal
// `tv_sec` seconds value inside the `struct timespec`, which is used
// internally in these functions, as a signed `long int`. For architectures
// where `long int` is 64 bits, that means it will have undefined
// (signed) overflow in 2^64 sec = 5.8455 x 10^11 years. For architectures
// where this type is 32 bits, it will occur in 2^32 sec = 136 years. If the
// implementation-defined epoch for the timespec is 1970, then your program
// could have undefined behavior signed time rollover in as little as
// 136 years - (year 2021 - year 1970) = 136 - 51 = 85 years. If the epoch
// was 1900 then it could be as short as 136 - (2021 - 1900) = 136 - 121 =
// 15 years. Hopefully your program won't need to run that long. :). To see,
// by inspection, what your system's epoch is, simply print out a timestamp and
// calculate how far back a timestamp of 0 would have occurred. Ex: convert
// the timestamp to years and subtract that number of years from the present
// year.
For a much-more-thorough answer of mine, including with an entire timing library I wrote, see here: How to get a simple timestamp in C.
Ciro Santilli Путлер also presents a concise demo of C11's timespec_get() function here, which is how I first learned how to use that function.
In my more-thorough answer, I explain that on my system, the best resolution possible is ~20ns, but the resolution is hardware-dependent and can vary from system to system.
Under Windows:
SYSTEMTIME t;
wchar_t buff[64];
GetLocalTime(&t);
swprintf_s(buff, 64, L"[%02d:%02d:%02d:%d]\t", t.wHour, t.wMinute, t.wSecond, t.wMilliseconds);
