gettimeofday for displaying current time without using space - c

Is there any way to store the number of time ticks from gettimeofday() and format it in some specific way (for example "%m-%d-%Y %T") without using any array?
This would save memory for each instance of a program which calculates the current time.
Code for the same using an array (taken from "C - gettimeofday for computing time?"):
char buffer[30];
struct timeval tv;
time_t curtime;
gettimeofday(&tv, NULL);
curtime=tv.tv_sec;
strftime(buffer,30,"%m-%d-%Y %T.",localtime(&curtime));
printf("%s%ld\n",buffer,tv.tv_usec);

any way to store number ... from gettimeofday() and ... without using any array?
gettimeofday() does not return the "number of time ticks". It is a *nix function defined to populate a struct timeval, which contains a time_t and a long representing the current time in seconds and microseconds. Use clock() to get a "number of time ticks".
The simplest way to avoid an array is to keep the result in a struct timeval without any modifications.
This will save memory for each instance of a program which calculates current time.
This sounds like OP would like to conserve memory as much as possible. Code could convert the struct timeval to an int64_t (a count of microseconds since the epoch) without much risk of range loss. This would only be a memory savings if sizeof(struct timeval) > sizeof(int64_t).
#include <stdint.h>
#include <sys/time.h>

int64_t timeval_to_int64(const struct timeval *tv) {
    const int32_t us_per_s = 1000000;
    int64_t t = tv->tv_sec;
    if (t >= INT64_MAX / us_per_s
        && (t > INT64_MAX / us_per_s || tv->tv_usec > INT64_MAX % us_per_s)) {
        // Handle overflow
        return INT64_MAX;
    }
    if (t <= INT64_MIN / us_per_s
        && (t < INT64_MIN / us_per_s || tv->tv_usec < INT64_MIN % us_per_s)) {
        // Handle underflow
        return INT64_MIN;
    }
    return t * us_per_s + tv->tv_usec;
}
void int64_to_timeval(struct timeval *tv, int64_t t) {
    const int32_t us_per_s = 1000000;
    tv->tv_sec = t / us_per_s;
    tv->tv_usec = t % us_per_s;
    if (tv->tv_usec < 0) { // ensure the tv_usec member is non-negative
        tv->tv_usec += us_per_s;
        tv->tv_sec--;
    }
}
If code wants to save the timestamp to a file as text with minimal space, a number of choices are available. The question "What is the most efficient binary to text encoding?" goes over some ideas. For OP's code, though, a decimal or hexadecimal print of the 2 members may be sufficient.
printf("%lld.%06ld\n", (long long) tv.tv_sec, tv.tv_usec);
I do not recommend storing timestamps via localtime(), as that imparts the ambiguity or overhead of time zones and daylight saving time. If code must save using month, day, and year, consider ISO 8601 and universal time.
#include <stdio.h>
#include <sys/time.h>
#include <time.h>

int print_timeval_to_ISO8601(const struct timeval *tv) {
    struct tm t = *gmtime(&tv->tv_sec);
    return printf("%04d-%02d-%02dT%02d:%02d:%02d.%06ld\n", //
        t.tm_year + 1900, t.tm_mon + 1, t.tm_mday,         //
        t.tm_hour, t.tm_min, t.tm_sec, tv->tv_usec);
    // Or "%04d%02d%02d%02d%02d%02d.%06ld\n"
}
Note OP's code has a weakness. Microsecond values like 12 will print a ".12" when ".000012" should appear.
// printf("%s%ld\n",buffer,tv.tv_usec);
printf("%s%06ld\n",buffer,tv.tv_usec);

Related

Measuring Elapsed Time Using clock_gettime(CLOCK_MONOTONIC)

I have to measure elapsed time across multiple threads. I must get output like this:
Starting Time | Thread Number
00000000000 | 1
00000000100 | 2
00000000200 | 3
Firstly, I used gettimeofday, but I saw that there were some negative numbers. After a little research I learned that gettimeofday is not reliable for measuring elapsed time, so I decided to use clock_gettime(CLOCK_MONOTONIC).
However, there is a problem. When I measure in whole seconds, I cannot measure time precisely. When I use nanoseconds, the end.tv_nsec field cannot exceed 9 digits (since it is a long holding only the fractional part of a second), so when the count should roll into a 10th digit it instead wraps back to a smaller value, and the computed elapsed time comes out negative.
That is my code:
long elapsedTime;
struct timespec end;
struct timespec start2;

//gettimeofday(&start2, NULL);
clock_gettime(CLOCK_MONOTONIC, &start2);
while (c <= totalCount)
{
    if (strcmp(algorithm, "FCFS") == 0)
    {
        printf("In SErunner count=%d \n", count);
        if (count > 0)
        {
            printf("Count = %d \n", count);
            it = deQueue();
            c++;
            tid = it->tid;
            clock_gettime(CLOCK_MONOTONIC, &end);
            usleep(1000 * (it->value));
            elapsedTime = (end.tv_sec - start2.tv_sec);
            printf("Process of thread %d finished with value %d\n", it->tid, it->value);
            fprintf(outputFile, "%ld %d %d\n", elapsedTime, it->value, it->tid + 1);
        }
    }
Unfortunately, timespec does not have a microseconds field. If you can help me I will be very happy.
Write a helper function that calculates the difference between two timespecs:
int64_t difftimespec_ns(const struct timespec after, const struct timespec before)
{
return ((int64_t)after.tv_sec - (int64_t)before.tv_sec) * (int64_t)1000000000
+ ((int64_t)after.tv_nsec - (int64_t)before.tv_nsec);
}
If you want it in microseconds, just divide it by 1000, or use:
int64_t difftimespec_us(const struct timespec after, const struct timespec before)
{
return ((int64_t)after.tv_sec - (int64_t)before.tv_sec) * (int64_t)1000000
+ ((int64_t)after.tv_nsec - (int64_t)before.tv_nsec) / 1000;
}
Remember to include <inttypes.h>, so that you can use conversion "%" PRIi64 to print integers of int64_t type:
printf("%09" PRIi64 " | 5\n", difftimespec_ns(after, before));
To calculate the delta (elapsed time), you need to make a subtraction between two timeval or two timespec structures, depending on the services you are using.
For timeval, there is a set of operations to manipulate struct timeval in <sys/time.h> (e.g. /usr/include/x86_64-linux-gnu/sys/time.h):
# define timersub(a, b, result) \
do { \
(result)->tv_sec = (a)->tv_sec - (b)->tv_sec; \
(result)->tv_usec = (a)->tv_usec - (b)->tv_usec; \
if ((result)->tv_usec < 0) { \
--(result)->tv_sec; \
(result)->tv_usec += 1000000; \
} \
} while (0)
For timespec, if your header files don't provide one, copy something like the macro defined in this source code:
#define timespecsub(tsp, usp, vsp) \
do { \
(vsp)->tv_sec = (tsp)->tv_sec - (usp)->tv_sec; \
(vsp)->tv_nsec = (tsp)->tv_nsec - (usp)->tv_nsec; \
if ((vsp)->tv_nsec < 0) { \
(vsp)->tv_sec--; \
(vsp)->tv_nsec += 1000000000L; \
} \
} while (0)
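As a rough, untested sketch of how that macro would be used with CLOCK_MONOTONIC (the timed work is a placeholder; the macro is copied from above so the snippet stands alone):
#include <stdio.h>
#include <time.h>

/* copied from above */
#define timespecsub(tsp, usp, vsp)                          \
    do {                                                    \
        (vsp)->tv_sec = (tsp)->tv_sec - (usp)->tv_sec;      \
        (vsp)->tv_nsec = (tsp)->tv_nsec - (usp)->tv_nsec;   \
        if ((vsp)->tv_nsec < 0) {                           \
            (vsp)->tv_sec--;                                \
            (vsp)->tv_nsec += 1000000000L;                  \
        }                                                   \
    } while (0)

int main(void)
{
    struct timespec start, end, diff;

    clock_gettime(CLOCK_MONOTONIC, &start);
    /* ... the work being timed ... */
    clock_gettime(CLOCK_MONOTONIC, &end);

    timespecsub(&end, &start, &diff);   /* diff = end - start */
    printf("elapsed: %lld.%09ld s\n", (long long)diff.tv_sec, diff.tv_nsec);
    return 0;
}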
You could convert the time to a double value using some code such as:
#include <math.h>   /* NAN */
#include <time.h>

double
clocktime_BM (clockid_t clid)
{
  struct timespec ts = { 0, 0 };
  if (clock_gettime (clid, &ts))
    return NAN;
  return (double) ts.tv_sec + 1.0e-9 * ts.tv_nsec;
}
The returned double value is in seconds. On most machines, doubles are IEEE 754 floating-point numbers, and basic operations on them are fast (less than a µs each). Read floating-point-gui.de for more about them. In 2020, x86-64 based laptops and servers have some HPET. Don't expect microsecond precision on time measurements (since Linux runs many processes, and they might get scheduled at arbitrary times; read some good textbook about operating systems for explanations).
(the above code is from Bismon, funded through CHARIOT; something similar appears in RefPerSys)
On Linux, be sure to read syscalls(2), clock_gettime(2), errno(3), time(7), vdso(7).
Consider studying the source code of the Linux kernel and/or of the GNU libc and/or of musl-libc. See LinuxFromScratch and OSDEV and kernelnewbies.
Be aware of the year 2038 problem on some 32-bit computers.

Which clock should be used for inter process communication in linux?

I have implemented two single-threaded processes, A and B, with two message queues [separate queues for send and receive]. Process A sends a message to B and waits for the reply on the receive queue.
I want to send a timestamp from process A to process B. If process B receives the message more than 10 seconds later, I want to send an error string from process B to A.
Accuracy should be in milliseconds.
In process A I used:
struct timespec msg_dispatch_time;
clock_gettime(CLOCK_REALTIME, &msg_dispatch_time);
:
:
add_timestamp_in_msg(msg, msg_dispatch_time);
:
if (msgsnd(msqid, msg, sizeof(msg), msgflg) == -1)
perror("msgop: msgsnd failed");
In process B,
struct timespec msg_dispatch_time;
struct timespec msg_received_time;
:
clock_gettime(CLOCK_REALTIME, &msg_received_time);
:
if( !(time_diff(msg_received_time, msg_dispatch_time) >= 10 ))
    msgsnd(msqid, &sbuf, buf_length, msg_flag);
else
{
    /* send the error string. */
    //msgsnd(msgid,)
}
My questions are:
1) How to write a time_diff function here with millisecond accuracy to compare against 10 seconds?
if( !(time_diff(msg_received_time, msg_dispatch_time) >= 10 ))
/********Existing time diff code******************/
long int time_diff (struct timeval time1, struct timeval time2)
{
    struct timeval diff;
    if (time1.tv_usec < time2.tv_usec) {
        time1.tv_usec += 1000000;
        time1.tv_sec--;
    }
    diff.tv_usec = time1.tv_usec - time2.tv_usec;
    diff.tv_sec = time1.tv_sec - time2.tv_sec;
    return diff.tv_sec; // return the diff in whole seconds
}
2) Is clock_gettime fine to use across processes on the same system?
If you wish to keep using the struct timespec type, then I recommend using a difftime() equivalent for struct timespec type, i.e.
double difftimespec(const struct timespec after, const struct timespec before)
{
return (double)(after.tv_sec - before.tv_sec)
+ (double)(after.tv_nsec - before.tv_nsec) / 1000000000.0;
}
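As a sketch of how that answers question 1 (variable names are placeholders, not the asker's real message layout; difftimespec is copied from above so the snippet stands alone): process B reads the dispatch timestamp out of the message, takes its own CLOCK_REALTIME stamp, and compares the difference, which has far better than millisecond resolution, against 10 seconds:
#include <time.h>

/* copied from above */
double difftimespec(const struct timespec after, const struct timespec before)
{
    return (double)(after.tv_sec - before.tv_sec)
         + (double)(after.tv_nsec - before.tv_nsec) / 1000000000.0;
}

/* Returns nonzero if the message is 10 or more seconds old.
   msg_dispatch_time is the timestamp process A put into the message. */
int message_timed_out(struct timespec msg_dispatch_time)
{
    struct timespec msg_received_time;
    clock_gettime(CLOCK_REALTIME, &msg_received_time);
    return difftimespec(msg_received_time, msg_dispatch_time) >= 10.0;
}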
However, I think there exists a better option for your overall use case.
If you are satisfied with your program working until the year 2242, you could use a 64-bit signed integer to hold the number of nanoseconds since the Epoch. For binary messages, it is a much easier format to handle than struct timespec. Essentially:
#define _POSIX_C_SOURCE 200809L
#include <stdint.h>
#include <time.h>

typedef int64_t nstime;
#define NSTIME_MIN INT64_MIN
#define NSTIME_MAX INT64_MAX

nstime nstime_realtime(void)
{
    struct timespec ts;
    if (clock_gettime(CLOCK_REALTIME, &ts))
        return NSTIME_MIN;
    return ((nstime)ts.tv_sec * 1000000000)
         + (nstime)ts.tv_nsec;
}

double nstime_secs(const nstime ns)
{
    return (double)ns / 1000000000.0;
}

struct timespec nstime_timespec(const nstime ns)
{
    struct timespec ts;
    ts.tv_sec = (time_t)(ns / 1000000000);
    ts.tv_nsec = (long)(ns % 1000000000);
    if (ts.tv_nsec < 0L) { // normalize so tv_nsec is non-negative
        ts.tv_sec--;
        ts.tv_nsec += 1000000000L;
    }
    return ts;
}
You can add and subtract nstime timestamps any way you wish, and they are suitable for binary storage, too (byte order (aka endianness) issues notwithstanding).
(Note that the code above is untested, and I consider it public domain/CC0.)
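As a usage sketch (assuming process A embeds the nstime stamp in the message; the function and variable names here are hypothetical), the 10-second check in process B becomes plain integer arithmetic:
#include <stdint.h>

typedef int64_t nstime;   /* as above */

/* Returns nonzero if 'received' is at least 10 seconds after 'sent'.
   'sent' is the nstime stamp process A placed in the message,
   'received' is nstime_realtime() taken when the message arrived. */
int nstime_timed_out(nstime sent, nstime received)
{
    const nstime limit_ns = (nstime)10 * 1000000000;  /* 10 s in nanoseconds */
    return (received - sent) >= limit_ns;
}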
Using clock_gettime() is fine. Both CLOCK_REALTIME and CLOCK_MONOTONIC are system-wide, i.e. should report the exact same results in different processes, if executed at the same physical moment.
CLOCK_REALTIME is available in all POSIXy systems, but CLOCK_MONOTONIC is optional. Both are immune to daylight savings time changes. Incremental NTP adjustments affect both. Manual changes to system time by an administrator only affect CLOCK_REALTIME. The epoch for CLOCK_REALTIME is currently Jan 1, 1970, 00:00:00, but it is unspecified for CLOCK_MONOTONIC.
Personally, I recommend using clock_gettime(CLOCK_REALTIME, ...), because then your application can talk across processes in a cluster, not just on a local machine; cluster nodes may use different epochs for CLOCK_MONOTONIC.

clock_gettime on Raspberry Pi with C

I want to measure the time from the start to the end of a function inside a loop. This difference will be used to set the number of iterations of the inner while loop, which does some stuff that isn't important here.
I want to time the function like this:
#include <wiringPi.h>
#include <time.h>
#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>
#include <unistd.h>

#define BILLION 1E9

float hz = 1000;
long int nsPerTick = BILLION/hz;
double unprocessed = 1;
struct timespec now;
struct timespec last;

clock_gettime(CLOCK_REALTIME, &last);
[ ... ]
while (1)
{
    clock_gettime(CLOCK_REALTIME, &now);
    double diff = (last.tv_nsec - now.tv_nsec);
    unprocessed = unprocessed + (diff / nsPerTick);
    clock_gettime(CLOCK_REALTIME, &last);
    while (unprocessed >= 1) {
        unprocessed--;
        DO SOME RANDOM MAGIC;
    }
}
The difference between the two timestamps is always negative. I was told this was where the error was:
if ((last.tv_nsec - now.tv_nsec) < 0) {
    double diff = 1000000000 + last.tv_nsec - now.tv_nsec;
}
else {
    double diff = (last.tv_nsec - now.tv_nsec);
}
But still, my diff variable is always negative, like "-1095043244" (although the time spent in the function is of course positive).
What's wrong?
Your first issue is that you have last.tv_nsec - now.tv_nsec, which is the wrong way round.
last.tv_nsec is in the past (let's say it's set to 1), and now.tv_nsec will always be later (for example, 8ns later, so it's 9). In that case, last.tv_nsec - now.tv_nsec == 1 - 9 == -8.
The other issue is that tv_nsec isn't the time in nanoseconds: for that, you'd need to multiply the time in seconds by a billion and add that. So to get the difference in ns between now and last, you want:
((now.tv_sec - last.tv_sec) * ONE_BILLION) + (now.tv_nsec - last.tv_nsec)
(N.B. I'm still a little surprised that although now.tv_nsec and last.tv_nsec are both less than a billion, subtracting one from the other gives a value less than -1000000000, so there may yet be something I'm missing here.)
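Putting both points together, a sketch of the corrected timing loop (the paced work is a placeholder) might look like this:
#include <stdio.h>
#include <time.h>

#define ONE_BILLION 1000000000LL

int main(void)
{
    struct timespec last, now;

    clock_gettime(CLOCK_REALTIME, &last);
    for (int i = 0; i < 5; i++) {
        /* ... the work being paced ... */
        clock_gettime(CLOCK_REALTIME, &now);
        long long diff_ns = (now.tv_sec - last.tv_sec) * ONE_BILLION
                          + (now.tv_nsec - last.tv_nsec);
        printf("iteration %d took %lld ns\n", i, diff_ns);
        last = now;
    }
    return 0;
}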
I was just investigating timing on the Pi, with a similar approach and similar problems. My thoughts are:
You don't have to use double. In fact you also don't need nanoseconds, as the clock on the Pi has 1 microsecond accuracy anyway (that's the way Broadcom did it). I suggest you use gettimeofday() to get microseconds instead of nanoseconds. Then the computation is easy, it's just:
(1000 * 1000 * number of seconds) + number of micros
which you can simply calculate as an unsigned int.
I've implemented the convenient API for this:
#include <sys/time.h>

typedef struct
{
    struct timeval startTimeVal;
} TIMER_usecCtx_t;

void TIMER_usecStart(TIMER_usecCtx_t* ctx)
{
    gettimeofday(&ctx->startTimeVal, NULL);
}

unsigned int TIMER_usecElapsedUs(TIMER_usecCtx_t* ctx)
{
    unsigned int rv;

    /* get current time */
    struct timeval nowTimeVal;
    gettimeofday(&nowTimeVal, NULL);

    /* compute diff */
    rv = 1000000 * (nowTimeVal.tv_sec - ctx->startTimeVal.tv_sec) + nowTimeVal.tv_usec - ctx->startTimeVal.tv_usec;

    return rv;
}
And the usage is:
TIMER_usecCtx_t timer;
TIMER_usecStart(&timer);
while (1)
{
    if (TIMER_usecElapsedUs(&timer) > yourDelayInMicroseconds)
    {
        doSomethingHere();
        TIMER_usecStart(&timer);
    }
}
Also notice the gettime() calls on Pi take almost 1 [us] to complete. So, if you need to call gettime() a lot and need more accuracy, go for some more advanced methods of getting time... I've explained more about it in this short article about Pi get-time calls
Well, I don't know C, but if it's a timing issue on a Raspberry Pi it might have something to do with the lack of an RTC (real time clock) on the chip.
You should not be storing last.tv_nsec - now.tv_nsec in a double.
If you look at the documentation of time.h, you can see that tv_nsec is stored as a long. So you will need something along the lines of:
long diff = end.tv_nsec - begin.tv_nsec;
That being said, comparing only the nanoseconds can go wrong. You also need to look at the number of seconds. So to convert everything to seconds, you can use this:
long nanosec_diff = end.tv_nsec - begin.tv_nsec;
time_t sec_diff = end.tv_sec - begin.tv_sec; // need <sys/types.h> for time_t
double diff_in_seconds = sec_diff + nanosec_diff / 1000000000.0;
Also, make sure you are always subtracting the start time from the end time (or else your time will still be negative).
And there you go!

clock_gettime alternative in Mac OS X

When compiling a program I wrote on Mac OS X after installing the necessary libraries through MacPorts, I get this error:
In function 'nanotime':
error: 'CLOCK_REALTIME' undeclared (first use in this function)
error: (Each undeclared identifier is reported only once
error: for each function it appears in.)
It appears that clock_gettime is not implemented in Mac OS X. Is there an alternative means of getting the epoch time in nanoseconds? Unfortunately gettimeofday is in microseconds.
After hours of perusing different answers, blogs, and headers, I found a portable way to get the current time:
#include <time.h>
#include <sys/time.h>
#ifdef __MACH__
#include <mach/clock.h>
#include <mach/mach.h>
#endif
struct timespec ts;
#ifdef __MACH__ // OS X does not have clock_gettime, use clock_get_time
clock_serv_t cclock;
mach_timespec_t mts;
host_get_clock_service(mach_host_self(), CALENDAR_CLOCK, &cclock);
clock_get_time(cclock, &mts);
mach_port_deallocate(mach_task_self(), cclock);
ts.tv_sec = mts.tv_sec;
ts.tv_nsec = mts.tv_nsec;
#else
clock_gettime(CLOCK_REALTIME, &ts);
#endif
or check out this gist: https://gist.github.com/1087739
Hope this saves someone time. Cheers!
None of the solutions above answers the question. Either they don't give you absolute Unix time, or their accuracy is 1 microsecond. The most popular solution by jbenet is slow (~6000ns) and does not count in nanoseconds even though its return suggests so. Below is a test for 2 solutions suggested by jbenet and Dmitri B, plus my take on this. You can run the code without changes.
The 3rd solution does count in nanoseconds and gives you absolute Unix time reasonably fast (~90ns). So if someone finds it useful, please let us all know here :-). I will stick to the one from Dmitri B (solution #1 in the code); it fits my needs better.
I needed a commercial-quality alternative to clock_gettime() to make pthread_…timed.. calls, and found this discussion very helpful. Thanks guys.
/*
Ratings of alternatives to clock_gettime() to use with pthread timed waits:
Solution 1 "gettimeofday":
Complexity : simple
Portability : POSIX 1
timespec : easy to convert from timeval to timespec
granularity : 1000 ns,
call : 120 ns,
Rating : the best.
Solution 2 "host_get_clock_service, clock_get_time":
Complexity : simple (error handling?)
Portability : Mac specific (is it always available?)
timespec : yes (struct timespec return)
granularity : 1000 ns (don't be fooled by timespec format)
call time : 6000 ns
Rating : the worst.
Solution 3 "mach_absolute_time + gettimeofday once":
Complexity : simple..average (requires initialisation)
Portability : Mac specific. Always available
timespec : system clock can be converted to timespec without float-math
granularity : 1 ns.
call time : 90 ns unoptimised.
Rating : not bad, but do we really need nanoseconds timeout?
References:
- OS X is UNIX System 3 [U03] certified
http://www.opengroup.org/homepage-items/c987.html
- UNIX System 3 <--> POSIX 1 <--> IEEE Std 1003.1-1988
http://en.wikipedia.org/wiki/POSIX
http://www.unix.org/version3/
- gettimeofday() is mandatory on U03,
clock_..() functions are optional on U03,
clock_..() are part of POSIX Realtime extensions
http://www.unix.org/version3/inttables.pdf
- clock_gettime() is not available on MacMini OS X
(Xcode > Preferences > Downloads > Command Line Tools = Installed)
- OS X recommends to use gettimeofday to calculate values for timespec
https://developer.apple.com/library/mac/documentation/Darwin/Reference/ManPages/man3/pthread_cond_timedwait.3.html
- timeval holds microseconds, timespec - nanoseconds
http://www.gnu.org/software/libc/manual/html_node/Elapsed-Time.html
- microtime() is used by kernel to implement gettimeofday()
http://ftp.tw.freebsd.org/pub/branches/7.0-stable/src/sys/kern/kern_time.c
- mach_absolute_time() is really fast
http://www.opensource.apple.com/source/Libc/Libc-320.1.3/i386/mach/mach_absolute_time.c
- Only 9 decimal digits have meaning when int nanoseconds are converted to double seconds
Tutorial: Performance and Time post uses .12 precision for nanoseconds
http://www.macresearch.org/tutorial_performance_and_time
Example:
Three ways to prepare absolute time 1500 milliseconds in the future to use with pthread timed functions.
Output, N = 3, stock MacMini, OSX 10.7.5, 2.3GHz i5, 2GB 1333MHz DDR3:
inittime.tv_sec = 1390659993
inittime.tv_nsec = 361539000
initclock = 76672695144136
get_abs_future_time_0() : 1390659994.861599000
get_abs_future_time_0() : 1390659994.861599000
get_abs_future_time_0() : 1390659994.861599000
get_abs_future_time_1() : 1390659994.861618000
get_abs_future_time_1() : 1390659994.861634000
get_abs_future_time_1() : 1390659994.861642000
get_abs_future_time_2() : 1390659994.861643671
get_abs_future_time_2() : 1390659994.861643877
get_abs_future_time_2() : 1390659994.861643972
*/
#include <stdio.h>
#include <stdlib.h>
#include <time.h>
#include <sys/time.h> /* gettimeofday */
#include <mach/mach_time.h> /* mach_absolute_time */
#include <mach/mach.h> /* host_get_clock_service, mach_... */
#include <mach/clock.h> /* clock_get_time */
#define BILLION 1000000000L
#define MILLION 1000000L
#define NORMALISE_TIMESPEC( ts, uint_milli ) \
do { \
ts.tv_sec += uint_milli / 1000u; \
ts.tv_nsec += (uint_milli % 1000u) * MILLION; \
ts.tv_sec += ts.tv_nsec / BILLION; \
ts.tv_nsec = ts.tv_nsec % BILLION; \
} while (0)
static mach_timebase_info_data_t timebase = { 0, 0 }; /* numer = 0, denom = 0 */
static struct timespec inittime = { 0, 0 }; /* nanoseconds since 1-Jan-1970 to init() */
static uint64_t initclock; /* ticks since boot to init() */
void init()
{
struct timeval micro; /* microseconds since 1 Jan 1970 */
if (mach_timebase_info(&timebase) != 0)
abort(); /* very unlikely error */
if (gettimeofday(&micro, NULL) != 0)
abort(); /* very unlikely error */
initclock = mach_absolute_time();
inittime.tv_sec = micro.tv_sec;
inittime.tv_nsec = micro.tv_usec * 1000;
printf("\tinittime.tv_sec = %ld\n", inittime.tv_sec);
printf("\tinittime.tv_nsec = %ld\n", inittime.tv_nsec);
printf("\tinitclock = %ld\n", (long)initclock);
}
/*
* Get absolute future time for pthread timed calls
* Solution 1: microseconds granularity
*/
struct timespec get_abs_future_time_coarse(unsigned milli)
{
struct timespec future; /* ns since 1 Jan 1970 to 1500 ms in the future */
struct timeval micro = {0, 0}; /* 1 Jan 1970 */
(void) gettimeofday(&micro, NULL);
future.tv_sec = micro.tv_sec;
future.tv_nsec = micro.tv_usec * 1000;
NORMALISE_TIMESPEC( future, milli );
return future;
}
/*
* Solution 2: via clock service
*/
struct timespec get_abs_future_time_served(unsigned milli)
{
struct timespec future;
clock_serv_t cclock;
mach_timespec_t mts;
host_get_clock_service(mach_host_self(), CALENDAR_CLOCK, &cclock);
clock_get_time(cclock, &mts);
mach_port_deallocate(mach_task_self(), cclock);
future.tv_sec = mts.tv_sec;
future.tv_nsec = mts.tv_nsec;
NORMALISE_TIMESPEC( future, milli );
return future;
}
/*
* Solution 3: nanosecond granularity
*/
struct timespec get_abs_future_time_fine(unsigned milli)
{
struct timespec future; /* ns since 1 Jan 1970 to 1500 ms in future */
uint64_t clock; /* ticks since init */
uint64_t nano; /* nanoseconds since init */
clock = mach_absolute_time() - initclock;
nano = clock * (uint64_t)timebase.numer / (uint64_t)timebase.denom;
future = inittime;
future.tv_sec += nano / BILLION;
future.tv_nsec += nano % BILLION;
NORMALISE_TIMESPEC( future, milli );
return future;
}
#define N 3
int main()
{
int i, j;
struct timespec time[3][N];
struct timespec (*get_abs_future_time[])(unsigned milli) =
{
&get_abs_future_time_coarse,
&get_abs_future_time_served,
&get_abs_future_time_fine
};
init();
for (j = 0; j < 3; j++)
for (i = 0; i < N; i++)
time[j][i] = get_abs_future_time[j](1500); /* now() + 1500 ms */
for (j = 0; j < 3; j++)
for (i = 0; i < N; i++)
printf("get_abs_future_time_%d() : %10ld.%09ld\n",
j, time[j][i].tv_sec, time[j][i].tv_nsec);
return 0;
}
Indeed, it seems not to be implemented on macOS before Sierra 10.12. You may want to look at this blog entry. The main idea is in the following code snippet:
#include <mach/mach_time.h>
#include <stdint.h>
#include <time.h>
#define ORWL_NANO (+1.0E-9)
#define ORWL_GIGA UINT64_C(1000000000)
static double orwl_timebase = 0.0;
static uint64_t orwl_timestart = 0;
struct timespec orwl_gettime(void) {
// be more careful in a multithreaded environment
if (!orwl_timestart) {
mach_timebase_info_data_t tb = { 0 };
mach_timebase_info(&tb);
orwl_timebase = tb.numer;
orwl_timebase /= tb.denom;
orwl_timestart = mach_absolute_time();
}
struct timespec t;
double diff = (mach_absolute_time() - orwl_timestart) * orwl_timebase;
t.tv_sec = diff * ORWL_NANO;
t.tv_nsec = diff - (t.tv_sec * ORWL_GIGA);
return t;
}
#if defined(__MACH__) && !defined(CLOCK_REALTIME)
#include <sys/time.h>
#define CLOCK_REALTIME 0
// clock_gettime is not implemented on older versions of OS X (< 10.12).
// If implemented, CLOCK_REALTIME will have already been defined.
int clock_gettime(int clk_id, struct timespec* t) {
    (void)clk_id; // only CLOCK_REALTIME is emulated here
    struct timeval now;
    int rv = gettimeofday(&now, NULL);
    if (rv) return rv;
    t->tv_sec = now.tv_sec;
    t->tv_nsec = now.tv_usec * 1000;
    return 0;
}
#endif
Everything you need is described in Technical Q&A QA1398: Mach Absolute Time Units; basically, the function you want is mach_absolute_time.
Here's a slightly earlier version of the sample code from that page that does everything using Mach calls (the current version uses AbsoluteToNanoseconds from CoreServices). In current OS X (i.e., on Snow Leopard on x86_64) the absolute time values are actually in nanoseconds and so don't actually require any conversion at all. So, if you're good and writing portable code, you'll convert, but if you're just doing something quick and dirty for yourself, you needn't bother.
FWIW, mach_absolute_time is really fast.
uint64_t GetPIDTimeInNanoseconds(void)
{
uint64_t start;
uint64_t end;
uint64_t elapsed;
uint64_t elapsedNano;
static mach_timebase_info_data_t sTimebaseInfo;
// Start the clock.
start = mach_absolute_time();
// Call getpid. This will produce inaccurate results because
// we're only making a single system call. For more accurate
// results you should call getpid multiple times and average
// the results.
(void) getpid();
// Stop the clock.
end = mach_absolute_time();
// Calculate the duration.
elapsed = end - start;
// Convert to nanoseconds.
// If this is the first time we've run, get the timebase.
// We can use denom == 0 to indicate that sTimebaseInfo is
// uninitialised because it makes no sense to have a zero
// denominator in a fraction.
if ( sTimebaseInfo.denom == 0 ) {
(void) mach_timebase_info(&sTimebaseInfo);
}
// Do the maths. We hope that the multiplication doesn't
// overflow; the price you pay for working in fixed point.
elapsedNano = elapsed * sTimebaseInfo.numer / sTimebaseInfo.denom;
printf("multiplier %u / %u\n", sTimebaseInfo.numer, sTimebaseInfo.denom);
return elapsedNano;
}
Note that macOS Sierra 10.12 now supports clock_gettime():
#include <stdio.h>
#include <time.h>
int main() {
struct timespec res;
struct timespec time;
clock_getres(CLOCK_REALTIME, &res);
clock_gettime(CLOCK_REALTIME, &time);
printf("CLOCK_REALTIME: res.tv_sec=%lu res.tv_nsec=%lu\n", res.tv_sec, res.tv_nsec);
printf("CLOCK_REALTIME: time.tv_sec=%lu time.tv_nsec=%lu\n", time.tv_sec, time.tv_nsec);
}
It does provide nanoseconds; however, the resolution is 1000, so it is effectively limited to microseconds:
CLOCK_REALTIME: res.tv_sec=0 res.tv_nsec=1000
CLOCK_REALTIME: time.tv_sec=1475279260 time.tv_nsec=525627000
You will need Xcode 8 or later to be able to use this feature. Code compiled to use this feature will not run on older versions of Mac OS X (10.11 or earlier).
Thanks for your posts.
I think you can add the following lines:
#ifdef __MACH__
#include <mach/mach_time.h>
#define CLOCK_REALTIME 0
#define CLOCK_MONOTONIC 0
int clock_gettime(int clk_id, struct timespec *t){
    (void)clk_id;
    mach_timebase_info_data_t timebase;
    mach_timebase_info(&timebase);
    uint64_t time;
    time = mach_absolute_time();
    double nseconds = ((double)time * (double)timebase.numer)/((double)timebase.denom);
    double seconds = ((double)time * (double)timebase.numer)/((double)timebase.denom * 1e9);
    t->tv_sec = seconds;
    t->tv_nsec = nseconds - ((double)t->tv_sec * 1e9); // keep only the sub-second part
    return 0;
}
#else
#include <time.h>
#endif
Let me know what you get for latency and granularity
Maristic has the best answer here to date. Let me simplify and add a remark. #include and Init():
#include <mach/mach_time.h>
double conversion_factor;
void Init() {
mach_timebase_info_data_t timebase;
mach_timebase_info(&timebase);
conversion_factor = (double)timebase.numer / (double)timebase.denom;
}
Use as:
uint64_t t1, t2;
Init();
t1 = mach_absolute_time();
/* profiled code here */
t2 = mach_absolute_time();
double duration_ns = (double)(t2 - t1) * conversion_factor;
Such a timer has a latency of 65 ns +/- 2 ns (on a 2 GHz CPU). Use this if you need the "time evolution" of a single execution. Otherwise loop your code 10000 times and profile it even with gettimeofday(), which is portable (POSIX) and has a latency of 100 ns +/- 0.5 ns (though only 1 µs granularity).
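A rough sketch of that loop-and-average approach with gettimeofday() (the iteration count and the profiled work are placeholders):
#include <stdio.h>
#include <sys/time.h>

int main(void)
{
    const int iterations = 10000;
    struct timeval t1, t2;

    gettimeofday(&t1, NULL);
    for (int i = 0; i < iterations; i++) {
        /* ... code being profiled ... */
    }
    gettimeofday(&t2, NULL);

    double total_us = (t2.tv_sec - t1.tv_sec) * 1e6 + (t2.tv_usec - t1.tv_usec);
    printf("average: %.3f us per iteration\n", total_us / iterations);
    return 0;
}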
I tried the version with clock_get_time, and did cache the host_get_clock_service call. It's way slower than gettimeofday, it takes several microseconds per invocation. And, what's worse, the return value has steps of 1000, i.e. it's still microsecond granularity.
I'd advise using gettimeofday and multiplying tv_usec by 1000.
Based on the open source mach_absolute_time.c we can see that the line extern mach_port_t clock_port; tells us there's a mach port already initialized for monotonic time. This clock port can be accessed directly without having to resort to calling mach_absolute_time then converting back to a struct timespec. Bypassing a call to mach_absolute_time should improve performance.
I created a small Github repo (PosixMachTiming) with the code based on the extern clock_port and a similar thread. PosixMachTiming emulates clock_gettime for CLOCK_REALTIME and CLOCK_MONOTONIC. It also emulates the function clock_nanosleep for absolute monotonic time. Please give it a try and see how the performance compares. Maybe you might want to create comparative tests or emulate other POSIX clocks/functions?
At least as far back as Mountain Lion, mach_absolute_time() returns nanoseconds and not absolute time (which was the number of bus cycles).
The following code on my MacBook Pro (2 GHz Core i7) showed that the time to call mach_absolute_time() averaged 39 ns over 10 runs (min 35, max 45), which is basically the time between the return of the two calls to mach_absolute_time(), i.e. roughly the cost of one invocation:
#include <stdint.h>
#include <mach/mach_time.h>
#include <iostream>
using namespace std;
int main()
{
uint64_t now, then;
uint64_t abs;
then = mach_absolute_time(); // return nanoseconds
now = mach_absolute_time();
abs = now - then;
cout << "nanoseconds = " << abs << endl;
}
void clock_get_uptime(uint64_t *result);
void clock_get_system_microtime( uint32_t *secs,
uint32_t *microsecs);
void clock_get_system_nanotime( uint32_t *secs,
uint32_t *nanosecs);
void clock_get_calendar_microtime( uint32_t *secs,
uint32_t *microsecs);
void clock_get_calendar_nanotime( uint32_t *secs,
uint32_t *nanosecs);
For macOS you can find good information on the Apple developer page:
https://developer.apple.com/library/content/documentation/Darwin/Conceptual/KernelProgramming/services/services.html
I found another portable solution.
Declare in some header file (or even in your source one):
/* If compiled on DARWIN/Apple platforms. */
#ifdef DARWIN
#define CLOCK_REALTIME 0x2d4e1588
#define CLOCK_MONOTONIC 0x0
#endif /* DARWIN */
And then add the function implementation:
#ifdef DARWIN
/*
 * Below we provide an alternative for clock_gettime,
 * which is not implemented in Mac OS X.
 */
static inline int clock_gettime(int clock_id, struct timespec *ts)
{
    struct timeval tv;

    if (clock_id != CLOCK_REALTIME)
    {
        errno = EINVAL;
        return -1;
    }
    if (gettimeofday(&tv, NULL) < 0)
    {
        return -1;
    }
    ts->tv_sec = tv.tv_sec;
    ts->tv_nsec = tv.tv_usec * 1000;
    return 0;
}
#endif /* DARWIN */
Don't forget to include <time.h> (plus <sys/time.h> for gettimeofday and <errno.h> for EINVAL).
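With that fallback in place (or on macOS 10.12+ where clock_gettime exists natively), calling code can stay the same on both platforms; a minimal sketch:
#include <stdio.h>
#include <time.h>

int main(void)
{
    struct timespec ts;
    if (clock_gettime(CLOCK_REALTIME, &ts) == 0)
        printf("%lld.%09ld\n", (long long)ts.tv_sec, ts.tv_nsec);
    return 0;
}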

Calculating elapsed time in a C program in milliseconds

I want to calculate the time in milliseconds taken by the execution of some part of my program. I've been looking online, but there's not much info on this topic. Any of you know how to do this?
Best way to answer is with an example:
#include <sys/time.h>
#include <stdlib.h>
#include <stdio.h>
#include <math.h>
/* Return 1 if the difference is negative, otherwise 0. */
int timeval_subtract(struct timeval *result, struct timeval *t2, struct timeval *t1)
{
    long int diff = (t2->tv_usec + 1000000 * t2->tv_sec) - (t1->tv_usec + 1000000 * t1->tv_sec);
    result->tv_sec = diff / 1000000;
    result->tv_usec = diff % 1000000;

    return (diff < 0);
}

void timeval_print(struct timeval *tv)
{
    char buffer[30];
    time_t curtime;

    printf("%ld.%06ld", tv->tv_sec, tv->tv_usec);
    curtime = tv->tv_sec;
    strftime(buffer, 30, "%m-%d-%Y %T", localtime(&curtime));
    printf(" = %s.%06ld\n", buffer, tv->tv_usec);
}

int main()
{
    struct timeval tvBegin, tvEnd, tvDiff;

    // begin
    gettimeofday(&tvBegin, NULL);
    timeval_print(&tvBegin);

    // lengthy operation
    int i, j;
    for (i = 0; i < 999999L; ++i) {
        j = sqrt(i);
    }

    // end
    gettimeofday(&tvEnd, NULL);
    timeval_print(&tvEnd);

    // diff
    timeval_subtract(&tvDiff, &tvEnd, &tvBegin);
    printf("%ld.%06ld\n", tvDiff.tv_sec, tvDiff.tv_usec);

    return 0;
}
Another option (at least on some UNIX systems) is clock_gettime and related functions. These give access to various real-time clocks; you can select one of the higher-resolution ones and throw away the resolution you don't need.
The gettimeofday function returns the time with microsecond precision (if the platform can support that, of course):
The gettimeofday() function shall obtain the current time, expressed as seconds and microseconds since the Epoch, and store it in the timeval structure pointed to by tp. The resolution of the system clock is unspecified.
C libraries have a function to let you get the system time. You can calculate elapsed time after you capture the start and stop times.
The function is called gettimeofday() and you can look at the man page to find out what to include and how to use it.
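For example, a minimal sketch that times an arbitrary piece of code in milliseconds with gettimeofday() (the timed work is a placeholder):
#include <stdio.h>
#include <sys/time.h>

int main(void)
{
    struct timeval start, stop;

    gettimeofday(&start, NULL);
    /* ... part of the program being timed ... */
    gettimeofday(&stop, NULL);

    long elapsed_ms = (stop.tv_sec - start.tv_sec) * 1000L
                    + (stop.tv_usec - start.tv_usec) / 1000L;
    printf("elapsed: %ld ms\n", elapsed_ms);
    return 0;
}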
On Windows, you can just do this:
DWORD dwTickCount = GetTickCount();
// Perform some things.
printf("Code took: %lums\n", GetTickCount() - dwTickCount);
Not the most general/elegant solution, but nice and quick when you need it.
