High-resolution (100 nsec) timing on Linux/C

I use Raspbian on the Raspberry Pi B+ to generate 1700 nsec (+-10%) pulses on a GPIO output, so I need a high-resolution wall-clock timer. There are several references to clock_gettime for high-resolution timing (e.g. 1, 2). However, with the short code below I get (1) only microsecond resolution and (2) a minimum measurable time that is too long:
#include <stdio.h>
#include <time.h>

int main(void)
{
    long start_time, current_time, elapsed_time;
    struct timespec resolution;

    clock_gettime(CLOCK_MONOTONIC, &resolution);
    start_time = resolution.tv_nsec;
    clock_gettime(CLOCK_MONOTONIC, &resolution);
    current_time = resolution.tv_nsec;

    elapsed_time = current_time - start_time;
    if (elapsed_time < 0) {
        elapsed_time += 1000000000;  /* in case tv_nsec wrapped past a second boundary */
    }
    printf("%ld\n", elapsed_time);
    return 0;
}
The result is 3000 (nanoseconds), i.e. even this shortest possible piece of code takes too much time. If I add some time-consuming code, the next greater result is 4000.
How can I get a wall-clock timer with at least 100 nsec resolution and a smallest measurable time of less than 1700 nsec? The GPIO benchmark shows that the Raspberry Pi can do better (100 nsec pulses with WiringPi). I am aware that additional electronics (a monoflop) could help, but I hope to solve the problem in a simpler way. Thank you.
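For reference, a sketch of the same measurement that reads both timespec fields instead of only tv_nsec; this removes the need for the wrap-around correction and handles intervals longer than one second (it assumes POSIX clock_gettime with CLOCK_MONOTONIC, as in the question, and does not by itself lower the minimum measurable overhead):
#include <stdio.h>
#include <time.h>

int main(void)
{
    struct timespec start, end;

    clock_gettime(CLOCK_MONOTONIC, &start);
    /* ... code being timed ... */
    clock_gettime(CLOCK_MONOTONIC, &end);

    /* Combine seconds and nanoseconds into one 64-bit nanosecond count. */
    long long elapsed_ns = (end.tv_sec - start.tv_sec) * 1000000000LL
                         + (end.tv_nsec - start.tv_nsec);
    printf("%lld ns\n", elapsed_ns);
    return 0;
}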

Related

gettimeofday to calculate elapsed time when the day changes

The typical example that I see when trying to measure elapsed time goes something like this:
#include <sys/time.h>
#include <stdio.h>

int main() {
    struct timeval start, end;
    gettimeofday(&start, NULL);
    //Do some operation
    gettimeofday(&end, NULL);
    unsigned long long end_time = (end.tv_sec * 1000000 + end.tv_usec);
    unsigned long long start_time = (start.tv_sec * 1000000 + start.tv_usec);
    printf("Time taken : %ld micro seconds\n", end_time - start_time);
    return 0;
}
This is great when it's somewhere around midday, but if someone were to run some tests late at night this wouldn't work. My approach to address it is something like this:
#include <sys/time.h>
#include <stdio.h>

int main() {
    struct timeval start, end;
    gettimeofday(&start, NULL);
    //Do some operation
    gettimeofday(&end, NULL);
    unsigned long long end_time = (end.tv_sec * 1000000 + end.tv_usec);
    unsigned long long start_time = (start.tv_sec * 1000000 + start.tv_usec);
    unsigned long long elapsed_time = 0;
    if ( end_time < start_time )
        //Made up some constant that defines 86,400,000,000 microseconds in a day
        elapsed_time = end_time + (NUM_OF_USEC_IN_A_DAY - start_time);
    else
        elapsed_time = end_time - start_time;
    printf("Time taken : %ld micro seconds\n", elapsed_time);
    return 0;
}
Is there a better way of anticipating day change using gettimeofday?
Despite the name, gettimeofday results do not roll over daily. The gettimeofday seconds count, just like the time seconds count, started at zero on the first day of 1970 (specifically 1970-01-01 00:00:00 +00:00) and has been incrementing steadily ever since. On a system with 32-bit time_t, it will roll over sometime in 2038; people are working right now to phase out the use of 32-bit time_t for this exact reason and we expect to be done well before 2038.
gettimeofday results are also independent of time zone and not affected by daylight-saving shifts. They can go backward when the computer's clock is reset. If you don't want to worry about that, you can use clock_gettime(CLOCK_MONOTONIC) instead.
Why would you want to handle the case where end.tv_sec is less than start.tv_sec? Are you trying to account for ntpd changes? If so, and especially if you want to record only elapsed time, then use clock_gettime instead of gettimeofday as the former is immune to wall clock changes.
but if someone were to run some tests late at night this wouldn't work
That's an incorrect statement because gettimeofday is not relative to the start of each day but rather relative to a fixed point in time. From the gettimeofday manual:
gives the number of seconds and microseconds since the Epoch
So the first example will work as long as there are no jumps in time (e.g. due to manual time setting or NTP). Again from the manual:
The time returned by gettimeofday() is affected by discontinuous jumps in the system time (e.g., if the system administrator manually changes the system time). If you need a monotonically increasing clock, see clock_gettime(2).
Is there a better way of anticipating day change using gettimeofday?
In addition to the other problems identified in various answers, the following is also subject to integer overflow when time_t is only 32-bit. Instead, scale by an (unsigned) long long constant.
// unsigned long long end_time = (end.tv_sec * 1000000 + end.tv_usec);
long long end_time = (end.tv_sec * 1000000ll + end.tv_usec);
// Use correct specifier, not %ld for unsigned long long
// printf("Time taken : %ld micro seconds\n", elapsed_time);
long long elapsed_time = end_time - start_time;
printf("Time taken : %lld micro seconds\n", elapsed_time);
Tip: enable all compiler warnings.
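Putting those fixes together, a corrected version of the original example might look like this (a sketch; timersub() from <sys/time.h> would do the same subtraction):
#include <sys/time.h>
#include <stdio.h>

int main(void) {
    struct timeval start, end;
    gettimeofday(&start, NULL);
    //Do some operation
    gettimeofday(&end, NULL);

    /* Scale by a long long constant so 32-bit time_t cannot overflow. */
    long long end_time   = end.tv_sec * 1000000LL + end.tv_usec;
    long long start_time = start.tv_sec * 1000000LL + start.tv_usec;

    /* No day-wrap handling needed: tv_sec counts from the 1970 epoch. */
    printf("Time taken : %lld micro seconds\n", end_time - start_time);
    return 0;
}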

Find elapsed time in C on Linux

I have to calculate the time taken by a function to complete.
This function is called in a loop and I want to find out the total time.
Usually the time is very small, in the nano- or microsecond range.
To find the elapsed time I used gettimeofday() with struct timeval and clock_gettime() with struct timespec.
The problem is that the time returned via timeval in seconds is correct, but the microseconds value is wrong.
Similarly, the time returned via timespec in nanoseconds is wrong.
Wrong in the sense that they do not tally with the time returned in seconds.
For clock_gettime() I tried both CLOCK_PROCESS_CPUTIME_ID and CLOCK_MONOTONIC.
Using clock() also does not help.
Code snippet:
struct timeval funcTimestart_timeval, funcTimeEnd_timeval;
struct timespec funcTimeStart_timespec, funcTimeEnd_timespec;
unsigned long elapsed_nanos = 0;
unsigned long elapsed_seconds = 0;
unsigned long diffInNanos = 0;
unsigned long Func_elapsed_nanos = 0;
unsigned long Func_elapsed_seconds = 0;

while(...)
{
    gettimeofday(&funcTimestart_timeval, NULL);
    clock_gettime(CLOCK_PROCESS_CPUTIME_ID, &funcTimeStart_timespec);
    ...
    demo_func();
    ...
    gettimeofday(&funcTimeEnd_timeval, NULL);
    clock_gettime(CLOCK_PROCESS_CPUTIME_ID, &funcTimeEnd_timespec);

    elapsed_seconds = funcTimeEnd_timeval.tv_sec - funcTimestart_timeval.tv_sec;
    Func_elapsed_seconds += elapsed_seconds;

    elapsed_nanos = funcTimeEnd_timespec.tv_nsec - funcTimeStart_timespec.tv_nsec;
    Func_elapsed_nanos += elapsed_nanos;
}
printf("Total time taken by demo_func() is %lu seconds( %lu nanoseconds )\n", Func_elapsed_seconds, Func_elapsed_nanos );
Printf output:
Total time taken by demo_func() is 60 seconds( 76806787 nanoseconds )
See that the time in seconds and nanoseconds do not match.
How to resolve this issue or any other appropriate method to find elapsed time?
Did you read the documentation of time(7) and clock_gettime(2)? Please read it twice.
The struct timespec is not supposed to express the same time twice. The field tv_sec gives the seconds part ts, and the field tv_nsec gives the nanoseconds part tn, to express the time t = ts + 10^-9 * tn.
I would suggest converting that to floating point, e.g.
printf ("total time %g\n",
(double)Func_elapsed_seconds + 1.0e-9*Func_elapsed_nanos);
Using floating point is simpler, and the precision is generally enough for most needs. Otherwise, when you add or subtract struct timespec values you need to handle the case where the tv_nsec sum or difference is negative or exceeds 1000000000.
The problem is you are printing/comparing wrong values.
76,806,787 nanoseconds is equal to ~76 milliseconds; you cannot compare it with 60 seconds.
You are ignoring the time in seconds stored in funcTimeEnd_timespec.tv_sec.
You should also print funcTimeEnd_timespec.tv_sec - funcTimeStart_timespec.tv_sec and, as @Basile Starynkevitch suggested, add to it the nanoseconds part multiplied by 1.0e-9. Then you can compare the elapsed time shown by both functions.
I am replying to the previous answers as an answer because I wanted to paste code snippets.
The question is: to find the elapsed time, should I
first subtract the corresponding fields and then add the results,
(end.tv_sec - start.tv_sec) + 1.0e-9*(end.tv_nsec - start.tv_nsec)
or
first combine the fields and then compute the difference,
(end.tv_sec + 1.0e-9*end.tv_nsec) - (start.tv_sec + 1.0e-9*start.tv_nsec)
In the first case end.tv_nsec is quite often smaller than start.tv_nsec, so the difference becomes a negative number, and this gives me a wrong result.
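Mathematically the two expressions are equal: the negative nanosecond difference in the first form is offset by the (larger) seconds difference, so the final floating-point value is not wrong. If you prefer to stay in integer arithmetic, the usual approach is to borrow a second when the nanosecond difference goes negative; a sketch:
#include <time.h>

/* Return end - start as a normalized timespec (assumes end >= start). */
struct timespec timespec_diff(struct timespec start, struct timespec end)
{
    struct timespec d;
    d.tv_sec  = end.tv_sec  - start.tv_sec;
    d.tv_nsec = end.tv_nsec - start.tv_nsec;
    if (d.tv_nsec < 0) {          /* borrow one second */
        d.tv_sec  -= 1;
        d.tv_nsec += 1000000000L;
    }
    return d;
}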

choose between timeval and clock() to calculate elapsed time in C

I am using timeval as well as the clock() function to see the time difference between two actions in my C program. Somehow timeval seems to give me the right amount of elapsed time in milliseconds, whereas clock() gives a much smaller value.
g_time = clock();
gettimeofday(&decode_t,NULL);
after some time
delay =((float)(clock()-g_time)/(float)CLOCKS_PER_SEC);
gettimeofday(&poll_t,NULL);
delay1 = ((poll_t.tv_sec - decode_t.tv_sec)*1000 + (poll_t.tv_usec - decode_t.tv_usec)/1000.0) ;
printf("\ndelay1: %f delay: %f ",delay1,delay);
usual output is:
delay1: 1577.603027 delay: 0.800000
delay1 is in milliseconds and delay is in seconds.
I am using Arch Linux 64-bit. I can't understand why this is happening.
From the clock(3) manual page:
The clock() function returns an approximation of processor time used by the program.
So the clock function doesn't return the amount of time passed, but a number of "ticks" of processor time that your program has used. And as you know, in a multi-tasking system your program can be paused at any time to let other programs run.
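A quick way to see the difference is to time a sleep(), which consumes wall-clock time but almost no CPU time (a sketch):
#include <stdio.h>
#include <time.h>
#include <sys/time.h>
#include <unistd.h>

int main(void)
{
    struct timeval start, end;
    clock_t c0 = clock();
    gettimeofday(&start, NULL);

    sleep(1);                         /* wall time passes, CPU is idle */

    gettimeofday(&end, NULL);
    clock_t c1 = clock();

    double wall_ms = (end.tv_sec - start.tv_sec) * 1000.0
                   + (end.tv_usec - start.tv_usec) / 1000.0;
    double cpu_s   = (double)(c1 - c0) / CLOCKS_PER_SEC;
    printf("wall: %.1f ms, cpu: %f s\n", wall_ms, cpu_s);  /* ~1000 ms vs ~0 s */
    return 0;
}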

elapsed time in negative value

struct timeval start, end;
start.tv_usec = 0;
end.tv_usec = 0;
gettimeofday(&start, NULL);
functionA();
gettimeofday(&end, NULL);
long t = end.tv_usec - start.tv_usec;
printf("Total elapsed time %ld us \n", t);
I am calculating the total elapsed time like this but it sometimes shows a negative value.
What might be cause the problem?
Thanks in advance.
Keep in mind that there is both a seconds and a microseconds field in that structure. Therefore, if you simply subtract the microseconds fields, you can have a time that is later in seconds but whose microseconds field is smaller. For instance, an end time of 5 seconds, 100 microseconds gives a negative result compared to 4 seconds, 5000 microseconds with the subtraction method you're using. To get the proper result, you have to take into account both the seconds and the microseconds fields of the structure. This can be done as follows:
long seconds = end.tv_sec - start.tv_sec;
long micro_seconds = end.tv_usec - start.tv_usec;
if (micro_seconds < 0)
{
    seconds -= 1;
    micro_seconds += 1000000;   /* borrow one second */
}
long total_micro_seconds = (seconds * 1000000) + micro_seconds;
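Checking the borrow with the numbers from above: end = {5 s, 100 us} and start = {4 s, 5000 us} give seconds = 1 and micro_seconds = -4900; after the borrow, seconds = 0 and micro_seconds = 995100, so the total is 995100 microseconds, which matches 5.000100 s - 4.005000 s.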
maybe something along the lines of:
long t = (end.tv_sec*1000000L + end.tv_usec) - (start.tv_sec*1000000L + start.tv_usec);
From The GNU C Library:
Data Type: struct timeval
The struct timeval structure represents an elapsed time. It is declared in sys/time.h and has the following members:
long int tv_sec
This represents the number of whole seconds of elapsed time.
long int tv_usec
This is the rest of the elapsed time (a fraction of a second), represented as the number of microseconds. It is always less than one million.
The only thing you're subtracting is the microsecond remainder in tv_usec, on top of the full-seconds value in tv_sec. You need to work with both values in order to find the exact microsecond difference between the two times.

How to measure time in milliseconds using ANSI C?

Using only ANSI C, is there any way to measure time with milliseconds precision or more? I was browsing time.h but I only found second precision functions.
There is no ANSI C function that provides better than 1 second time resolution but the POSIX function gettimeofday provides microsecond resolution. The clock function only measures the amount of time that a process has spent executing and is not accurate on many systems.
You can use this function like this:
struct timeval tval_before, tval_after, tval_result;
gettimeofday(&tval_before, NULL);
// Some code you want to time, for example:
sleep(1);
gettimeofday(&tval_after, NULL);
timersub(&tval_after, &tval_before, &tval_result);
printf("Time elapsed: %ld.%06ld\n", (long int)tval_result.tv_sec, (long int)tval_result.tv_usec);
This returns Time elapsed: 1.000870 on my machine.
#include <time.h>
clock_t uptime = clock() / (CLOCKS_PER_SEC / 1000);
I always use the clock_gettime() function, returning time from the CLOCK_MONOTONIC clock. The time returned is the amount of time, in seconds and nanoseconds, since some unspecified point in the past, such as system startup or the epoch.
#include <stdio.h>
#include <stdint.h>
#include <time.h>

int64_t timespecDiff(struct timespec *timeA_p, struct timespec *timeB_p)
{
    /* Use a 64-bit constant so the multiplication cannot overflow a 32-bit long. */
    return ((timeA_p->tv_sec * 1000000000LL) + timeA_p->tv_nsec) -
           ((timeB_p->tv_sec * 1000000000LL) + timeB_p->tv_nsec);
}

int main(int argc, char **argv)
{
    struct timespec start, end;
    clock_gettime(CLOCK_MONOTONIC, &start);

    // Some code I am interested in measuring

    clock_gettime(CLOCK_MONOTONIC, &end);
    int64_t timeElapsed = timespecDiff(&end, &start);
    printf("%lld ns\n", (long long)timeElapsed);
    return 0;
}
Implementing a portable solution
As already mentioned here, there is no proper ANSI solution with sufficient precision for the time-measurement problem, so I want to describe how to get a portable and, if possible, high-resolution time measurement solution.
Monotonic clock vs. time stamps
Generally speaking there are two ways of time measurement:
monotonic clock;
current (date)time stamp.
The first one uses a monotonic clock counter (sometimes called a tick counter) which counts ticks with a predefined frequency, so if you have a tick value and the frequency is known, you can easily convert ticks to elapsed time. It is actually not guaranteed that a monotonic clock reflects the current system time in any way; it may also count ticks since system startup. But it guarantees that the clock always advances regardless of the system state. Usually the frequency is bound to a hardware high-resolution source, which is why it provides high accuracy (this depends on hardware, but most modern hardware has no problems with high-resolution clock sources).
The second way provides a (date)time value based on the current system clock value. It may also have a high resolution, but it has one major drawback: this kind of time value can be affected by different system time adjustments, e.g. a time zone change, a daylight saving time (DST) change, an NTP server update, system hibernation and so on. In some circumstances you can get a negative elapsed-time value, which can lead to undefined behavior. Actually this kind of time source is less reliable than the first one.
So the first rule in time interval measuring is to use a monotonic clock if possible. It usually has a high precision, and it is reliable by design.
Fallback strategy
When implementing a portable solution it is worth considering a fallback strategy: use a monotonic clock if available and fall back to the time-stamp approach if there is no monotonic clock in the system.
Windows
There is a great article called Acquiring high-resolution time stamps on MSDN about time measurement on Windows which describes all the details you may need to know about software and hardware support. To acquire a high precision time stamp on Windows you should:
query a timer frequency (ticks per second) with QueryPerformanceFrequency:
LARGE_INTEGER tcounter;
LONGLONG freq = 0;

if (QueryPerformanceFrequency (&tcounter) != 0)
    freq = tcounter.QuadPart;
The timer frequency is fixed on the system boot so you need to get it only once.
query the current ticks value with QueryPerformanceCounter:
LARGE_INTEGER tcounter;
LONGLONG tick_value = 0;

if (QueryPerformanceCounter (&tcounter) != 0)
    tick_value = tcounter.QuadPart;
scale the ticks to elapsed time, i.e. to microseconds:
LONGLONG usecs = (tick_value - prev_tick_value) / (freq / 1000000);
According to Microsoft you should not have any problems with this approach on Windows XP and later versions in most cases. But you can also use two fallback solutions on Windows:
GetTickCount provides the number of milliseconds that have elapsed since the system was started. It wraps every 49.7 days, so be careful in measuring longer intervals.
GetTickCount64 is a 64-bit version of GetTickCount, but it is available starting from Windows Vista and above.
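Putting the fragments above together, a minimal complete sketch for Windows might look like this (multiplying before dividing avoids losing precision when the frequency is not a multiple of one million; Sleep stands in for the work being measured):
#include <windows.h>
#include <stdio.h>

int main(void)
{
    LARGE_INTEGER freq, start, end;

    QueryPerformanceFrequency(&freq);   /* fixed at boot, query once */
    QueryPerformanceCounter(&start);

    Sleep(100);                         /* work being measured */

    QueryPerformanceCounter(&end);
    long long usecs = (end.QuadPart - start.QuadPart) * 1000000LL / freq.QuadPart;
    printf("elapsed: %lld us\n", usecs);
    return 0;
}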
OS X (macOS)
OS X (macOS) has its own Mach absolute time units which represent a monotonic clock. The best place to start is Apple's article Technical Q&A QA1398: Mach Absolute Time Units, which describes (with code examples) how to use the Mach-specific API to get monotonic ticks. There is also a local question about it called clock_gettime alternative in Mac OS X, which in the end may leave you a bit confused about what to do with possible value overflow, because the counter frequency is given as a numerator and denominator. So, a short example of how to get elapsed time:
get the clock frequency numerator and denominator:
#include <mach/mach_time.h>
#include <stdint.h>

static uint64_t freq_num   = 0;
static uint64_t freq_denom = 0;

void init_clock_frequency ()
{
    mach_timebase_info_data_t tb;

    if (mach_timebase_info (&tb) == KERN_SUCCESS && tb.denom != 0) {
        freq_num   = (uint64_t) tb.numer;
        freq_denom = (uint64_t) tb.denom;
    }
}
You need to do that only once.
query the current tick value with mach_absolute_time:
uint64_t tick_value = mach_absolute_time ();
scale the ticks to elapsed time, i.e. to microseconds, using previously queried numerator and denominator:
uint64_t value_diff = tick_value - prev_tick_value;
/* To prevent overflow */
value_diff /= 1000;
value_diff *= freq_num;
value_diff /= freq_denom;
The main idea to prevent an overflow is to scale down the ticks to desired accuracy before using the numerator and denominator. As the initial timer resolution is in nanoseconds, we divide it by 1000 to get microseconds. You can find the same approach used in Chromium's time_mac.c. If you really need a nanosecond accuracy consider reading the How can I use mach_absolute_time without overflowing?.
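Combined into one small program, the macOS path might look like this (a sketch, scaling down to microseconds before applying the timebase to avoid overflow):
#include <mach/mach_time.h>
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    mach_timebase_info_data_t tb;
    mach_timebase_info(&tb);              /* numer/denom convert ticks to nanoseconds */

    uint64_t start = mach_absolute_time();
    /* ... work being measured ... */
    uint64_t end = mach_absolute_time();

    uint64_t us = (end - start) / 1000;   /* scale down first to prevent overflow */
    us = us * tb.numer / tb.denom;
    printf("elapsed: %llu us\n", (unsigned long long)us);
    return 0;
}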
Linux and UNIX
The clock_gettime call is your best way on any POSIX-friendly system. It can query time from different clock sources, and the one we need is CLOCK_MONOTONIC. Not all systems which have clock_gettime support CLOCK_MONOTONIC, so the first thing you need to do is to check its availability:
if _POSIX_MONOTONIC_CLOCK is defined to a value greater than 0, CLOCK_MONOTONIC is available;
if _POSIX_MONOTONIC_CLOCK is defined to 0, you should additionally check whether it works at runtime; I suggest using sysconf:
#include <unistd.h>
#ifdef _SC_MONOTONIC_CLOCK
if (sysconf (_SC_MONOTONIC_CLOCK) > 0) {
/* A monotonic clock is present */
}
#endif
otherwise a monotonic clock is not supported and you should use a fallback strategy (see below).
Usage of clock_gettime is pretty straightforward:
get the time value:
#include <time.h>
#include <sys/time.h>
#include <stdint.h>
uint64_t get_posix_clock_time ()
{
    struct timespec ts;

    /* Cast before multiplying so a 32-bit time_t cannot overflow. */
    if (clock_gettime (CLOCK_MONOTONIC, &ts) == 0)
        return (uint64_t) ts.tv_sec * 1000000 + ts.tv_nsec / 1000;
    else
        return 0;
}
I've scaled down the time to microseconds here.
calculate the difference with the previous time value received the same way:
uint64_t prev_time_value, time_value;
uint64_t time_diff;
/* Initial time */
prev_time_value = get_posix_clock_time ();
/* Do some work here */
/* Final time */
time_value = get_posix_clock_time ();
/* Time difference */
time_diff = time_value - prev_time_value;
The best fallback strategy is to use the gettimeofday call: it is not monotonic, but it provides quite good resolution. The idea is the same as with clock_gettime, but to get a time value you should:
#include <time.h>
#include <sys/time.h>
#include <stdint.h>
uint64_t get_gtod_clock_time ()
{
    struct timeval tv;

    if (gettimeofday (&tv, NULL) == 0)
        return (uint64_t) tv.tv_sec * 1000000 + tv.tv_usec;
    else
        return 0;
}
Again, the time value is scaled down to microseconds.
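A sketch of how the two pieces can be combined into one function with a compile-time fallback (the function name is illustrative; a stricter version would also consult sysconf when _POSIX_MONOTONIC_CLOCK is 0):
#include <unistd.h>
#include <stdint.h>
#include <time.h>
#include <sys/time.h>

uint64_t get_clock_time_usec (void)
{
#if defined(_POSIX_MONOTONIC_CLOCK) && _POSIX_MONOTONIC_CLOCK >= 0
    /* Monotonic clock available: immune to wall-clock adjustments. */
    struct timespec ts;
    if (clock_gettime (CLOCK_MONOTONIC, &ts) == 0)
        return (uint64_t) ts.tv_sec * 1000000 + ts.tv_nsec / 1000;
#endif
    /* Fallback: not monotonic, but widely available. */
    struct timeval tv;
    if (gettimeofday (&tv, NULL) == 0)
        return (uint64_t) tv.tv_sec * 1000000 + tv.tv_usec;
    return 0;
}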
SGI IRIX
IRIX has the clock_gettime call, but it lacks CLOCK_MONOTONIC. Instead it has its own monotonic clock source defined as CLOCK_SGI_CYCLE which you should use instead of CLOCK_MONOTONIC with clock_gettime.
Solaris and HP-UX
Solaris has its own high-resolution timer interface gethrtime which returns the current timer value in nanoseconds. Though the newer versions of Solaris may have clock_gettime, you can stick to gethrtime if you need to support old Solaris versions.
Usage is simple:
#include <sys/time.h>

void time_measure_example ()
{
    hrtime_t prev_time_value, time_value;
    hrtime_t time_diff;

    /* Initial time */
    prev_time_value = gethrtime ();
    /* Do some work here */
    /* Final time */
    time_value = gethrtime ();
    /* Time difference */
    time_diff = time_value - prev_time_value;
}
HP-UX lacks clock_gettime, but it supports gethrtime which you should use in the same way as on Solaris.
BeOS
BeOS also has its own high-resolution timer interface, system_time, which returns the number of microseconds that have elapsed since the computer was booted.
Example usage:
#include <kernel/OS.h>

void time_measure_example ()
{
    bigtime_t prev_time_value, time_value;
    bigtime_t time_diff;

    /* Initial time */
    prev_time_value = system_time ();
    /* Do some work here */
    /* Final time */
    time_value = system_time ();
    /* Time difference */
    time_diff = time_value - prev_time_value;
}
OS/2
OS/2 has its own API to retrieve high-precision time stamps:
query a timer frequency (ticks per unit) with DosTmrQueryFreq (for GCC compiler):
#define INCL_DOSPROFILE
#define INCL_DOSERRORS
#include <os2.h>
#include <stdint.h>
ULONG freq;
DosTmrQueryFreq (&freq);
query the current ticks value with DosTmrQueryTime:
QWORD    tcounter;
uint64_t time_low;
uint64_t time_high;
uint64_t timestamp;

if (DosTmrQueryTime (&tcounter) == NO_ERROR) {
    time_low  = (uint64_t) tcounter.ulLo;
    time_high = (uint64_t) tcounter.ulHi;

    timestamp = (time_high << 32) | time_low;
}
scale the ticks to elapsed time, i.e. to microseconds:
uint64_t usecs = (timestamp - prev_timestamp) / (freq / 1000000);
Example implementation
You can take a look at the plibsys library, which implements all of the strategies described above (see ptimeprofiler*.c for details).
timespec_get from C11
Returns up to nanoseconds, rounded to the resolution of the implementation.
Looks like an ANSI ripoff from POSIX' clock_gettime.
Example: a printf is done every 100ms on Ubuntu 15.10:
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

static long get_nanos(void) {
    struct timespec ts;
    timespec_get(&ts, TIME_UTC);
    return (long)ts.tv_sec * 1000000000L + ts.tv_nsec;
}

int main(void) {
    long nanos;
    long last_nanos;
    long start;
    nanos = get_nanos();
    last_nanos = nanos;
    start = nanos;
    while (1) {
        nanos = get_nanos();
        if (nanos - last_nanos > 100000000L) {
            printf("current nanos: %ld\n", nanos - start);
            last_nanos = nanos;
        }
    }
    return EXIT_SUCCESS;
}
The C11 N1570 standard draft, 7.27.2.5 "The timespec_get function", says:
If base is TIME_UTC, the tv_sec member is set to the number of seconds since an implementation defined epoch, truncated to a whole value and the tv_nsec member is set to the integral number of nanoseconds, rounded to the resolution of the system clock. (321)
321) Although a struct timespec object describes times with nanosecond resolution, the available resolution is system dependent and may even be greater than 1 second.
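That footnote is why the achievable resolution has to be checked per system. Plain ANSI C offers no way to query it, but on POSIX systems the advertised resolution of a clock can be read with clock_getres (a sketch):
#include <stdio.h>
#include <time.h>

int main(void)
{
    struct timespec res;
    /* Reports the clock's advertised granularity, not the call overhead. */
    if (clock_getres(CLOCK_REALTIME, &res) == 0)
        printf("CLOCK_REALTIME resolution: %ld s %ld ns\n",
               (long)res.tv_sec, res.tv_nsec);
    return 0;
}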
C++11 also got std::chrono::high_resolution_clock: C++ Cross-Platform High-Resolution Timer
glibc 2.21 implementation
Can be found under sysdeps/posix/timespec_get.c as:
int
timespec_get (struct timespec *ts, int base)
{
  switch (base)
    {
    case TIME_UTC:
      if (__clock_gettime (CLOCK_REALTIME, ts) < 0)
        return 0;
      break;
    default:
      return 0;
    }
  return base;
}
so clearly:
only TIME_UTC is currently supported
it forwards to __clock_gettime (CLOCK_REALTIME, ts), which is a POSIX API: http://pubs.opengroup.org/onlinepubs/9699919799/functions/clock_getres.html
Linux x86-64 has a clock_gettime system call.
Note that this is not a fail-proof micro-benchmarking method because:
man clock_gettime says that this measure may have discontinuities if you change some system time setting while your program runs. This should be a rare event of course, and you might be able to ignore it.
this measures wall time, so if the scheduler decides to forget about your task, it will appear to run for longer.
For those reasons getrusage() might be a better POSIX benchmarking tool, despite its lower (microsecond) maximum precision.
More information at: Measure time in Linux - time vs clock vs getrusage vs clock_gettime vs gettimeofday vs timespec_get?
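A sketch of what timing with getrusage() looks like (it reports CPU time consumed by the process, split into user and system time, with microsecond fields):
#include <stdio.h>
#include <sys/time.h>
#include <sys/resource.h>

int main(void)
{
    struct rusage before, after;
    getrusage(RUSAGE_SELF, &before);

    /* ... code being benchmarked ... */

    getrusage(RUSAGE_SELF, &after);
    long user_us = (after.ru_utime.tv_sec  - before.ru_utime.tv_sec) * 1000000L
                 + (after.ru_utime.tv_usec - before.ru_utime.tv_usec);
    printf("user CPU time: %ld us\n", user_us);
    return 0;
}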
The best precision you can possibly get is through the use of the x86-only rdtsc instruction, which can provide clock-cycle resolution (one must of course take into account the cost of the rdtsc call itself, which can be measured easily at application startup).
The main catch here is measuring the number of clocks per second, which shouldn't be too hard.
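With GCC or Clang the instruction is exposed as the __rdtsc() intrinsic; a sketch of reading it (converting cycles to time still requires knowing the clock rate, as noted above):
#include <stdio.h>
#include <x86intrin.h>   /* __rdtsc(), GCC/Clang, x86 only */

int main(void)
{
    unsigned long long start = __rdtsc();
    /* ... code being measured ... */
    unsigned long long end = __rdtsc();
    printf("elapsed cycles: %llu\n", end - start);
    return 0;
}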
The accepted answer is good enough, but my solution is simpler. I just tested it on Linux, using gcc (Ubuntu 7.2.0-8ubuntu3.2) 7.2.0.
It also uses gettimeofday; tv_sec is the seconds part, and tv_usec is in microseconds, not milliseconds.
#include <stdio.h>
#include <sys/time.h>
#include <unistd.h>

long currentTimeMillis() {
    struct timeval time;
    gettimeofday(&time, NULL);
    return time.tv_sec * 1000 + time.tv_usec / 1000;
}

int main() {
    printf("%ld\n", currentTimeMillis());
    // wait 1 second
    sleep(1);
    printf("%ld\n", currentTimeMillis());
    return 0;
}
It prints:
1522139691342
1522139692342, exactly one second later.
As of ANSI/ISO C11 or later, you can use timespec_get() to obtain millisecond, microsecond, or nanosecond timestamps, like this:
#include <stdint.h>  // uint64_t
#include <time.h>
/// Convert seconds to milliseconds
#define SEC_TO_MS(sec) ((sec)*1000)
/// Convert seconds to microseconds
#define SEC_TO_US(sec) ((sec)*1000000)
/// Convert seconds to nanoseconds
#define SEC_TO_NS(sec) ((sec)*1000000000)
/// Convert nanoseconds to seconds
#define NS_TO_SEC(ns) ((ns)/1000000000)
/// Convert nanoseconds to milliseconds
#define NS_TO_MS(ns) ((ns)/1000000)
/// Convert nanoseconds to microseconds
#define NS_TO_US(ns) ((ns)/1000)
/// Get a time stamp in milliseconds.
uint64_t millis()
{
    struct timespec ts;
    timespec_get(&ts, TIME_UTC);
    uint64_t ms = SEC_TO_MS((uint64_t)ts.tv_sec) + NS_TO_MS((uint64_t)ts.tv_nsec);
    return ms;
}

/// Get a time stamp in microseconds.
uint64_t micros()
{
    struct timespec ts;
    timespec_get(&ts, TIME_UTC);
    uint64_t us = SEC_TO_US((uint64_t)ts.tv_sec) + NS_TO_US((uint64_t)ts.tv_nsec);
    return us;
}

/// Get a time stamp in nanoseconds.
uint64_t nanos()
{
    struct timespec ts;
    timespec_get(&ts, TIME_UTC);
    uint64_t ns = SEC_TO_NS((uint64_t)ts.tv_sec) + (uint64_t)ts.tv_nsec;
    return ns;
}
// NB: for all 3 timestamp functions above: gcc defines the type of the internal
// `tv_sec` seconds value inside the `struct timespec`, which is used
// internally in these functions, as a signed `long int`. For architectures
// where that type is 64 bits, signed overflow happens after 2^63 sec
// ~= 2.9 x 10^11 years. For architectures where it is 32 bits, it happens
// after 2^31 sec ~= 68 years; with an implementation-defined epoch of 1970
// that is the well-known year-2038 problem, i.e. only 2038 - 2021 = 17 years
// away, and an earlier epoch would make it sooner still. Hopefully your
// 32-bit program won't need to run that long. :) To see, by inspection, what
// your system's epoch is, simply print out a timestamp and calculate how far
// back a timestamp of 0 would have occurred (e.g. convert the timestamp to
// years and subtract that number of years from the present year).
For a much-more-thorough answer of mine, including with an entire timing library I wrote, see here: How to get a simple timestamp in C.
@Ciro Santilli Путлер also presents a concise demo of C11's timespec_get() function here, which is how I first learned how to use that function.
In my more-thorough answer, I explain that on my system, the best resolution possible is ~20ns, but the resolution is hardware-dependent and can vary from system to system.
Under Windows:
#include <windows.h>
#include <wchar.h>

SYSTEMTIME t;
wchar_t buff[64];

GetLocalTime(&t);
swprintf_s(buff, 64, L"[%02d:%02d:%02d:%d]\t", t.wHour, t.wMinute, t.wSecond, t.wMilliseconds);
