C microseconds in Windows? [duplicate]

I'm looking for a way to measure microseconds in C++ on Windows.
I read about the clock() function, but it only gives me milliseconds...
Is there a way to do it?

Use QueryPerformanceCounter and QueryPerformanceFrequency for the finest-grained timing on Windows.
MSDN has an article on code timing with these APIs (the sample code is in VB, sorry).
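As an illustration, here is a minimal sketch in C of timing a section of code with those two calls (Sleep is just a stand-in for the code under test):

#include <stdio.h>
#include <windows.h>

int main(void) {
    LARGE_INTEGER freq, start, stop;
    QueryPerformanceFrequency(&freq);    /* ticks per second, fixed at boot */
    QueryPerformanceCounter(&start);
    Sleep(10);                           /* stand-in for the code being timed */
    QueryPerformanceCounter(&stop);
    /* multiply before dividing so precision isn't lost */
    long long us = (stop.QuadPart - start.QuadPart) * 1000000LL / freq.QuadPart;
    printf("elapsed: %lld microseconds\n", us);
    return 0;
}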

There are two high-precision (100 ns resolution) clocks available in Windows:
GetSystemTimePreciseAsFileTime: 100ns resolution, synchronized to UTC
QueryPerformanceCounter: 100ns resolution, not synchronized to UTC
QueryPerformanceCounter is independent of, and isn't synchronized to, any external time reference. It is useful for measuring absolute timespans.
GetSystemTimePreciseAsFileTime is synchronized. If Windows is gradually speeding up or slowing down your clock to bring it into sync with a time server, GetSystemTimePreciseAsFileTime will correspondingly run slower or faster than an absolute timespan.
The guidance is:
if you need UTC-synchronized timestamps, for use across multiple systems for example: use GetSystemTimePreciseAsFileTime
if you only need absolute timespans: use QueryPerformanceCounter
Bonus Reading
MSDN: Acquiring high-resolution time stamps
MSDN: QueryPerformanceCounter function
MSDN: GetSystemTimePreciseAsFileTime function
MSDN: GetSystemTimeAdjustment function (where you can see if Windows is currently running your clock faster or slower in order to catch up to current true UTC time)
All the kernel-level tracing infrastructure in Windows uses QueryPerformanceCounter for measuring absolute timespans.
GetSystemTimeAsFileTime would be useful for something like logging.
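For a UTC-synchronized microsecond timestamp, here is a hedged sketch built on GetSystemTimePreciseAsFileTime (note it requires Windows 8 / Server 2012 or later; the only magic number is the 11644473600-second offset between the 1601 FILETIME epoch and the 1970 Unix epoch):

#include <windows.h>

/* Microseconds since 1970-01-01 UTC. */
unsigned long long utc_microseconds(void) {
    FILETIME ft;
    ULARGE_INTEGER t;
    GetSystemTimePreciseAsFileTime(&ft);  /* 100-ns units since 1601-01-01 */
    t.LowPart = ft.dwLowDateTime;
    t.HighPart = ft.dwHighDateTime;
    /* rebase to the Unix epoch, then convert 100-ns units to microseconds */
    return (t.QuadPart - 116444736000000000ULL) / 10ULL;
}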

http://www.boost.org/doc/libs/1_45_0/doc/html/date_time/posix_time.html
although:
Get the UTC time using a sub-second resolution clock. On Unix systems this is implemented using gettimeofday. On most Win32 platforms it is implemented using ftime. Win32 systems often do not achieve microsecond resolution via this API. If higher resolution is critical to your application, test your platform to see the achieved resolution.

I guess there's nothing wrong with the QueryPerformance* answer already given: the question was for a Windows-specific solution, and this is it. For a cross-platform C++ solution, I guess boost::chrono makes the most sense. Its Windows implementation uses the QueryPerformance* methods, and you immediately have a Linux and Mac solution too.

More recent implementations can provide microsecond-resolution timestamps on Windows
with high accuracy. The joint use of the system file time and the performance counter
allows such accuracy; see this thread or this one.
One of the recent implementations can be found at the Windows Timestamp Project

Since no one has mentioned a pure C++ approach yet: as of C++11,
#include <chrono>
std::chrono::duration_cast<std::chrono::microseconds>(std::chrono::system_clock::now().time_since_epoch())
gets you the number of microseconds since 1970-01-01, and a port of PHP's microtime(true) API would be:
#include <chrono>

double microtime() {
    const auto us = std::chrono::duration_cast<std::chrono::microseconds>(
        std::chrono::system_clock::now().time_since_epoch());
    return us.count() / 1000000.0;
}
which gets you the number of seconds since 1970-01-01 with microsecond precision.

Related

Timing/Clocks in the Linux Kernel

I am writing a device driver and want to benchmark a few pieces of code to get a feel for where I could be experiencing some bottlenecks. As a result, I want to time a few segments of code.
In userspace, I'm used to using clock_gettime() with CLOCK_MONOTONIC. Looking at the kernel sources (note that I am running kernel 4.4, but will be upgrading eventually), it appears I have a few choices:
getnstimeofday()
getrawmonotonic()
get_monotonic_coarse()
getboottime()
For convenience, I have written a function (see below) to get me the current time. I am currently using getrawmonotonic() because I figured this is what I wanted. My function returns the current time as a ktime_t, so then I can use ktime_sub() to get the elapsed time between two times.
static ktime_t get_time_now(void) {
    struct timespec time_now;
    getrawmonotonic(&time_now);
    return timespec_to_ktime(time_now);
}
Given the available high resolution clocking functions (jiffies won't work for me), what is the best function for my given application? More generally, I'm interested in any/all documentation about these functions and the underlying clocks. Primarily, I am curious if the clocks are affected by any timing adjustments and what their epochs are.
Are you comparing measurements you're making in the kernel directly with measurements you've made in userspace? I'm wondering about your choice to use CLOCK_MONOTONIC_RAW as the timebase in the kernel, since you chose to use CLOCK_MONOTONIC in userspace. If you're looking for an analogous and non-coarse function in the kernel which returns CLOCK_MONOTONIC (and not CLOCK_MONOTONIC_RAW) time, look at ktime_get_ts().
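If all you need is a ktime_t, note that ktime_get() is the sibling of ktime_get_ts() that returns CLOCK_MONOTONIC time as a ktime_t directly (ktime_get_ts() fills a timespec instead). A minimal sketch of the helper from the question, assuming your tree still exports it in this form (these interfaces have shifted between kernel versions, so check yours):

#include <linux/ktime.h>

/* ktime_get() already returns CLOCK_MONOTONIC time as a ktime_t,
 * so no timespec round-trip is needed. */
static ktime_t get_time_now(void)
{
        return ktime_get();
}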
It's possible you could also use raw kernel ticks to measure what you're trying to measure (rather than jiffies, which represent multiple kernel ticks), but I don't know how to do that off the top of my head.
In general if you're trying to find documentation about Linux timekeeping, you can take a look at Documentation/timers/timekeeping.txt. Usually when I try to figure out kernel timekeeping I also unfortunately just spend a lot of time reading through the kernel source in time/ (time/timekeeping.c is where most of the functions you're thinking of using right now live... it's not super well-commented, but you can probably wrap your head around it with a little bit of time). And if you're feeling altruistic after learning, remember that updating documentation is a good way to contribute to the kernel :)
To your question at the end about how clocks are affected by timing adjustments and what epochs are used:
CLOCK_REALTIME always starts at midnight on Jan 01, 1970 (colloquially known as the Unix epoch) if there are no RTCs present or if it hasn't already been set by an application in userspace (or, I guess, a kernel module if you want to be weird). Usually the userspace application which sets this is an NTP daemon: ntpd, chrony, or similar. Its value represents the number of seconds passed since 1970.
CLOCK_MONOTONIC represents the number of seconds passed since the device was booted up, and if the device is suspended at a CLOCK_MONOTONIC value of x, when it's resumed, it resumes with CLOCK_MONOTONIC set to x as well. It's not supported on ancient kernels.
CLOCK_BOOTTIME is like CLOCK_MONOTONIC, but has time added to it across suspend/resume -- so if you suspend at a CLOCK_BOOTTIME value of x, for 5 seconds, you'll come back with a CLOCK_BOOTTIME value of x+5. It's not supported on old kernels (its support came about after CLOCK_MONOTONIC).
Fully-fledged NTP daemons (not SNTP daemons -- that's a more lightweight and less accurate protocol) set the system clock, or CLOCK_REALTIME, using settimeofday() for large adjustments ("steps" or "jumps") -- these immediately affect the total value of CLOCK_REALTIME -- and using adjtime() for smaller adjustments ("slewing" or "skewing") -- these affect the rate at which CLOCK_REALTIME moves forward per CPU clock cycle. I think for some architectures you can actually tune the CPU clock cycle through some means or other, and the kernel implements adjtime() this way if possible, but don't quote me on that. From both the bulk of the kernel's perspective and userspace's perspective, it doesn't actually matter.
CLOCK_MONOTONIC, CLOCK_BOOTTIME, and all other friends slew at the same rate as CLOCK_REALTIME, which is actually fairly convenient in most situations. They're not affected by steps in CLOCK_REALTIME, only by slews.
CLOCK_MONOTONIC_RAW, CLOCK_BOOTTIME_RAW, and friends do NOT slew at the same rate as CLOCK_REALTIME, CLOCK_MONOTONIC, and CLOCK_BOOTTIME. I guess this is useful sometimes.
Linux provides some process/thread-specific clocks to userspace (CLOCK_PROCESS_CPUTIME_ID, CLOCK_THREAD_CPUTIME_ID), which I know nothing about. I do not know if they're easily accessible in the kernel.
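To see the clocks above side by side from userspace, here is a small hedged demo (CLOCK_MONOTONIC_RAW and CLOCK_BOOTTIME may need _GNU_SOURCE and a reasonably recent kernel/glibc; link with -lrt on older glibc):

#define _GNU_SOURCE
#include <stdio.h>
#include <time.h>

static void show(const char *name, clockid_t id) {
    struct timespec ts;
    if (clock_gettime(id, &ts) == 0)
        printf("%-22s %lld.%09ld\n", name, (long long)ts.tv_sec, ts.tv_nsec);
}

int main(void) {
    show("CLOCK_REALTIME", CLOCK_REALTIME);           /* since 1970; stepped and slewed */
    show("CLOCK_MONOTONIC", CLOCK_MONOTONIC);         /* since boot; slewed; pauses in suspend */
    show("CLOCK_MONOTONIC_RAW", CLOCK_MONOTONIC_RAW); /* since boot; never slewed */
    show("CLOCK_BOOTTIME", CLOCK_BOOTTIME);           /* like MONOTONIC but counts suspend */
    return 0;
}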

Avoid use of gettimeofday() API

gettimeofday() is hardware-dependent via the RTC.
Can someone suggest how we can avoid using it in application programming?
How can we approach the use of system ticks?
Thanks in advance!
To get time in ticks you might like to use times().
However, it is not clear whether those ticks are measured from boot time.
From man times:
RETURN VALUE
times() returns the number of clock ticks that have elapsed since an
arbitrary point in the past. [...]
[...]
NOTES
On Linux, the "arbitrary point in the past" from which the return
value of times() is measured has varied across kernel versions. On
Linux 2.4 and earlier this point is the moment the system was booted.
Since Linux 2.6, this point is (2^32/HZ) - 300 (i.e., about 429
million) seconds before system boot time. This variability across
kernel versions (and across UNIX implementations), combined with the
fact that the returned value may overflow the range of clock_t, means
that a portable application would be wise to avoid using this value.
To measure changes in elapsed time, use clock_gettime(2) instead.
Reading this, using clock_gettime() with the CLOCK_BOOTTIME clock might be the more secure and more portable way to go. Whether this function and/or clock is available on systems without an RTC, I'm not sure. Others are encouraged to clarify this.
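As a sketch of that suggestion (assuming a kernel new enough to expose CLOCK_BOOTTIME; older glibc needs -lrt):

#define _GNU_SOURCE
#include <stdio.h>
#include <time.h>

int main(void) {
    struct timespec ts;
    /* Prefer CLOCK_BOOTTIME (counts time spent suspended); fall back to
       CLOCK_MONOTONIC on kernels that lack it. */
    if (clock_gettime(CLOCK_BOOTTIME, &ts) != 0 &&
        clock_gettime(CLOCK_MONOTONIC, &ts) != 0) {
        perror("clock_gettime");
        return 1;
    }
    printf("up for %lld.%09ld seconds\n", (long long)ts.tv_sec, ts.tv_nsec);
    return 0;
}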

Invoke function at future time

In a long-running server program (built in C) in a Posix (Linux) environment: what is the best approach to get a function to execute at a specific time in the future? It doesn't need to execute in any particular thread, but the accuracy of the execution time needs to be a few milliseconds. General approaches or specific code is appreciated.
There are some high-resolution clock functions in the GNU C library (sys/timex.h); although they are not POSIX, they are portable across Linux systems.
High Accuracy Clock -- The GNU C Library
Those functions are prefixed 'ntp' although they do not require or make use of any ntp service, so the relationship is purely superficial.
Beware that although the granularity is in microseconds, the Linux kernel has a userspace latency of around 10 ms, so don't expect anything more accurate than that.
Once you have the current high resolution time, you could then calculate a duration and use (posix) nanosleep (but again, round to 10ms) to set a delay. There is also a clock_nanosleep which might be of interest.
You should look up POSIX timers. They give you a simple interface for scheduling future work. You can have one send you a signal in x seconds/nanoseconds (and then you can set your function as its signal handler). Look up timer_create.
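A minimal sketch of that approach, using SIGEV_THREAD so the callback runs in a thread instead of a signal handler (link with -lrt on older glibc; the 2.5-second delay is arbitrary):

#include <stdio.h>
#include <signal.h>
#include <time.h>
#include <unistd.h>

static void on_timer(union sigval sv) {
    (void)sv;
    printf("timer fired\n");  /* runs in its own thread, not a signal handler */
}

int main(void) {
    timer_t timerid;
    struct sigevent sev = {0};
    sev.sigev_notify = SIGEV_THREAD;       /* deliver via a callback thread */
    sev.sigev_notify_function = on_timer;
    if (timer_create(CLOCK_MONOTONIC, &sev, &timerid) != 0) {
        perror("timer_create");
        return 1;
    }
    struct itimerspec its = { .it_value = { 2, 500000000 } };  /* one-shot, 2.5 s */
    timer_settime(timerid, 0, &its, NULL);
    sleep(3);  /* keep the process alive long enough for the demo */
    return 0;
}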

Using ANSI C on the Windows platform, can I get the system time with millisecond accuracy?

I need to get millisecond accuracy. I took a look at this question, but I am working on Windows: it gives linking errors for the POSIX functions.
It would be very good if I could get UTC time since 1970 with millisecond precision.
Not in ANSI C, but the Windows API provides a GetSystemTime function as illustrated here: https://learn.microsoft.com/en-us/windows/win32/api/minwinbase/ns-minwinbase-systemtime
Sorry, but you can't do that using ANSI C or the Windows API.
You can get the system time with a millisecond resolution using GetSystemTime or with a 100-nanosecond resolution using GetSystemTimeAsFileTime, but the accuracy will not be that good. The system time is only updated at each clock interval, which is somewhere around 10-15 milliseconds depending on the underlying architecture (SMP, Uniprocessor, ...).
There are ways to extrapolate the system time using different algorithms of varying complexity, but without the support of the operating system you'll never be guaranteed a correct high-resolution clock time.
In the Windows API there is a SYSTEMTIME structure and some system calls to get the system time and the local time of your machine. You can use it this way:

#include <windows.h>

SYSTEMTIME systime;
GetSystemTime(&systime);
unsigned int millisec = systime.wMilliseconds;
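If you specifically want milliseconds since 1970 (what the question asks for), here is a hedged sketch that rebases the 1601-based FILETIME onto the Unix epoch:

#include <windows.h>

/* Milliseconds since 1970-01-01 UTC. */
unsigned long long unix_millis(void) {
    FILETIME ft;
    ULARGE_INTEGER t;
    GetSystemTimeAsFileTime(&ft);   /* 100-ns units since 1601-01-01 */
    t.LowPart = ft.dwLowDateTime;
    t.HighPart = ft.dwHighDateTime;
    /* 11644473600 seconds separate the 1601 and 1970 epochs */
    return (t.QuadPart - 116444736000000000ULL) / 10000ULL;
}

Just remember the accuracy caveat above: the value only advances at each clock interval.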

How can I find the execution time of a section of my program in C?

I'm trying to find a way to get the execution time of a section of code in C. I've already tried both time() and clock() from time.h, but it seems that time() returns seconds and clock() seems to give me milliseconds (or centiseconds?) I would like something more precise though. Is there a way I can grab the time with at least microsecond precision?
This only needs to be able to compile on Linux.
You referred to clock() and time() - were you looking for gettimeofday()?
That will fill in a struct timeval, which contains seconds and microseconds.
Of course the actual resolution is up to the hardware.
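A minimal sketch of interval timing with it (error checks omitted):

#include <stdio.h>
#include <sys/time.h>

int main(void) {
    struct timeval start, stop;
    gettimeofday(&start, NULL);
    /* ... code to time ... */
    gettimeofday(&stop, NULL);
    long long us = (stop.tv_sec - start.tv_sec) * 1000000LL
                 + (stop.tv_usec - start.tv_usec);
    printf("elapsed: %lld microseconds\n", us);
    return 0;
}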
For what it's worth, here's one that's just a few macros:
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

clock_t startm, stopm;
#define START if ((startm = clock()) == -1) { printf("Error calling clock"); exit(1); }
#define STOP  if ((stopm = clock()) == -1) { printf("Error calling clock"); exit(1); }
#define PRINTTIME printf("%6.3f seconds used by the processor.", ((double)stopm - startm) / CLOCKS_PER_SEC);
Then just use it with:
int main(void) {
    START;
    // Do stuff you want to time
    STOP;
    PRINTTIME;
    return 0;
}
From http://ctips.pbwiki.com/Timer
You want a profiler application.
Search keywords on SO and in search engines: linux profiling
Have a look at gettimeofday, clock_*, or get/setitimer.
Try "bench.h"; it lets you put a START_TIMER; and STOP_TIMER("name"); into your code, allowing you to arbitrarily benchmark any section of code (note: only recommended for short sections, not things taking dozens of milliseconds or more). Its accurate to the clock cycle, though in some rare cases it can change how the code in between is compiled, in which case you're better off with a profiler (though profilers are generally more effort to use for specific sections of code).
It only works on x86.
You might want to google for an instrumentation tool.
You won't find a library call which lets you get past the clock resolution of your platform. Either use a profiler (man gprof) as another poster suggested, or - quick & dirty - put a loop around the offending section of code to execute it many times, and use clock().
gettimeofday() provides you with a resolution of microseconds, whereas clock_gettime() provides you with a resolution of nanoseconds.
int clock_gettime(clockid_t clk_id, struct timespec *tp);
The clk_id identifies the clock to be used. Use CLOCK_REALTIME if you want a system-wide clock visible to all processes. Use CLOCK_PROCESS_CPUTIME_ID for per-process timer and CLOCK_THREAD_CPUTIME_ID for a thread-specific timer.
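For measuring an interval specifically, CLOCK_MONOTONIC (my substitution here, not mentioned above) is usually the safer pick, since it isn't stepped when the system clock changes. A minimal sketch (older glibc needs -lrt):

#include <stdio.h>
#include <time.h>

int main(void) {
    struct timespec start, stop;
    clock_gettime(CLOCK_MONOTONIC, &start);  /* unaffected by clock steps */
    /* ... code to time ... */
    clock_gettime(CLOCK_MONOTONIC, &stop);
    long long ns = (stop.tv_sec - start.tv_sec) * 1000000000LL
                 + (stop.tv_nsec - start.tv_nsec);
    printf("elapsed: %lld nanoseconds\n", ns);
    return 0;
}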
It depends on the conditions. Profilers are nice for general global views; however, if you really need an accurate view, my recommendation is KISS. Simply run the code in a loop such that it takes a minute or so to complete. Then compute a simple average based on the total run time and the number of iterations executed.
This approach allows you to:
Obtain accurate results with low-resolution timers.
Avoid issues where instrumentation interferes with high-speed caches (L2, L1, branch predictors, etc.) close to the processor. However, running the same code in a tight loop can also produce optimistic results that may not reflect real-world conditions.
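A sketch of that loop-and-average approach (the volatile sink and the dummy work are stand-ins to keep the optimizer from deleting the loop):

#include <stdio.h>
#include <time.h>

int main(void) {
    enum { N = 1000000 };          /* enough iterations to swamp timer resolution */
    volatile double sink = 0.0;    /* prevents the work from being optimized away */
    clock_t begin = clock();
    for (int i = 0; i < N; i++)
        sink += i * 0.5;           /* stand-in for the code under test */
    clock_t end = clock();
    printf("%g seconds per iteration\n",
           (double)(end - begin) / CLOCKS_PER_SEC / N);
    return 0;
}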
I don't know which environment/OS you are working on, but your timing may be inaccurate if another thread, task, or process preempts your timed code in the middle. I suggest exploring mechanisms such as mutexes or semaphores to prevent other threads from preempting your process.
If you are developing on x86 or x64, why not use the Time Stamp Counter (RDTSC)?
It will be more reliable than ANSI C functions like time() or clock(), as RDTSC is a single atomic instruction. Using the C functions for this purpose can introduce problems, as you have no guarantee that the thread they are executing in will not be switched out, in which case the value they return will not accurately describe the execution time you are trying to measure.
With RDTSC you can measure this better. You will need to convert the tick count back into a human-readable H:M:S format, which depends on the processor's clock frequency, but google around and I am sure you will find examples.
However, even with RDTSC you will be including the time your code was switched out of execution. While it is a better solution than using time()/clock(), if you need an exact measurement you will have to turn to a profiler that instruments your code and accounts for when your code is not actually executing due to context switches or whatever.
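A hedged sketch of reading the TSC via compiler intrinsics (this yields ticks, not time: you still have to divide by the TSC frequency, and serious measurements serialize with fences or __rdtscp):

#include <stdio.h>
#include <stdint.h>
#ifdef _MSC_VER
#include <intrin.h>
#else
#include <x86intrin.h>
#endif

int main(void) {
    uint64_t start = __rdtsc();
    /* ... code to time ... */
    uint64_t stop = __rdtsc();
    printf("%llu TSC ticks\n", (unsigned long long)(stop - start));
    return 0;
}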
