Light-weight timer in C

I'm looking for a lightweight timer to measure the timing of a few sections of C code. The timer implementation shouldn't add noticeably to the overall program execution time.

Look at clock_gettime for POSIX-compliant platforms; you can roll this yourself quite easily by taking one timestamp and comparing it with another taken a little later.
Remember to use the CLOCK_PROCESS_CPUTIME_ID or CLOCK_THREAD_CPUTIME_ID clock IDs to specify that you want the CPU time consumed by just that process or thread, and not the wider, absolute "wall" time.
An alternative on Windows might be GetProcessTimes.
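A minimal sketch of that approach (assuming a POSIX system; link with -lrt on older glibc). CLOCK_PROCESS_CPUTIME_ID gives CPU time used by the process; swap in CLOCK_MONOTONIC if you want wall-clock time instead:
#include <stdio.h>
#include <time.h>

int main(void)
{
    struct timespec start, end;
    clock_gettime(CLOCK_PROCESS_CPUTIME_ID, &start);

    /* ... section of code to measure (placeholder workload) ... */
    volatile long sink = 0;
    for (long i = 0; i < 1000000; i++)
        sink += i;

    clock_gettime(CLOCK_PROCESS_CPUTIME_ID, &end);

    long long ns = (long long)(end.tv_sec - start.tv_sec) * 1000000000LL
                 + (end.tv_nsec - start.tv_nsec);
    printf("Section took %lld ns of CPU time\n", ns);
    return 0;
}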

What is time?
Imagine your code took 3 milliseconds ... but it ran on 3 cores ... it used 2 milliseconds on core 1, 1.5 milliseconds on core 2, and 1.2 milliseconds on core 3, for a total of 4.7 milliseconds.
So ... is 3 milliseconds the same as 4.7 milliseconds?
Oh, and don't forget that those 4.7 milliseconds were in fact also used to filter incoming internet connections and to download the anti-virus database.
Use a profiler, and even then, don't trust the results :)
For POSIX, try gettimeofday() (obsolescent) or clock_gettime().
For Windows, apparently, you can use GetSystemTime().

The closest thing to profiling without observation interference is oprofile. But it can't directly measure intervals; it only gives you a statistical map of where the whole program (or whole system) is spending its time.
If you really want cheap interval timing, on x86 you can use the rdtsc instruction in inline asm.
static inline unsigned rdtsc(void)
{
    unsigned x;
    /* RDTSC places the counter in EDX:EAX; keep only the low 32 bits (EAX)
       and tell the compiler EDX is clobbered. */
    __asm__ __volatile__ ( "rdtsc" : "=a"(x) : : "edx" );
    return x;
}
Use this to save the timestamp before and after and take the difference. You could modify this code to save the full 64-bit result, but I opted just for the 32-bit result assuming you'll be timing intervals shorter than 4 billion cycles and don't want to waste time on 64-bit subtraction.
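For reference, a sketch of the 64-bit variant mentioned above (rdtsc leaves the low half in EAX and the high half in EDX, so capture both and combine):
static inline unsigned long long rdtsc64(void)
{
    unsigned lo, hi;
    __asm__ __volatile__ ( "rdtsc" : "=a"(lo), "=d"(hi) );
    return ((unsigned long long)hi << 32) | lo;
}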


Do CLOCK_MONOTONIC and CLOCK_MONOTONIC_COARSE have the same base?

The man page for clock_gettime() describes CLOCK_MONOTONIC_COARSE as:
A faster but less precise version of CLOCK_MONOTONIC. Use when you need very fast, but
not fine-grained timestamps.
What does it mean for one to be a "version of" the other?
Can I validly compare one to the other, assuming I truncate a CLOCK_MONOTONIC value to the same precision as the coarse one?
Here is the man page that lists the different "versions" of POSIX/Linux clocks:
https://linux.die.net/man/2/clock_gettime
Sufficiently recent versions of glibc and the Linux kernel support the
following clocks:
CLOCK_REALTIME
System-wide clock that measures real (i.e., wall-clock) time.
Setting this clock requires appropriate privileges. This clock is
affected by discontinuous jumps in the system time (e.g., if the
system administrator manually changes the clock), and by the
incremental adjustments performed by adjtime(3) and NTP.
CLOCK_REALTIME_COARSE (since Linux 2.6.32; Linux-specific)
A faster but less precise version of CLOCK_REALTIME. Use when you
need very fast, but not fine-grained timestamps.
CLOCK_MONOTONIC
Clock that cannot be set and represents monotonic time since some
unspecified starting point. This clock is not affected by
discontinuous jumps in the system time (e.g., if the system
administrator manually changes the clock), but is affected by the
incremental adjustments performed by adjtime(3) and NTP.
CLOCK_MONOTONIC_COARSE (since Linux 2.6.32; Linux-specific)
A faster but less precise version of CLOCK_MONOTONIC. Use when you
need very fast, but not fine-grained timestamps.
CLOCK_MONOTONIC_RAW (since Linux 2.6.28; Linux-specific)
Similar to CLOCK_MONOTONIC, but provides access to a raw hardware-based time that is not subject to NTP adjustments or the
incremental adjustments performed by adjtime(3).
CLOCK_BOOTTIME (since Linux 2.6.39; Linux-specific)
Identical to CLOCK_MONOTONIC, except it also includes any time that the system is suspended. This allows applications to get a
suspend-aware monotonic clock without having to deal with the
complications of CLOCK_REALTIME, which may have discontinuities if the
time is changed using settimeofday(2).
CLOCK_PROCESS_CPUTIME_ID
High-resolution per-process timer from the CPU.
CLOCK_THREAD_CPUTIME_ID
Thread-specific CPU-time clock.
As you can see above, CLOCK_MONOTONIC_COARSE was introduced in Linux 2.6.32. Here is the rationale (and the specific source patch):
https://lwn.net/Articles/347811/
After talking with some application writers who want very fast, but not
fine-grained timestamps, I decided to try to implement a new clock_ids
to clock_gettime(): CLOCK_REALTIME_COARSE and CLOCK_MONOTONIC_COARSE
which returns the time at the last tick. This is very fast as we don't
have to access any hardware (which can be very painful if you're using
something like the acpi_pm clocksource), and we can even use the vdso
clock_gettime() method to avoid the syscall. The only trade off is you
only get low-res tick grained time resolution.
This isn't a new idea, I know Ingo has a patch in the -rt tree that
made the vsyscall gettimeofday() return coarse grained time when the
vsyscall64 sysctrl was set to 2. However this affects all applications
on a system.
With this method, applications can choose the proper speed/granularity
trade-off for themselves.
thanks
-john
ADDENDUM:
Q: What use cases might benefit from using CLOCK_MONOTONIC_COARSE or CLOCK_REALTIME_COARSE?
A: In Linux 2.6.32 time frame (2010-2011), "...application workloads (especially databases and financial service applications) perform extremely frequent gettimeofday or similar time function calls":
Redhat Enterprise: 2.6. gettimeofday speedup
Many application workloads (especially databases and financial service
applications) perform extremely frequent gettimeofday or similar time
function calls. Optimizing the efficiency of this calls can provide
major benefits.
CLOCK_MONOTONIC_COARSE uses the same timebase as CLOCK_MONOTONIC (whereas CLOCK_MONOTONIC_RAW specifically does NOT). Both apply wall_to_monotonic to a value derived from the timekeeper's xtime; RAW uses a completely different time source.
Remember that CLOCK_MONOTONIC_COARSE is only updated once per tick (so usually about 1 ms, but ask clock_getres() to be sure). If that accuracy is good enough, then by all means subtract your clock values.
The short answer is YES (at least for Linux!): you can compare them, compute delays, etc.
The precision will be that of the less precise clock, most probably the COARSE one.
See this short program:
#include <time.h>
#include <stdio.h>

int main(void)
{
    int ret;
    struct timespec res;

    ret = clock_getres(CLOCK_MONOTONIC, &res);
    if (0 != ret)
        return ret;
    printf("CLOCK_MONOTONIC resolution is: %ld sec, %ld nsec\n",
           (long)res.tv_sec, (long)res.tv_nsec);

    ret = clock_getres(CLOCK_MONOTONIC_COARSE, &res);
    if (0 != ret)
        return ret;
    printf("CLOCK_MONOTONIC_COARSE resolution is: %ld sec, %ld nsec\n",
           (long)res.tv_sec, (long)res.tv_nsec);

    return 0;
}
It returns (Ubuntu 20.04, 64-bit, kernel 5.4):
CLOCK_MONOTONIC resolution is: 0 sec, 1 nsec
CLOCK_MONOTONIC_COARSE resolution is: 0 sec, 4000000 nsec
So MONOTONIC has nanosecond precision, and COARSE has 4 milliseconds precision.
Unlike the comment above, I would on the contrary recommend using the COARSE version whenever the timing accuracy you need allows it.
Clock calls are so frequent in user programs that they have a place in the vDSO.
When you use the COARSE versions, you make exactly zero system calls, and the call is as fast as your machine can run a few instructions. Thanks to the vDSO, your program stays entirely in userland during the call.
With other clock types, you will incur some system calls, and potentially hardware accesses, so at least a switch to kernel mode and back to userland.
This of course has zero importance if your program only makes a dozen calls, but it can be a huge time saver if the program relies heavily on the clock. That is why the vDSO exists in the first place: performance!
First define the accuracy you need for your timings: is a second enough, or do you need milliseconds, microseconds, etc.?
Keep in mind, unless you are tinkering with RT systems, that time is a relative value! Imagine you call clock_gettime and, immediately after it returns, your thread gets interrupted for some kernel business: what accuracy did you really get? That is exactly the famous question that defeated HAL in 2001: A Space Odyssey: "what time is it?".
From that you can derive what type of clock you need.
You can mix MONOTONIC and its COARSE version and still compute delays or compare them (that was the original question), but of course the precision is that of the less precise clock.
The monotonic clocks are best suited for measuring delays and making comparisons since they don't depend on the real time (what your watch displays); they don't change when the user changes the system time.
On the contrary, if you need to display at what time (meaningful to the user) an event occurred, don't use a monotonic clock!
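As an illustration of mixing the two clocks (a sketch, not from the original answer; note that for very short intervals the difference can even come out negative, because the COARSE value lags behind by up to one tick):
#include <stdio.h>
#include <time.h>

int main(void)
{
    struct timespec fine, coarse;
    clock_gettime(CLOCK_MONOTONIC, &fine);

    /* ... work to be timed ... */

    clock_gettime(CLOCK_MONOTONIC_COARSE, &coarse);

    double delta_ns = (coarse.tv_sec - fine.tv_sec) * 1e9
                    + (coarse.tv_nsec - fine.tv_nsec);
    printf("Elapsed (coarse precision): %.0f ns\n", delta_ns);
    return 0;
}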

Precise Linux Timing - What Determines the Resolution of clock_gettime()?

I need to do precision timing to the 1 us level to time a change in duty cycle of a PWM wave.
Background
I am using a Gumstix Overo Water COM (https://www.gumstix.com/store/app.php/products/265/) that has a single-core ARM Cortex-A8 processor running at 499.92 BogoMIPS (the Gumstix page claims up to 1 GHz, with 800 MHz recommended) according to /proc/cpuinfo. The OS is an Angstrom Image version of Linux based on kernel version 2.6.34, stock on the Gumstix Water COM.
The Problem
I have done a fair amount of reading about precise timing in Linux (and have tried most of it), and the consensus seems to be that using clock_gettime() and referencing CLOCK_MONOTONIC is the best way to do it. (I would have liked to use the RDTSC register for timing since I have one core with minimal power-saving abilities, but this is not an Intel processor.) So here is the odd part: while clock_getres() returns 1, suggesting resolution at 1 ns, actual timing tests suggest a minimum resolution of 30517 ns, or (it can't be a coincidence) exactly the time between ticks of a 32.768 kHz clock. Here's what I mean:
// Stackoverflow example
#include <stdio.h>
#include <time.h>

#define SEC2NANOSEC 1000000000

int main( int argc, const char* argv[] )
{
    // //////////////// Min resolution test //////////////////////
    struct timespec resStart, resEnd, ts;
    ts.tv_sec  = 0;   // s
    ts.tv_nsec = 1;   // ns
    int iters = 100;
    double resTime, sum = 0;
    int i;
    for (i = 0; i < iters; i++)
    {
        clock_gettime(CLOCK_MONOTONIC, &resStart);      // start timer
        // clock_nanosleep(CLOCK_MONOTONIC, 0, &ts, &ts);
        clock_gettime(CLOCK_MONOTONIC, &resEnd);        // end timer
        resTime = ((double)resEnd.tv_sec*SEC2NANOSEC + (double)resEnd.tv_nsec)
                - ((double)resStart.tv_sec*SEC2NANOSEC + (double)resStart.tv_nsec);
        sum = sum + resTime;
        printf("resTime = %f\n", resTime);
    }
    printf("Average = %f\n", sum/(double)iters);
    return 0;
}
(Don't fret over the double casting; tv_sec is a time_t and tv_nsec is a long.)
Compile with:
gcc soExample.c -o runSOExample -lrt
Run with:
./runSOExample
With the nanosleep commented out as shown, the result is either 0ns or 30517ns with the majority being 0ns. This leads me to believe that CLOCK_MONOTONIC is updated at 32.768kHz and most of the time the clock has not been updated before the second clock_gettime() call is made and in cases where the result is 30517ns the clock has been updated between calls.
When I do the same thing on my development computer (AMD FX(tm)-6100 Six-Core Processor running at 1.4 GHz) the minimum delay is a more constant 149-151ns with no zeros.
So, let's compare those results to the CPU speeds. For the Gumstix, that 30517ns (32.768kHz) equates to 15298 cycles of the 499.93MHz cpu. For my dev computer that 150ns equates to 210 cycles of the 1.4Ghz CPU.
With the clock_nanosleep() call uncommented the average results are these:
Gumstix: Avg value = 213623 and the result varies, up and down, by multiples of that min resolution of 30517ns
Dev computer: 57710-68065 ns with no clear trend. In the case of the dev computer I expect the resolution to actually be at the 1 ns level and the measured ~150ns truly is the time elapsed between the two clock_gettime() calls.
So, my question's are these:
What determines that minimum resolution?
Why is the resolution of the dev computer 30000X better than the Gumstix when the processor is only running ~2.6X faster?
Is there a way to change how often CLOCK_MONOTONIC is updated and where? In the kernel?
Thanks! If you need more info or clarification just ask.
As I understand it, the difference between the two environments (Gumstix and your dev computer) might be the underlying timer hardware they are using.
Commented nanosleep() case:
You are using clock_gettime() twice. To give you a rough idea of what this clock_gettime() will ultimately get mapped to in the kernel:
clock_gettime -->clock_get() -->posix_ktime_get_ts -->ktime_get_ts() -->timekeeping_get_ns()
-->clock->read()
clock->read() basically reads the value of the counter provided by the underlying timer driver and the corresponding hardware. Taking the difference between the current counter value and a stored counter value from the past, and then doing the nanoseconds conversion mathematics, yields the nanoseconds elapsed and updates the timekeeping data structures in the kernel.
For example, if you have an HPET timer which gives you a 10 MHz clock, the hardware counter will be updated at a 100 ns interval.
Let's say, on the first clock->read(), you get a counter value of X.
The Linux timekeeping data structures read this value X, get the difference 'D' compared to some old stored counter value, do the counter-difference 'D' to nanoseconds 'n' conversion mathematics, and update the data structures by 'n'.
This new time value is then yielded to user space.
When the second clock->read() is issued, it will again read the counter and update the time.
Now, for an HPET timer, this counter is updated every 100 ns, and hence you will see this difference being reported to user space.
Now, let's replace this HPET timer with a slow 32.768 kHz clock. The counter behind clock->read() will be updated only every 30517 ns, so if your second call to clock_gettime() happens before that period has passed, you will get 0 (which is the majority of cases); in some cases the second call will land after the counter has incremented by 1, i.e. 30517 ns have elapsed. Hence the occasional value of 30517 ns.
Uncommented nanosleep() case:
Let's trace the clock_nanosleep() for monotonic clocks:
clock_nanosleep() -->nsleep --> common_nsleep() -->hrtimer_nanosleep() -->do_nanosleep()
do_nanosleep() will simply put the current task in the INTERRUPTIBLE state, wait for the timer to expire (which is 1 ns here), and then set the current task to the RUNNING state again. You see, there are a lot of factors involved now, mainly when your kernel thread (and hence the user-space process) will be scheduled again. Depending on your OS, you will always face some latency when doing a context switch, and this is what we observe in the average values.
Now Your questions:
What determines that minimum resolution?
I think the resolution/precision of your system will depend on the underlying timer hardware being used (assuming your OS is able to provide that precision to the user-space process).
Why is the resolution of the dev computer 30000X better than the Gumstix when the processor is only running ~2.6X faster?
Sorry, I didn't follow you here. How is it 30000x faster? To me it looks like something 200x faster (30517 ns / 150 ns ~ 200x). But anyway, as I understand it, CPU speed may or may not have anything to do with timer resolution/precision. So this assumption may be right on some architectures (when you are using TSC hardware), but might fail on others (using HPET, PIT, etc.).
Is there a way to change how often CLOCK_MONOTONIC is updated and where? In the kernel?
You can always look into the kernel code for details (that's how I looked into it).
In linux kernel code , look for these source files and Documentation:
kernel/posix-timers.c
kernel/hrtimer.c
Documentation/timers/hrtimers.txt
I do not have a Gumstix on hand, but it looks like your clocksource is slow.
run:
$ dmesg | grep clocksource
If you get back
[ 0.560455] Switching to clocksource 32k_counter
This might explain why your clock is so slow.
In recent kernels there is a directory /sys/devices/system/clocksource/clocksource0 with two files: available_clocksource and current_clocksource. If you have this directory, try switching to a different source by echoing its name into the second file.
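For example (assuming your kernel exposes these files and lists an alternative clocksource such as hpet; switching requires root):
$ cat /sys/devices/system/clocksource/clocksource0/available_clocksource
$ echo hpet > /sys/devices/system/clocksource/clocksource0/current_clocksource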

faster equivalent of gettimeofday

In trying to build a very latency-sensitive application that needs to send hundreds of messages a second, each message carrying a time field, we wanted to consider optimizing gettimeofday.
Our first thought was an rdtsc-based optimization. Any thoughts? Any other pointers?
The required accuracy of the time value returned is in milliseconds, but it isn't a big deal if the value is occasionally out of sync with the receiver by 1-2 milliseconds.
Trying to do better than the 62 nanoseconds gettimeofday takes
POSIX Clocks
I wrote a benchmark for POSIX clock sources:
time (s) => 3 cycles
ftime (ms) => 54 cycles
gettimeofday (us) => 42 cycles
clock_gettime (ns) => 9 cycles (CLOCK_MONOTONIC_COARSE)
clock_gettime (ns) => 9 cycles (CLOCK_REALTIME_COARSE)
clock_gettime (ns) => 42 cycles (CLOCK_MONOTONIC)
clock_gettime (ns) => 42 cycles (CLOCK_REALTIME)
clock_gettime (ns) => 173 cycles (CLOCK_MONOTONIC_RAW)
clock_gettime (ns) => 179 cycles (CLOCK_BOOTTIME)
clock_gettime (ns) => 349 cycles (CLOCK_THREAD_CPUTIME_ID)
clock_gettime (ns) => 370 cycles (CLOCK_PROCESS_CPUTIME_ID)
rdtsc (cycles) => 24 cycles
These numbers are from an Intel Core i7-4771 CPU @ 3.50GHz on Linux 4.0. These measurements were taken using the TSC register and running each clock method thousands of times and taking the minimum cost value.
You'll want to test on the machines you intend to run on, though, as how these are implemented varies with hardware and kernel version. The code can be found here. It relies on the TSC register for cycle counting, which is in the same repo (tsc.h).
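The linked code isn't reproduced here, but the approach is roughly the following sketch (assuming GCC or Clang on x86 and using the __rdtsc() intrinsic instead of the repo's tsc.h; the measured cost includes the rdtsc overhead itself):
#include <stdio.h>
#include <time.h>
#include <x86intrin.h>   /* __rdtsc() */

static unsigned long long bench_clock(clockid_t id)
{
    struct timespec ts;
    unsigned long long best = ~0ULL;
    for (int i = 0; i < 100000; i++) {
        unsigned long long t0 = __rdtsc();
        clock_gettime(id, &ts);
        unsigned long long t1 = __rdtsc();
        if (t1 - t0 < best)
            best = t1 - t0;           /* keep the minimum observed cost */
    }
    return best;
}

int main(void)
{
    printf("CLOCK_MONOTONIC        => %llu cycles\n", bench_clock(CLOCK_MONOTONIC));
    printf("CLOCK_MONOTONIC_COARSE => %llu cycles\n", bench_clock(CLOCK_MONOTONIC_COARSE));
    return 0;
}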
TSC
Accessing the TSC (processor time-stamp counter) is the most accurate and cheapest way to time things. Generally, this is what the kernel is using itself. It's also quite straightforward on modern Intel chips, as the TSC is synchronized across cores and unaffected by frequency scaling. So it provides a simple, global time source. You can see an example of using it here with a walkthrough of the assembly code here.
The main issue with this (other than portability) is that there doesn't seem to be a good way to go from cycles to nanoseconds. The Intel docs, as far as I can find, state that the TSC runs at a fixed frequency, but that this frequency may differ from the processor's stated frequency. Intel doesn't appear to provide a reliable way to figure out the TSC frequency. The Linux kernel appears to solve this by testing how many TSC cycles occur between two hardware timers (see here).
Memcached
Memcached bothers to cache the time rather than call gettimeofday on every request. That may simply be to make performance more predictable across platforms, or to scale better with multiple cores. It may also not be a worthwhile optimization.
Have you actually benchmarked, and found gettimeofday to be unacceptably slow?
At the rate of 100 messages a second, you have 10ms of CPU time per message. If you have multiple cores, assuming it can be fully parallelized, you can easily increase that by 4-6x - that's 40-60ms per message! The cost of gettimeofday is unlikely to be anywhere near 10ms - I'd suspect it to be more like 1-10 microseconds (on my system, microbenchmarking it gives about 1 microsecond per call - try it for yourself). Your optimization efforts would be better spent elsewhere.
While using the TSC is a reasonable idea, modern Linux already has a userspace TSC-based gettimeofday - where possible, the vdso will pull in an implementation of gettimeofday that applies an offset (read from a shared kernel-user memory segment) to rdtsc's value, thus computing the time of day without entering the kernel. However, some CPU models don't have a TSC synchronized between different cores or different packages, and so this can end up being disabled. If you want high performance timing, you might first want to consider finding a CPU model that does have a synchronized TSC.
That said, if you're willing to sacrifice a significant amount of resolution (your timing will only be accurate to the last tick, meaning it could be off by tens of milliseconds), you could use CLOCK_MONOTONIC_COARSE or CLOCK_REALTIME_COARSE with clock_gettime. This is also implemented via the vdso, and is guaranteed not to call into the kernel (for recent kernels and glibc).
Like bdonian says, if you're only sending a few hundred messages per second, gettimeofday is going to be fast enough.
However, if you were sending millions of messages per second, it might be different (but you should still measure that it is a bottleneck). In that case, you might want to consider something like this:
have a global variable, giving the current timestamp in your desired accuracy
have a dedicated background thread that does nothing except update the timestamp (if timestamp should be updated every T units of time, then have the thread sleep some fraction of T and then update the timestamp; use real-time features if you need to)
all other threads (or the main process, if you don't use threads otherwise) just reads the global variable
The C language does not guarantee that you can atomically read the timestamp value if it is larger than sig_atomic_t. You could use locking to deal with that, but locking is heavy. Instead, you could use a volatile sig_atomic_t variable to index an array of timestamps: the background thread updates the next element in the array, and then updates the index. The other threads read the index, and then read the array: they might get a slightly out-of-date timestamp (but they get the right one next time), but they do not run into the problem of reading the timestamp while it is being updated and getting some bytes of the old value and some of the new.
But all this is much overkill for just hundreds of messages per second.
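For illustration only, here is a minimal sketch of that scheme using POSIX threads (compile with -pthread; the names, slot count, and the 0.5 ms refresh interval are my own choices, not from the answer):
#include <pthread.h>
#include <signal.h>
#include <stdio.h>
#include <sys/time.h>
#include <unistd.h>

#define SLOTS 4

static struct timeval slots[SLOTS];
static volatile sig_atomic_t current = 0;

static void *updater(void *arg)
{
    (void)arg;
    for (;;) {
        int next = (current + 1) % SLOTS;
        gettimeofday(&slots[next], NULL);   /* write the new timestamp ... */
        current = next;                     /* ... then publish its index */
        usleep(500);                        /* refresh roughly every 0.5 ms */
    }
    return NULL;
}

int main(void)
{
    pthread_t tid;
    pthread_create(&tid, NULL, updater, NULL);
    usleep(1000);                           /* let the first update land */

    struct timeval now = slots[current];    /* readers just copy the published slot */
    printf("cached time: %ld.%06ld\n", (long)now.tv_sec, (long)now.tv_usec);
    return 0;
}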
Below is a benchmark. I see about 30 ns per call. printTime() is from rashad's answer to "How to get current time and date in C++?".
#include <string>
#include <iostream>
#include <sys/time.h>
using namespace std;

void printTime(time_t now)
{
    struct tm tstruct;
    char buf[80];
    tstruct = *localtime(&now);
    strftime(buf, sizeof(buf), "%Y-%m-%d.%X", &tstruct);
    cout << buf << endl;
}

int main()
{
    timeval tv;
    time_t tm;

    gettimeofday(&tv, NULL);
    printTime((time_t)tv.tv_sec);
    for (int i = 0; i < 100000000; i++)
        gettimeofday(&tv, NULL);
    gettimeofday(&tv, NULL);
    printTime((time_t)tv.tv_sec);

    printTime(time(NULL));
    for (int i = 0; i < 100000000; i++)
        tm = time(NULL);
    printTime(time(NULL));
    return 0;
}
3 seconds for 100,000,000 calls, or 30 ns per call:
2014-03-20.09:23:35
2014-03-20.09:23:38
2014-03-20.09:23:38
2014-03-20.09:23:41
Do you need millisecond precision? If not, you could simply use time() and deal with the Unix timestamp.

How can I find the execution time of a section of my program in C?

I'm trying to find a way to get the execution time of a section of code in C. I've already tried both time() and clock() from time.h, but it seems that time() returns seconds and clock() gives me milliseconds (or centiseconds?). I would like something more precise, though. Is there a way I can grab the time with at least microsecond precision?
This only needs to be able to compile on Linux.
You referred to clock() and time() - were you looking for gettimeofday()?
That will fill in a struct timeval, which contains seconds and microseconds.
Of course the actual resolution is up to the hardware.
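For example, a minimal sketch of timing a section with gettimeofday() (the loop is just a placeholder workload):
#include <stdio.h>
#include <sys/time.h>

int main(void)
{
    struct timeval start, end;
    gettimeofday(&start, NULL);

    /* ... code to time ... */
    volatile double x = 0;
    for (int i = 0; i < 1000000; i++)
        x += i * 0.5;

    gettimeofday(&end, NULL);
    long long us = (long long)(end.tv_sec - start.tv_sec) * 1000000LL
                 + (end.tv_usec - start.tv_usec);
    printf("Elapsed: %lld microseconds\n", us);
    return 0;
}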
For what it's worth, here's one that's just a few macros:
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

clock_t startm, stopm;
#define START if ( (startm = clock()) == -1) {printf("Error calling clock");exit(1);}
#define STOP if ( (stopm = clock()) == -1) {printf("Error calling clock");exit(1);}
#define PRINTTIME printf( "%6.3f seconds used by the processor.", ((double)stopm-startm)/CLOCKS_PER_SEC);
Then just use it with:
int main() {
    START;
    // Do stuff you want to time
    STOP;
    PRINTTIME;
    return 0;
}
From http://ctips.pbwiki.com/Timer
You want a profiler application.
Search keywords at SO and search engines: linux profiling
Have a look at gettimeofday,
clock_*, or get/setitimer.
Try "bench.h"; it lets you put a START_TIMER; and STOP_TIMER("name"); into your code, allowing you to arbitrarily benchmark any section of code (note: only recommended for short sections, not things taking dozens of milliseconds or more). Its accurate to the clock cycle, though in some rare cases it can change how the code in between is compiled, in which case you're better off with a profiler (though profilers are generally more effort to use for specific sections of code).
It only works on x86.
You might want to google for an instrumentation tool.
You won't find a library call which lets you get past the clock resolution of your platform. Either use a profiler (man gprof) as another poster suggested, or - quick & dirty - put a loop around the offending section of code to execute it many times, and use clock().
gettimeofday() provides you with a resolution of microseconds, whereas clock_gettime() provides you with a resolution of nanoseconds.
int clock_gettime(clockid_t clk_id, struct timespec *tp);
The clk_id identifies the clock to be used. Use CLOCK_REALTIME if you want a system-wide clock visible to all processes. Use CLOCK_PROCESS_CPUTIME_ID for a per-process timer and CLOCK_THREAD_CPUTIME_ID for a thread-specific timer.
It depends on the conditions. Profilers are nice for general global views; however, if you really need an accurate view, my recommendation is KISS. Simply run the code in a loop such that it takes a minute or so to complete, then compute a simple average based on the total run time and the number of iterations executed.
This approach allows you to:
Obtain accurate results with low-resolution timers.
Avoid issues where instrumentation interferes with the high-speed caches (L1, L2, branch predictors, etc.) close to the processor. However, running the same code in a tight loop can also produce optimistic results that may not reflect real-world conditions.
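A minimal sketch of that approach, using clock() as suggested elsewhere on this page (the loop body stands in for the code being measured):
#include <stdio.h>
#include <time.h>

int main(void)
{
    const long iters = 10000000;
    clock_t start = clock();

    volatile long sink = 0;
    for (long i = 0; i < iters; i++)
        sink += i;                          /* section being measured */

    double total = (double)(clock() - start) / CLOCKS_PER_SEC;
    printf("total %.3f s, %.1f ns per iteration\n", total, total / iters * 1e9);
    return 0;
}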
Don't know which environment/OS you are working on, but your timing may be inaccurate if another thread, task, or process preempts your timed code in the middle. I suggest exploring mechanisms such as mutexes or semaphores to prevent other threads from preempting your process.
If you are developing on x86 or x64 why not use the Time Stamp Counter: RDTSC.
It will be more reliable than ANSI C functions like time() or clock(), as RDTSC is an atomic operation. Using C functions for this purpose can introduce problems, as you have no guarantee that the thread they are executing in will not be switched out, and as a result the value they return will not be an accurate description of the actual execution time you are trying to measure.
With RDTSC you can measure this better. You will need to convert the tick count back into a human-readable H:M:S time format, which will depend on the processor's clock frequency, but google around and I am sure you will find examples.
However, even with RDTSC you will be including the time your code was switched out of execution. While it is a better solution than using time()/clock(), if you need an exact measurement you will have to turn to a profiler that will instrument your code and take into account when your code is not actually executing due to context switches or whatever.

GetLocalTime() API time resolution

I need to find out the time taken by a function in my application. The application is an MS Visual Studio 2005 solution, all C code.
I used the Windows API GetLocalTime(SYSTEMTIME *) to get the current system time before and after the function call whose time I want to measure.
But this has the shortcoming that its lowest resolution is only 1 msec, nothing below that, so I cannot get any time granularity in microseconds.
I know that time(), which gives the time elapsed since the epoch, also has a resolution of 1 msec (no microseconds).
1.) Is there any other Windows API which gives time in microseconds that I can use to measure the time consumed by my function?
-AD
There are some other possibilities.
QueryPerformanceCounter and QueryPerformanceFrequency
QueryPerformanceCounter will return a "performance counter" which is actually a CPU-managed 64-bit counter that increments from 0 starting with the computer power-on. The frequency of this counter is returned by the QueryPerformanceFrequency. To get the time reference in seconds, divide performance counter by performance frequency. In Delphi:
function QueryPerfCounterAsUS: int64;
begin
  if QueryPerformanceCounter(Result) and
     QueryPerformanceFrequency(perfFreq)
  then
    Result := Round(Result / perfFreq * 1000000)
  else
    Result := 0;
end;
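Since the question is about C, here is a minimal C sketch of the same idea (my own illustration, not part of the original answer):
#include <windows.h>

/* Returns the performance counter converted to microseconds, or 0 on failure. */
long long QueryPerfCounterAsUS(void)
{
    LARGE_INTEGER counter, freq;
    if (QueryPerformanceCounter(&counter) && QueryPerformanceFrequency(&freq))
        return (long long)((double)counter.QuadPart * 1000000.0 / (double)freq.QuadPart);
    return 0;
}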
On multiprocessor platforms, QueryPerformanceCounter should return consistent results regardless of the CPU the thread is currently running on. There are occasional problems, though, usually caused by bugs in hardware chips or BIOSes. Usually, patches are provided by motherboard manufacturers. Two examples from the MSDN:
Programs that use the QueryPerformanceCounter function may perform poorly in Windows Server 2003 and in Windows XP
Performance counter value may unexpectedly leap forward
Another problem with QueryPerformanceCounter is that it is quite slow.
RDTSC instruction
If you can limit your code to one CPU (SetThreadAffinityMask), you can use the RDTSC assembler instruction to query the performance counter directly from the processor.
function CPUGetTick: int64;
asm
dw 310Fh // rdtsc
end;
The RDTSC result is incremented at the same frequency as QueryPerformanceCounter. Divide it by QueryPerformanceFrequency to get the time in seconds.
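In C with MSVC, a roughly equivalent read (a sketch, not part of the original answer) can use the __rdtsc() intrinsic from <intrin.h>; pinning the thread first with SetThreadAffinityMask keeps all reads on one CPU:
#include <intrin.h>
#include <windows.h>

unsigned __int64 CPUGetTick(void)
{
    return __rdtsc();                       /* raw time-stamp counter value */
}

/* Example: pin the current thread to CPU 0 before sampling. */
/* SetThreadAffinityMask(GetCurrentThread(), 1); */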
QueryPerformanceCounter is much slower than RDTSC because it must take into account multiple CPUs and CPUs with variable frequency. From Raymond Chen's blog:
(QueryPerformanceCounter) counts elapsed time. It has to, since its value is
governed by the QueryPerformanceFrequency function, which returns a number
specifying the number of units per second, and the frequency is spec'd as not
changing while the system is running.
For CPUs that can run at variable speed, this means that the HAL cannot
use an instruction like RDTSC, since that does not correlate with elapsed time.
timeGetTime
timeGetTime belongs to the Win32 multimedia functions. It returns time in milliseconds with 1 ms resolution, at least on modern hardware. It doesn't hurt to call timeBeginPeriod(1) before you start measuring time and timeEndPeriod(1) when you're done.
GetLocalTime and GetSystemTime
Before Vista, both GetLocalTime and GetSystemTime returned the current time with millisecond precision, but they were not accurate to a millisecond. Their accuracy was typically in the range of 10 to 55 milliseconds. (Precision is not the same as accuracy.)
On Vista, GetLocalTime and GetSystemTime both work with 1 ms resolution.
You can try to use clock() which will provide the number of "ticks" between two points. A "tick" is the smallest unit of time a processor can measure.
As a side note, you can't use clock() to determine the actual time - only the number of ticks between two points in your program.
One caution on multiprocessor systems:
from http://msdn.microsoft.com/en-us/library/ms644904(VS.85).aspx
On a multiprocessor computer, it should not matter which processor is called. However, you can get different results on different processors due to bugs in the basic input/output system (BIOS) or the hardware abstraction layer (HAL). To specify processor affinity for a thread, use the SetThreadAffinityMask function.
Al Weiner
On Windows you can use the 'high performance counter API'. Check out QueryPerformanceCounter and QueryPerformanceFrequency for the details.
