I have a question regarding the GetTickCount function.
I call it twice in my code, with several statements between the two calls, and both calls return the same count,
i.e.
var1 = GetTickCount();
code
:
:
var2 = GetTickCount();
var1 and var2 end up with the same value.
Can someone help?
Assuming this is the Windows GetTickCount call, that's entirely reasonable:
The resolution of the GetTickCount function is limited to the resolution of the system timer, which is typically in the range of 10 milliseconds to 16 milliseconds.
Note that it's only measuring milliseconds to start with - and you can do an awful lot in a millisecond these days.
The docs go on to say:
If you need a higher resolution timer, use a multimedia timer or a high-resolution timer.
Perhaps QueryPerformanceCounter would be more appropriate?
If you are referring to the Windows API call, then read this.
I would guess that you are trying to time a short interval, so this paragraph is relevant. Are you timing something shorter than that interval? If so, look into QueryPerformanceCounter instead, perhaps.
The resolution of the GetTickCount function is limited to the resolution of the system timer, which is typically in the range of 10 milliseconds to 16 milliseconds. The resolution of the GetTickCount function is not affected by adjustments made by the GetSystemTimeAdjustment function.
If you go the QueryPerformanceCounter route, you need to watch out for hardware-dependent weirdness. It's been a while, so I don't know if this kind of thing still happens.
You might also want to take a look at this link, since it has a nice sample app which compares QueryPerformanceCounter, GetTickCount and timeGetTime.
From MSDN
The resolution of the GetTickCount function is limited to the resolution of the system timer, which is typically in the range of 10 milliseconds to 16 milliseconds. The resolution of the GetTickCount function is not affected by adjustments made by the GetSystemTimeAdjustment function.
The elapsed time is stored as a DWORD value. Therefore, the time will wrap around to zero if the system is run continuously for 49.7 days. To avoid this problem, use the GetTickCount64 function. Otherwise, check for an overflow condition when comparing times.
If you need a higher resolution timer, use a multimedia timer or a high-resolution timer.
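On the wraparound point, a minimal hedged sketch: keep the start tick in a DWORD and subtract, since unsigned arithmetic stays correct across a single wrap (the Sleep(20) here is just a stand-in for the work being timed):

#include <windows.h>
#include <stdio.h>

int main(void)
{
    DWORD start, elapsed;

    start = GetTickCount();
    Sleep(20);                        /* ... the code being timed ... */
    elapsed = GetTickCount() - start; /* unsigned subtraction survives one wrap */

    printf("elapsed: %lu ms\n", (unsigned long)elapsed);
    return 0;
}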
GetTickCount reports time in milliseconds, but in practice its resolution is several milliseconds. It's highly likely that the functions you're calling in between are taking considerably less than 1 millisecond.
gettimeofday() is hardware dependent, relying on the RTC.
Can someone suggest how we can avoid using it in application programming?
How can we approach using system ticks instead?
Thanks in advance!
To get time in ticks you might like to use times().
However, it is not clear whether those ticks are measured from boot time.
From man times:
RETURN VALUE
times() returns the number of clock ticks that have elapsed since an
arbitrary point in the past. [...]
[...]
NOTES
On Linux, the "arbitrary point in the past" from which the return
value of times() is measured has varied across kernel versions. On
Linux 2.4 and earlier this point is the moment the system was booted.
Since Linux 2.6, this point is (2^32/HZ) - 300 (i.e., about 429
million) seconds before system boot time. This variability across
kernel versions (and across UNIX implementations), combined with the
fact that the returned value may overflow the range of clock_t, means
that a portable application would be wise to avoid using this value.
To measure changes in elapsed time, use clock_gettime(2) instead.
Reading this, using clock_gettime() with the CLOCK_BOOTTIME clock might be the more robust and more portable way to go. Whether this function and/or clock is available on systems without an RTC, I'm not sure; others are encouraged to clarify this.
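For what it's worth, a minimal sketch of that approach, assuming a reasonably recent Linux kernel and glibc where CLOCK_BOOTTIME is exposed (fall back to CLOCK_MONOTONIC otherwise; link with -lrt on older glibc):

#include <stdio.h>
#include <time.h>

int main(void)
{
    struct timespec ts;

    /* CLOCK_BOOTTIME counts time since boot, including time spent suspended */
    if (clock_gettime(CLOCK_BOOTTIME, &ts) != 0) {
        perror("clock_gettime");
        return 1;
    }
    printf("up for %ld.%09ld seconds\n", (long)ts.tv_sec, ts.tv_nsec);
    return 0;
}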
I am using msecs_to_jiffies(msecs) to get a delay. I need a delay of 16 ms, but the problem is that the function returns 1 for inputs 1-10, 2 for 11-20, 3 for 21-30, and so on. Hence I am unable to set a proper delay; I can only set delays in multiples of 10 ms. I can't change the HZ value, and the function can't sleep either.
Kindly suggest a solution to this problem.
Thanks
It seems your system HZ value is set to 100.
If you wish to suspend execution for a period with finer resolution than the system HZ provides, you need to use high-resolution timers (which use nanosecond resolution, not jiffies), supported by your board and enabled in the kernel. See here for the interface and how to use them: http://lwn.net/Articles/167897/
So, either change the system HZ to 1000 and get a jiffy resolution of 1 msec, or use a high-resolution timer.
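For the high-resolution timer route, a rough sketch of the classic in-kernel hrtimer interface described in that article (function names and details vary across kernel versions, so treat this as an outline rather than a drop-in solution):

#include <linux/hrtimer.h>
#include <linux/ktime.h>

static struct hrtimer my_timer;

static enum hrtimer_restart my_timer_cb(struct hrtimer *t)
{
    /* runs in interrupt context roughly 16 ms after the timer was armed */
    return HRTIMER_NORESTART;
}

static void arm_16ms_timer(void)
{
    hrtimer_init(&my_timer, CLOCK_MONOTONIC, HRTIMER_MODE_REL);
    my_timer.function = my_timer_cb;
    /* 16 ms expressed in nanoseconds, independent of HZ and jiffies */
    hrtimer_start(&my_timer, ktime_set(0, 16 * NSEC_PER_MSEC), HRTIMER_MODE_REL);
}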
You can't sleep for exactly 16 ms. You can sleep for at least 16 ms, but not exactly 16 ms. That's not the way Linux (or any other desktop OS) works - it isn't a realtime OS, scheduling is non-deterministic, and there's nothing you can do about it.
Whatever you're trying to do, you'll have to go about it another way. With what little info you've provided, all I can say is that what you're trying to do can't be done.
I'm trying to find a way to get the execution time of a section of code in C. I've already tried both time() and clock() from time.h, but it seems that time() returns seconds and clock() seems to give me milliseconds (or centiseconds?) I would like something more precise though. Is there a way I can grab the time with at least microsecond precision?
This only needs to be able to compile on Linux.
You referred to clock() and time() - were you looking for gettimeofday()?
That will fill in a struct timeval, which contains seconds and microseconds.
Of course the actual resolution is up to the hardware.
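A minimal sketch of timing a section that way (the microsecond fields are there, though the effective resolution still depends on the hardware):

#include <stdio.h>
#include <sys/time.h>

int main(void)
{
    struct timeval start, end;
    long usec;

    gettimeofday(&start, NULL);
    /* ... code to time ... */
    gettimeofday(&end, NULL);

    usec = (end.tv_sec - start.tv_sec) * 1000000L + (end.tv_usec - start.tv_usec);
    printf("elapsed: %ld microseconds\n", usec);
    return 0;
}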
For what it's worth, here's one that's just a few macros:
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

clock_t startm, stopm;
#define START if ( (startm = clock()) == -1) {printf("Error calling clock");exit(1);}
#define STOP if ( (stopm = clock()) == -1) {printf("Error calling clock");exit(1);}
#define PRINTTIME printf( "%6.3f seconds used by the processor.", ((double)stopm-startm)/CLOCKS_PER_SEC);
Then just use it with:
int main(void) {
    START;
    // Do stuff you want to time
    STOP;
    PRINTTIME;
    return 0;
}
From http://ctips.pbwiki.com/Timer
You want a profiler application.
Search keywords at SO and search engines: linux profiling
Have a look at gettimeofday, clock_*, or get/setitimer.
Try "bench.h"; it lets you put a START_TIMER; and STOP_TIMER("name"); into your code, allowing you to arbitrarily benchmark any section of code (note: only recommended for short sections, not things taking dozens of milliseconds or more). Its accurate to the clock cycle, though in some rare cases it can change how the code in between is compiled, in which case you're better off with a profiler (though profilers are generally more effort to use for specific sections of code).
It only works on x86.
You might want to google for an instrumentation tool.
You won't find a library call which lets you get past the clock resolution of your platform. Either use a profiler (man gprof) as another poster suggested, or - quick & dirty - put a loop around the offending section of code to execute it many times, and use clock().
gettimeofday() provides you with a resolution of microseconds, whereas clock_gettime() provides you with a resolution of nanoseconds.
int clock_gettime(clockid_t clk_id, struct timespec *tp);
The clk_id identifies the clock to be used. Use CLOCK_REALTIME if you want a system-wide clock visible to all processes. Use CLOCK_PROCESS_CPUTIME_ID for a per-process timer and CLOCK_THREAD_CPUTIME_ID for a thread-specific timer.
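As a sketch, here the per-process CPU-time clock is used; swap in CLOCK_REALTIME or CLOCK_MONOTONIC if you want wall-clock time instead (link with -lrt on older glibc):

#include <stdio.h>
#include <time.h>

int main(void)
{
    struct timespec start, end;
    long long ns;

    clock_gettime(CLOCK_PROCESS_CPUTIME_ID, &start);
    /* ... code to time ... */
    clock_gettime(CLOCK_PROCESS_CPUTIME_ID, &end);

    ns = (end.tv_sec - start.tv_sec) * 1000000000LL + (end.tv_nsec - start.tv_nsec);
    printf("CPU time used: %lld ns\n", ns);
    return 0;
}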
It depends on the conditions. Profilers are nice for general global views; however, if you really need an accurate view, my recommendation is KISS. Simply run the code in a loop such that it takes a minute or so to complete, then compute a simple average based on the total run time and the number of iterations executed.
This approach allows you to:
Obtain accurate results with low-resolution timers.
Avoid issues where instrumentation interferes with the fast caches (L1, L2, branch predictors, etc.) close to the processor.
However, running the same code in a tight loop can also give optimistic results that may not reflect real-world conditions.
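A minimal sketch of the loop-and-average idea (ITERATIONS is just an illustrative constant; pick it so the whole run takes on the order of a minute, and make sure the work isn't optimized away):

#include <stdio.h>
#include <time.h>

#define ITERATIONS 10000000L

int main(void)
{
    clock_t start, end;
    long i;
    double avg_us;

    start = clock();
    for (i = 0; i < ITERATIONS; i++) {
        /* ... code to time ... */
    }
    end = clock();

    /* average cost per iteration, in microseconds */
    avg_us = (double)(end - start) / CLOCKS_PER_SEC * 1e6 / ITERATIONS;
    printf("average: %f microseconds per iteration\n", avg_us);
    return 0;
}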
I don't know which environment/OS you are working on, but your timing may be inaccurate if another thread, task, or process preempts your timed code in the middle. I suggest exploring mechanisms such as mutexes or semaphores to prevent other threads from preempting your process.
If you are developing on x86 or x64 why not use the Time Stamp Counter: RDTSC.
It will be more reliable than ANSI C functions like time() or clock(), as RDTSC is a single atomic instruction. Using C functions for this purpose can introduce problems, as you have no guarantee that the thread they are executing in will not be switched out, and as a result the value they return may not be an accurate description of the actual execution time you are trying to measure.
With RDTSC you can measure this better. You will need to convert the tick count back into a human-readable H:M:S format, which will depend on the processor's clock frequency, but Google around and I am sure you will find examples.
However, even with RDTSC you will be including the time your code was switched out of execution. While it's a better solution than using time()/clock(), if you need an exact measurement you will have to turn to a profiler that will instrument your code and take into account when your code is not actually executing due to context switches.
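For reference, a minimal sketch of reading the TSC through the __rdtsc compiler intrinsic (from <intrin.h> on MSVC or <x86intrin.h> on GCC/Clang); converting ticks to seconds requires the TSC frequency, which is not shown here:

#include <stdio.h>
#ifdef _MSC_VER
#include <intrin.h>
#else
#include <x86intrin.h>
#endif

int main(void)
{
    unsigned long long t0, t1;

    t0 = __rdtsc();
    /* ... code to time ... */
    t1 = __rdtsc();

    /* raw cycle count; divide by the TSC frequency to get seconds */
    printf("elapsed: %.0f TSC ticks\n", (double)(t1 - t0));
    return 0;
}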
I need a way to get the elapsed time (wall-clock time) since a program started, in a way that is resilient to users meddling with the system clock.
On Windows, the non-standard clock() implementation doesn't do the trick, as it appears to work just by calculating the difference from the time sampled at startup, so I get negative values if I "move the clock hands back".
On UNIX, clock/getrusage refer to system time, whereas using functions such as gettimeofday to sample timestamps has the same problem as using clock on Windows.
I'm not really interested in precision, and I've hacked together a solution: a half-second-resolution timer spinning in the background that counters the clock skews when they happen (if the difference between the sampled time and the expected time exceeds 1 second, I use the expected time as the new baseline), but I think there must be a better way.
I guess you can always start some kind of timer. For example, under Linux, a thread could run a loop like this:
#include <time.h>

static void *timer_thread(void *arg)
{
    struct timespec delay;
    unsigned int msecond_delay = ((app_state_t *)arg)->msecond_delay;

    delay.tv_sec = 0;
    delay.tv_nsec = msecond_delay * 1000000L;

    while (1) {
        /* bump a monotonically increasing counter the rest of the app can read */
        some_global_counter_increment();
        nanosleep(&delay, NULL);
    }
    return NULL;
}
Here, app_state_t is an application structure of your choice in which you store variables. If you want to prevent tampering, you need to be sure that no one kills your thread.
For POSIX, use clock_gettime() with CLOCK_MONOTONIC.
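A minimal sketch of that idea; CLOCK_MONOTONIC is not affected by settimeofday()/NTP jumps, so the difference gives a tamper-resistant elapsed time:

#include <time.h>

static struct timespec prog_start;

/* call once at program startup */
void mark_start(void)
{
    clock_gettime(CLOCK_MONOTONIC, &prog_start);
}

/* wall-clock seconds elapsed since mark_start(), immune to clock changes */
double seconds_since_start(void)
{
    struct timespec now;
    clock_gettime(CLOCK_MONOTONIC, &now);
    return (now.tv_sec - prog_start.tv_sec) + (now.tv_nsec - prog_start.tv_nsec) / 1e9;
}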
I don't think you'll find a cross-platform way of doing that.
On Windows what you need is GetTickCount (or maybe QueryPerformanceCounter and QueryPerformanceFrequency for a high resolution timer). I don't have experience with that on Linux, but a search on Google gave me clock_gettime.
Wall clock time can be calculated with the time() call.
If you have a network connection, you can always acquire the time from an NTP server. This will obviously not be affected in any way by the local clock.
/proc/uptime on Linux maintains the number of seconds that the system has been up (and the number of seconds it has been idle), which should be unaffected by changes to the clock, as it's maintained by the system tick interrupt (jiffies / HZ). Perhaps Windows has something similar?
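Reading /proc/uptime from C is trivial; a quick sketch (the first field is seconds since boot, as a floating-point value):

#include <stdio.h>

int main(void)
{
    FILE *fp = fopen("/proc/uptime", "r");
    double up = 0.0;

    if (fp != NULL && fscanf(fp, "%lf", &up) == 1)
        printf("up for %.2f seconds\n", up);
    if (fp != NULL)
        fclose(fp);
    return 0;
}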
I need to find out the time taken by a function in my application. The application is an MS Visual Studio 2005 solution, all C code.
I used the Windows API GetLocalTime(SYSTEMTIME *) to get the current system time before and after the function call I want to measure.
But this has the shortcoming that its lowest resolution is only 1 msec, nothing below that, so I cannot get any time granularity in microseconds.
I know that time(), which gives the time elapsed since the epoch, also gives no microseconds.
1.) Is there any other Windows API which gives time in microseconds which I can use to measure the time consumed by my function?
-AD
There are some other possibilities.
QueryPerformanceCounter and QueryPerformanceFrequency
QueryPerformanceCounter will return a "performance counter", which is actually a CPU-managed 64-bit counter that increments from 0 starting at computer power-on. The frequency of this counter is returned by QueryPerformanceFrequency. To get the time reference in seconds, divide the performance counter by the performance frequency. In Delphi:
function QueryPerfCounterAsUS: int64;
begin
  if QueryPerformanceCounter(Result) and
     QueryPerformanceFrequency(perfFreq)  // perfFreq: int64, declared elsewhere
  then
    Result := Round(Result / perfFreq * 1000000)
  else
    Result := 0;
end;
On multiprocessor platforms, QueryPerformanceCounter should return consistent results regardless of the CPU the thread is currently running on. There are occasional problems, though, usually caused by bugs in hardware chips or BIOSes. Usually, patches are provided by motherboard manufacturers. Two examples from the MSDN:
Programs that use the QueryPerformanceCounter function may perform poorly in Windows Server 2003 and in Windows XP
Performance counter value may unexpectedly leap forward
Another problem with QueryPerformanceCounter is that it is quite slow.
RDTSC instruction
If you can limit your code to one CPU (SetThreadAffinityMask), you can use the RDTSC assembler instruction to query the performance counter directly from the processor.
function CPUGetTick: int64;
asm
dw 310Fh // rdtsc
end;
The RDTSC result is incremented at the same frequency as QueryPerformanceCounter. Divide it by QueryPerformanceFrequency to get time in seconds.
QueryPerformanceCounter is much slower than RDTSC because it must take into account multiple CPUs and CPUs with variable frequency. From Raymond Chen's blog:
(QueryPerformanceCounter) counts elapsed time. It has to, since its value is governed by the QueryPerformanceFrequency function, which returns a number specifying the number of units per second, and the frequency is spec'd as not changing while the system is running.
For CPUs that can run at variable speed, this means that the HAL cannot use an instruction like RDTSC, since that does not correlate with elapsed time.
timeGetTime
timeGetTime belongs to the Win32 multimedia functions. It returns time in milliseconds with 1 ms resolution, at least on modern hardware. It doesn't hurt to run timeBeginPeriod(1) before you start measuring time and timeEndPeriod(1) when you're done.
GetLocalTime and GetSystemTime
Before Vista, both GetLocalTime and GetSystemTime returned the current time with millisecond precision, but they were not accurate to a millisecond. Their accuracy was typically in the range of 10 to 55 milliseconds. (Precision is not the same as accuracy.)
On Vista, GetLocalTime and GetSystemTime both work with 1 ms resolution.
You can try to use clock(), which will provide the number of "ticks" between two points. A "tick" is clock()'s own unit of time; divide a tick difference by CLOCKS_PER_SEC to convert it to seconds.
As a side note, you can't use clock() to determine the actual time - only the number of ticks between two points in your program.
One caution on multiprocessor systems:
from http://msdn.microsoft.com/en-us/library/ms644904(VS.85).aspx
On a multiprocessor computer, it should not matter which processor is called. However, you can get different results on different processors due to bugs in the basic input/output system (BIOS) or the hardware abstraction layer (HAL). To specify processor affinity for a thread, use the SetThreadAffinityMask function.
Al Weiner
On Windows you can use the high-performance counter API. Check out QueryPerformanceCounter and QueryPerformanceFrequency for the details.
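Since the question is about C, here's a minimal sketch of how those two calls are typically combined (the counter frequency is fixed at boot, so it only needs to be queried once):

#include <windows.h>
#include <stdio.h>

int main(void)
{
    LARGE_INTEGER freq, start, end;
    double usec;

    QueryPerformanceFrequency(&freq);

    QueryPerformanceCounter(&start);
    /* ... the function to time ... */
    QueryPerformanceCounter(&end);

    usec = (double)(end.QuadPart - start.QuadPart) * 1000000.0 / (double)freq.QuadPart;
    printf("elapsed: %.3f microseconds\n", usec);
    return 0;
}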