Does anybody know how to get the real value of clocks per second? clock() from time.h returns the number of clock ticks since the start of my process, so it needs to be divided by CLOCKS_PER_SEC, but this constant always has the value 1000000.
Is there some POSIX standard for this?
That's how it's specified: the C standard leaves CLOCKS_PER_SEC implementation-defined, and POSIX (XSI) requires it to be exactly 1000000 regardless of the actual clock resolution.
If you want to measure elapsed time, there are other (and better) functions, such as gettimeofday().
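For instance, a minimal interval-timing sketch with gettimeofday() (POSIX, from <sys/time.h>) could look like this:

#include <stdio.h>
#include <sys/time.h>

int main(void) {
    struct timeval t0, t1;
    gettimeofday(&t0, NULL);
    /* ... work to be timed ... */
    gettimeofday(&t1, NULL);
    /* Combine seconds and microseconds into one double, in seconds. */
    double elapsed = (t1.tv_sec - t0.tv_sec) + (t1.tv_usec - t0.tv_usec) * 1e-6;
    printf("elapsed: %f s\n", elapsed);
    return 0;
}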
Related
I tried to measure the time of some operations on the TelosB platform. For that I wanted to count the processor's clock ticks with the clock() function from time.h, but it does not compile on Contiki.
Are there mechanisms to measure elapsed time, preferably in actual clock ticks, on Contiki?
Regards
The latest timer documentation is here: https://github.com/contiki-ng/contiki-ng/wiki/Documentation:-Timers
You can use the clock_time() function. However, its resolution is quite low (1/128 of a second). If you want to measure shorter time intervals, use rtimers: RTIMER_NOW() returns the time as a 16-bit integer with platform-specific resolution. On most platforms the rtimer clock has 32768 ticks per second, but on CC26xx/CC13xx platforms it has 65536 ticks per second.
See also: Contiki difference between RTIMER_NOW() and clock_time()
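As a rough sketch (assuming the Contiki-NG names clock_time(), CLOCK_SECOND, RTIMER_NOW() and RTIMER_SECOND; the measure() wrapper is just for illustration), timing an operation at both resolutions could look like:

#include "contiki.h"
#include "sys/rtimer.h"
#include <stdio.h>

void measure(void) {
    /* Coarse: clock_time() advances CLOCK_SECOND (128) times per second. */
    clock_time_t c0 = clock_time();
    /* Fine: RTIMER_NOW() advances RTIMER_SECOND times per second. */
    rtimer_clock_t r0 = RTIMER_NOW();

    /* ... operation under test ... */

    rtimer_clock_t r1 = RTIMER_NOW();
    clock_time_t c1 = clock_time();
    printf("coarse: %lu ticks, fine: %lu ticks\n",
           (unsigned long)(c1 - c0), (unsigned long)(r1 - r0));
}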
gettimeofday() is hardware-dependent, relying on the RTC.
Can someone suggest how we can avoid using it in application programming?
How can we approach the use of system ticks?
Thanks in advance!
To get time in ticks you might like to use times().
However, it is not clear whether those ticks are measured from boot time.
From man times:
RETURN VALUE
times() returns the number of clock ticks that have elapsed since an
arbitrary point in the past. [...]
[...]
NOTES
On Linux, the "arbitrary point in the past" from which the return
value of times() is measured has varied across kernel versions. On
Linux 2.4 and earlier this point is the moment the system was booted.
Since Linux 2.6, this point is (2^32/HZ) - 300 (i.e., about 429
million) seconds before system boot time. This variability across
kernel versions (and across UNIX implementations), combined with the
fact that the returned value may overflow the range of clock_t, means
that a portable application would be wise to avoid using this value.
To measure changes in elapsed time, use clock_gettime(2) instead.
Reading this, using clock_gettime() with the CLOCK_BOOTTIME clock might be the safer and more portable way to go. Whether this function and/or clock is available on systems without an RTC, I'm not sure; others are encouraged to clarify.
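A minimal sketch, assuming a Linux system where CLOCK_BOOTTIME exists (it is Linux-specific, not POSIX):

#define _GNU_SOURCE /* for CLOCK_BOOTTIME on glibc */
#include <stdio.h>
#include <time.h>

int main(void) {
    struct timespec ts;
    /* CLOCK_BOOTTIME counts since boot, including time spent suspended. */
    if (clock_gettime(CLOCK_BOOTTIME, &ts) != 0) {
        perror("clock_gettime");
        return 1;
    }
    printf("since boot: %lld.%09ld s\n", (long long)ts.tv_sec, ts.tv_nsec);
    return 0;
}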
I measured time with the clock() function, but it gave bad results: it reports the same time for a single-threaded program as for the same program running with OpenMP and many threads. Yet I can see with my watch that the multi-threaded program actually finishes faster.
So I need some wall-clock timer...
My question is: which function is better for this?
clock_gettime(), or maybe gettimeofday()? Or maybe something else?
If clock_gettime(), then with which clock: CLOCK_REALTIME or CLOCK_MONOTONIC?
(Using Mac OS X, Snow Leopard.)
If you want wall-clock time, and clock_gettime() is available, it's a good choice. Use it with CLOCK_MONOTONIC if you're measuring intervals of time, and CLOCK_REALTIME to get the actual time of day.
CLOCK_REALTIME gives you the actual time of day, but is affected by adjustments to the system time -- so if the system time is adjusted while your program runs that will mess up measurements of intervals using it.
CLOCK_MONOTONIC doesn't give you the correct time of day, but it does count at the same rate and is immune to changes to the system time -- so it's ideal for measuring intervals, but useless when correct time of day is needed for display or for timestamps.
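A minimal interval-measurement sketch with CLOCK_MONOTONIC (this assumes clock_gettime() exists; on Mac OS X it only appeared in 10.12, so on Snow Leopard you would have to fall back to gettimeofday() or mach_absolute_time()):

#include <stdio.h>
#include <time.h>

int main(void) {
    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    /* ... work to be timed ... */
    clock_gettime(CLOCK_MONOTONIC, &t1);
    /* Combine seconds and nanoseconds into one double, in seconds. */
    double elapsed = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) * 1e-9;
    printf("elapsed: %f s\n", elapsed);
    return 0;
}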
I think clock() counts the total CPU usage across all threads; I had this problem too...
The choice of wall-clock timing method is personal preference. I use an inline wrapper function to take time-stamps (take the difference of 2 time-stamps to time your processing). I've used floating point for convenience (units are in seconds, don't have to worry about integer overflow). With multi-threading, there are so many asynchronous events that in my opinion it doesn't make sense to time below 1 microsecond. This has worked very well for me so far :)
Whatever you choose, a wrapper is the easiest way to experiment:
#include <sys/time.h>

/* Wall-clock time in seconds, with microsecond resolution. */
static inline double my_clock(void) {
    struct timeval t;
    gettimeofday(&t, NULL);
    return 1.0e-6 * t.tv_usec + t.tv_sec;
}
usage:
double start_time, end_time;
start_time = my_clock();
//some multi-threaded processing
end_time = my_clock();
printf("time is %lf\n", end_time-start_time);
I'm using something like this to count how long it takes my program to run from start to finish:
#include <stdio.h>
#include <time.h>

int main(){
    clock_t startClock = clock();
    .... // many lines of code
    clock_t endClock = clock();
    printf("%ld", (long)((endClock - startClock) / CLOCKS_PER_SEC));
}
And my question is: since there are multiple processes running at the same time, if my process is idle for some amount of time, will the clock tick within my program during that time?
So basically my concern is: say 1000 clock cycles pass by but my process only uses 500 of them; will I get 500 or 1000 from (endClock - startClock)?
Thanks.
This depends on the OS. On Windows, clock() measures wall-time. On Linux/POSIX, it measures the combined CPU time of all the threads.
If you want wall-time on Linux, you should use gettimeofday().
If you want CPU-time on Windows, you should use GetProcessTimes().
EDIT:
So if you're on Windows, clock() will include idle time.
On Linux, clock() will not count idle time.
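For the Windows CPU-time case, a minimal GetProcessTimes() sketch (kernel + user time; FILETIME counts 100-nanosecond units):

#include <stdio.h>
#include <windows.h>

int main(void) {
    FILETIME creation, exit_, kernel, user;
    /* ... work to be timed ... */
    if (GetProcessTimes(GetCurrentProcess(), &creation, &exit_, &kernel, &user)) {
        ULARGE_INTEGER k, u;
        k.LowPart = kernel.dwLowDateTime; k.HighPart = kernel.dwHighDateTime;
        u.LowPart = user.dwLowDateTime;   u.HighPart = user.dwHighDateTime;
        printf("CPU time: %.3f s\n", (double)(k.QuadPart + u.QuadPart) / 1e7);
    }
    return 0;
}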
clock() on POSIX measures CPU time, but it usually has extremely poor resolution. Instead, modern programs should use clock_gettime() with the CLOCK_PROCESS_CPUTIME_ID clock id. This will give results with up to nanosecond resolution, and usually it really is just about that good.
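A minimal sketch, assuming a POSIX system that supports this clock id:

#include <stdio.h>
#include <time.h>

int main(void) {
    struct timespec a, b;
    clock_gettime(CLOCK_PROCESS_CPUTIME_ID, &a);
    /* ... CPU-bound work ... */
    clock_gettime(CLOCK_PROCESS_CPUTIME_ID, &b);
    /* CPU time consumed by this process, in seconds. */
    double cpu = (b.tv_sec - a.tv_sec) + (b.tv_nsec - a.tv_nsec) * 1e-9;
    printf("CPU time: %.9f s\n", cpu);
    return 0;
}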
As per the definition on the man page (in Linux),
The clock() function returns an approximation of processor time used
by the program.
it will try to be as accurate as possible, but, as you say, some time (during process switching, for example) is difficult to attribute to a process, so the numbers will be as accurate as possible, but not perfect.
I need a real-time clock in ANSI C which provides accuracy up to milliseconds.
I am working on Windows; a Windows-based lib is also acceptable. Thanks in advance.
You can't do it with portable code prior to C11.
Starting with C11, you can use timespec_get, which will often (but not necessarily) provide a resolution of milliseconds or better. Starting from C23, you can call timespec_getres to find out the resolution provided.
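A minimal C11 sketch with timespec_get() (the actual resolution behind tv_nsec is implementation-defined):

#include <stdio.h>
#include <time.h>

int main(void) {
    struct timespec ts;
    /* timespec_get() returns its base argument (TIME_UTC) on success, 0 on failure. */
    if (timespec_get(&ts, TIME_UTC) == TIME_UTC) {
        /* Milliseconds derived from the nanosecond field. */
        printf("%lld.%03ld\n", (long long)ts.tv_sec, ts.tv_nsec / 1000000L);
    }
    return 0;
}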
Since you're using Windows, you need to start out aware that Windows isn't a real-time system, so nothing you do is really guaranteed to be accurate. That said, you can start with timeBeginPeriod(1); to set the multimedia timer resolution to 1 millisecond. You can then call timeGetTime() to retrieve the current time with 1 ms resolution. When you're done doing timing, you call timeEndPeriod(1) to set the timer resolution back to the default.
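Put together, a minimal sketch (link against winmm.lib for the multimedia timer functions):

#include <stdio.h>
#include <windows.h>
#include <mmsystem.h> /* timeBeginPeriod/timeGetTime; link winmm.lib */

int main(void) {
    timeBeginPeriod(1); /* request 1 ms timer resolution */
    DWORD start = timeGetTime();
    /* ... work to be timed ... */
    DWORD end = timeGetTime();
    timeEndPeriod(1); /* restore the default resolution */
    printf("elapsed: %lu ms\n", (unsigned long)(end - start));
    return 0;
}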
You cannot be sure in ANSI C that the underlying system provides accuracy in milliseconds. However, to achieve maximum detail you can use the clock() function, which returns the time since process startup in ticks, where the number of ticks per second is defined by CLOCKS_PER_SEC:
#include <stdio.h>
#include <time.h>

int main(void) {
    double elapsed; /* in milliseconds */
    clock_t start, end;
    start = clock();
    /* do some work */
    end = clock();
    elapsed = ((double)(end - start) * 1000) / CLOCKS_PER_SEC;
    printf("elapsed: %f ms\n", elapsed);
    return 0;
}
From GNU's documentation.
The ANSI C clock() will give you a precision of milliseconds, but not the accuracy, since that depends on Windows' system clock, which is only accurate to around 50 ms or so. If you need something a little better, you can use the Windows API QueryPerformanceCounter, which has its own caveats as well.
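For comparison, a minimal QueryPerformanceCounter sketch:

#include <stdio.h>
#include <windows.h>

int main(void) {
    LARGE_INTEGER freq, t0, t1;
    QueryPerformanceFrequency(&freq); /* counts per second */
    QueryPerformanceCounter(&t0);
    /* ... work to be timed ... */
    QueryPerformanceCounter(&t1);
    double ms = (double)(t1.QuadPart - t0.QuadPart) * 1000.0 / freq.QuadPart;
    printf("elapsed: %.3f ms\n", ms);
    return 0;
}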