A common way to measure elapsed time is:
const clock_t START = clock();
// ...
const clock_t END = clock();
double T_ELAPSED = (double)(END - START) / CLOCKS_PER_SEC;
I know this is not the best way to measure real time, but I wonder if it works on a system with a variable frequency CPU. Is it just wrong?
There are system architectures that change the frequency of the CPU but drive the system clock from a separate, constant-frequency source. One would expect a clock() function to return a time independent of the CPU frequency, but this would have to be verified on each system the code is intended to run on.
It is not good to use on a variable clock speed CPU.
http://support.ntp.org/bin/view/Support/KnownHardwareIssues
The NTP (Network Time Protocol) daemon on Linux had issues with it.
Most OSes provide API calls for more accurate values; an example on Windows is QueryPerformanceCounter:
http://msdn.microsoft.com/en-us/library/ms644904%28VS.85%29.aspx
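Below is a minimal sketch of wall-clock timing with QueryPerformanceCounter (this assumes a Windows build environment; the counter runs at a fixed rate independent of CPU frequency scaling):
#include <windows.h>
#include <stdio.h>

int main(void) {
    LARGE_INTEGER freq, start, end;
    QueryPerformanceFrequency(&freq);   /* ticks per second, constant after boot */
    QueryPerformanceCounter(&start);
    /* ... work to be timed ... */
    QueryPerformanceCounter(&end);
    printf("%f s\n", (double)(end.QuadPart - start.QuadPart) / (double)freq.QuadPart);
    return 0;
}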
Related
What's the relationship between the real CPU frequency and the clock_t (the unit is clock tick) in C?
Let's say I have the below piece of C code which measures the time that CPU consumed for running a for loop.
But since CLOCKS_PER_SEC is a constant value (basically 1,000,000) in the C standard library, I wonder how the clock function measures the real CPU cycles consumed by the program while it runs on different computers with different CPU frequencies (for my laptop, it is 2.6 GHz).
And if they are not relevant, how does the CPU timer work in the mentioned scenario?
#include <time.h>
#include <stdio.h>

int main(void) {
    clock_t start_time = clock();
    for (int i = 0; i < 10000; i++) {}
    clock_t end_time = clock();
    printf("%fs\n", (double)(end_time - start_time) / CLOCKS_PER_SEC);
    return 0;
}
Effectively, clock_t values are unrelated to the CPU frequency.
See longer explanation here.
While clock_t values could, in theory, have represented actual physical CPU clock ticks, in practice they do not: POSIX mandates that CLOCKS_PER_SEC equal 1,000,000 (one million). Thus the clock() function returns a value in microseconds.
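If you want to confirm this on a given system, a trivial sketch is to print the constant (the value is guaranteed to be 1,000,000 only on POSIX systems):
#include <stdio.h>
#include <time.h>

int main(void) {
    /* POSIX mandates 1000000; the C standard only requires some constant. */
    printf("CLOCKS_PER_SEC = %ld\n", (long)CLOCKS_PER_SEC);
    return 0;
}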
There is no such thing as "the real CPU frequency". Not in everyone's laptop, at any rate.
On many systems, the OS can lower and raise the CPU clock speed as it sees fit. On some systems there is more than one kind of central processor or core, each with a different speed. Some CPUs are clockless (asynchronous).
Because of all this and for other reasons, most computers measure time with a separate clock device, independent from the CPU clock (if any).
For producing the information printed by the shown code, measuring, knowing, or using CPU cycles is not relevant.
For providing the elapsed time, it is only necessary to measure the time.
Reading a hardware timer would be one way to do so.
Most computers (even non-embedded ones) contain timers that count ticks of a clock with a known, constant frequency. (They are specifically not "CPU timers".)
Such a timer can be read and yields a value that increases once per tick (of constant period). Here "known period" means a period known to some appropriate driver for that timer; simplified, "known to the clock() function, not necessarily known to you". A minimal sketch of reading such a timer follows the list below.
Note that even if the number of used CPU cycles were known, calculating the elapsed time from that info is near impossible nowadays, in the presence of:
pipelines
parallelism
interrupts
branch prediction
More things that influence or prevent the calculation, contributed in the comments:
frequency-scaling, temperature throttling and power settings
(David C. Rankin)
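As mentioned above, here is a minimal sketch of measuring elapsed time by reading a constant-rate system clock rather than counting CPU cycles, using C11's timespec_get() (the work being timed is a placeholder):
#include <stdio.h>
#include <time.h>

int main(void) {
    struct timespec start, end;
    timespec_get(&start, TIME_UTC);   /* reads a constant-rate system clock */
    /* ... work to be timed ... */
    timespec_get(&end, TIME_UTC);
    double elapsed = (double)(end.tv_sec - start.tv_sec)
                   + (end.tv_nsec - start.tv_nsec) / 1e9;
    printf("%f s\n", elapsed);
    return 0;
}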
I've always used clock() to measure how much time my application took from start to finish, as:
int main(int argc, char *argv[]) {
    const clock_t START = clock();
    // ...
    const double T_ELAPSED = (double)(clock() - START) / CLOCKS_PER_SEC;
}
Since I've started using POSIX threads this seems to fail. It looks like clock() increases N times faster with N threads. As I don't know how many threads are going to be running simultaneously, this approach fails. So how can I measure how much time has passed?
clock() measures the CPU time used by your process, not the wall-clock time. When you have multiple threads running simultaneously, you can obviously burn through CPU time much faster.
If you want to know the wall-clock execution time, you need to use an appropriate function. The only one in ANSI C is time(), which typically only has 1 second resolution.
However, as you've said you're using POSIX, that means you can use clock_gettime(), defined in time.h. The CLOCK_MONOTONIC clock in particular is the best to use for this:
struct timespec start, finish;
double elapsed;
clock_gettime(CLOCK_MONOTONIC, &start);
/* ... */
clock_gettime(CLOCK_MONOTONIC, &finish);
elapsed = (finish.tv_sec - start.tv_sec);
elapsed += (finish.tv_nsec - start.tv_nsec) / 1000000000.0;
(Note that I have done the calculation of elapsed carefully to ensure that precision is not lost when timing very short intervals).
If your OS doesn't provide CLOCK_MONOTONIC (which you can check at runtime with sysconf(_SC_MONOTONIC_CLOCK)), then you can use CLOCK_REALTIME as a fallback - but note that the latter has the disadvantage that it will generate incorrect results if the system time is changed while your process is running.
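A hedged sketch of that runtime check (the helper name pick_clock is hypothetical):
#include <time.h>
#include <unistd.h>

/* Prefer CLOCK_MONOTONIC when the system advertises it via sysconf,
   otherwise fall back to CLOCK_REALTIME. */
static clockid_t pick_clock(void) {
#ifdef _SC_MONOTONIC_CLOCK
    if (sysconf(_SC_MONOTONIC_CLOCK) > 0)
        return CLOCK_MONOTONIC;
#endif
    return CLOCK_REALTIME;
}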
What timing resolution do you need? You could use time() from time.h for second resolution. If you need higher resolution, then you could use something more system specific. See Timer function to provide time in nano seconds using C++
I've been wanting a pause function for a while and I found this. Being a beginner in C, I can't be sure, but it looks like it uses functions from <clock.h>.
I would like to implement this into my code, but not without understanding it.
void wait(int seconds){
    clock_t start, end;
    start = clock();
    end = clock();
    while (((end-start) / CLOCKS_PER_SEC) = !seconds)
        end = clock();
}
It's just a busy-wait loop, which is a very nasty way of implementing a delay, because it pegs the CPU at 100% while doing nothing. Use sleep() instead:
#include <unistd.h>

void wait(int seconds)
{
    sleep(seconds);
}
Also note that the code given in the question is buggy:
while (((end-start) / CLOCKS_PER_SEC) = !seconds)
should be:
while (((end-start) / CLOCKS_PER_SEC) != seconds)
or better still:
while (((end-start) / CLOCKS_PER_SEC) < seconds)
(but as mentioned above, you shouldn't even be using this code anyway).
Generally, clock_t is an arithmetic type defined in the standard library's <time.h>. The clock() function returns the number of clock ticks elapsed from program initiation up to the point where you call it.
If you want to know more you can read details here: http://www.tutorialspoint.com/c_standard_library/c_function_clock.htm
The clock() function returns the number of clock ticks of processor time that have occurred since the program began execution.
Here is the prototype: clock_t clock(void);
Note that the return value is not in seconds. To convert the result into seconds, you have to divide the return value by the CLOCKS_PER_SEC macro.
Your program does just this: functionally, it stops program execution for seconds seconds (you pass this value as an argument to the wait function).
By the way, it uses time.h, not clock.h, and this is not the best code from which to start learning C.
To learn more: http://www.cplusplus.com/reference/ctime/?kw=time.h
clock function in <ctime>:
Returns the processor time consumed by the program.
The value returned is expressed in clock ticks, which are units of
time of a constant but system-specific length (with a relation of
CLOCKS_PER_SEC clock ticks per second).
reference
So, basically it returns the number of processor ticks that have passed since the start of the program. A processor tick here counts time the process spends executing on the CPU; it does not account for I/O time or anything else that does not use the CPU.
CLOCKS_PER_SEC is the number of clock ticks per second used to convert clock()'s result into seconds. The C standard only requires it to be a system-specific constant (POSIX fixes it at 1,000,000); it does not vary over time on a given machine. Heavy I/O does not change the constant; it only reduces how fast your process accumulates ticks.
Also, this statement: ((end-start) / CLOCKS_PER_SEC) = !seconds
is not correct, because the right implementation is
while (((end-start) / CLOCKS_PER_SEC) != seconds)
end = clock();
This does the trick of busy waiting: the program is trapped inside the while loop until seconds seconds of CPU time have passed, using clock ticks and CLOCKS_PER_SEC to determine the elapsed time.
Although I would suggest changing it to:
while (((end-start) / CLOCKS_PER_SEC) < seconds)
end = clock();
Because if the process has low priority, or the computer is busy handling many other processes, more than one second of CPU time may pass between two samples of clock(), so the != test can skip past the exact value and the loop would never terminate.
Finally, I do not recommend using it, because you are still burning CPU while waiting, which can be avoided by using the sleep tools discussed here.
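As one such alternative, here is a minimal sketch using POSIX nanosleep(), which yields the CPU while waiting instead of spinning (the helper name wait_seconds is hypothetical):
#include <time.h>

void wait_seconds(int seconds) {
    struct timespec req = { .tv_sec = seconds, .tv_nsec = 0 };
    nanosleep(&req, NULL);   /* sleeps without consuming CPU */
}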
I'm using something like this to count how long my program takes from start to finish:
int main(){
    clock_t startClock = clock();
    .... // many codes
    clock_t endClock = clock();
    printf("%ld", (endClock - startClock) / CLOCKS_PER_SEC);
}
And my question is: since there are multiple processes running at the same time, say for x amount of time my process is idle, will the clock tick within my program during that time?
So basically my concern is, say there's 1000 clock cycle passed by, but my process only uses 500 of them, will I get 500 or 1000 from (endClock - startClock)?
Thanks.
This depends on the OS. On Windows, clock() measures wall-time. On Linux/Posix, it measures the combined CPU time of all the threads.
If you want wall-time on Linux, you should use gettimeofday().
If you want CPU-time on Windows, you should use GetProcessTimes().
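For instance, a minimal wall-clock sketch with gettimeofday() on a POSIX system (the work being timed is a placeholder):
#include <stdio.h>
#include <sys/time.h>

int main(void) {
    struct timeval start, end;
    gettimeofday(&start, NULL);
    /* ... work to be timed ... */
    gettimeofday(&end, NULL);
    double elapsed = (double)(end.tv_sec - start.tv_sec)
                   + (end.tv_usec - start.tv_usec) / 1e6;
    printf("%f s\n", elapsed);
    return 0;
}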
EDIT:
So if you're on Windows, clock() will measure idle time.
On Linux, clock() will not measure idle time.
clock on POSIX measures CPU time, but it usually has extremely poor resolution. Instead, modern programs should use clock_gettime with the CLOCK_PROCESS_CPUTIME_ID clock ID. This will give up to nanosecond-resolution results, and usually it's really just about that good.
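A minimal sketch of that approach (POSIX; older glibc versions may require linking with -lrt):
#include <stdio.h>
#include <time.h>

int main(void) {
    struct timespec start, end;
    clock_gettime(CLOCK_PROCESS_CPUTIME_ID, &start);
    /* ... work to be timed ... */
    clock_gettime(CLOCK_PROCESS_CPUTIME_ID, &end);
    double cpu = (double)(end.tv_sec - start.tv_sec)
               + (end.tv_nsec - start.tv_nsec) / 1e9;
    printf("%f s of CPU time\n", cpu);
    return 0;
}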
As per the definition on the man page (in Linux),
The clock() function returns an approximation of processor time used
by the program.
it will try to be as accurate as possible, but as you say, some time (process switching, for example) is difficult to attribute to a process, so the numbers will be as accurate as possible, but not perfect.