I measured time with the clock() function, but it gives bad results: it reports the same time for the program with one thread and for the same program running with OpenMP with many threads. Yet, going by my watch, the multi-threaded program clearly finishes faster.
So I need some wall-clock timer...
My question is: which function is better for this?
clock_gettime(), or maybe gettimeofday()? Or maybe something else?
And if clock_gettime(), then with which clock: CLOCK_REALTIME or CLOCK_MONOTONIC?
I'm using Mac OS X (Snow Leopard).
If you want wall-clock time, and clock_gettime() is available, it's a good choice. Use it with CLOCK_MONOTONIC if you're measuring intervals of time, and CLOCK_REALTIME to get the actual time of day.
CLOCK_REALTIME gives you the actual time of day, but is affected by adjustments to the system time -- so if the system time is adjusted while your program runs that will mess up measurements of intervals using it.
CLOCK_MONOTONIC doesn't give you the correct time of day, but it does count at the same rate and is immune to changes to the system time -- so it's ideal for measuring intervals, but useless when correct time of day is needed for display or for timestamps.
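For example, a minimal sketch of interval timing with CLOCK_MONOTONIC (assuming clock_gettime() is available; older Mac OS X releases such as Snow Leopard do not provide it, and gettimeofday() or mach_absolute_time() are the usual fallbacks there):
#include <stdio.h>
#include <time.h>

int main(void) {
    struct timespec start, end;

    clock_gettime(CLOCK_MONOTONIC, &start);
    /* ... the multi-threaded work you want to time ... */
    clock_gettime(CLOCK_MONOTONIC, &end);

    /* elapsed wall-clock time in seconds */
    double elapsed = (end.tv_sec - start.tv_sec)
                   + (end.tv_nsec - start.tv_nsec) / 1e9;
    printf("elapsed: %.9f s\n", elapsed);
    return 0;
}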
I think clock() counts the total CPU usage across all threads; I had this problem too...
The choice of wall-clock timing method is personal preference. I use an inline wrapper function to take time-stamps (take the difference of 2 time-stamps to time your processing). I've used floating point for convenience (units are in seconds, don't have to worry about integer overflow). With multi-threading, there are so many asynchronous events that in my opinion it doesn't make sense to time below 1 microsecond. This has worked very well for me so far :)
Whatever you choose, a wrapper is the easiest way to experiment:
#include <sys/time.h>   /* gettimeofday() */

/* returns the current wall-clock time in seconds as a double */
inline double my_clock(void) {
    struct timeval t;
    gettimeofday(&t, NULL);
    return (1.0e-6 * t.tv_usec + t.tv_sec);
}
usage:
double start_time, end_time;
start_time = my_clock();
//some multi-threaded processing
end_time = my_clock();
printf("time is %lf\n", end_time-start_time);
I have been trying to write C code where some numerical calculations involving time derivatives need to be performed in a real-time dynamic-system setting. For this purpose, I need the most accurate possible measurement of the time from one cycle to the next, stored in a variable called "dt":
static clock_t prev_time;
prev_time = clock();
while(1){
clock_t current_time = clock();
//do something
double dt = (double)(current_time - prev_time)/CLOCKS_PER_SEC;
prev_time = current_time;
}
However, when I test this code by integrating (continuously adding up) dt, i.e.:
static double elapsed_time = 0;
//the above-mentioned declaration and initialization is done here
while(1){
//the above-mentioned code is executed here
elapsed_time += dt;
printf("elapsed_time : %lf\n", elapsed_time);
}
I obtain results that are significantly lower than reality, by a factor that is constant at runtime, but seems to vary when I edit unrelated parts of the code (it ranges between 1/10 and about half the actual time).
My current guess is that clock() doesn't account for the time required for memory access (at several points throughout the code, I open an external text file and save data in it for diagnostic purposes).
But I am not sure if this is the case.
Also, I couldn't find any other way to accurately measure time.
Does anyone know why this is happening?
EDIT: the code is compiled and executed on a Raspberry Pi 3, and is used to implement a feedback controller.
UPDATE: As it turns out, clock() is not suitable for real-time numerical calculation, as it only accounts for the time used on the processor. I solved the problem by using the clock_gettime() function defined in <time.h>. This returns the global (wall-clock) time with nanosecond resolution (of course, this comes with a certain error, which depends on your clock resolution, but it is about the best thing out there).
The answer was surprisingly hard to find, so thanks to Ian Abbott for the very useful help
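For reference, a minimal sketch of that fix (assuming CLOCK_MONOTONIC, which is immune to system-time adjustments; the update above doesn't say which clock was used):
#include <stdio.h>
#include <time.h>

int main(void) {
    struct timespec prev_time, current_time;
    clock_gettime(CLOCK_MONOTONIC, &prev_time);

    while (1) {
        clock_gettime(CLOCK_MONOTONIC, &current_time);
        /* wall-clock dt in seconds, including time spent blocked on I/O */
        double dt = (current_time.tv_sec - prev_time.tv_sec)
                  + (current_time.tv_nsec - prev_time.tv_nsec) / 1e9;
        prev_time = current_time;
        printf("dt: %.9f s\n", dt);  /* the feedback-controller update would use dt here */
    }
}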
Does anyone know why this is happening?
clock measures CPU time, which is the number of time slices/clock ticks that the CPU spent on your process. This is useful when you do microbenchmarking, as there you often don't want to measure time that the CPU actually spent on other processes, but it's not useful when you want to measure how much time actually passes between events.
Instead, you should use time or gettimeofday to measure wall-clock time.
I am working on encryption of real-time data. I have developed the encryption and decryption algorithms. Now I want to measure their execution time on the Linux platform in C. How can I measure it correctly? I have tried it as below:
gettimeofday(&tv1, NULL);
/* Algorithm Implementation Code*/
gettimeofday(&tv2, NULL);
Total_Runtime = (tv2.tv_usec - tv1.tv_usec) +
                (tv2.tv_sec - tv1.tv_sec) * 1000000;
which gives me the time in microseconds. Is this the correct way to measure time, or should I use some other function? Any hint will be appreciated.
clock(): The value returned is the CPU time used so far as a clock_t;
Logic
Get CPU time at program beginning and at end. Difference is what you want.
Code
clock_t begin = clock();
/**** code ****/
clock_t end = clock();
double time_spent = (double)(end - begin) / CLOCKS_PER_SEC; // in seconds
To get the number of seconds used, we divide the difference by CLOCKS_PER_SEC.
More accurate
In C11, timespec_get() provides time measurement with up to nanosecond resolution. But the accuracy is implementation-defined and can vary.
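As a minimal sketch (plain C11, nothing else assumed):
#include <stdio.h>
#include <time.h>

int main(void) {
    struct timespec t1, t2;

    /* TIME_UTC is wall-clock time, so it can be affected by system time adjustments */
    timespec_get(&t1, TIME_UTC);
    /* ... code to measure ... */
    timespec_get(&t2, TIME_UTC);

    /* elapsed time in seconds; actual precision depends on the implementation */
    double seconds = (t2.tv_sec - t1.tv_sec)
                   + (t2.tv_nsec - t1.tv_nsec) / 1e9;
    printf("elapsed: %.9f s\n", seconds);
    return 0;
}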
Read time(7). You probably want to use clock_gettime(2) with CLOCK_PROCESS_CPUTIME_ID or CLOCK_MONOTONIC. Or you could just use clock(3) (which gives CPU time in microseconds, since POSIX requires CLOCKS_PER_SEC to be one million).
If you want to benchmark an entire program (executable), use time(1) command.
Measuring the execution time of a proper encryption code is simple, although a bit tedious. The runtime of a good encryption code is independent of the content of the input--no matter what you throw at it, it always needs the same number of operations per chunk of input. If it doesn't, you have a problem called a timing attack.
So the only thing you need to do is to unroll all loops, count the opcodes, and multiply each opcode by its number of clock ticks to get the exact runtime. There is one problem: some CPUs have a variable number of clock ticks for some of their operations, and you might have to change those to operations that have a fixed number of clock ticks. A pain in the behind, admitted.
If the only thing you want to know is whether the code runs fast enough to fit into a slot of your real-time OS, you can simply take the maximum and pad the faster cases with NOPs (your RTOS might have a routine for that).
To measure the execution time of a function in C, how accurate is the POSIX function gettimeofday() on Ubuntu 12.04 on an Intel i7? And why? If it's hard to say, then how do I find out? I can't find a straight answer on this.
If you are building a stopwatch, you want a consistent clock that does not adjust to any time servers.
#include <time.h>
...
struct timespec ts;
clock_gettime(CLOCK_MONOTONIC_RAW, &ts);
If CLOCK_MONOTONIC_RAW does not exist, use CLOCK_MONOTONIC instead. If you want time of day, you still should not use gettimeofday.
clock_gettime(CLOCK_REALTIME, &ts);
The timer behind CLOCK_REALTIME is high resolution, but it shifts as the system time is adjusted. struct timespec supports resolution as small as nanoseconds, but your computer will likely only be able to measure somewhere in the microseconds.
gettimeofday returns a timeval, which has a resolution in microseconds.
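If you want to see the resolution the system actually reports for these clocks (as opposed to the nanosecond field width of struct timespec), you can query it with clock_getres(); a small sketch:
#include <stdio.h>
#include <time.h>

int main(void) {
    struct timespec res;

    /* tv_sec is essentially always 0 here; tv_nsec holds the reported tick size */
    clock_getres(CLOCK_MONOTONIC, &res);
    printf("CLOCK_MONOTONIC resolution: %ld ns\n", res.tv_nsec);

    clock_getres(CLOCK_REALTIME, &res);
    printf("CLOCK_REALTIME resolution:  %ld ns\n", res.tv_nsec);
    return 0;
}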
gettimeofday should be accurate enough, at least for comparing relative performance. But, in order to smooth out inaccuracies due to context switching and I/O-related delays, you should run the function, say, at least hundreds of times and take the average.
You can tune the number of times you need to execute the function by making sure you get approximately the same result each time you benchmark it.
Also see this answer to measure the accuracy of gettimeofday in your system.
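A minimal sketch of that averaging approach (function_under_test() is just a placeholder for whatever you are benchmarking, and 1000 runs is an arbitrary starting point):
#include <stdio.h>
#include <sys/time.h>

/* placeholder for the function you actually want to benchmark */
static void function_under_test(void) {
    volatile double x = 0;
    for (int i = 0; i < 100000; i++)
        x += i;
}

int main(void) {
    const int runs = 1000;   /* increase until successive benchmarks agree */
    struct timeval t1, t2;

    gettimeofday(&t1, NULL);
    for (int i = 0; i < runs; i++)
        function_under_test();
    gettimeofday(&t2, NULL);

    double total = (t2.tv_sec - t1.tv_sec)
                 + (t2.tv_usec - t1.tv_usec) / 1e6;
    printf("average per call: %g s\n", total / runs);
    return 0;
}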
You can use gettimeofday() at the start and end of the code to find the difference between the start time and the finish time of the program, like this:
//start = gettimeofday
//your code here
//end = gettimeofday
The execution time will be the difference between the two.
But it depends on how much load is on the machine at that point, so it is not a very reliable method.
You can also use:
time ./a.out
assuming your program is a.out
In section 3.9 of the classic APUE (Advanced Programming in the UNIX Environment), the author measured the user/system time consumed by his sample program, which runs with varying buffer sizes (an I/O read/write program).
The result table looks something like this (all times are in seconds):
BUFF_SIZE USER_CPU SYSTEM_CPU CLOCK_TIME LOOPS
1 124.89 161.65 288.64 103316352
...
512 0.27 0.41 7.03 201789
...
I'm really curious how to measure the USER/SYSTEM CPU time for a piece of a program.
And in this example, what does CLOCK_TIME mean and how do I measure it?
Obviously it isn't simply the sum of the user CPU time and the system CPU time.
You could easily measure the running time of a program using the time command under *nix:
$ time myprog
real 0m2.792s
user 0m0.099s
sys 0m0.200s
The real or CLOCK_TIME refers to the wall-clock time, i.e. the time taken from the start of the program to its finish, and it even includes the time slices taken by other processes when the kernel context-switches them. It also includes any time the process is blocked (on I/O events, etc.).
The user or USER_CPU refers to the CPU time spent in the user space, i.e. outside the kernel. Unlike the real time, it refers to only the CPU cycles taken by the particular process.
The sys or SYSTEM_CPU refers to the CPU time spent in the kernel space, (as part of system calls). Again this is only counting the CPU cycles spent in kernel space on behalf of the process and not any time it is blocked.
In the time utility, the user and sys are calculated from either times() or wait() system calls. The real is usually calculated using the time differences in the 2 timestamps gathered using the gettimeofday() system call at the start and end of the program.
One more thing you might want to know is real != user + sys. On a multicore system the user or sys or their sum can quite easily exceed the real time.
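If you want the same user/system split from inside a program rather than from time(1), one option is the times() call mentioned above; a minimal sketch, assuming a POSIX system:
#include <stdio.h>
#include <sys/times.h>
#include <unistd.h>

int main(void) {
    struct tms start, end;
    long ticks_per_sec = sysconf(_SC_CLK_TCK);   /* clock ticks per second */

    times(&start);
    /* ... the work you want to account for ... */
    times(&end);

    printf("user CPU:   %.2f s\n",
           (double)(end.tms_utime - start.tms_utime) / ticks_per_sec);
    printf("system CPU: %.2f s\n",
           (double)(end.tms_stime - start.tms_stime) / ticks_per_sec);
    return 0;
}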
Partial answer:
Well, CLOCK_TIME is the same as the time shown by a clock, i.e. time passed in the so-called "real world".
One way to measure that is to use the POSIX function gettimeofday, which stores the time into the caller's struct timeval, containing a UNIX seconds field and a microseconds field (actual accuracy is often less). Example of using that in typical benchmark code (ignoring errors etc.):
struct timeval tv1, tv2;
gettimeofday(&tv1, NULL);
do_operation_to_measure();
gettimeofday(&tv2, NULL);
// get difference, fix value if microseconds became negative
struct timeval tvdiff = { tv2.tv_sec - tv1.tv_sec, tv2.tv_usec - tv1.tv_usec };
if (tvdiff.tv_usec < 0) { tvdiff.tv_usec += 1000000; tvdiff.tv_sec -= 1; }
// print it
printf("Elapsed time: %ld.%06ld\n", tvdiff.tv_sec, tvdiff.tv_usec);
I'm using something like this to count how long my program takes from start to finish:
#include <stdio.h>
#include <time.h>

int main(){
clock_t startClock = clock();
.... // many codes
clock_t endClock = clock();
printf("%ld", (endClock - startClock) / CLOCKS_PER_SEC);
}
And my question is: since there are multiple processes running at the same time, say my process is idle for x amount of time, will clock() tick within my program during that time?
So basically my concern is: say 1000 clock cycles pass by, but my process only uses 500 of them; will I get 500 or 1000 from (endClock - startClock)?
Thanks.
This depends on the OS. On Windows, clock() measures wall time. On Linux/POSIX, it measures the combined CPU time of all the threads.
If you want wall-time on Linux, you should use gettimeofday().
If you want CPU-time on Windows, you should use GetProcessTimes().
EDIT:
So if you're on Windows, clock() will measure idle time.
On Linux, clock() will not measure idle time.
clock on POSIX measures CPU time, but it usually has extremely poor resolution. Instead, modern programs should use clock_gettime with the CLOCK_PROCESS_CPUTIME_ID clock id. This will give up to nanosecond-resolution results, and usually it's really just about that good.
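A minimal sketch of that approach (the work in the middle is a placeholder); it counts only the CPU time consumed by this process, so idle or blocked time does not show up:
#include <stdio.h>
#include <time.h>

int main(void) {
    struct timespec start, end;

    clock_gettime(CLOCK_PROCESS_CPUTIME_ID, &start);
    /* ... many codes ... */
    clock_gettime(CLOCK_PROCESS_CPUTIME_ID, &end);

    /* CPU time used by this process, in seconds */
    double cpu_seconds = (end.tv_sec - start.tv_sec)
                       + (end.tv_nsec - start.tv_nsec) / 1e9;
    printf("CPU time: %.9f s\n", cpu_seconds);
    return 0;
}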
As per the definition on the man page (in Linux),
The clock() function returns an approximation of processor time used by the program.
It will try to be as accurate as possible, but, as you say, some time (process switching, for example) is difficult to attribute to a process, so the numbers will be as accurate as possible, but not perfect.