Measuring time in milli- or microseconds in C Language - c

I am using Microsoft Visual Studio 2010. I want to measure time in microseconds in C on the Windows 7 platform. How can I do that?

The way to get accurate time measurements is via performance counters.
In Windows, you can use QueryPerformanceCounter() and QueryPerformanceFrequency():
http://msdn.microsoft.com/en-us/library/windows/desktop/ms644904%28v=vs.85%29.aspx
EDIT: Here's a simple example that measures the time needed to sum the integers below 1000000000:
#include <stdio.h>
#include <windows.h>

LARGE_INTEGER frequency;
LARGE_INTEGER start;
LARGE_INTEGER end;

// Get the counter frequency (ticks per second)
QueryPerformanceFrequency(&frequency);

// Start timer
QueryPerformanceCounter(&start);

// Do some work
__int64 sum = 0;
int c;
for (c = 0; c < 1000000000; c++) {
    sum += c;
}
printf("sum = %lld\n", sum);

// End timer
QueryPerformanceCounter(&end);

// Print difference
double duration = (double)(end.QuadPart - start.QuadPart) / frequency.QuadPart;
printf("Seconds = %f\n", duration);
Output:
sum = 499999999500000000
Seconds = 0.659352

See QueryPerformanceCounter and QueryPerformanceFrequency.


Measuring processor ticks in C

I wanted to calculate the difference in execution time when executing the same code inside a function. To my surprise, however, sometimes the clock difference is 0 when I use clock()/clock_t for the start and stop timers. Does this mean that clock()/clock_t does not actually return the number of clicks the processor spent on the task?
After a bit of searching, it seemed to me that clock_gettime() would return more fine-grained results. And indeed it does, but I instead end up with an arbitrary number of nano(?)seconds. It gives a hint of the difference in execution time, but it's hardly accurate as to exactly how many clicks of difference it amounts to. What would I have to do to find this out?
#include <math.h>
#include <stdio.h>
#include <time.h>

#define M_PI_DOUBLE (M_PI * 2)

void rotatetest(const float *x, const float *c, float *result) {
    float rotationfraction = *x / *c;
    *result = M_PI_DOUBLE * rotationfraction;
}

int main() {
    int i;
    long test_total = 0;
    int test_count = 1000000;
    struct timespec test_time_begin;
    struct timespec test_time_end;
    float r = 50.f;
    float c = 2 * M_PI * r;
    float x = 3.f;
    float result_inline = 0.f;
    float result_function = 0.f;

    for (i = 0; i < test_count; i++) {
        clock_gettime(CLOCK_PROCESS_CPUTIME_ID, &test_time_begin);
        float rotationfraction = x / c;
        result_inline = M_PI_DOUBLE * rotationfraction;
        clock_gettime(CLOCK_PROCESS_CPUTIME_ID, &test_time_end);
        test_total += test_time_end.tv_nsec - test_time_begin.tv_nsec;
    }
    printf("Inline clocks %li, avg %f (result is %f)\n", test_total, test_total / (float)test_count, result_inline);

    test_total = 0; // reset before timing the function version
    for (i = 0; i < test_count; i++) {
        clock_gettime(CLOCK_PROCESS_CPUTIME_ID, &test_time_begin);
        rotatetest(&x, &c, &result_function);
        clock_gettime(CLOCK_PROCESS_CPUTIME_ID, &test_time_end);
        test_total += test_time_end.tv_nsec - test_time_begin.tv_nsec;
    }
    printf("Function clocks %li, avg %f (result is %f)\n", test_total, test_total / (float)test_count, result_function);
    return 0;
}
I am using gcc version 4.8.4 on Linux 3.13.0-37-generic (Linux Mint 16)
First of all: as already mentioned in the comments, clocking a single execution of the operation will probably do you no good. In the worst case, the call to get the time may actually take longer than the operation itself.
Please clock multiple runs of the operation (including a warm-up phase, so everything is swapped in) and calculate the average running time.
clock() isn't guaranteed to be monotonic. It also isn't the number of processor clicks (whatever you define this to be) the program has run. The best way to describe the result from clock() is probably "a best effort estimation of the time any one of the CPUs has spent on calculation for the current process". For benchmarking purposes clock() is thus mostly useless.
As per specification:
The clock() function returns the implementation's best approximation to the processor time used by the process since the beginning of an implementation-dependent time related only to the process invocation.
And additionally
To determine the time in seconds, the value returned by clock() should be divided by the value of the macro CLOCKS_PER_SEC.
So, if you call clock() more often than the resolution, you are out of luck.
For profiling/benchmarking, you should, if possible, use one of the performance clocks that are available on modern hardware. The prime candidates are probably
The HPET
The TSC
Edit: The question now references CLOCK_PROCESS_CPUTIME_ID, which is Linux's way of exposing a per-process CPU-time clock (typically backed by the TSC).
Whether either (or both) is available depends on the hardware and is also operating-system specific.
After googling a little, I can see that the clock() function can be used as a standard mechanism to find the time taken for execution, but be aware that the measured time will vary from run to run depending on the load on your processor.
You can just use the below code for calculation
clock_t begin, end;
double time_spent;
begin = clock();
/* here, do your time-consuming job */
end = clock();
time_spent = (double)(end - begin) / CLOCKS_PER_SEC;

How to define a loop that runs for some seconds/minutes

My purpose is to execute a while loop for a defined time (e.g. 90 seconds for this example). It does not have to be exactly 90 s; 1-2 seconds of inaccuracy is acceptable. I tried to use the clock() function for this purpose:
int main(void) {
    clock_t start, end;
    volatile double elapsed;
    start = clock();
    int terminate = 1;
    while (terminate)
    {
        end = clock();
        elapsed = ((double) (end - start)) / (double) CLOCKS_PER_SEC * 1000;
        printf("elapsed time:%f\n", elapsed);
        if (elapsed >= 90.0)
            terminate = 0;
        usleep(50000);
    }
    printf("done..\n");
    return 0;
}
When I run it on my laptop (x86, 3.13 kernel, gcc 4.8.2), my stopwatch measures 72 seconds for it to complete. (The factor of 1000 was necessary to get elapsed into the right range on my laptop.)
When I run it on an ARM device (armv5tejl, 3.12 kernel, gcc 4.6.3), it takes 58 seconds to complete. (I needed to use 100 instead on the armv5.)
I ran the code at room temperature, so the clock should be stable. I know that the kernel sleeps threads and is inaccurate about when it wakes them up, etc. Therefore, as I said, I don't expect perfect timing, but it should have some accuracy.
I had tried using only usleep (even nanosleep), but the resolution was not good either. In the end I came up with the code below, which fetches the system time (hour, minute, second) and then calculates the elapsed time. It works with good accuracy.
I wonder if there is another solution that would be less costly to use?
typedef struct {
    int hour;
    int minute;
    int second;
} timeInfo;

timeInfo getTimeInfo(void) {
    timeInfo value2return;
    time_t rawtime;
    struct tm *timeinfo;

    time(&rawtime);
    timeinfo = localtime(&rawtime);
    value2return.hour = timeinfo->tm_hour;
    value2return.minute = timeinfo->tm_min;
    value2return.second = timeinfo->tm_sec;
    return value2return;
}
int checkElapsedTime(const timeInfo *Start, const timeInfo *Stop, const int Reference) {
    if (Stop->hour < Start->hour) {
        printf("1:%d\n", (Stop->hour + 24) * 3600 + Stop->minute * 60 + Stop->second - (Start->hour * 3600 + Start->minute * 60 + Start->second));
        if (((Stop->hour + 24) * 3600 + Stop->minute * 60 + Stop->second - (Start->hour * 3600 + Start->minute * 60 + Start->second)) >= Reference)
            return 0; // while(0): terminate the loop
        else
            return 1; // while(1)
    } else {
        printf("2:%d\n", Stop->hour * 3600 + Stop->minute * 60 + Stop->second - (Start->hour * 3600 + Start->minute * 60 + Start->second));
        if ((Stop->hour * 3600 + Stop->minute * 60 + Stop->second - (Start->hour * 3600 + Start->minute * 60 + Start->second)) >= Reference)
            return 0;
        else
            return 1;
    }
}
int main(void) {
    timeInfo stop, start = getTimeInfo();
    int terminate = 1;

    while (terminate)
    {
        stop = getTimeInfo();
        terminate = checkElapsedTime(&start, &stop, 90);
        usleep(5000); // to decrease the CPU load
    }

    printf("terminated\n");
    return 0;
}
Lastly, I need to run it inside a pthread.
Use time() vs. clock(). The coding goal is to determine the wall time elapsed, not the processor time used.
The current code calculates the process time elapsed, times 1000, and compares that to 90 seconds.
clock(), which #uesp implied, measures processor time: "The clock function determines the processor time used." C11dr §7.27.2.1 2.
time() determines calendar time: "The time function determines the current calendar time." §7.27.2.4 2.
difftime() does a nice job of finding the difference between two time_t values (in whatever units/type they are) and returning the difference in seconds.
int main(void) {
    time_t start, end;
    double elapsed; // seconds
    start = time(NULL);
    int terminate = 1;
    while (terminate) {
        end = time(NULL);
        elapsed = difftime(end, start);
        if (elapsed >= 90.0 /* seconds */)
            terminate = 0;
        else // No need to sleep when 90.0 seconds have elapsed.
            usleep(50000);
    }
    printf("done..\n");
    return 0;
}
Minor: when using clock(), there is no need for * 1000. Note: on a Windows-based machine running gcc, for me, clock() also returned the calling process's CPU time.
elapsed = ((double) (end-start)) / (double) CLOCKS_PER_SEC *1000; // before
elapsed = ((double) (end-start)) / CLOCKS_PER_SEC;                // after
Minor: No need for volatile. elapsed is only changing due to this code.
// volatile double elapsed;
double elapsed;
The reason your first version doesn't seem to work is that on Linux clock() measures the used CPU time and not the real time (see here). Since you are sleeping the process then the real and CPU times don't match up. The solution is to check the real clock time as in your second example.
Note that on Windows clock() does give you the real clock time (see here).
Use alarm() and catch the signal. The signal handler will interrupt the process's execution. You could also try pthread_cancel. Loop- or sleep-based methods with an individual step time t can be inaccurate by up to t. If the loop is a long-running, tight execution path, sleeping or breaking out will not solve your problem at all.

CPU usage in C (as percentage)

How can I get CPU usage as percentage using C?
I have a function like this:
static int cpu_usage(lua_State *L) {
    clock_t clock_now = clock();
    double cpu_percentage = ((double) (clock_now - program_start)) / get_cpus() / CLOCKS_PER_SEC;
    lua_pushnumber(L, cpu_percentage);
    return 1;
}
"program_start" is a clock_t that I use when the program starts.
Another try:
static int cpu_usage(lua_State *L) {
    struct rusage ru;
    getrusage(RUSAGE_SELF, &ru);
    lua_pushnumber(L, ru.ru_utime.tv_sec);
    return 1;
}
Is there any way to measure CPU usage as a percentage? If I call this function from time to time it keeps returning an increasing time... but that's not what I want.
PS: I'm using Ubuntu.
Thank you! =)
Your function should work as expected. From clock
The clock() function shall return the implementation's best approximation to the processor time used by the process since the beginning of an implementation-defined era related only to the process invocation.
This means, it returns the CPU time for this process.
If you want to calculate the CPU time relative to the wall clock time, you must do the same with gettimeofday. Save the time at program start
struct timeval wall_start;
gettimeofday(&wall_start, NULL);
and when you want to calculate the percentage
struct timeval wall_now;
gettimeofday(&wall_now, NULL);
Now you can calculate the difference of wall clock time and you get
double start = wall_start.tv_sec + wall_start.tv_usec / 1000000.0;
double stop = wall_now.tv_sec + wall_now.tv_usec / 1000000.0;
double wall_time = stop - start;
double cpu_time = ...;
double percentage = cpu_time / wall_time;

Measuring elapsed time in linux for a c program

I am trying to measure elapsed time in Linux. My timing code keeps returning zero, which makes no sense to me. Below is the way I measure time in my program.
#include <stdio.h>
#include <stdlib.h>
#include <math.h>
#include <time.h>
main()
{
    double p16 = 1, pi = 0, precision = 1000;
    int k;
    unsigned long micros = 0;
    float millis = 0.0;
    clock_t start, end;

    start = clock();

    // This section calculates pi
    for (k = 0; k <= precision; k++)
    {
        pi += 1.0 / p16 * (4.0 / (8 * k + 1) - 2.0 / (8 * k + 4) - 1.0 / (8 * k + 5) - 1.0 / (8 * k + 6));
        p16 *= 16;
    }

    end = clock();
    micros = end - start;
    millis = micros / 1000;

    printf("%f\n", millis); // my time keeps being returned as 0
    printf("this value of pi is : %f\n", pi);
}
Three alternatives
clock()
gettimeofday()
clock_gettime()
clock_gettime() goes up to nanosecond accuracy and supports four clocks:
CLOCK_REALTIME
System-wide realtime clock. Setting this clock requires appropriate privileges.
CLOCK_MONOTONIC
Clock that cannot be set and represents monotonic time since some unspecified starting point.
CLOCK_PROCESS_CPUTIME_ID
High-resolution per-process timer from the CPU.
CLOCK_THREAD_CPUTIME_ID
Thread-specific CPU-time clock.
You can use it as:
#include <time.h>

struct timespec start, stop;

clock_gettime(CLOCK_PROCESS_CPUTIME_ID, &start);
/// do something
clock_gettime(CLOCK_PROCESS_CPUTIME_ID, &stop);

double result = (stop.tv_sec - start.tv_sec) * 1e6 + (stop.tv_nsec - start.tv_nsec) / 1e3; // in microseconds
Note: The clock() function returns CPU time for your process, not wall clock time. I believe this is what the OP was interested in. If wall clock time is desired, then gettimeofday() is a good choice as suggested by an earlier answer. clock_gettime() can do either one if your system supports it; on my linux embedded system clock_gettime() is not supported, but clock() and gettimeofday() are.
Below is the code for getting wall clock time using gettimeofday()
#include <stdio.h>    // for printf()
#include <sys/time.h> // for gettimeofday()
#include <unistd.h>   // for usleep()

int main() {
    struct timeval start, end;
    long secs_used, micros_used;

    gettimeofday(&start, NULL);
    usleep(1250000); // Do the stuff you want to time here
    gettimeofday(&end, NULL);

    printf("start: %ld secs, %ld usecs\n", (long)start.tv_sec, (long)start.tv_usec);
    printf("end: %ld secs, %ld usecs\n", (long)end.tv_sec, (long)end.tv_usec);

    secs_used = (end.tv_sec - start.tv_sec); // avoid overflow by subtracting first
    micros_used = ((secs_used * 1000000) + end.tv_usec) - (start.tv_usec);

    printf("micros_used: %ld\n", micros_used);
    return 0;
}
To start with, you need to use floating-point arithmetic. Any integer value divided by a larger integer value will always be zero.
And of course you should actually do something between getting the start and end times.
By the way, if you have access to gettimeofday, it's normally preferred over clock since it has higher resolution. Or maybe clock_gettime, which has even higher resolution.
There are two issues with your code as written.
According to man 3 clock, the resolution of clock() is in CLOCKS_PER_SEC increments per second. On a recent-ish Cygwin system, it's 200. Based on the names of your variables, you are expecting the value to be 1,000,000.
This line:
millis = micros / 1000;
will compute the quotient as an integer, because both operands are integers. The promotion to a floating-point type occurs at the time of the assignment to millis, at which point the fractional part has already been discarded.
To compute the number of seconds elapsed using clock(), you need to do something like this:
clock_t start, end;
float seconds;

start = clock();
// Your code here:
end = clock();

seconds = end - start;     // time difference is now a float
seconds /= CLOCKS_PER_SEC; // this division is now floating point
However, you will almost certainly not get millisecond accuracy. For that, you would need to use gettimeofday() or clock_gettime(). Further, you probably want to use double instead of float, because you are likely going to wind up subtracting very large numbers with a very tiny difference. The example using clock_gettime() would be:
#include <stdio.h>
#include <time.h>

/* Floating-point nanoseconds per second */
#define NANO_PER_SEC 1000000000.0

int main(void)
{
    struct timespec start, end;
    double start_sec, end_sec, elapsed_sec;

    clock_gettime(CLOCK_REALTIME, &start);
    // Your code here
    clock_gettime(CLOCK_REALTIME, &end);

    start_sec = start.tv_sec + start.tv_nsec / NANO_PER_SEC;
    end_sec = end.tv_sec + end.tv_nsec / NANO_PER_SEC;
    elapsed_sec = end_sec - start_sec;

    printf("The operation took %.3f seconds\n", elapsed_sec);
    return 0;
}
Since NANO_PER_SEC is a floating-point value, the division operations are carried out in floating-point.
Sources:
man pages for clock(3), gettimeofday(3), and clock_gettime(3).
The C Programming Language, Kernighan and Ritchie
Try Sunil D S's answer but change micros from unsigned long to type float or double, like this:
double micros;
float seconds;
clock_t start, end;

start = clock();
/* Do something here */
end = clock();

micros = end - start;       /* POSIX defines CLOCKS_PER_SEC as 1000000, so here ticks are microseconds */
seconds = micros / 1000000; /* more portably, divide by CLOCKS_PER_SEC */
Alternatively, you could use rusage, like this:
struct rusage before;
struct rusage after;
float a_cputime, b_cputime, e_cputime;
float a_systime, b_systime, e_systime;
getrusage(RUSAGE_SELF, &before);
/* Do something here! or put in loop and do many times */
getrusage(RUSAGE_SELF, &after);
a_cputime = after.ru_utime.tv_sec + after.ru_utime.tv_usec / 1000000.0;
b_cputime = before.ru_utime.tv_sec + before.ru_utime.tv_usec / 1000000.0;
e_cputime = a_cputime - b_cputime;
a_systime = after.ru_stime.tv_sec + after.ru_stime.tv_usec / 1000000.0;
b_systime = before.ru_stime.tv_sec + before.ru_stime.tv_usec / 1000000.0;
e_systime = a_systime - b_systime;
printf("CPU time (secs): user=%.4f; system=%.4f; real=%.4f\n",e_cputime, e_systime, seconds);
Units and precision depend on how much time you want to measure, but either of these should provide reasonable accuracy for milliseconds.
When you divide, you might end up with a fractional value, hence you need a floating-point number to store the number of milliseconds.
If you don't use a floating-point type, the fractional part is truncated. In your piece of code, start and end are ALMOST the same, hence the result after division, when stored in a long, is 0.
unsigned long micros = 0;
float millis = 0.0;
clock_t start, end;

start = clock();
// code goes here
end = clock();

micros = end - start;
millis = micros / 1000;

Time stamp in the C programming language

How do I stamp two times t1 and t2 and get the difference in milliseconds in C?
This will give you the time in seconds + microseconds
#include <sys/time.h>
struct timeval tv;
gettimeofday(&tv,NULL);
tv.tv_sec // seconds
tv.tv_usec // microseconds
Standard C99:
#include <time.h>
time_t t0 = time(0);
// ...
time_t t1 = time(0);
double datetime_diff_ms = difftime(t1, t0) * 1000.;
clock_t c0 = clock();
// ...
clock_t c1 = clock();
double runtime_diff_ms = (c1 - c0) * 1000. / CLOCKS_PER_SEC;
The precision of the types is implementation-defined, ie the datetime difference might only return full seconds.
If you want to find elapsed time, this method will work as long as you don't reboot the computer between the start and end.
In Windows, use GetTickCount(). Here's how:
DWORD dwStart = GetTickCount();
...
... process you want to measure elapsed time for
...
DWORD dwElapsed = GetTickCount() - dwStart;
dwElapsed is now the number of elapsed milliseconds.
In Linux, use clock() and CLOCKS_PER_SEC to do about the same thing.
If you need timestamps that last through reboots or across PCs (which would need quite good synchronization indeed), then use the other methods (gettimeofday()).
Also, in Windows at least you can get much better than standard time resolution. Usually, if you called GetTickCount() in a tight loop, you'd see it jumping by 10-50 each time it changed. That's because of the time quantum used by the Windows thread scheduler. This is more or less the amount of time it gives each thread to run before switching to something else. If you do a:
timeBeginPeriod(1);
at the beginning of your program or process and a:
timeEndPeriod(1);
at the end, then the quantum will change to 1 ms, and you will get much better time resolution on the GetTickCount() call. However, this does make a subtle change to how your entire computer runs processes, so keep that in mind. However, Windows Media Player and many other things do this routinely anyway, so I don't worry too much about it.
I'm sure there's probably some way to do the same in Linux (probably with much better control, or maybe with sub-millisecond quantums) but I haven't needed to do that yet in Linux.
/*
 * Returns the current time as a YYYYMMDDhhmmss string.
 * The caller is responsible for freeing the returned buffer.
 */
char *time_stamp() {
    char *timestamp = (char *)malloc(sizeof(char) * 16);
    time_t ltime;
    struct tm *tm;

    ltime = time(NULL);
    tm = localtime(&ltime);
    // tm_mon is zero-based, so add 1 to get the calendar month
    sprintf(timestamp, "%04d%02d%02d%02d%02d%02d", tm->tm_year + 1900, tm->tm_mon + 1,
            tm->tm_mday, tm->tm_hour, tm->tm_min, tm->tm_sec);
    return timestamp;
}

int main() {
    printf(" Timestamp: %s\n", time_stamp());
    return 0;
}
int main(){
printf(" Timestamp: %s\n",time_stamp());
return 0;
}
Output: Timestamp: 20110912130940 // 2011 Sep 12 13:09:40
Use #Arkaitz Jimenez's code to get two timevals:
#include <sys/time.h>
//...
struct timeval tv1, tv2, diff;

// get the first time:
gettimeofday(&tv1, NULL);

// do whatever it is you want to time
// ...

// get the second time:
gettimeofday(&tv2, NULL);

// get the difference (pass the later time as x and the earlier as y):
int result = timeval_subtract(&diff, &tv2, &tv1);

// the difference is stored in diff now.
Sample code for timeval_subtract can be found at this web site:
/* Subtract the `struct timeval' values X and Y,
   storing the result in RESULT.
   Return 1 if the difference is negative, otherwise 0. */
int timeval_subtract(struct timeval *result, struct timeval *x, struct timeval *y)
{
    /* Perform the carry for the later subtraction by updating y. */
    if (x->tv_usec < y->tv_usec) {
        int nsec = (y->tv_usec - x->tv_usec) / 1000000 + 1;
        y->tv_usec -= 1000000 * nsec;
        y->tv_sec += nsec;
    }
    if (x->tv_usec - y->tv_usec > 1000000) {
        int nsec = (x->tv_usec - y->tv_usec) / 1000000;
        y->tv_usec += 1000000 * nsec;
        y->tv_sec -= nsec;
    }

    /* Compute the time remaining to wait.
       tv_usec is certainly positive. */
    result->tv_sec = x->tv_sec - y->tv_sec;
    result->tv_usec = x->tv_usec - y->tv_usec;

    /* Return 1 if result is negative. */
    return x->tv_sec < y->tv_sec;
}
How about this solution? I didn't see anything like this in my search. I am trying to avoid division and make the solution simpler.
struct timeval cur_time1, cur_time2, tdiff;

gettimeofday(&cur_time1, NULL);
sleep(1);
gettimeofday(&cur_time2, NULL);

// borrow one second up front, then normalize the microseconds
tdiff.tv_sec = cur_time2.tv_sec - cur_time1.tv_sec - 1;
tdiff.tv_usec = cur_time2.tv_usec + (1000000 - cur_time1.tv_usec);
while (tdiff.tv_usec >= 1000000)
{
    tdiff.tv_sec++;
    tdiff.tv_usec -= 1000000;
    printf("updated tdiff tv_sec:%ld tv_usec:%ld\n", tdiff.tv_sec, tdiff.tv_usec);
}

printf("end tdiff tv_sec:%ld tv_usec:%ld\n", tdiff.tv_sec, tdiff.tv_usec);
Also be aware of the interaction between clock() and usleep(): usleep() suspends the program, and clock() only measures the time the program is running.
It might be better to use gettimeofday(), as mentioned here.
Use gettimeofday() or better clock_gettime()
You can try the routines in the C time library (time.h). Also take a look at clock() in the same library. It gives the clock ticks since the program started. You can save its value before the operation you want to measure, capture the clock ticks again after the operation, and take the difference to get the elapsed time.
#include <stdio.h>
#include <string.h>
#include <time.h>

time_t tm = time(NULL);
char stime[4096];
ctime_r(&tm, stime);
stime[strlen(stime) - 1] = '\0'; // strip the trailing newline
printf("%s", stime);
This program clearly shows how to do it. It takes time 1, pauses for 1 second, and then takes time 2; the difference between the two times should be about 1000 milliseconds.
#include <stdio.h>
#include <time.h>
#include <unistd.h>

// Name: milliseconds.c
// gcc /tmp/milliseconds.c -o milliseconds

struct timespec ts1, ts2; // time1 and time2

int main(void) {
    // get time1
    clock_gettime(CLOCK_REALTIME, &ts1);

    sleep(1); // 1 second pause

    // get time2
    clock_gettime(CLOCK_REALTIME, &ts2);

    // nanoseconds difference in milliseconds (1 ms = 1000000 ns)
    long milliseconds1 = (ts2.tv_nsec - ts1.tv_nsec) / 1000000;
    // seconds difference in milliseconds
    long milliseconds2 = (ts2.tv_sec - ts1.tv_sec) * 1000;

    long milliseconds = milliseconds1 + milliseconds2;
    printf("%ld\n", milliseconds);
    return 0;
}
