I added the following code to measure how many milliseconds the program ran for.
The problem is that I get 1065 clocks and 1065 milliseconds. Is it normal for them to be equal?
Maybe the equation that converts clocks into milliseconds is wrong? Thanks in advance.
finishClock = clock();
timeCount = finishClock - startClock;
/* cast to double before printing with %f and before dividing, so the division isn't truncated */
printf("Clocks passed: %f\nMilliseconds passed: %f",
       (double)timeCount, (double)timeCount * 1000 / CLOCKS_PER_SEC);
Based on the Tcl References here, they are synonymous:
If the -option argument is -milliseconds, then the command is
synonymous with clock milliseconds (see below). This usage is
obsolete, and clock milliseconds is to be considered the preferred way
of obtaining a count of milliseconds.
I am not looking for nanoseconds or accuracy to the umpteenth decimal; however, I would like decent accuracy and to minimise the margin of error in the results.
For this, I used sys/time.h and time.h on a MacBook Pro, plus a stopwatch (my iPhone's stopwatch app, that is), and I am getting rather inconsistent results.
My iPhone's stopwatch reported 140.66 seconds, and the two functions over three trials are shown below:
1. time.h gave me 136.957065 seconds —— sys/time.h gave me 140.335872 seconds
2. time.h gave me 140.822794 seconds —— sys/time.h gave me 151.858111 seconds
3. time.h gave me 146.064458 seconds —— sys/time.h gave me 162.149713 seconds
And here is my code snippet:
#include <stdio.h>
#include <time.h>
#include <sys/time.h>

void myExpensiveFunction()
{
    struct timeval tv1, tv2;

    gettimeofday(&tv1, NULL);
    clock_t fbegin = clock();

    /* expensive algorithm */

    clock_t fend = clock();
    gettimeofday(&tv2, NULL);

    /* CPU time, via clock() */
    printf("Expensive function took: %f seconds\n",
           (double)(fend - fbegin) / CLOCKS_PER_SEC);

    /* wall-clock time, via gettimeofday() */
    printf("Expensive function took: %f seconds\n",
           (double)(tv2.tv_usec - tv1.tv_usec) / 1000000 +
           (double)(tv2.tv_sec - tv1.tv_sec));
}
While this isn't mission critical, and I am sure some of the variation is down to my computer itself, I am curious whether there is a better function or methodology I should be using or be aware of. For testing and curiosity's sake, I tried the same thing on my Linux mail server and got similar results: the sys/time.h figure was about ten seconds longer.
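One option worth knowing about on POSIX systems (including Linux and recent macOS versions, roughly 10.12 and later) is clock_gettime() with CLOCK_MONOTONIC, which measures wall-clock intervals unaffected by system clock adjustments. A minimal sketch, with the expensive work replaced by a placeholder loop:

#include <stdio.h>
#include <time.h>

int main(void)
{
    struct timespec t1, t2;

    clock_gettime(CLOCK_MONOTONIC, &t1);

    /* placeholder for the expensive algorithm */
    volatile double x = 0.0;
    for (long i = 0; i < 100000000L; i++)
        x += i * 0.5;

    clock_gettime(CLOCK_MONOTONIC, &t2);

    double elapsed = (double)(t2.tv_sec - t1.tv_sec)
                   + (double)(t2.tv_nsec - t1.tv_nsec) / 1e9;
    printf("Elapsed wall time: %f seconds\n", elapsed);
    return 0;
}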
I have sequential code to parallelize via OpenMP. I have put in the corresponding pragmas and tested it. I measure the performance gain by checking the time spent in the main function.
The weird thing is that the elapsed times calculated via cpu_time() and omp_get_wtime() are different. Why?
The elapsed time according to cpu_time() is similar to the sequential time.
Before computation starts:
ctime1_ = cpu_time();
#ifdef _OPENMP
ctime1 = omp_get_wtime();
#endif
After computation ends:
ctime2_ = cpu_time();
#ifdef _OPENMP
ctime2 = omp_get_wtime();
#endif
cpu_time() function definition:
double cpu_time(void)
{
    double value;

    value = (double) clock() / (double) CLOCKS_PER_SEC;
    return value;
}
Printing result:
printf("%f - %f seconds.\n", ctime2 - ctime1, ctime2_ - ctime1_);
Sample result:
7.009537 - 11.575277 seconds.
The clock function measures CPU time, i.e. the time your program spends actively running on the CPU; the OpenMP function measures wall-clock time, i.e. how much real time has passed during execution. They are two completely different things.
Your process seems to be blocked in waiting somewhere.
What you observe is a perfectly valid result for any parallel application: the combined CPU time of all threads, as returned by clock(), is usually more than the wall-clock time measured by omp_get_wtime(), except if your application mostly sleeps or waits.
The clock() function returns CPU time, not wall time. Instead, use gettimeofday().
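To make the difference concrete, here is a small self-contained sketch (not the original program) that times a parallel loop both ways, compiled with something like gcc -fopenmp. With several threads busy, the clock() figure will typically exceed the omp_get_wtime() figure, much like the sample result above:

#include <stdio.h>
#include <time.h>
#include <omp.h>

int main(void)
{
    clock_t c1 = clock();          /* CPU time, summed across all threads */
    double  w1 = omp_get_wtime();  /* wall-clock time */

    double sum = 0.0;
    #pragma omp parallel for reduction(+:sum)
    for (long i = 0; i < 200000000L; i++)
        sum += i * 1e-9;

    double  w2 = omp_get_wtime();
    clock_t c2 = clock();

    printf("wall time: %f s, CPU time: %f s (sum=%f)\n",
           w2 - w1, (double)(c2 - c1) / CLOCKS_PER_SEC, sum);
    return 0;
}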
I'm using fairly simple code to measure execution time. It works well up to, I believe, around 20 minutes, but after that (> 20 min.) it starts returning negative results. I searched the forums and tried everything, like changing the data type and using long unsigned (which returns 0), but failed again.
The following is a snippet of my code:
#include <stdio.h>
#include <time.h>

int main(void)
{
    time_t start, stop;
    double time_arm;

    start = clock();
    /* ....... */
    stop = clock();

    time_arm = (double)(stop - start) / (double)CLOCKS_PER_SEC;
    printf("Time Taken by ARM only is %lf \n", time_arm);
    return 0;
}
output is
Time Taken by ARM only is -2055.367296
Any help is appreciated, thanks in advance.
POSIX requires CLOCKS_PER_SEC to be 1,000,000. That means your count is in microseconds, and 2^31 microseconds is about 35 minutes. Your timer is just overflowing, so you can't get meaningful results when that happens.
Why you see the problem at 20 minutes, I'm not sure - maybe your CLOCKS_PER_SEC isn't POSIX-compatible. Regardless, your problem is timer overflow. You'll need to handle this problem in a different way - maybe look into getrusage(2).
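As a sketch of that suggestion (not the asker's ARM code), getrusage(2) reports CPU time as separate seconds and microseconds fields, so it does not wrap the way a 32-bit clock_t does:

#include <stdio.h>
#include <sys/time.h>
#include <sys/resource.h>

/* Return this process's user CPU time in seconds via getrusage(2). */
static double user_cpu_seconds(void)
{
    struct rusage ru;
    getrusage(RUSAGE_SELF, &ru);
    return (double)ru.ru_utime.tv_sec + (double)ru.ru_utime.tv_usec / 1e6;
}

int main(void)
{
    double start = user_cpu_seconds();

    /* placeholder for the long-running work */
    volatile unsigned long sink = 0;
    for (unsigned long i = 0; i < 300000000UL; i++)
        sink += i;

    double stop = user_cpu_seconds();
    printf("Time Taken is %lf \n", stop - start);
    return 0;
}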
clock_t is typically a long, which on a 32-bit system is a 32-bit signed value, so it can hold 2^31 ticks before it overflows. With CLOCKS_PER_SEC equal to 1000000 [as mentioned by POSIX], clock() will return the same value approximately every 72 minutes.
According to MSDN, it can also return -1:
The elapsed wall-clock time since the start of the process (elapsed
time in seconds times CLOCKS_PER_SEC). If the amount of elapsed time
is unavailable, the function returns –1, cast as a clock_t.
On a side note:
clock() measures CPU time used by the program; it does NOT measure real time, and its return value is specified in microseconds.
You could try using clock_gettime() with the CLOCK_PROCESS_CPUTIME_ID option if you are on a POSIX system. clock_gettime() uses struct timespec to return time information, so it doesn't suffer from the same overflow problem as a single integer.
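A minimal sketch of that approach, assuming a POSIX system (the loop is only a placeholder for the long-running work):

#include <stdio.h>
#include <time.h>

int main(void)
{
    struct timespec t1, t2;

    clock_gettime(CLOCK_PROCESS_CPUTIME_ID, &t1);

    /* placeholder for the long-running work */
    volatile unsigned long sink = 0;
    for (unsigned long i = 0; i < 300000000UL; i++)
        sink += i;

    clock_gettime(CLOCK_PROCESS_CPUTIME_ID, &t2);

    double cpu_seconds = (double)(t2.tv_sec - t1.tv_sec)
                       + (double)(t2.tv_nsec - t1.tv_nsec) / 1e9;
    printf("CPU time: %f seconds\n", cpu_seconds);
    return 0;
}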
I am using timeval as well as the clock() function to see the time difference between two actions in my C program. Somehow timeval seems to give me the right amount of elapsed time in milliseconds, whereas clock() gives a much smaller value.
g_time = clock();
gettimeofday(&decode_t, NULL);

after some time

delay = ((float)(clock() - g_time) / (float)CLOCKS_PER_SEC);
gettimeofday(&poll_t, NULL);
delay1 = ((poll_t.tv_sec - decode_t.tv_sec) * 1000 + (poll_t.tv_usec - decode_t.tv_usec) / 1000.0);
printf("\ndelay1: %f delay: %f ", delay1, delay);
The usual output is:
delay1: 1577.603027 delay: 0.800000
delay1 is in milliseconds and delay is in seconds.
I am using Arch Linux 64-bit. I can't understand why this is happening.
From the clock(3) manual page:
The clock() function returns an approximation of processor time used by the program.
So the clock function doesn't return the amount of wall-clock time that has passed, but the number of "ticks" for which your program has actually run. And as you know, in a multi-tasking system your program can be paused at any time to let other programs run.
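A small self-contained sketch (not the original program) makes this visible: while the process sleeps, gettimeofday() keeps counting wall-clock time, but clock() barely advances. The variable names mirror the snippet above; the sleep stands in for "after some time":

#include <stdio.h>
#include <time.h>
#include <sys/time.h>
#include <unistd.h>

int main(void)
{
    struct timeval decode_t, poll_t;

    clock_t g_time = clock();
    gettimeofday(&decode_t, NULL);

    sleep(2);  /* wall time passes, but almost no CPU time is used */

    double delay = (double)(clock() - g_time) / CLOCKS_PER_SEC;
    gettimeofday(&poll_t, NULL);
    double delay1 = (poll_t.tv_sec - decode_t.tv_sec) * 1000.0 +
                    (poll_t.tv_usec - decode_t.tv_usec) / 1000.0;

    printf("delay1: %f ms  delay: %f s\n", delay1, delay);
    return 0;
}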