I have to calculate the time taken by a function to complete.
This function is called in a loop and I want to find out the total time.
Usually the time is very small, on the order of nanoseconds or microseconds.
To find the elapsed time I used gettimeofday() with struct timeval and clock_gettime() with struct timespec.
The problem is that the time returned via timeval is correct in seconds but wrong in microseconds.
Similarly, the time returned via timespec is wrong in nanoseconds.
Wrong in the sense that they do not tally with the time returned in seconds.
For clock_gettime() I tried both CLOCK_PROCESS_CPUTIME_ID and CLOCK_MONOTONIC.
Using clock() also does not help.
Code snippet:
struct timeval funcTimeStart_timeval, funcTimeEnd_timeval;
struct timespec funcTimeStart_timespec, funcTimeEnd_timespec;
unsigned long elapsed_nanos = 0;
unsigned long elapsed_seconds = 0;
unsigned long Func_elapsed_nanos = 0;
unsigned long Func_elapsed_seconds = 0;

while (...)
{
    gettimeofday(&funcTimeStart_timeval, NULL);
    clock_gettime(CLOCK_PROCESS_CPUTIME_ID, &funcTimeStart_timespec);
    ...
    demo_func();
    ...
    gettimeofday(&funcTimeEnd_timeval, NULL);
    clock_gettime(CLOCK_PROCESS_CPUTIME_ID, &funcTimeEnd_timespec);

    elapsed_seconds = funcTimeEnd_timeval.tv_sec - funcTimeStart_timeval.tv_sec;
    Func_elapsed_seconds += elapsed_seconds;

    elapsed_nanos = funcTimeEnd_timespec.tv_nsec - funcTimeStart_timespec.tv_nsec;
    Func_elapsed_nanos += elapsed_nanos;
}
printf("Total time taken by demo_func() is %lu seconds( %lu nanoseconds )\n", Func_elapsed_seconds, Func_elapsed_nanos );
Printf output:
Total time taken by demo_func() is 60 seconds( 76806787 nanoseconds )
Notice that the time in seconds and the time in nanoseconds do not match.
How can I resolve this issue, or is there another appropriate method to find the elapsed time?
Did you read the documentation of time(7) and clock_gettime(2)? Please read it twice.
A struct timespec is not supposed to express the same time twice. The field tv_sec gives the seconds part ts, and the field tv_nsec gives the nanoseconds part tn, together expressing the time t = ts + 10^-9 * tn.
I would suggest converting that to floating point, e.g.
printf("total time %g\n",
       (double)Func_elapsed_seconds + 1.0e-9 * Func_elapsed_nanos);
Using floating point is simpler, and the precision is generally enough for most needs. Otherwise, when you add or subtract struct timespec values, you need to handle the case where the sum or difference of the tv_nsec fields is negative or exceeds 1,000,000,000, as sketched below.
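A minimal sketch of that integer-only subtraction, with the borrow handled explicitly (the helper name timespec_diff is my own):

#include <time.h>

/* Computes end - start, normalizing tv_nsec into [0, 999999999]
   by borrowing from tv_sec when the raw difference is negative. */
static struct timespec timespec_diff(struct timespec start, struct timespec end)
{
    struct timespec d;
    d.tv_sec  = end.tv_sec  - start.tv_sec;
    d.tv_nsec = end.tv_nsec - start.tv_nsec;
    if (d.tv_nsec < 0) {
        d.tv_sec  -= 1;
        d.tv_nsec += 1000000000L;
    }
    return d;
}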
The problem is you are printing/comparing wrong values.
76,806,787 nanoseconds is equal to ~76 milliseconds, you cannot compare it with 60 seconds.
You are ignoring the time in seconds stored in funcTimeEnd_timespec.tv_sec.
You should also print funcTimeEnd_timespec.tv_sec - funcTimeStart_timespec.tv_sec and, as @Basile Starynkevitch suggested, add to it the nanoseconds part multiplied by 1.0e-9. Then you can compare the elapsed time reported by both functions.
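Using the variable names from the question, the per-iteration elapsed time could then be computed from both fields, e.g.:

double elapsed = (double)(funcTimeEnd_timespec.tv_sec - funcTimeStart_timespec.tv_sec)
               + 1.0e-9 * (funcTimeEnd_timespec.tv_nsec - funcTimeStart_timespec.tv_nsec);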
I am replying to the previous answers as an answer because I wanted to paste code snippets.
The question is whether, to find the elapsed time,
I should first subtract the corresponding fields and then add them:
(end.tv_sec - start.tv_sec) + 1.0e-9*(end.tv_nsec - start.tv_nsec)
or
first combine the fields and then compute the difference:
(end.tv_sec + 1.0e-9*end.tv_nsec) - (start.tv_sec + 1.0e-9*start.tv_nsec)
In the first case, end.tv_nsec is quite often less than start.tv_nsec, so the difference becomes a negative number and this gives me a wrong result.
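Note that in floating point the two expressions are algebraically identical, so the negative tv_nsec difference in the first form is harmless: it is exactly compensated by the extra whole second. A small sketch with made-up values:

#include <stdio.h>
#include <time.h>

int main(void)
{
    /* hypothetical readings where end.tv_nsec < start.tv_nsec */
    struct timespec start = { 10, 900000000 };  /* 10.9 s */
    struct timespec end   = { 12, 100000000 };  /* 12.1 s */

    double a = (end.tv_sec - start.tv_sec)
             + 1.0e-9 * (end.tv_nsec - start.tv_nsec);
    double b = (end.tv_sec + 1.0e-9 * end.tv_nsec)
             - (start.tv_sec + 1.0e-9 * start.tv_nsec);

    printf("%f %f\n", a, b);  /* both print 1.200000 */
    return 0;
}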
The typical example that I see when trying to measure elapsed time goes something like this:
#include <sys/time.h>
#include <stdio.h>

int main() {
    struct timeval start, end;

    gettimeofday(&start, NULL);
    //Do some operation
    gettimeofday(&end, NULL);

    unsigned long long end_time = (end.tv_sec * 1000000 + end.tv_usec);
    unsigned long long start_time = (start.tv_sec * 1000000 + start.tv_usec);

    printf("Time taken : %ld micro seconds\n", end_time - start_time);
    return 0;
}
This is great when it's somewhere around midday, but if someone were to run some tests late at night this wouldn't work. My approach to address that is something like this:
#include <sys/time.h>
#include <stdio.h>

int main() {
    struct timeval start, end;

    gettimeofday(&start, NULL);
    //Do some operation
    gettimeofday(&end, NULL);

    unsigned long long end_time = (end.tv_sec * 1000000 + end.tv_usec);
    unsigned long long start_time = (start.tv_sec * 1000000 + start.tv_usec);
    unsigned long long elapsed_time = 0;

    if ( end_time < start_time )
        //Made up some constant that defines 86,400,000,000 microseconds in a day
        elapsed_time = end_time + (NUM_OF_USEC_IN_A_DAY - start_time);
    else
        elapsed_time = end_time - start_time;

    printf("Time taken : %ld micro seconds\n", elapsed_time);
    return 0;
}
Is there a better way of anticipating day change using gettimeofday?
Despite the name, gettimeofday results do not roll over daily. The gettimeofday seconds count, just like the time seconds count, started at zero on the first day of 1970 (specifically 1970-01-01 00:00:00 +00:00) and has been incrementing steadily ever since. On a system with 32-bit time_t, it will roll over sometime in 2038; people are working right now to phase out the use of 32-bit time_t for this exact reason and we expect to be done well before 2038.
gettimeofday results are also independent of time zone and not affected by daylight saving shifts. They can go backward when the computer's clock is reset. If you don't want to worry about that, you can use clock_gettime(CLOCK_MONOTONIC) instead.
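A minimal sketch of measuring an interval with CLOCK_MONOTONIC (on older glibc you may need to link with -lrt):

#include <stdio.h>
#include <time.h>

int main(void)
{
    struct timespec start, end;

    clock_gettime(CLOCK_MONOTONIC, &start);
    /* ... work to be measured ... */
    clock_gettime(CLOCK_MONOTONIC, &end);

    double elapsed = (end.tv_sec - start.tv_sec)
                   + 1.0e-9 * (end.tv_nsec - start.tv_nsec);
    printf("elapsed: %.9f s\n", elapsed);
    return 0;
}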
Why would you want to handle the case where end.tv_sec is less than start.tv_sec? Are you trying to account for ntpd changes? If so, and especially if you want to record only elapsed time, then use clock_gettime instead of gettimeofday as the former is immune to wall clock changes.
but if someone were to run some tests late at night this wouldn't work
That's an incorrect statement because gettimeofday is not relative to the start of each day but rather relative to a fixed point in time. From the gettimeofday manual:
gives the number of seconds and microseconds since the Epoch
So the first example will work as long as there are no jumps in time (e.g. due to manual time setting or NTP). Again from the manual:
The time returned by gettimeofday() is affected by discontinuous jumps in the system time (e.g., if the system administrator manually changes the system time). If you need a monotonically increasing clock, see clock_gettime(2).
Is there a better way of anticipating day change using gettimeofday?
In addition to other problems identified in various answers, the following is also subject to integer overflow when time_t is only 32-bit. Instead, scale by an (unsigned) long long constant.
// unsigned long long end_time = (end.tv_sec * 1000000 + end.tv_usec);
long long end_time = end.tv_sec * 1000000LL + end.tv_usec;
long long elapsed_time = end_time - start_time;

// Use the correct specifier for long long, not %ld:
// printf("Time taken : %ld micro seconds\n", elapsed_time);
printf("Time taken : %lld micro seconds\n", elapsed_time);
Tip: enable all compiler warnings.
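Putting those fixes together, a corrected version of the first example might look like this:

#include <sys/time.h>
#include <stdio.h>

int main(void)
{
    struct timeval start, end;

    gettimeofday(&start, NULL);
    /* Do some operation */
    gettimeofday(&end, NULL);

    /* Scale by a long long constant so the multiplication cannot
       overflow a 32-bit time_t. */
    long long start_time = start.tv_sec * 1000000LL + start.tv_usec;
    long long end_time   = end.tv_sec   * 1000000LL + end.tv_usec;

    printf("Time taken : %lld micro seconds\n", end_time - start_time);
    return 0;
}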
I'm relatively new to C programming and I'm working on a project which needs very accurate timing; therefore I tried to write something to create a timestamp with millisecond precision.
It seems to work, but my question is whether this is the right way, or is there a much easier way? Here is my code:
#include <stdio.h>
#include <time.h>

void wait(int milliseconds)
{
    clock_t start = clock();
    while (1)
        if (clock() - start >= milliseconds)
            break;
}

int main()
{
    time_t now;
    clock_t milli;
    int waitMillSec = 2800, seconds, milliseconds = 0;
    struct tm *ptm;

    now = time(NULL);
    ptm = gmtime(&now);
    printf("time before: %d:%d:%d:%d\n", ptm->tm_hour, ptm->tm_min, ptm->tm_sec, milliseconds);

    /* wait until next full second */
    while (now == time(NULL));

    milli = clock();
    /* DO SOMETHING HERE */
    /* for testing wait a user-defined period */
    wait(waitMillSec);
    milli = clock() - milli;

    /* create timestamp with millisecond precision */
    seconds = milli / CLOCKS_PER_SEC;
    milliseconds = milli % CLOCKS_PER_SEC;
    now = now + seconds;
    ptm = gmtime(&now);
    printf("time after: %d:%d:%d:%d\n", ptm->tm_hour, ptm->tm_min, ptm->tm_sec, milliseconds);
    return 0;
}
The following code seems likely to provide millisecond granularity:
#include <windows.h>
#include <stdio.h>

int main(void) {
    SYSTEMTIME t;
    GetSystemTime(&t); // or GetLocalTime(&t)
    printf("The system time is: %02d:%02d:%02d.%03d\n",
           t.wHour, t.wMinute, t.wSecond, t.wMilliseconds);
    return 0;
}
This is based on http://msdn.microsoft.com/en-us/library/windows/desktop/ms724950%28v=vs.85%29.aspx. The above code snippet was tested with CYGWIN on Windows 7.
For Windows 8, there is GetSystemTimePreciseAsFileTime, which "retrieves the current system date and time with the highest possible level of precision (<1us)."
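For illustration, a minimal sketch reading GetSystemTimePreciseAsFileTime (Windows 8 or later); note that FILETIME counts 100-nanosecond intervals since 1601-01-01:

#include <windows.h>
#include <stdio.h>

int main(void) {
    FILETIME ft;
    ULARGE_INTEGER t;

    GetSystemTimePreciseAsFileTime(&ft);
    t.LowPart  = ft.dwLowDateTime;
    t.HighPart = ft.dwHighDateTime;

    printf("ticks since 1601: %llu (100 ns units)\n",
           (unsigned long long)t.QuadPart);
    return 0;
}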
Your original approach would probably be ok 99.99% of the time (ignoring one minor bug, described below). Your approach is:
Wait for the next second to start, by repeatedly calling time() until the value changes.
Save that value from time().
Save the value from clock().
Calculate all subsequent times using the current value of clock() and the two saved values.
Your minor bug was that you had the first two steps reversed.
But even with this fixed, this is not guaranteed to work 100%, because there is no atomicity. Two problems:
Your code loops calling time() until you are into the next second. But how far are you into it? It could be 1/2 a second, or even several seconds (e.g. if you are running a debugger with a breakpoint).
Then you call clock(). But this saved value has to 'match' the saved value of time(). If these two calls are almost instantaneous, as they usually are, then this is fine. But Windows (and Linux) time-slice, and so there is no guarantee.
Another issue is the granularity of clock. If CLOCKS_PER_SEC is 1000, as seems to be the case on your system, then of course the best you can do is 1 msec. But it can be worse than that: on Unix systems it is typically 15 msecs. You could improve this by replacing clock with QueryPerformanceCounter(), as in the answer to timespec equivalent for windows, but this may be otiose, given the first two problems.
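For reference, a minimal sketch using QueryPerformanceCounter on Windows:

#include <windows.h>
#include <stdio.h>

int main(void) {
    LARGE_INTEGER freq, t1, t2;

    QueryPerformanceFrequency(&freq);   /* ticks per second */
    QueryPerformanceCounter(&t1);
    /* ... work to be measured ... */
    QueryPerformanceCounter(&t2);

    double ms = (t2.QuadPart - t1.QuadPart) * 1000.0 / freq.QuadPart;
    printf("elapsed: %.3f ms\n", ms);
    return 0;
}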
Clock periods are not at all guaranteed to be in milliseconds. You need to explicitly convert the output of clock() to milliseconds.
clock_t t1 = clock();
// do something
clock_t t2 = clock();
long millis = (t2 - t1) * (1000.0 / CLOCKS_PER_SEC);
Since you are on Windows, why don't you just use Sleep()?
I'm using quite simple code to measure execution time. It works well until, I'm not sure exactly, maybe 20 minutes; but after that (>20 min.) it returns negative results. I searched the forums and tried everything, like changing the datatype or using long unsigned (which returns 0), but failed again.
The following is a snippet of my code:
#include <stdio.h>
#include <time.h>

int main()
{
    time_t start, stop;
    double time_arm;

    start = clock();
    /* ....... */
    stop = clock();

    time_arm = (double)(stop - start) / (double)CLOCKS_PER_SEC;
    printf("Time Taken by ARM only is %lf \n", time_arm);
}
The output is:
Time Taken by ARM only is -2055.367296
Any help is appreciated, thanks in advance.
POSIX requires CLOCKS_PER_SEC to be 1,000,000. That means your count is in microseconds, and 2^31 microseconds is about 35 minutes. Your timer is just overflowing, so you can't get meaningful results when that happens.
Why you see the problem at 20 minutes, I'm not sure - maybe your CLOCKS_PER_SEC isn't POSIX-compatible. Regardless, your problem is timer overflow. You'll need to handle this problem in a different way - maybe look into getrusage(2).
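For example, a minimal getrusage sketch for the process's CPU time:

#include <sys/resource.h>
#include <stdio.h>

int main(void)
{
    struct rusage usage;

    /* ... work to be measured ... */

    if (getrusage(RUSAGE_SELF, &usage) == 0) {
        /* ru_utime holds user CPU time as a struct timeval */
        printf("user CPU: %ld.%06ld s\n",
               (long)usage.ru_utime.tv_sec,
               (long)usage.ru_utime.tv_usec);
    }
    return 0;
}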
clock_t is a long, a 32-bit signed value on a 32-bit system, so it can hold 2^31 - 1 ticks before it overflows. With CLOCKS_PER_SEC equal to 1000000 [as mentioned by POSIX], clock() will return the same value approximately every 72 minutes.
According to MSDN it can also return -1:
The elapsed wall-clock time since the start of the process (elapsed time in seconds times CLOCKS_PER_SEC). If the amount of elapsed time is unavailable, the function returns -1, cast as a clock_t.
On a side note: clock() measures the CPU time used by the program, NOT real (wall-clock) time, and on POSIX systems its return value counts microseconds.
You could try using clock_gettime() with the CLOCK_PROCESS_CPUTIME_ID option if you are on a POSIX system. clock_gettime() uses struct timespec to return time information so doesn't suffer from the same problem as using a single integer.
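A minimal sketch along those lines:

#include <stdio.h>
#include <time.h>

int main(void)
{
    struct timespec start, stop;

    clock_gettime(CLOCK_PROCESS_CPUTIME_ID, &start);
    /* ... work to be measured ... */
    clock_gettime(CLOCK_PROCESS_CPUTIME_ID, &stop);

    double cpu_seconds = (stop.tv_sec - start.tv_sec)
                       + 1.0e-9 * (stop.tv_nsec - start.tv_nsec);
    printf("CPU time: %.9f s\n", cpu_seconds);
    return 0;
}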
struct timeval start, end;
start.tv_usec = 0;
end.tv_usec = 0;
gettimeofday(&start, NULL);
functionA();
gettimeofday(&end, NULL);
long t = end.tv_usec - start.tv_usec;
printf("Total elapsed time %ld us \n", t);
I am calculating the total elapsed time like this, but it sometimes shows a negative value.
What might be causing the problem?
Thanks in advance.
Keep in mind that the structure has both a seconds and a microseconds field. Therefore, if you simply subtract the microseconds field, you could have a time that is later in seconds but whose microseconds field is smaller. For instance, an end time of 5 seconds, 100 microseconds gives a negative result when compared against 4 seconds and 5000 microseconds with the subtraction method you're using. To get the proper result, you have to take both the seconds and microseconds fields of the structure into account. This can be done as follows:
long seconds = end.tv_sec - start.tv_sec;
long micro_seconds = end.tv_usec - start.tv_usec;

if (micro_seconds < 0)
{
    seconds -= 1;              /* borrow one second... */
    micro_seconds += 1000000;  /* ...and add it back as microseconds */
}

long total_micro_seconds = (seconds * 1000000) + micro_seconds;
maybe something along the lines of:
long t = (end.tv_sec*1e6 + end.tv_usec) - (start.tv_sec*1e6 + start.tv_usec);
From The GNU C Library:
Data Type: struct timeval
The struct timeval structure represents an elapsed time. It is declared in sys/time.h and has the following members:
long int tv_sec
This represents the number of whole seconds of elapsed time.
long int tv_usec
This is the rest of the elapsed time (a fraction of a second), represented as the number of microseconds. It is always less than one million.
The only thing you're subtracting is the sub-second microseconds in tv_usec, not the full seconds value in tv_sec. You need to work with both fields in order to find the exact microsecond difference between the two times.
Do you know how to use gettimeofday for measuring computing time? I can measure one point in time with this code:
char buffer[30];
struct timeval tv;
time_t curtime;

gettimeofday(&tv, NULL);
curtime = tv.tv_sec;

strftime(buffer, 30, "%m-%d-%Y %T.", localtime(&curtime));
printf("%s%ld\n", buffer, tv.tv_usec);
The first one is taken before the computation, the second one after. But do you know how to subtract them?
I need the result in milliseconds.
To subtract timevals:
struct timeval t0, t1;

gettimeofday(&t0, 0);
/* ... */
gettimeofday(&t1, 0);

long elapsed = (t1.tv_sec-t0.tv_sec)*1000000 + t1.tv_usec-t0.tv_usec;
This is assuming you'll be working with intervals shorter than ~2000 seconds, at which point the arithmetic may overflow depending on the types used. If you need to work with longer intervals just change the last line to:
long long elapsed = (t1.tv_sec-t0.tv_sec)*1000000LL + t1.tv_usec-t0.tv_usec;
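Since the question asks for milliseconds, you can then divide the microsecond result by 1000, e.g.:

long long elapsed_ms = elapsed / 1000;  /* microseconds -> milliseconds */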
The answer offered by @Daniel Kamil Kozar is the correct answer: gettimeofday actually should not be used to measure elapsed time. Use clock_gettime(CLOCK_MONOTONIC) instead.
Man Pages say - The time returned by gettimeofday() is affected by discontinuous jumps in the system time (e.g., if the system administrator manually changes the system time). If you need a monotonically increasing clock, see clock_gettime(2).
The Opengroup says - Applications should use the clock_gettime() function instead of the obsolescent gettimeofday() function.
Everyone seems to love gettimeofday until they run into a case where it does not work or is not there (VxWorks) ... clock_gettime is fantastically awesome and portable.
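A small reusable helper along those lines, sketched for POSIX systems (the name monotonic_seconds is my own):

#include <time.h>

/* Returns a monotonic timestamp in seconds as a double;
   the difference between two calls gives elapsed time. */
static double monotonic_seconds(void)
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return (double)ts.tv_sec + 1.0e-9 * (double)ts.tv_nsec;
}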
If you want to measure code efficiency, or in any other way measure time intervals, the following will be easier:
#include <time.h>

int main()
{
    clock_t start = clock();
    //... do work here
    clock_t end = clock();
    double time_elapsed_in_seconds = (end - start) / (double)CLOCKS_PER_SEC;
    return 0;
}
hth
No. gettimeofday should NEVER be used to measure time.
This is causing bugs all over the place. Please don't add more bugs.
Your curtime variable holds the number of seconds since the epoch. If you get one before and one after, the later one minus the earlier one is the elapsed time in seconds. You can subtract time_t values just fine.
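If you prefer a fully portable form, difftime() performs that subtraction for you; a minimal sketch:

#include <stdio.h>
#include <time.h>

int main(void)
{
    time_t before = time(NULL);
    /* ... computation taking at least a second ... */
    time_t after = time(NULL);

    /* difftime is the portable way to subtract two time_t values */
    printf("elapsed: %.0f s\n", difftime(after, before));
    return 0;
}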