Do you know how to use gettimeofday for measuring computation time? I can take one measurement with this code:
char buffer[30];
struct timeval tv;
time_t curtime;

gettimeofday(&tv, NULL);
curtime = tv.tv_sec;
strftime(buffer, 30, "%m-%d-%Y %T.", localtime(&curtime));
printf("%s%ld\n", buffer, tv.tv_usec);
One measurement is taken before the computation, the second one after. But do you know how to subtract one from the other?
I need the result in milliseconds.
To subtract timevals:
struct timeval t0, t1;

gettimeofday(&t0, 0);
/* ... */
gettimeofday(&t1, 0);

long elapsed = (t1.tv_sec-t0.tv_sec)*1000000 + t1.tv_usec-t0.tv_usec;
This assumes you'll be working with intervals shorter than ~2000 seconds, at which point the multiplication overflows a 32-bit long. If you need to work with longer intervals, just change the last line to:
long long elapsed = (t1.tv_sec-t0.tv_sec)*1000000LL + t1.tv_usec-t0.tv_usec;
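Putting it together for milliseconds, a minimal complete sketch (the workload is a placeholder):

#include <stdio.h>
#include <sys/time.h>

int main(void)
{
    struct timeval t0, t1;

    gettimeofday(&t0, NULL);
    /* ... work to be measured ... */
    gettimeofday(&t1, NULL);

    /* difference in microseconds, then converted to milliseconds */
    long long elapsed_us = (t1.tv_sec - t0.tv_sec) * 1000000LL
                         + (t1.tv_usec - t0.tv_usec);
    printf("elapsed: %lld ms\n", elapsed_us / 1000);
    return 0;
}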
The answer offered by @Daniel Kamil Kozar is the correct one: gettimeofday actually should not be used to measure elapsed time. Use clock_gettime(CLOCK_MONOTONIC) instead.
The man pages say: The time returned by gettimeofday() is affected by discontinuous jumps in the system time (e.g., if the system administrator manually changes the system time). If you need a monotonically increasing clock, see clock_gettime(2).
The Open Group says: Applications should use the clock_gettime() function instead of the obsolescent gettimeofday() function.
Everyone seems to love gettimeofday until they run into a case where it does not work or is not there (VxWorks) ... clock_gettime is fantastically awesome and portable.
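For reference, a minimal sketch of the recommended clock_gettime(CLOCK_MONOTONIC) approach (on older glibc you may need to link with -lrt):

#include <stdio.h>
#include <time.h>

int main(void)
{
    struct timespec t0, t1;

    clock_gettime(CLOCK_MONOTONIC, &t0);
    /* ... work to be measured ... */
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double elapsed = (t1.tv_sec - t0.tv_sec)
                   + (t1.tv_nsec - t0.tv_nsec) / 1e9;
    printf("elapsed: %.9f s\n", elapsed);
    return 0;
}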
If you want to measure code efficiency, or in any other way measure time intervals, the following will be easier:
#include <stdio.h>
#include <time.h>

int main(void)
{
    clock_t start = clock();
    /* ... do work here ... */
    clock_t end = clock();
    double time_elapsed_in_seconds = (end - start) / (double)CLOCKS_PER_SEC;
    printf("%f seconds elapsed\n", time_elapsed_in_seconds);
    return 0;
}
hth
No. gettimeofday should NEVER be used to measure time.
This is causing bugs all over the place. Please don't add more bugs.
Your curtime variable holds the number of seconds since the epoch. If you get one before and one after, the later one minus the earlier one is the elapsed time in seconds. You can subtract time_t values just fine.
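For example, using the portable difftime():

time_t t_start = time(NULL);
/* ... computation ... */
time_t t_end = time(NULL);

double elapsed = difftime(t_end, t_start);  /* whole seconds, as a double */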
The only thing I know is time(NULL), but it returns seconds since 1970.
It's fine with me to use WinAPI functions if C doesn't have the needed function.
I even found the GetLocalTime WinAPI function, but it returns the current date and time as a struct...
The reason I don't want to just multiply seconds by 1000 is that I inject my code into another program, where it captures certain events from two sources, and I need to know their exact times: if both events happen during the same second since 1970 but in different milliseconds, they look to me like they happened at the same time, and it's hard to determine which came first (I sort the events later...).
In Unix you have gettimeofday(2) (probably some of these APIs also work on Windows), the BSD time interface. It is based on struct timeval, a struct with two fields: tv_sec (seconds since the epoch, as given by time(2)) and tv_usec (microseconds, as an integer between 0 and 999999).
This suffices for your requirements, but today it is common to use the POSIX call clock_gettime(2), which lets you select the kind of time you want (wall-clock time, CPU time, etc.). clock_gettime(2) uses the similar struct timespec (this time with tv_sec and tv_nsec, i.e. nanosecond, fields).
No clock is guaranteed to give you nanosecond resolution (though some do), but you get at least microsecond resolution, which is more than you asked for.
To express the time in milliseconds, just multiply tv_sec by 1000 and add the tv_usec value divided by 1000 (if using gettimeofday()). If you prefer clock_gettime(), multiply tv_sec by 1000 and add the tv_nsec field divided by 1000000.
If you only need to compare which timestamp is earlier, compare the tv_sec fields first, and if they happen to be equal, compare the tv_usec fields. Every Unix I know of (except SCO UNIX) implements gettimeofday() with microsecond resolution.
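A small sketch of the millisecond conversion described above, assuming gettimeofday() (now_ms is a hypothetical helper name):

#include <sys/time.h>

/* current time as milliseconds since the epoch */
long long now_ms(void)
{
    struct timeval tv;
    gettimeofday(&tv, NULL);
    return (long long)tv.tv_sec * 1000 + tv.tv_usec / 1000;
}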
This is the WinAPI way to determine the milliseconds passed since 1970.
I wrote it since no one provided an answer using native C (maybe there is no native way at all)...
/* Describe the Unix epoch (1970-01-01 00:00:00 UTC, a Thursday) as a SYSTEMTIME */
SYSTEMTIME unix_epoch;
unix_epoch.wYear         = 1970;
unix_epoch.wMonth        = 1;
unix_epoch.wDay          = 1;
unix_epoch.wDayOfWeek    = 4;
unix_epoch.wHour         = 0;
unix_epoch.wMinute       = 0;
unix_epoch.wSecond       = 0;
unix_epoch.wMilliseconds = 0;

/* Current time and the epoch, both as FILETIME (100-ns intervals since 1601) */
FILETIME curr_time_as_filetime;
GetSystemTimeAsFileTime(&curr_time_as_filetime);

FILETIME unix_epoch_as_filetime;
SystemTimeToFileTime(&unix_epoch, &unix_epoch_as_filetime);

/* Repack both FILETIMEs into 64-bit integers */
ULARGE_INTEGER curr_time_as_uint64;
ULARGE_INTEGER unix_epoch_as_uint64;
curr_time_as_uint64.HighPart  = curr_time_as_filetime.dwHighDateTime;
curr_time_as_uint64.LowPart   = curr_time_as_filetime.dwLowDateTime;
unix_epoch_as_uint64.HighPart = unix_epoch_as_filetime.dwHighDateTime;
unix_epoch_as_uint64.LowPart  = unix_epoch_as_filetime.dwLowDateTime;

/* Difference in 100-ns units, divided by 10,000 to get milliseconds */
ULARGE_INTEGER milliseconds_since_1970;
milliseconds_since_1970.QuadPart =
    (curr_time_as_uint64.QuadPart - unix_epoch_as_uint64.QuadPart) / 10000;
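A quick way to inspect the result (assuming a C99-capable compiler for the %llu format):

printf("ms since 1970: %llu\n", milliseconds_since_1970.QuadPart);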
An alternative would be to use clock() in combination with time(NULL) at the start of the program:

time_t start_time = time(NULL);   /* seconds since 1970 at startup */
/* ... */
/* clock ticks since 1970 (casts avoid 32-bit overflow): */
long long ticks = (long long)clock() + (long long)start_time * CLOCKS_PER_SEC;

That would give you a better estimate, and it may be accurate enough for your needs?
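A minimal sketch of that combination, where start_time is the time(NULL) value captured at program start (note that on POSIX systems clock() measures CPU time rather than wall time, so this is most meaningful on Windows, where clock() tracks wall time since process start; approx_ms_since_1970 is a hypothetical helper name):

#include <time.h>

long long approx_ms_since_1970(time_t start_time)
{
    /* seconds at startup, plus clock ticks converted to milliseconds */
    return (long long)start_time * 1000
         + (long long)clock() * 1000 / CLOCKS_PER_SEC;
}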
I have to calculate the time taken by a function to complete.
This function is called in a loop and I want to find out the total time.
Usually the time is very small, in nanoseconds or microseconds.
To find the elapsed time I used gettimeofday() with struct timeval and clock_gettime() with struct timespec.
The problem is that the time returned via timeval is correct in seconds but wrong in microseconds.
Similarly, the time returned via timespec is wrong in nanoseconds.
Wrong in the sense that they do not tally with the time returned in seconds.
For clock_gettime() I tried both CLOCK_PROCESS_CPUTIME_ID and CLOCK_MONOTONIC.
Using clock() also does not help.
Code snippet:
struct timeval funcTimestart_timeval, funcTimeEnd_timeval;
struct timespec funcTimeStart_timespec, funcTimeEnd_timespec;
unsigned long elapsed_nanos = 0;
unsigned long elapsed_seconds = 0;
unsigned long diffInNanos = 0;
unsigned long Func_elapsed_nanos = 0;
unsigned long Func_elapsed_seconds = 0;
while (...)
{
    gettimeofday(&funcTimestart_timeval, NULL);
    clock_gettime(CLOCK_PROCESS_CPUTIME_ID, &funcTimeStart_timespec);
    ...
    demo_func();
    ...
    gettimeofday(&funcTimeEnd_timeval, NULL);
    clock_gettime(CLOCK_PROCESS_CPUTIME_ID, &funcTimeEnd_timespec);

    elapsed_seconds = funcTimeEnd_timeval.tv_sec - funcTimestart_timeval.tv_sec;
    Func_elapsed_seconds += elapsed_seconds;

    elapsed_nanos = funcTimeEnd_timespec.tv_nsec - funcTimeStart_timespec.tv_nsec;
    Func_elapsed_nanos += elapsed_nanos;
}
printf("Total time taken by demo_func() is %lu seconds( %lu nanoseconds )\n", Func_elapsed_seconds, Func_elapsed_nanos );
Printf output:
Total time taken by demo_func() is 60 seconds( 76806787 nanoseconds )
See that the time in seconds and nanoseconds do not match.
How can I resolve this issue, or is there another appropriate method to find the elapsed time?
Did you read the documentation of time(7) and clock_gettime(2)? Please read it twice.
A struct timespec is not supposed to express the same time twice: the tv_sec field gives the seconds part ts, and the tv_nsec field gives the nanoseconds part tn, expressing the time t = ts + 10⁻⁹·tn.
I would suggest converting that to floating point, e.g.
printf ("total time %g\n",
(double)Func_elapsed_seconds + 1.0e-9*Func_elapsed_nanos);
Using floating point is simpler, and the precision is generally enough for most needs. Otherwise, when you add or subtract struct timespec values, you need to handle the case where the sum or difference of the tv_nsec fields is negative or exceeds 1000000000...
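If you do want exact integer arithmetic, a minimal sketch of such a subtraction with the borrow handled (timespec_diff is a hypothetical helper name):

#include <time.h>

struct timespec timespec_diff(struct timespec end, struct timespec start)
{
    struct timespec d;
    d.tv_sec  = end.tv_sec  - start.tv_sec;
    d.tv_nsec = end.tv_nsec - start.tv_nsec;
    if (d.tv_nsec < 0) {          /* borrow one second */
        d.tv_nsec += 1000000000L;
        d.tv_sec  -= 1;
    }
    return d;
}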
The problem is that you are printing/comparing the wrong values.
76,806,787 nanoseconds is equal to ~76 milliseconds; you cannot compare it with 60 seconds.
You are ignoring the time in seconds stored in funcTimeEnd_timespec.tv_sec.
You should also print funcTimeEnd_timespec.tv_sec - funcTimeStart_timespec.tv_sec and, as @Basile Starynkevitch suggested, add to it the nanoseconds part multiplied by 1e-9. Then you can compare the elapsed time shown by both functions.
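In the loop from the question, that means accumulating something like this (Func_elapsed_total is a hypothetical double accumulator):

/* per iteration, using both fields of the question's timespec values */
Func_elapsed_total +=
      (double)(funcTimeEnd_timespec.tv_sec - funcTimeStart_timespec.tv_sec)
    + 1.0e-9 * (funcTimeEnd_timespec.tv_nsec - funcTimeStart_timespec.tv_nsec);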
I am replying to the previous answers as an answer because I wanted to paste code snippets.
The question is whether, to find the elapsed time,
I should first subtract the corresponding fields and then add them:
(end.tv_sec - start.tv_sec) + 1.0e-9*(end.tv_nsec - start.tv_nsec)
or
first add the fields and then compute the difference:
(end.tv_sec + 1.0e-9*end.tv_nsec) - (start.tv_sec + 1.0e-9*start.tv_nsec)
In the first case end.tv_nsec is quite often smaller than start.tv_nsec, so the difference becomes a negative number and gives me a wrong result.
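For what it's worth, the two expressions are algebraically identical, and in double arithmetic a negative nanosecond difference is harmless because the seconds part compensates for it. A small sketch of where the wrong numbers typically come from (the start/end values here are hypothetical):

struct timespec start = {10, 900000000}, end = {11, 100000000};

/* wrong: the signed difference -800000000 wraps to a huge unsigned value */
unsigned long bad = end.tv_nsec - start.tv_nsec;

/* fine: evaluates to 0.2 seconds, despite the negative nanosecond part */
double ok = (end.tv_sec - start.tv_sec)
          + 1.0e-9 * (end.tv_nsec - start.tv_nsec);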
I'm relatively new to C programming and I'm working on a project which needs to be very accurate with time; therefore I tried to write something that creates a timestamp with millisecond precision.
It seems to work, but my question is whether this is the right way to do it, or is there a much easier way? Here is my code:
#include <stdio.h>
#include <time.h>

void wait(int milliseconds)
{
    clock_t start = clock();
    while (1)
        if (clock() - start >= milliseconds)
            break;
}

int main()
{
    time_t now;
    clock_t milli;
    int waitMillSec = 2800, seconds, milliseconds = 0;
    struct tm *ptm;

    now = time(NULL);
    ptm = gmtime(&now);
    printf("time before: %d:%d:%d:%d\n", ptm->tm_hour, ptm->tm_min, ptm->tm_sec, milliseconds);

    /* wait until next full second */
    while (now == time(NULL));
    milli = clock();

    /* DO SOMETHING HERE */
    /* for testing, wait a user-defined period */
    wait(waitMillSec);

    milli = clock() - milli;

    /* create timestamp with milliseconds precision */
    seconds = milli / CLOCKS_PER_SEC;
    milliseconds = milli % CLOCKS_PER_SEC;
    now = now + seconds;
    ptm = gmtime(&now);
    printf("time after: %d:%d:%d:%d\n", ptm->tm_hour, ptm->tm_min, ptm->tm_sec, milliseconds);
    return 0;
}
The following code seems likely to provide millisecond granularity:
#include <windows.h>
#include <stdio.h>
int main(void) {
SYSTEMTIME t;
GetSystemTime(&t); // or GetLocalTime(&t)
printf("The system time is: %02d:%02d:%02d.%03d\n",
t.wHour, t.wMinute, t.wSecond, t.wMilliseconds);
return 0;
}
This is based on http://msdn.microsoft.com/en-us/library/windows/desktop/ms724950%28v=vs.85%29.aspx. The above code snippet was tested with CYGWIN on Windows 7.
For Windows 8, there is GetSystemTimePreciseAsFileTime, which "retrieves the current system date and time with the highest possible level of precision (<1us)."
Your original approach would probably be ok 99.99% of the time (ignoring one minor bug, described below). Your approach is:
1. Wait for the next second to start, by repeatedly calling time() until the value changes.
2. Save that value from time().
3. Save the value from clock().
4. Calculate all subsequent times using the current value of clock() and the two saved values.
Your minor bug was that you had the first two steps reversed.
But even with this fixed, this is not guaranteed to work 100%, because there is no atomicity. Two problems:
Your code loops on time() until you are into the next second. But how far into it are you? It could be half a second, or even several seconds (e.g. if you are running a debugger with a breakpoint).
Then you call clock(). But this saved value has to 'match' the saved value of time(). If these two calls are almost instantaneous, as they usually are, then this is fine. But Windows (and Linux) time-slice, so there is no guarantee.
Another issue is the granularity of clock. If CLOCKS_PER_SEC is 1000, as seems to be the case on your system, then of course the best you can do is 1 msec. But it can be worse than that: on Unix systems it is typically 15 msecs. You could improve this by replacing clock with QueryPerformanceCounter(), as in the answer to timespec equivalent for windows, but this may be otiose, given the first two problems.
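For reference, a minimal sketch using QueryPerformanceCounter() (Windows-only):

#include <windows.h>
#include <stdio.h>

int main(void)
{
    LARGE_INTEGER freq, c0, c1;
    QueryPerformanceFrequency(&freq);   /* counter ticks per second */

    QueryPerformanceCounter(&c0);
    /* ... work to be measured ... */
    QueryPerformanceCounter(&c1);

    double ms = (double)(c1.QuadPart - c0.QuadPart) * 1000.0 / freq.QuadPart;
    printf("%.3f ms\n", ms);
    return 0;
}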
Clock periods are not at all guaranteed to be in milliseconds. You need to explicitly convert the output of clock() to milliseconds.
clock_t t1, t2;

t1 = clock();
// do something
t2 = clock();
long millis = (t2 - t1) * (1000.0 / CLOCKS_PER_SEC);
Since you are on Windows, why don't you just use Sleep()?
I'm using quite simple code to measure execution time. It works well until, I'm not sure exactly, maybe 20 minutes. But after that (>20 min.) it returns negative results. I searched the forums and tried everything, like changing the data type and using long unsigned (which returns 0), but failed again.
The following is a snippet of my code:
#include <stdio.h>
#include <time.h>

int main(void)
{
    time_t start, stop;
    double time_arm;

    start = clock();
    /* ....... */
    stop = clock();

    time_arm = (double)(stop - start) / (double)CLOCKS_PER_SEC;
    printf("Time Taken by ARM only is %lf \n", time_arm);
}
output is
Time Taken by ARM only is -2055.367296
Any help is appreciated, thanks in advance.
POSIX requires CLOCKS_PER_SEC to be 1,000,000. That means your count is in microseconds - and 2³¹ microseconds is about 35 minutes. Your timer is just overflowing, so you can't get meaningful results when that happens.
Why you see the problem at 20 minutes, I'm not sure - maybe your CLOCKS_PER_SEC isn't POSIX-compatible. Regardless, your problem is timer overflow. You'll need to handle this problem in a different way - maybe look into getrusage(2).
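A minimal sketch of the getrusage() route (POSIX; ru_utime holds the user CPU time as a struct timeval, so there is no 35-minute wraparound):

#include <stdio.h>
#include <sys/resource.h>

int main(void)
{
    struct rusage ru;

    /* ... work to be measured ... */

    getrusage(RUSAGE_SELF, &ru);
    printf("user CPU: %ld.%06ld s\n",
           (long)ru.ru_utime.tv_sec, (long)ru.ru_utime.tv_usec);
    return 0;
}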
clock_t is a long, which on a 32-bit system is a 32-bit signed value: it can hold 2³¹ ticks before it overflows. With CLOCKS_PER_SEC equal to 1000000 [as mentioned by POSIX], the value returned by clock() wraps around roughly every 72 minutes.
According to MSDN, it can also return -1:
The elapsed wall-clock time since the start of the process (elapsed
time in seconds times CLOCKS_PER_SEC). If the amount of elapsed time
is unavailable, the function returns –1, cast as a clock_t.
On a side note:
clock() measures the CPU time used by the program; it does NOT measure real time, and its return value is specified in microseconds.
You could try using clock_gettime() with the CLOCK_PROCESS_CPUTIME_ID option if you are on a POSIX system. clock_gettime() uses struct timespec to return time information so doesn't suffer from the same problem as using a single integer.
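A minimal sketch of that approach:

#include <stdio.h>
#include <time.h>

int main(void)
{
    struct timespec t0, t1;

    clock_gettime(CLOCK_PROCESS_CPUTIME_ID, &t0);
    /* ... work to be measured ... */
    clock_gettime(CLOCK_PROCESS_CPUTIME_ID, &t1);

    double s = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
    printf("CPU time: %.9f s\n", s);
    return 0;
}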
I want to know the time that has passed between the occurrence of two events.
Now, the simple way would be to use something like:
time_t x, y;
x = time(NULL);
/* Some other stuff happens */
y = time(NULL);
printf("Time passed: %i", y-x);
However, it is possible that the system time is changed between these two events.
Is there an alternative way to know the time that has passed between the two events? Or is there a way to detect changes to the system time?
Since you're apparently on Linux, you can use the POSIX CLOCK_MONOTONIC clock to get a timer that is unaffected by system time changes:
struct timespec ts1, ts2;
clock_gettime(CLOCK_MONOTONIC, &ts1);
/* Things happen */
clock_gettime(CLOCK_MONOTONIC, &ts2);
ts2.tv_sec -= ts1.tv_sec;
ts2.tv_nsec -= ts1.tv_nsec;
if (ts2.tv_nsec < 0)
{
ts2.tv_nsec += 1000000000L;
ts2.tv_sec -= 1;
}
printf("Elapsed time: %lld.%09lds\n", (long long)ts2.tv_sec, ts2.tv_nsec);
To check if the system supports CLOCK_MONOTONIC, check for sysconf(_SC_MONOTONIC_CLOCK) > 0.
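For example, a small runtime check might look like this (have_monotonic_clock is a hypothetical helper name):

#include <unistd.h>

/* returns nonzero if the system advertises a monotonic clock */
int have_monotonic_clock(void)
{
    return sysconf(_SC_MONOTONIC_CLOCK) > 0;
}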
You can use clock_gettime() on Linux or gethrtime() on some other Unix systems. Although the difference between two such values gives you a time interval, those calls do not give you regular calendar time values. As HP-UX says regarding gethrtime():
The gethrtime() function returns the current high-resolution real time. Time is expressed as nanoseconds since a certain time in the past.
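A hedged sketch for systems that provide gethrtime() (e.g. Solaris/HP-UX; the hrtime_t type and header are those systems' conventions, not portable C):

#include <stdio.h>
#include <sys/time.h>   /* declares hrtime_t and gethrtime() on these systems */

int main(void)
{
    hrtime_t t0 = gethrtime();
    /* ... work to be measured ... */
    hrtime_t t1 = gethrtime();

    printf("elapsed: %lld ns\n", (long long)(t1 - t0));
    return 0;
}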