C - Timing my program in seconds with microsecond precision on Linux

I'm trying to find out the user time (on a Linux machine) of a C program I've written. Currently I'm calling gettimeofday() once at the beginning of my code and once at the end. I'm using the timeval struct and difftime(stop.tv_sec, start.tv_sec) to get the number of seconds elapsed. This returns whole seconds, like "1.000000". However, my project requires that my program be timed in seconds with microsecond precision, for example "1.234567". How can I find this value? I know gettimeofday() also records microseconds in .tv_usec, but I'm not sure how to use this value and format it correctly.

Use the timersub() macro from sys/time.h. It takes three arguments, all pointers to struct timeval: it subtracts the second time from the first and stores the result (the difference) in the third, so pass the final time first and the initial time second.
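A minimal sketch of how that fits together (assuming a glibc/BSD-style system where timersub() is available in sys/time.h), including the formatting the question asks about:
#include <stdio.h>
#include <sys/time.h>

int main(void)
{
    struct timeval start, stop, diff;

    gettimeofday(&start, NULL);
    /* ... code to be timed ... */
    gettimeofday(&stop, NULL);

    /* diff = stop - start */
    timersub(&stop, &start, &diff);

    /* Prints e.g. "1.234567" */
    printf("%ld.%06ld\n", (long)diff.tv_sec, (long)diff.tv_usec);
    return 0;
}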

Try putting the initial time into storage, then subtract it from the final time when you are done, combining both fields into one floating-point value:
double storedValue = start.tv_sec + 0.000001 * start.tv_usec;
double elapsed = (stop.tv_sec + 0.000001 * stop.tv_usec) - storedValue;

If you want microseconds:
long long usec;
usec = tv.tv_sec;
usec *= 1000000;
usec += tv.tv_usec;
If you want fractional seconds [floating point]:
double asec;
asec = tv.tv_usec;
asec /= 1e6;
asec += tv.tv_sec;
Here are some complete functions:
#include <sys/time.h>

// tvsec -- get current time in absolute microseconds
long long
tvsec(void)
{
    struct timeval tv;
    long long usec;

    gettimeofday(&tv, NULL);
    usec = tv.tv_sec;
    usec *= 1000000;
    usec += tv.tv_usec;

    return usec;
}

// tvsecf -- get current time in fractional seconds
double
tvsecf(void)
{
    struct timeval tv;
    double asec;

    gettimeofday(&tv, NULL);
    asec = tv.tv_usec;
    asec /= 1e6;
    asec += tv.tv_sec;

    return asec;
}
Note: If you want even higher accuracy (e.g. nanoseconds), you can use clock_gettime(CLOCK_REALTIME, ...) and apply similar conversions.
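For instance, a minimal sketch of the nanosecond variant (assuming POSIX clock_gettime() is available; older systems may need -lrt at link time, and the tvnsec name is made up to match the functions above):
#include <time.h>

// tvnsec -- get current time in absolute nanoseconds (sketch)
long long
tvnsec(void)
{
    struct timespec ts;

    clock_gettime(CLOCK_REALTIME, &ts);
    return (long long)ts.tv_sec * 1000000000LL + ts.tv_nsec;
}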

Related

Convert milliseconds to timespec for GNU port

I want to convert milliseconds into the timespec structure used by GNU/Linux. I have tried the following code:
timespec GetTimeSpecValue(unsigned long milisec)
{
    struct timespec req;
    //long sec = (milisecondtime /1000);
    time_t sec = (time_t)(milisec/1000);
    req->tv_sec = sec;
    req->tv_nsec = 0;
    return req;
}
Running this code gives me the following error.
expected ‘=’, ‘,’, ‘;’, ‘asm’ or ‘__attribute__’ before ‘GetTimeSpecValue’
I have also included the time.h file in the code.
The timespec structure represents time in two portions: seconds and nanoseconds. Thus, the algorithm for conversion from milliseconds is pretty darn simple. One second has a thousand milliseconds, one millisecond has a thousand microseconds, and one microsecond has a thousand nanoseconds, for which we are grateful to SI. Therefore, we first need to divide the milliseconds by a thousand to get the number of whole seconds. Say, for example, 1500 milliseconds / 1000 = 1.5 seconds. With integer arithmetic (not floating point), the remainder is dropped (i.e. 1500 / 1000 is just 1, not 1.5). Then we need to take the remainder, which denotes a number of milliseconds that is definitely less than one second, and multiply it by a million to convert it to nanoseconds. To get the remainder of dividing by 1000, we use the modulo operator (%) (i.e. 1500 % 1000 is 500). For example, let's convert 4321 milliseconds to seconds and nanoseconds:
4321 (milliseconds) / 1000 = 4 (seconds)
4321 (milliseconds) % 1000 = 321 (milliseconds)
321 (milliseconds) * 1000000 = 321000000 (nanoseconds)
Knowing the above, the only thing that is left is to write a little bit of C code. There are a few things that you didn't get right:
In C, you have to prefix structure data types with struct. For example, instead of saying timespec you say struct timespec. In C++, however, you don't have to do it (unfortunately, in my opinion).
You cannot return structures from a function in C. Therefore, you need to pass a structure by pointer into a function that does something with that structure.
Edit: this is not strictly true; see (Return a `struct` from a function in C). Returning a struct by value is valid C, though passing by pointer is common in APIs like this.
OK, enough talking. Below is a simple C code example:
#include <time.h>
#include <stdlib.h>
#include <stdio.h>

static void ms2ts(struct timespec *ts, unsigned long ms)
{
    ts->tv_sec = ms / 1000;
    ts->tv_nsec = (ms % 1000) * 1000000;
}

static void print_ts(unsigned long ms)
{
    struct timespec ts;

    ms2ts(&ts, ms);
    printf("%lu milliseconds is %ld seconds and %ld nanoseconds.\n",
           ms, (long)ts.tv_sec, ts.tv_nsec);
}

int main(void)
{
    print_ts(1000);
    print_ts(2500);
    print_ts(4321);
    return EXIT_SUCCESS;
}
Hope it helps. Good Luck!
try this:
struct timespec GetTimeSpecValue(unsigned long millisec) {
    struct timespec req;

    req.tv_sec = (time_t)(millisec / 1000);
    req.tv_nsec = (millisec % 1000) * 1000000;
    return req;
}
I don't think struct timespec is typedef'ed, hence you need to prepend timespec with struct. Also work out the nanosecond part if you want to be precise. Note that req is not a pointer, so its members cannot be accessed with '->'.
Incorporating a few tweaks to the answer, including Geoffrey's comment, the code below avoids the divide for short delays and the modulo for long delays:
void msec_to_timespec(unsigned long msec, struct timespec *ts)
{
    if (msec < 1000) {
        ts->tv_sec = 0;
        ts->tv_nsec = msec * 1000000;
    }
    else {
        ts->tv_sec = msec / 1000;
        ts->tv_nsec = (msec - ts->tv_sec * 1000) * 1000000;
    }
}
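For illustration, a common use of such a conversion is feeding the result to nanosleep(); a minimal sketch (the sleep_ms() name is made up here, and error/EINTR handling is omitted):
#include <time.h>

// Sketch: sleep for the given number of milliseconds using the
// msec_to_timespec() helper defined above.
static void sleep_ms(unsigned long msec)
{
    struct timespec ts;

    msec_to_timespec(msec, &ts);
    nanosleep(&ts, NULL);
}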

Measuring elapsed time in Linux for a C program

I am trying to measure elapsed time in Linux. My answer keeps returning zero, which makes no sense to me. Below is the way I measure time in my program.
#include <stdio.h>
#include <stdlib.h>
#include <math.h>
#include <time.h>

main()
{
    double p16 = 1, pi = 0, precision = 1000;
    int k;
    unsigned long micros = 0;
    float millis = 0.0;
    clock_t start, end;

    start = clock();

    // This section calculates pi
    for (k = 0; k <= precision; k++)
    {
        pi += 1.0 / p16 * (4.0 / (8 * k + 1) - 2.0 / (8 * k + 4) - 1.0 / (8 * k + 5) - 1.0 / (8 * k + 6));
        p16 *= 16;
    }

    end = clock();
    micros = end - start;
    millis = micros / 1000;

    printf("%f\n", millis); //my time keeps being returned as 0
    printf("this value of pi is : %f\n", pi);
}
Three alternatives
clock()
gettimeofday()
clock_gettime()
clock_gettime() goes up to nanosecond accuracy, and it supports four clocks:
CLOCK_REALTIME
System-wide realtime clock. Setting this clock requires appropriate privileges.
CLOCK_MONOTONIC
Clock that cannot be set and represents monotonic time since some unspecified starting point.
CLOCK_PROCESS_CPUTIME_ID
High-resolution per-process timer from the CPU.
CLOCK_THREAD_CPUTIME_ID
Thread-specific CPU-time clock.
You can use it as
#include <time.h>
struct timespec start, stop;
clock_gettime(CLOCK_PROCESS_CPUTIME_ID, &start);
/// do something
clock_gettime(CLOCK_PROCESS_CPUTIME_ID, &stop);
double result = (stop.tv_sec - start.tv_sec) * 1e6 + (stop.tv_nsec - start.tv_nsec) / 1e3; // in microseconds
Note: The clock() function returns CPU time for your process, not wall clock time. I believe this is what the OP was interested in. If wall clock time is desired, then gettimeofday() is a good choice as suggested by an earlier answer. clock_gettime() can do either one if your system supports it; on my linux embedded system clock_gettime() is not supported, but clock() and gettimeofday() are.
Below is the code for getting wall clock time using gettimeofday():
#include <stdio.h>    // for printf()
#include <sys/time.h> // for gettimeofday()
#include <unistd.h>   // for usleep()

int main() {
    struct timeval start, end;
    long secs_used, micros_used;

    gettimeofday(&start, NULL);
    usleep(1250000); // Do the stuff you want to time here
    gettimeofday(&end, NULL);

    printf("start: %ld secs, %ld usecs\n", (long)start.tv_sec, (long)start.tv_usec);
    printf("end:   %ld secs, %ld usecs\n", (long)end.tv_sec, (long)end.tv_usec);

    secs_used = (end.tv_sec - start.tv_sec); // avoid overflow by subtracting first
    micros_used = ((secs_used * 1000000) + end.tv_usec) - (start.tv_usec);

    printf("micros_used: %ld\n", micros_used);
    return 0;
}
To start with, you need to use floating-point arithmetic. Any integer value divided by a larger integer value will always be zero.
And of course you should actually do something between getting the start and end times.
By the way, if you have access to gettimeofday(), it's normally preferred over clock() as it has higher resolution. Or maybe clock_gettime(), which has even higher resolution.
There are two issues with your code as written.
According to man 3 clock, the resolution of clock() is CLOCKS_PER_SEC ticks per second. On a recent-ish Cygwin system, that's 200. Based on the names of your variables, you are expecting the value to be 1,000,000.
This line:
millis = micros / 1000;
will compute the quotient as an integer, because both operands are integers. The promotion to a floating-point type occurs at the time of the assignment to millis, at which point the fractional part has already been discarded.
To compute the number of seconds elapsed using clock(), you need to do something like this:
clock_t start, end;
float seconds;
start = clock();
// Your code here:
end = clock();
seconds = end - start; // time difference is now a float
seconds /= CLOCKS_PER_SEC; // this division is now floating point
However, you will almost certainly not get millisecond accuracy. For that, you would need to use gettimeofday() or clock_gettime(). Further, you probably want to use double instead of float, because you are likely going to wind up subtracting very large numbers with a very tiny difference. The example using clock_gettime() would be:
#include <stdio.h>
#include <time.h>

/* Floating point nanoseconds per second */
#define NANO_PER_SEC 1000000000.0

int main(void)
{
    struct timespec start, end;
    double start_sec, end_sec, elapsed_sec;

    clock_gettime(CLOCK_REALTIME, &start);
    // Your code here
    clock_gettime(CLOCK_REALTIME, &end);

    start_sec = start.tv_sec + start.tv_nsec / NANO_PER_SEC;
    end_sec = end.tv_sec + end.tv_nsec / NANO_PER_SEC;
    elapsed_sec = end_sec - start_sec;

    printf("The operation took %.3f seconds\n", elapsed_sec);
    return 0;
}
Since NANO_PER_SEC is a floating-point value, the division operations are carried out in floating-point.
Sources:
man pages for clock(3), gettimeofday(3), and clock_gettime(3).
The C Programming Language, Kernighan and Ritchie
Try Sunil D S's answer, but change micros from unsigned long to type float or double, like this:
double micros;
float seconds;
clock_t start, end;
start = clock();
/* Do something here */
end = clock();
micros = end - start;
seconds = micros / 1000000;
Alternatively, you could use getrusage(), like this:
#include <sys/time.h>     // for the struct timeval members of struct rusage
#include <sys/resource.h> // for getrusage()
struct rusage before;
struct rusage after;
float a_cputime, b_cputime, e_cputime;
float a_systime, b_systime, e_systime;
getrusage(RUSAGE_SELF, &before);
/* Do something here! or put in loop and do many times */
getrusage(RUSAGE_SELF, &after);
a_cputime = after.ru_utime.tv_sec + after.ru_utime.tv_usec / 1000000.0;
b_cputime = before.ru_utime.tv_sec + before.ru_utime.tv_usec / 1000000.0;
e_cputime = a_cputime - b_cputime;
a_systime = after.ru_stime.tv_sec + after.ru_stime.tv_usec / 1000000.0;
b_systime = before.ru_stime.tv_sec + before.ru_stime.tv_usec / 1000000.0;
e_systime = a_systime - b_systime;
printf("CPU time (secs): user=%.4f; system=%.4f; real=%.4f\n",e_cputime, e_systime, seconds);
Units and precision depend on how much time you want to measure, but either of these should provide reasonable accuracy for milliseconds.
When you divide, you might end up with a fractional value, hence you need a floating-point number to store the number of milliseconds.
If you don't use floating point, the decimal part is truncated. In your piece of code, start and end are ALMOST the same, hence the result of the division, computed in integer arithmetic, is "0".
unsigned long micros = 0;
float millis = 0.0;
clock_t start, end;
start = clock();
//code goes here
end = clock();
micros = end - start;
millis = micros / 1000;

Elapsed time in negative value

struct timeval start, end;
start.tv_usec = 0;
end.tv_usec = 0;
gettimeofday(&start, NULL);
functionA();
gettimeofday(&end, NULL);
long t = end.tv_usec - start.tv_usec;
printf("Total elapsed time %ld us \n", t);
I am calculating the total elapsed time like this, but it sometimes shows a negative value.
What might be causing the problem?
Thanks in advance.
Keep in mind that there is both a seconds and a microseconds field in that structure. Therefore, if you simply subtract the microseconds field, you could have a time that is later in seconds while its microseconds field is smaller. For instance, an end time of 5 seconds, 100 microseconds will give a negative result compared to a start time of 4 seconds, 5000 microseconds with the subtraction method you're using. In order to get the proper result, you have to take into account both the seconds and microseconds fields of the structure. This can be done as follows:
long seconds = end.tv_sec - start.tv_sec;
long micro_seconds = end.tv_usec - start.tv_usec;

if (micro_seconds < 0)
{
    seconds -= 1;
    micro_seconds += 1000000; // borrow one second's worth of microseconds
}

long total_micro_seconds = (seconds * 1000000) + micro_seconds;
maybe something along the lines of:
long t = (end.tv_sec*1e6 + end.tv_usec) - (start.tv_sec*1e6 + start.tv_usec);
From The GNU C Library:
Data Type: struct timeval
The struct timeval structure represents an elapsed time. It is declared in sys/time.h and has the following members:
long int tv_sec
This represents the number of whole seconds of elapsed time.
long int tv_usec
This is the rest of the elapsed time (a fraction of a second), represented as the number of microseconds. It is always less than one million.
Your code subtracts only the microseconds in tv_usec, which represent just the fractional part of each time beyond the full seconds in tv_sec. You need to work with both values in order to find the exact microsecond difference between the two times.
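For instance, a minimal sketch that combines both fields into a single microsecond count (using long long so the multiplication cannot overflow a 32-bit long):
long long elapsed_us = (long long)(end.tv_sec - start.tv_sec) * 1000000LL
                     + (end.tv_usec - start.tv_usec);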

Regarding getting time in milliseconds

I am working on a logger, using the C language on the QNX platform with Momentics, to print time in the following format:
2010-11-02 14:45:15.000
I am able to get the date, hour, minutes, and seconds using
time(&timeSpec);
struct tm gmt;
int iSysTimeSec = timeSpec;
gmtime_r((time_t *)&iSysTimeSec, &gmt);
sprintf(&MsgStamp[0], SYS_MSG_STAMP_PRINTF_FORMAT, gmt.tm_year+1900, gmt.tm_mon + 1, gmt.tm_mday, gmt.tm_hour, gmt.tm_min, gmt.tm_sec, iSysTimeMs );
The question is: how do I get millisecond granularity using QNX Momentics?
I tried to get millisecond granularity using the QNX-specific
int iSysTimeMs = ( (ClockCycles () * 1000) / SYSPAGE_ENTRY(qtime)->cycles_per_sec ) % 1000;
but I want to do this the POSIX way so that it is portable. How do we do this?
Thanks!
Venkata
In QNX 6 you can use clock_gettime() to get the maximum granularity allowed by the system:
struct timespec start;
clock_gettime( CLOCK_REALTIME, &start);
The gettimeofday() system call will return a structure holding the current Unix time in seconds and the number of microseconds belonging to the current second.
To get the total number of microseconds:
struct timeval tv;
gettimeofday(&tv, NULL);
u_int64_t now = tv.tv_sec * 1000000ULL + tv.tv_usec;
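Putting it together for the log format in the question, a minimal sketch (the format_stamp() name is made up for illustration):
#include <stdio.h>
#include <time.h>
#include <sys/time.h>

// Sketch: format the current UTC time as "YYYY-MM-DD HH:MM:SS.mmm".
static void format_stamp(char *buf, size_t len)
{
    struct timeval tv;
    struct tm gmt;

    gettimeofday(&tv, NULL);
    gmtime_r(&tv.tv_sec, &gmt);
    snprintf(buf, len, "%04d-%02d-%02d %02d:%02d:%02d.%03ld",
             gmt.tm_year + 1900, gmt.tm_mon + 1, gmt.tm_mday,
             gmt.tm_hour, gmt.tm_min, gmt.tm_sec,
             (long)(tv.tv_usec / 1000));
}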

Time stamp in the C programming language

How do I stamp two times t1 and t2 and get the difference in milliseconds in C?
This will give you the time in seconds + microseconds
#include <sys/time.h>
struct timeval tv;
gettimeofday(&tv,NULL);
tv.tv_sec // seconds
tv.tv_usec // microseconds
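To get the difference of two such stamps in milliseconds, as the question asks, here is a minimal sketch:
struct timeval t1, t2;

gettimeofday(&t1, NULL);
/* ... code to time ... */
gettimeofday(&t2, NULL);

// Combine the seconds and microseconds fields into one millisecond value.
double diff_ms = (t2.tv_sec - t1.tv_sec) * 1000.0
               + (t2.tv_usec - t1.tv_usec) / 1000.0;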
Standard C99:
#include <time.h>
time_t t0 = time(0);
// ...
time_t t1 = time(0);
double datetime_diff_ms = difftime(t1, t0) * 1000.;
clock_t c0 = clock();
// ...
clock_t c1 = clock();
double runtime_diff_ms = (c1 - c0) * 1000. / CLOCKS_PER_SEC;
The precision of the types is implementation-defined; i.e., the datetime difference might only return full seconds.
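If your toolchain supports C11, timespec_get() provides sub-second stamps in standard C as well; a minimal sketch (assuming C11 library support):
#include <stdio.h>
#include <time.h>

int main(void)
{
    struct timespec t0, t1;

    timespec_get(&t0, TIME_UTC);
    // ... code to time ...
    timespec_get(&t1, TIME_UTC);

    double runtime_diff_ms = (t1.tv_sec - t0.tv_sec) * 1000.0
                           + (t1.tv_nsec - t0.tv_nsec) / 1e6;
    printf("%.3f ms\n", runtime_diff_ms);
    return 0;
}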
If you want to find elapsed time, this method will work as long as you don't reboot the computer between the start and end.
In Windows, use GetTickCount(). Here's how:
DWORD dwStart = GetTickCount();
...
... process you want to measure elapsed time for
...
DWORD dwElapsed = GetTickCount() - dwStart;
dwElapsed is now the number of elapsed milliseconds.
In Linux, use clock() and CLOCKS_PER_SEC to do about the same thing.
If you need timestamps that last through reboots or across PCs (which would need quite good synchronization indeed), then use the other methods (gettimeofday()).
Also, in Windows at least you can get much better than standard time resolution. Usually, if you called GetTickCount() in a tight loop, you'd see it jumping by 10-50 each time it changed. That's because of the time quantum used by the Windows thread scheduler. This is more or less the amount of time it gives each thread to run before switching to something else. If you do a:
timeBeginPeriod(1);
at the beginning of your program or process and a:
timeEndPeriod(1);
at the end, then the quantum will change to 1 ms, and you will get much better time resolution on the GetTickCount() call. However, this does make a subtle change to how your entire computer runs processes, so keep that in mind. However, Windows Media Player and many other things do this routinely anyway, so I don't worry too much about it.
I'm sure there's probably some way to do the same in Linux (probably with much better control, or maybe with sub-millisecond quantums) but I haven't needed to do that yet in Linux.
/*
 * Returns the current time as a string (the caller must free it).
 */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

char *time_stamp(void){
    char *timestamp = (char *)malloc(sizeof(char) * 16);
    time_t ltime;
    struct tm *tm;

    ltime = time(NULL);
    tm = localtime(&ltime);

    // tm_mon is zero-based, so add 1 to get the calendar month
    sprintf(timestamp, "%04d%02d%02d%02d%02d%02d", tm->tm_year+1900, tm->tm_mon + 1,
            tm->tm_mday, tm->tm_hour, tm->tm_min, tm->tm_sec);
    return timestamp;
}

int main(void){
    printf(" Timestamp: %s\n", time_stamp());
    return 0;
}
Output: Timestamp: 20110912130940 // 2011 Sep 12 13:09:40
Use @Arkaitz Jimenez's code to get two timevals:
#include <sys/time.h>
//...
struct timeval tv1, tv2, diff;

// get the first time:
gettimeofday(&tv1, NULL);

// do whatever it is you want to time
// ...

// get the second time:
gettimeofday(&tv2, NULL);

// get the difference (the function computes x - y, so the later time goes first):
int result = timeval_subtract(&diff, &tv2, &tv1);

// the difference is stored in diff now.
Sample code for timeval_subtract can be found at this web site:
/* Subtract the `struct timeval' values X and Y,
   storing the result in RESULT.
   Return 1 if the difference is negative, otherwise 0. */
int
timeval_subtract(struct timeval *result, struct timeval *x, struct timeval *y)
{
    /* Perform the carry for the later subtraction by updating y. */
    if (x->tv_usec < y->tv_usec) {
        int nsec = (y->tv_usec - x->tv_usec) / 1000000 + 1;
        y->tv_usec -= 1000000 * nsec;
        y->tv_sec += nsec;
    }
    if (x->tv_usec - y->tv_usec > 1000000) {
        int nsec = (x->tv_usec - y->tv_usec) / 1000000;
        y->tv_usec += 1000000 * nsec;
        y->tv_sec -= nsec;
    }

    /* Compute the time remaining to wait.
       tv_usec is certainly positive. */
    result->tv_sec = x->tv_sec - y->tv_sec;
    result->tv_usec = x->tv_usec - y->tv_usec;

    /* Return 1 if result is negative. */
    return x->tv_sec < y->tv_sec;
}
How about this solution? I didn't see anything like it in my search. I am trying to avoid division and keep the solution simple:
struct timeval cur_time1, cur_time2, tdiff;

gettimeofday(&cur_time1, NULL);
sleep(1);
gettimeofday(&cur_time2, NULL);

/* Borrow one second up front so tv_usec stays positive;
   the loop below pays it back. */
tdiff.tv_sec = cur_time2.tv_sec - cur_time1.tv_sec - 1;
tdiff.tv_usec = cur_time2.tv_usec + (1000000 - cur_time1.tv_usec);

while (tdiff.tv_usec >= 1000000)
{
    tdiff.tv_sec++;
    tdiff.tv_usec -= 1000000;
    printf("updated tdiff tv_sec:%ld tv_usec:%ld\n", tdiff.tv_sec, tdiff.tv_usec);
}

printf("end tdiff tv_sec:%ld tv_usec:%ld\n", tdiff.tv_sec, tdiff.tv_usec);
Also be aware of interactions between clock() and usleep(): usleep() suspends the program, while clock() only measures the time the program is running.
You might be better off using gettimeofday() as mentioned here.
Use gettimeofday() or better clock_gettime()
You can try the routines in the C time library (time.h). Also take a look at clock() in the same library: it gives the clock ticks since the program started. You can save its value before the operation you want to measure, capture the clock ticks again after the operation, and take the difference between them to get the elapsed time.
#include <stdio.h>
#include <string.h>
#include <time.h>

time_t tm = time(NULL);
char stime[4096];

ctime_r(&tm, stime);
stime[strlen(stime) - 1] = '\0'; // strip the trailing newline
printf("%s", stime);
This program clearly shows how to do it: it takes time 1, pauses for 1 second, and then takes time 2; the difference between the two times should be 1000 milliseconds. So your answer is correct.
#include <stdio.h>
#include <time.h>
#include <unistd.h>

// Name: miliseconds.c
// gcc /tmp/miliseconds.c -o miliseconds

struct timespec ts1, ts2; // time1 and time2

int main(void) {
    // get time1
    clock_gettime(CLOCK_REALTIME, &ts1);

    sleep(1); // 1 second pause

    // get time2
    clock_gettime(CLOCK_REALTIME, &ts2);

    // nanoseconds difference in milli (1 ms = 1,000,000 ns)
    long miliseconds1 = (ts2.tv_nsec - ts1.tv_nsec) / 1000000;
    // seconds difference in milli
    long miliseconds2 = (ts2.tv_sec - ts1.tv_sec) * 1000;

    long miliseconds = miliseconds1 + miliseconds2;

    printf("%ld\n", miliseconds);
    return 0;
}
