clock() returning a negative value in C

I'm using quite simple code to measure execution time. It works well until, I'm not sure exactly, maybe around 20 minutes, but after that (>20 min.) it returns negative results. I searched throughout the forums and tried everything, like changing the data type and using long unsigned (which returns 0), but failed again.
The following is a snippet of my code:
#include <stdio.h>
#include <time.h>

int main(void)
{
    time_t start, stop;
    double time_arm;

    start = clock();
    /* ....... */
    stop = clock();

    time_arm = (double)(stop - start) / (double)CLOCKS_PER_SEC;
    printf("Time Taken by ARM only is %lf \n", time_arm);
    return 0;
}
The output is:
Time Taken by ARM only is -2055.367296
Any help is appreciated, thanks in advance.

POSIX requires CLOCKS_PER_SEC to be 1,000,000. That means your count is in microseconds - and 2^31 microseconds is about 35 minutes. Your timer is just overflowing, so you can't get meaningful results once that happens.
Why you see the problem at 20 minutes, I'm not sure - maybe your CLOCKS_PER_SEC isn't POSIX-compatible. Regardless, your problem is timer overflow. You'll need to handle this problem in a different way - maybe look into getrusage(2).
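As an illustration, a minimal sketch of measuring CPU time with getrusage(2) might look like this (assuming a POSIX system; demo_work() is just a hypothetical placeholder for the code being timed):

#include <stdio.h>
#include <sys/time.h>
#include <sys/resource.h>

static void demo_work(void) { /* placeholder for the work being timed */ }

int main(void)
{
    struct rusage before, after;

    getrusage(RUSAGE_SELF, &before);
    demo_work();
    getrusage(RUSAGE_SELF, &after);

    /* ru_utime is user CPU time as a struct timeval (seconds + microseconds) */
    double user_sec = (after.ru_utime.tv_sec  - before.ru_utime.tv_sec)
                    + (after.ru_utime.tv_usec - before.ru_utime.tv_usec) / 1e6;

    printf("User CPU time: %f s\n", user_sec);
    return 0;
}

Because struct timeval keeps seconds and microseconds in separate fields, this does not wrap around the way a single 32-bit tick counter does.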

clock_t is typically a 32-bit signed long, so it can only hold values up to 2^31 - 1 before it overflows. On a 32-bit system where CLOCKS_PER_SEC equals 1,000,000 [as required by POSIX], clock() will return the same value approximately every 72 minutes.
According to MSDN, it can also return -1:
The elapsed wall-clock time since the start of the process (elapsed
time in seconds times CLOCKS_PER_SEC). If the amount of elapsed time
is unavailable, the function returns –1, cast as a clock_t.
On a side note:
clock() measures CPU time used by the program, NOT real (wall-clock) time, and on POSIX systems its return value is effectively in microseconds.

You could try using clock_gettime() with the CLOCK_PROCESS_CPUTIME_ID option if you are on a POSIX system. clock_gettime() uses struct timespec to return time information, so it doesn't suffer from the same overflow problem as a single integer counter.
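For example, a rough sketch (POSIX only; on older glibc you may need to link with -lrt):

#include <stdio.h>
#include <time.h>

int main(void)
{
    struct timespec start, stop;

    clock_gettime(CLOCK_PROCESS_CPUTIME_ID, &start);
    /* ... work to be timed ... */
    clock_gettime(CLOCK_PROCESS_CPUTIME_ID, &stop);

    /* seconds and nanoseconds are kept in separate fields */
    double elapsed = (stop.tv_sec - start.tv_sec)
                   + (stop.tv_nsec - start.tv_nsec) / 1e9;

    printf("CPU time used: %f s\n", elapsed);
    return 0;
}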

Related

C find elapsed time in C Linux

I have to calculate the time taken by a function to complete.
This function is called in a loop and I want to find out the total time.
Usually the time is very small, in the nanosecond or microsecond range.
To find the elapsed time I used gettimeofday() with struct timeval and clock_gettime() with struct timespec.
The problem is that the time returned via timeval is correct in seconds but wrong in microseconds.
Similarly, the time returned via timespec in nanoseconds is wrong.
Wrong in the sense that they do not tally with the time returned in seconds.
For clock_gettime() I tried both CLOCK_PROCESS_CPUTIME_ID and CLOCK_MONOTONIC.
Using clock() also does not help.
Code snippet:
struct timeval funcTimestart_timeval, funcTimeEnd_timeval;
struct timespec funcTimeStart_timespec, funcTimeEnd_timespec;
unsigned long elapsed_nanos = 0;
unsigned long elapsed_seconds = 0;
unsigned long diffInNanos = 0;
unsigned long Func_elapsed_nanos = 0;
unsigned long Func_elapsed_seconds = 0;

while (...)
{
    gettimeofday(&funcTimestart_timeval, NULL);
    clock_gettime(CLOCK_PROCESS_CPUTIME_ID, &funcTimeStart_timespec);
    ...
    demo_func();
    ...
    gettimeofday(&funcTimeEnd_timeval, NULL);
    clock_gettime(CLOCK_PROCESS_CPUTIME_ID, &funcTimeEnd_timespec);

    elapsed_seconds = funcTimeEnd_timeval.tv_sec - funcTimestart_timeval.tv_sec;
    Func_elapsed_seconds += elapsed_seconds;

    elapsed_nanos = funcTimeEnd_timespec.tv_nsec - funcTimeStart_timespec.tv_nsec;
    Func_elapsed_nanos += elapsed_nanos;
}
printf("Total time taken by demo_func() is %lu seconds( %lu nanoseconds )\n", Func_elapsed_seconds, Func_elapsed_nanos );
Printf output:
Total time taken by demo_func() is 60 seconds( 76806787 nanoseconds )
See that the time in seconds and nanoseconds do not match.
How to resolve this issue or any other appropriate method to find elapsed time?
Did you read the documentation of time(7) and clock_gettime(2)? Please read it twice.
The struct timespec is not supposed to express the same time twice. The field tv_sec gives the seconds part ts, and the field tv_nsec gives the nanoseconds part tn, so together they express the time t = ts + 10^-9 * tn.
I would suggest converting that to floating point, e.g.
printf ("total time %g\n",
(double)Func_elapsed_seconds + 1.0e-9*Func_elapsed_nanos);
Using floating point is simpler and the precision is generally enough for most needs. Otherwise, when you add or subtract struct timespec values, you need to handle the case where the sum/difference of the tv_nsec fields is negative or exceeds 1,000,000,000.
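For instance, a small subtraction helper might look like this (an untested sketch; timespec_diff is just an illustrative name, and <time.h> is assumed to be included):

/* computes *res = *end - *start, normalizing so that 0 <= tv_nsec < 1000000000 */
void timespec_diff(const struct timespec *start,
                   const struct timespec *end,
                   struct timespec *res)
{
    res->tv_sec  = end->tv_sec  - start->tv_sec;
    res->tv_nsec = end->tv_nsec - start->tv_nsec;
    if (res->tv_nsec < 0) {
        res->tv_sec  -= 1;
        res->tv_nsec += 1000000000L;
    }
}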
The problem is you are printing/comparing wrong values.
76,806,787 nanoseconds is equal to ~76 milliseconds, you cannot compare it with 60 seconds.
You are ignoring the time in seconds stored in funcTimeEnd_timespec.tv_sec.
You should also print funcTimeEnd_timespec.tv_sec - funcTimeStart_timespec.tv_sec, and, as @Basile Starynkevitch suggested, add to it the nanoseconds part after multiplying it by 1e-9. Then you can compare the elapsed time shown by both functions.
I am replying to the previous answers as an answer because I wanted to paste code snippets.
The question is: to find the elapsed time, should I
first subtract the corresponding fields and then add them,
(end.tv_sec - start.tv_sec) + 1.0e-9*(end.tv_nsec - start.tv_nsec)
or
first add the parts and then compute the difference?
(end.tv_sec + 1.0e-9*end.tv_nsec) - (start.tv_sec + 1.0e-9*start.tv_nsec)
In the first case end.tv_nsec is quite often smaller than start.tv_nsec, so the difference becomes a negative number, and this gives me a wrong result.

timestamp in c with milliseconds precision

I'm relatively new to C programming and I'm working on a project which needs to be very time accurate; therefore I tried to write something to create a timestamp with milliseconds precision.
It seems to work but my question is whether this way is the right way, or is there a much easier way? Here is my code:
#include <stdio.h>
#include <time.h>

void wait(int milliseconds)
{
    clock_t start = clock();
    while (1)
        if (clock() - start >= milliseconds)
            break;
}

int main()
{
    time_t now;
    clock_t milli;
    int waitMillSec = 2800, seconds, milliseconds = 0;
    struct tm * ptm;

    now = time(NULL);
    ptm = gmtime ( &now );
    printf("time before: %d:%d:%d:%d\n", ptm->tm_hour, ptm->tm_min, ptm->tm_sec, milliseconds );

    /* wait until next full second */
    while (now == time(NULL));

    milli = clock();
    /* DO SOMETHING HERE */
    /* for testing wait a user define period */
    wait(waitMillSec);
    milli = clock() - milli;

    /* create timestamp with milliseconds precision */
    seconds = milli / CLOCKS_PER_SEC;
    milliseconds = milli % CLOCKS_PER_SEC;
    now = now + seconds;
    ptm = gmtime( &now );
    printf("time after: %d:%d:%d:%d\n", ptm->tm_hour, ptm->tm_min, ptm->tm_sec, milliseconds );
    return 0;
}
The following code seems likely to provide millisecond granularity:
#include <windows.h>
#include <stdio.h>
int main(void) {
SYSTEMTIME t;
GetSystemTime(&t); // or GetLocalTime(&t)
printf("The system time is: %02d:%02d:%02d.%03d\n",
t.wHour, t.wMinute, t.wSecond, t.wMilliseconds);
return 0;
}
This is based on http://msdn.microsoft.com/en-us/library/windows/desktop/ms724950%28v=vs.85%29.aspx. The above code snippet was tested with CYGWIN on Windows 7.
For Windows 8, there is GetSystemTimePreciseAsFileTime, which "retrieves the current system date and time with the highest possible level of precision (<1us)."
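As a rough sketch of how that might be used (Windows 8 or later; a FILETIME counts 100-nanosecond intervals since January 1, 1601):

#include <windows.h>
#include <stdio.h>

int main(void) {
    FILETIME ft;
    ULARGE_INTEGER t;

    GetSystemTimePreciseAsFileTime(&ft);   /* 100-ns units since 1601-01-01 */
    t.LowPart  = ft.dwLowDateTime;
    t.HighPart = ft.dwHighDateTime;

    printf("Raw timestamp: %llu (x 100 ns)\n", (unsigned long long)t.QuadPart);
    return 0;
}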
Your original approach would probably be ok 99.99% of the time (ignoring one minor bug, described below). Your approach is:
1. Wait for the next second to start, by repeatedly calling time() until the value changes.
2. Save that value from time().
3. Save the value from clock().
4. Calculate all subsequent times using the current value of clock() and the two saved values.
Your minor bug was that you had the first two steps reversed.
But even with this fixed, this is not guaranteed to work 100%, because there is no atomicity. Two problems:
Your code loops on time() until you are into the next second. But how far are you into it? It could be 1/2 a second, or even several seconds (e.g. if you are running a debugger with a breakpoint).
Then you call clock(). But this saved value has to 'match' the saved value of time(). If these two calls are almost instantaneous, as they usually are, then this is fine. But Windows (and Linux) time-slice, and so there is no guarantee.
Another issue is the granularity of clock. If CLOCKS_PER_SEC is 1000, as seems to be the case on your system, then of course the best you can do is 1 msec. But it can be worse than that: on Unix systems it is typically 15 msecs. You could improve this by replacing clock with QueryPerformanceCounter(), as in the answer to timespec equivalent for windows, but this may be otiose, given the first two problems.
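For reference, a minimal QueryPerformanceCounter() sketch might look like this (an untested example under that assumption; the variable names are only placeholders):

#include <windows.h>
#include <stdio.h>

int main(void) {
    LARGE_INTEGER freq, t1, t2;

    QueryPerformanceFrequency(&freq);   /* counts per second */
    QueryPerformanceCounter(&t1);
    /* ... work to be timed ... */
    QueryPerformanceCounter(&t2);

    double ms = (double)(t2.QuadPart - t1.QuadPart) * 1000.0 / (double)freq.QuadPart;
    printf("Elapsed: %.3f ms\n", ms);
    return 0;
}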
Clock periods are not at all guaranteed to be in milliseconds. You need to explicitly convert the output of clock() to milliseconds.
clock_t t1, t2;

t1 = clock();
// do something
t2 = clock();
long millis = (t2 - t1) * (1000.0 / CLOCKS_PER_SEC);
Since you are on Windows, why don't you just use Sleep()?

choose between timeval and clock() to calculate elapsed time in C

I am using timeval as well as the clock() function to see the time difference between 2 actions in my C program. Somehow timeval seems to give me the right amount of elapsed time in milliseconds, whereas clock() gives a much smaller value.
g_time = clock();
gettimeofday(&decode_t, NULL);

/* ... after some time ... */

delay = ((float)(clock() - g_time) / (float)CLOCKS_PER_SEC);
gettimeofday(&poll_t, NULL);
delay1 = ((poll_t.tv_sec - decode_t.tv_sec)*1000 + (poll_t.tv_usec - decode_t.tv_usec)/1000.0);
printf("\ndelay1: %f delay: %f ", delay1, delay);
usual output is:
delay1: 1577.603027 delay: 0.800000
delay1 is in milliseconds and delay is in seconds.
I am using Arch Linux 64-bit. I can't understand why this is happening.
From the clock(3) manual page:
The clock() function returns an approximation of processor time used by the program.
So the clock function doesn't return the amount of wall-clock time that has passed, but the number of "ticks" of processor time your program has used. And as you know, in a multi-tasking system your program can be paused at any time to let other programs run.

elapsed time in C

#include <time.h>
time_t start,end;
time (&start);
//code here
time (&end);
double dif = difftime (end,start);
printf ("Elasped time is %.2lf seconds.", dif );
I'm getting 0.000 for both start and end times. I'm not understanding the source of error.
Also, is it better to use time(start) and time(end), or start = clock() and end = clock(), for computing the elapsed time?
On most (practically all?) systems, time() only has a granularity of one second, so any sub-second lengths of time can't be measured with it. If you're on Unix, try using gettimeofday instead.
If you do want to use clock() make sure you understand that it measures CPU time only. Also, to convert to seconds, you need to divide by CLOCKS_PER_SEC.
Short excerpts of code typically don't take long enough to run for profiling purposes. A common technique is to repeat the call many many (millions) times and then divide the resultant time delta with the iteration count. Pseudo-code:
count = 10,000,000
start = readCurrentTime()
loop count times:
myCode()
end = readCurrentTime()
elapsedTotal = end - start
elapsedForOneIteration = elapsedTotal / count
If you want accuracy, you can discount the loop overhead. For example:
loop count times:
myCode()
myCode()
and measure elapsed1 (2 x count iterations + loop overhead)
loop count times:
myCode()
and measure elapsed2 (count iterations + loop overhead)
actualElapsed = elapsed1 - elapsed2
(count iterations -- because rest of terms cancel out)
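As a rough C sketch of the repeat-and-divide idea (my_code() and the iteration count are placeholders; clock() here measures CPU time):

#include <stdio.h>
#include <time.h>

static void my_code(void) { /* placeholder for the code under test */ }

int main(void)
{
    const long count = 10000000L;
    clock_t start, end;

    start = clock();
    for (long i = 0; i < count; i++)
        my_code();
    end = clock();

    double total = (double)(end - start) / CLOCKS_PER_SEC;
    printf("total: %f s, per iteration: %g s\n", total, total / count);
    return 0;
}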
time() has (at best) one-second resolution. If your code runs in much less time than that, you aren't likely to see a difference.
Use a profiler (such a gprof on *nix, Instruments on OS X; for Windows, see "Profiling C code on Windows when using Eclipse") to time your code.
The code you're running between the measurements finishes too fast. I just tried your code printing numbers from 0 to 99,999 and I got
Elapsed time is 1.00 seconds.
Your code is taking less than a second to run.

C - gettimeofday for computing time?

Do you know how to use gettimeofday for measuring computation time? I can get one timestamp with this code:
char buffer[30];
struct timeval tv;
time_t curtime;
gettimeofday(&tv, NULL);
curtime=tv.tv_sec;
strftime(buffer,30,"%m-%d-%Y %T.",localtime(&curtime));
printf("%s%ld\n",buffer,tv.tv_usec);
This one is taken before the computation, and a second one after. But do you know how to subtract them?
I need the result in milliseconds.
To subtract timevals:
gettimeofday(&t0, 0);
/* ... */
gettimeofday(&t1, 0);
long elapsed = (t1.tv_sec-t0.tv_sec)*1000000 + t1.tv_usec-t0.tv_usec;
This is assuming you'll be working with intervals shorter than ~2000 seconds, at which point the arithmetic may overflow depending on the types used. If you need to work with longer intervals just change the last line to:
long long elapsed = (t1.tv_sec-t0.tv_sec)*1000000LL + t1.tv_usec-t0.tv_usec;
The answer offered by @Daniel Kamil Kozar is the correct answer - gettimeofday actually should not be used to measure the elapsed time. Use clock_gettime(CLOCK_MONOTONIC) instead.
The man pages say - The time returned by gettimeofday() is affected by discontinuous jumps in the system time (e.g., if the system administrator manually changes the system time). If you need a monotonically increasing clock, see clock_gettime(2).
The Open Group says - Applications should use the clock_gettime() function instead of the obsolescent gettimeofday() function.
Everyone seems to love gettimeofday until they run into a case where it does not work or is not there (VxWorks) ... clock_gettime is fantastically awesome and portable.
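For example, a minimal sketch of measuring elapsed milliseconds with CLOCK_MONOTONIC (assuming a POSIX system):

#include <stdio.h>
#include <time.h>

int main(void)
{
    struct timespec t0, t1;

    clock_gettime(CLOCK_MONOTONIC, &t0);
    /* ... work to be measured ... */
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double ms = (t1.tv_sec - t0.tv_sec) * 1000.0
              + (t1.tv_nsec - t0.tv_nsec) / 1e6;

    printf("elapsed: %.3f ms\n", ms);
    return 0;
}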
If you want to measure code efficiency, or in any other way measure time intervals, the following will be easier:
#include <time.h>
int main()
{
clock_t start = clock();
//... do work here
clock_t end = clock();
double time_elapsed_in_seconds = (end - start)/(double)CLOCKS_PER_SEC;
return 0;
}
hth
No. gettimeofday should NEVER be used to measure time.
This is causing bugs all over the place. Please don't add more bugs.
Your curtime variable holds the number of seconds since the epoch. If you get one before and one after, the later one minus the earlier one is the elapsed time in seconds. You can subtract time_t values just fine.
