time(time_t *timer) gives wrong results without time drift in the system - C

The code below does some work and then sleeps until the next interval.
To calculate the sleep time, I am using the logic below.
time_t start = time(0);
time_t end = time(0);
int timeLeft = 0;
int interval = 300;

while (1)
{
    /* do something;
     * let's say it takes 5 to 20 seconds to execute
     */
    end = time(0);
    timeLeft = interval - (end - start);
    printf("timeLeft: %d, interval: %d, end: %u, start: %u\n",
           timeLeft, interval, (unsigned)end, (unsigned)start);
    sleep(timeLeft);
    start = time(0);
}
After a system uptime of about 34 hours, timeLeft comes out greater than 300, which should never happen.
I have checked; there is no time drift in the system.
For instance, when the issue happened:
timeLeft: 11484, interval: 300, end: 1549402241, start: 1549413425
I don't know how the above can happen. I need help.

This analysis of the issue doesn't actually give a definitive answer, but it was too long to fit in a comment, so here goes.
After looking at your code, it appears that the problem is that end < start. But end = time(0) is always called after start = time(0).
Your example assumes that you can subtract time_t values and get the elapsed seconds between them, but the C standard does not guarantee that. Your code seems to be running on a recent version of Linux, so I took a look at the GNU libc manual; apparently time returns the number of seconds since 00:00:00 on January 1, 1970, and a time_t is a long int. I also checked the values of start and end when the issue occurred: they are about 49 years' worth of seconds, and since it has been 49 years since January 1, 1970, the assumption in your example appears to be valid.
My best guess is that there is some significant difference between the actual code and your example. Perhaps a call to start = time(0) that takes place after the call to end = time(0), or maybe situations where a call to end = time(0) does not take place.
You can rule this out by converting your example into an actual program and seeing whether that program has the same issue. If it does, you can post the code for the program. This site encourages people to create Minimal, Complete, and Verifiable examples.
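For illustration, here is a minimal sketch of such a test program; the rand()-based sleep is a made-up stand-in for the 5-20 seconds of real work, and the timeLeft > 0 guard is added here to avoid passing a negative value to sleep():

#include <stdio.h>
#include <stdlib.h>
#include <time.h>
#include <unistd.h>

int main(void)
{
    time_t start = time(0);
    int interval = 300;

    while (1)
    {
        sleep(5 + rand() % 16);              /* stand-in for the real work */
        time_t end = time(0);
        int timeLeft = interval - (int)(end - start);
        printf("timeLeft: %d, end: %ld, start: %ld\n",
               timeLeft, (long)end, (long)start);
        if (timeLeft > 0)                    /* guard against a negative sleep */
            sleep(timeLeft);
        start = time(0);
    }
    return 0;  /* not reached */
}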
In the meantime, I can make some other guesses:
Your program has more than one thread, and your threads are stepping on each other.
Some weird compiler optimization is causing the calls to time(0) to be moved around.

Related

timestamp in C with milliseconds precision

I'm relatively new to C programming, and I'm working on a project which needs to be very accurate with timing; therefore I tried to write something to create a timestamp with millisecond precision.
It seems to work, but my question is whether this is the right way, or is there a much easier way? Here is my code:
#include <stdio.h>
#include <time.h>

void wait(int milliseconds)
{
    clock_t start = clock();
    while (1)
        if (clock() - start >= milliseconds)
            break;
}

int main()
{
    time_t now;
    clock_t milli;
    int waitMillSec = 2800, seconds, milliseconds = 0;
    struct tm *ptm;

    now = time(NULL);
    ptm = gmtime(&now);
    printf("time before: %d:%d:%d:%d\n",
           ptm->tm_hour, ptm->tm_min, ptm->tm_sec, milliseconds);

    /* wait until the next full second */
    while (now == time(NULL));

    milli = clock();
    /* DO SOMETHING HERE */
    /* for testing, wait a user-defined period */
    wait(waitMillSec);
    milli = clock() - milli;

    /* create a timestamp with milliseconds precision */
    seconds = milli / CLOCKS_PER_SEC;
    milliseconds = milli % CLOCKS_PER_SEC;
    now = now + seconds;
    ptm = gmtime(&now);
    printf("time after: %d:%d:%d:%d\n",
           ptm->tm_hour, ptm->tm_min, ptm->tm_sec, milliseconds);

    return 0;
}
The following code seems likely to provide millisecond granularity:
#include <windows.h>
#include <stdio.h>

int main(void) {
    SYSTEMTIME t;
    GetSystemTime(&t); // or GetLocalTime(&t)
    printf("The system time is: %02d:%02d:%02d.%03d\n",
           t.wHour, t.wMinute, t.wSecond, t.wMilliseconds);
    return 0;
}
This is based on http://msdn.microsoft.com/en-us/library/windows/desktop/ms724950%28v=vs.85%29.aspx. The above code snippet was tested with CYGWIN on Windows 7.
For Windows 8, there is GetSystemTimePreciseAsFileTime, which "retrieves the current system date and time with the highest possible level of precision (<1us)."
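For reference, a minimal sketch of how that call might be used (a FILETIME counts 100-ns intervals since January 1, 1601 UTC):

#include <windows.h>
#include <stdio.h>

int main(void) {
    FILETIME ft;
    GetSystemTimePreciseAsFileTime(&ft);  // Windows 8 and later
    ULARGE_INTEGER t;
    t.LowPart  = ft.dwLowDateTime;
    t.HighPart = ft.dwHighDateTime;
    // QuadPart now holds 100-ns intervals since January 1, 1601 (UTC)
    printf("ticks: %llu\n", (unsigned long long)t.QuadPart);
    return 0;
}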
Your original approach would probably be ok 99.99% of the time (ignoring one minor bug, described below). Your approach is:
1. Wait for the next second to start, by repeatedly calling time() until the value changes.
2. Save that value from time().
3. Save the value from clock().
4. Calculate all subsequent times using the current value of clock() and the two saved values.
Your minor bug was that you had the first two steps reversed.
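A minimal sketch of those steps in the corrected order (the variable names are invented here, and it assumes clock() tracks wall time, as it does on Windows):

#include <stdio.h>
#include <time.h>

int main(void)
{
    time_t t0, prev = time(NULL);
    while ((t0 = time(NULL)) == prev)   /* step 1: spin until the second changes */
        ;
    /* step 2: t0 now holds the time() value at the start of the new second */
    clock_t c0 = clock();               /* step 3: save the matching clock() value */

    /* step 4: any later timestamp, at clock()'s granularity */
    clock_t dc = clock() - c0;
    long secs  = dc / CLOCKS_PER_SEC;
    long msecs = (dc % CLOCKS_PER_SEC) * 1000L / CLOCKS_PER_SEC;
    printf("%ld.%03ld s after second %ld\n", secs, msecs, (long)t0);
    return 0;
}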
But even with this fixed, this is not guaranteed to work 100%, because there is no atomicity. Two problems:
Your code loops time() until you are into the next second. But how far are you into it? It could be 1/2 a second, or even several seconds (e.g. if you are running a debugger with a breakpoint).
Then you call clock(). But this saved value has to 'match' the saved value of time(). If these two calls are almost instantaneous, as they usually are, then this is fine. But Windows (and Linux) time-slice, and so there is no guarantee.
Another issue is the granularity of clock(). If CLOCKS_PER_SEC is 1000, as seems to be the case on your system, then of course the best you can do is 1 msec. But it can be worse than that: on Windows the underlying timer typically ticks about every 15 msecs. You could improve this by replacing clock() with QueryPerformanceCounter(), as in the answer to "timespec equivalent for windows", but this may be otiose, given the first two problems.
Clock ticks are not at all guaranteed to be milliseconds. You need to explicitly convert the output of clock() to milliseconds:
clock_t t1 = clock();
// do something
clock_t t2 = clock();
long millis = (long)((t2 - t1) * (1000.0 / CLOCKS_PER_SEC));
Since you are on Windows, why don't you just use Sleep()?

clock() returning a negative value in C

I'm using quite simple code to measure execution time. It works well until, I'm not sure exactly, maybe 20 minutes, but after that (>20 min.) it starts returning negative results. I searched the forums and tried everything, like changing the data type and using long unsigned (which returns 0), but failed again.
The following is a snippet of my code:
#include <stdio.h>
#include <time.h>

int main(void)
{
    time_t start, stop;
    double time_arm;

    start = clock();
    /* ....... */
    stop = clock();

    time_arm = (double)(stop - start) / (double)CLOCKS_PER_SEC;
    printf("Time Taken by ARM only is %lf \n", time_arm);
    return 0;
}
The output is:
Time Taken by ARM only is -2055.367296
Any help is appreciated; thanks in advance.
POSIX requires CLOCKS_PER_SEC to be 1,000,000. That means your count is in microseconds - and 2^31 microseconds is about 35 minutes. Your timer is just overflowing, so you can't get meaningful results when that happens.
Why you see the problem at 20 minutes, I'm not sure - maybe your CLOCKS_PER_SEC isn't POSIX-compatible. Regardless, your problem is timer overflow. You'll need to handle this problem in a different way - maybe look into getrusage(2).
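For instance, a minimal sketch using getrusage(2) to read this process's CPU time without the overflow concern:

#include <stdio.h>
#include <sys/resource.h>

int main(void)
{
    struct rusage ru;

    /* ... work to be measured ... */

    getrusage(RUSAGE_SELF, &ru);   /* CPU time used so far by this process */
    printf("user CPU: %ld.%06ld s\n",
           (long)ru.ru_utime.tv_sec, (long)ru.ru_utime.tv_usec);
    return 0;
}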
clock_t is a long, which is a 32-bit signed value - it can hold 2^31 ticks before it overflows. On a 32-bit system where CLOCKS_PER_SEC equals 1000000 [as mandated by POSIX], clock() will return the same value approximately every 72 minutes.
According to MSDN, it can also return -1:
The elapsed wall-clock time since the start of the process (elapsed
time in seconds times CLOCKS_PER_SEC). If the amount of elapsed time
is unavailable, the function returns –1, cast as a clock_t.
On a side note:
clock() measures the CPU time used by the program, NOT real (wall-clock) time, and with a POSIX-conforming CLOCKS_PER_SEC its return value counts microseconds.
You could try using clock_gettime() with the CLOCK_PROCESS_CPUTIME_ID option if you are on a POSIX system. clock_gettime() uses struct timespec to return time information, so it doesn't suffer from the same problem as using a single integer.
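A minimal sketch of that approach (on older glibc you may need to link with -lrt):

#include <stdio.h>
#include <time.h>

int main(void)
{
    struct timespec t0, t1;

    clock_gettime(CLOCK_PROCESS_CPUTIME_ID, &t0);
    /* ... work to be measured ... */
    clock_gettime(CLOCK_PROCESS_CPUTIME_ID, &t1);

    double elapsed = (t1.tv_sec - t0.tv_sec)
                   + (t1.tv_nsec - t0.tv_nsec) / 1e9;
    printf("CPU time: %.6f s\n", elapsed);
    return 0;
}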

an efficient way to detect when the system's hour changed from xx:59 to xy:00

I have an application on Linux that needs to change some parameters each hour, e.g. at 11:00, 12:00, etc., and the system's date can be changed by the user at any time.
Is there any signal or POSIX function that would notify me when the hour changes from xx:59 to xx+1:00?
Normally, I use localtime(3) to fetch the current time every second and then check whether the minute part is equal to 0. However, it does not look like a good way to do it: to detect one change, I need to call the same function every second for an hour. I also run the code on an embedded board, so it would be good to use fewer resources.
Here is an example of how I do it:
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>
#include <unistd.h>

static char *fetch_time() { /* I use this fcn for some other purpose to fetch the time info */
    char *p;
    time_t rawtime;
    struct tm *timeinfo;
    char buffer[13];

    time(&rawtime);
    timeinfo = localtime(&rawtime);
    strftime(buffer, 13, "%04Y%02m%02d%02k%02M", timeinfo);
    p = (char *)malloc(sizeof(buffer));
    strcpy(p, buffer);
    return p;
}

static int hour_change_check() {
    char *p = fetch_time();
    char current_minute[3] = {'\0'};

    current_minute[0] = p[10];
    current_minute[1] = p[11];
    int current_minute_as_int = atoi(current_minute);

    if (current_minute_as_int == 0) {
        printf("current_min: %d\n", current_minute_as_int);
        free(p);
        return 1;
    }
    free(p);
    return 0;
}

int main(void) {
    while (1) {
        int x = hour_change_check();
        printf("x:%d\n", x);
        sleep(1);
    }
    return 0;
}
There is no such signal, but traditionally the method of waiting until some target time is to compute how long it is between "now" and "then", and then call sleep():
now = time(NULL);
when = (some calculation);
if (when > now)
    sleep(when - now);
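For the top-of-the-hour case, a minimal sketch of that calculation (assuming POSIX localtime_r() is available, and that the clock is not stepped while we sleep):

#include <time.h>
#include <unistd.h>

/* Sleep until the next xx:00:00. */
static void sleep_until_next_hour(void)
{
    time_t now = time(NULL);
    struct tm tm_now;
    localtime_r(&now, &tm_now);

    int secs_into_hour = tm_now.tm_min * 60 + tm_now.tm_sec;
    int secs_left = 3600 - secs_into_hour;
    if (secs_left > 0)
        sleep(secs_left);
}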
If you need to be very precise about the transition from, e.g., 3:59:59 to 4:00:00, you may want to sleep for a slightly shorter time in case of time adjustments due to leap seconds. (If you are running in a portable device in which time zones can change, you also need to worry about picking up the new location, and if it runs on a half-hour offset, redo all computations. There's even Solar Time in Saudi Arabia....)
Edit: per the suggestion from R.., if clock_nanosleep() is available, calculate a timespec value for the absolute wakeup time and call it with the TIMER_ABSTIME flag. See http://pubs.opengroup.org/onlinepubs/009695399/functions/clock_nanosleep.html for the definition for clock_nanosleep(). However, if time is allowed to step backwards (e.g., localtime with zone shifts), you may still have to do some maintenance checking.
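A minimal sketch of that variant (CLOCK_REALTIME so the wakeup time is interpreted as calendar time; 'when' would be computed as above):

#include <time.h>

/* Sleep until the absolute calendar time 'when', e.g. the next xx:00:00. */
static int sleep_until(time_t when)
{
    struct timespec ts = { .tv_sec = when, .tv_nsec = 0 };
    return clock_nanosleep(CLOCK_REALTIME, TIMER_ABSTIME, &ts, NULL);
}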
Have you actually measured the overhead of your solution of polling the time once per second (or even twice per second, given some of your other comments)?
The number of instructions invoked is minimal, AND you do not have any looping. So at worst maybe the CPU uses 100 microseconds (0.1 ms, or 0.0001 s) of time. This estimate is very dependent on the processor used in your embedded system and its clock speed, but the idea is that the polling logic maybe uses 1/1000 of the total time available.
Also, you could optimize your hour_change_check code to do all of the time calculations inline rather than calling another function that issues a malloc which has to be immediately freed (see the sketch below). And if this is an embedded *nix system, you could run this polling logic in its own thread so that when it issues sleep() it will not interfere with or delay other units of work.
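A sketch of that optimization, reading the minute directly with no string formatting and no malloc/free (assuming localtime_r() is available):

#include <time.h>

static int hour_change_check(void)
{
    time_t rawtime = time(NULL);
    struct tm timeinfo;
    localtime_r(&rawtime, &timeinfo);
    return timeinfo.tm_min == 0;   /* 1 during minute xx:00, else 0 */
}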
Hence, measure the problem and see if it is significant. The polling's performance must be balanced against the requirement that when a user changes the time, the hour change MUST still be detected. That is, I think polling every second will catch the hour rollover even if the user changes the time, but is the overhead worth it? Well, how much overhead is there, exactly?

elapsed time in C

#include <stdio.h>
#include <time.h>

int main(void)
{
    time_t start, end;

    time(&start);
    // code here
    time(&end);

    double dif = difftime(end, start);
    printf("Elapsed time is %.2lf seconds.\n", dif);
    return 0;
}
I'm getting 0.000 for both the start and end times, and I don't understand the source of the error.
Also, is it better to use time(&start) and time(&end), or start = clock() and end = clock(), for computing the elapsed time?
On most (practically all?) systems, time() only has a granularity of one second, so any sub-second lengths of time can't be measured with it. If you're on Unix, try using gettimeofday instead.
If you do want to use clock() make sure you understand that it measures CPU time only. Also, to convert to seconds, you need to divide by CLOCKS_PER_SEC.
Short excerpts of code typically don't take long enough to run for profiling purposes. A common technique is to repeat the call many many (millions) times and then divide the resultant time delta with the iteration count. Pseudo-code:
count = 10,000,000
start = readCurrentTime()
loop count times:
myCode()
end = readCurrentTime()
elapsedTotal = end - start
elapsedForOneIteration = elapsedTotal / count
If you want accuracy, you can discount the loop overhead. For example:
loop count times:
    myCode()
    myCode()
and measure elapsed1 (2 x count iterations + loop overhead)
loop count times:
    myCode()
and measure elapsed2 (count iterations + loop overhead)
actualElapsed = elapsed1 - elapsed2
(count iterations, because the rest of the terms cancel out)
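In C, the basic repeat-and-divide technique might look like the following sketch; myCode() is a hypothetical stand-in for the snippet being measured, and the volatile sink keeps the compiler from optimizing the calls away:

#include <stdio.h>
#include <time.h>

static volatile int sink;
static void myCode(void) { sink++; }   /* stand-in for the code under test */

int main(void)
{
    const long count = 10000000L;

    clock_t start = clock();
    for (long i = 0; i < count; i++)
        myCode();
    clock_t end = clock();

    double elapsedTotal = (double)(end - start) / CLOCKS_PER_SEC;
    printf("total: %.3f s, per iteration: %.9f s\n",
           elapsedTotal, elapsedTotal / count);
    return 0;
}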
time() has (at best) one-second resolution. If your code runs in much less time than that, you aren't likely to see a difference.
Use a profiler (such as gprof on *nix, Instruments on OS X; for Windows, see "Profiling C code on Windows when using Eclipse") to time your code.
The code you're running between the measurements is too fast. I just tried your code printing the numbers from 0 to 99,999, and I got:
Elapsed time is 1.00 seconds.
Your code is taking less than a second to run.

C - gettimeofday for computing time?

Do you know how to use gettimeofday for measuring computation time? I can capture one point in time with this code:
char buffer[30];
struct timeval tv;
time_t curtime;
gettimeofday(&tv, NULL);
curtime=tv.tv_sec;
strftime(buffer,30,"%m-%d-%Y %T.",localtime(&curtime));
printf("%s%ld\n",buffer,tv.tv_usec);
This one is taken before the computation, and a second one after. But do you know how to subtract the two?
I need the result in milliseconds.
To subtract timevals:
gettimeofday(&t0, 0);
/* ... */
gettimeofday(&t1, 0);
long elapsed = (t1.tv_sec-t0.tv_sec)*1000000 + t1.tv_usec-t0.tv_usec;
This assumes you'll be working with intervals shorter than ~2000 seconds, at which point the arithmetic may overflow depending on the types used. If you need to work with longer intervals, just change the last line to:
long long elapsed = (t1.tv_sec-t0.tv_sec)*1000000LL + t1.tv_usec-t0.tv_usec;
Either way, divide elapsed by 1000 to get milliseconds.
The answer offered by @Daniel Kamil Kozar is the correct answer - gettimeofday actually should not be used to measure elapsed time. Use clock_gettime(CLOCK_MONOTONIC) instead.
Man Pages say - The time returned by gettimeofday() is affected by discontinuous jumps in the system time (e.g., if the system administrator manually changes the system time). If you need a monotonically increasing clock, see clock_gettime(2).
The Opengroup says - Applications should use the clock_gettime() function instead of the obsolescent gettimeofday() function.
Everyone seems to love gettimeofday until they run into a case where it does not work or is not there (VxWorks) ... clock_gettime is fantastically awesome and portable.
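For the asker's millisecond requirement, a minimal sketch of that approach:

#include <stdio.h>
#include <time.h>

int main(void)
{
    struct timespec t0, t1;

    clock_gettime(CLOCK_MONOTONIC, &t0);
    /* ... computation to be measured ... */
    clock_gettime(CLOCK_MONOTONIC, &t1);

    /* monotonic clock: immune to administrator/NTP steps of the wall clock */
    long ms = (t1.tv_sec - t0.tv_sec) * 1000L
            + (t1.tv_nsec - t0.tv_nsec) / 1000000L;
    printf("elapsed: %ld ms\n", ms);
    return 0;
}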
If you want to measure code efficiency, or in any other way measure time intervals, the following will be easier:
#include <time.h>

int main()
{
    clock_t start = clock();
    //... do work here
    clock_t end = clock();
    double time_elapsed_in_seconds = (end - start) / (double)CLOCKS_PER_SEC;
    return 0;
}
hth
No. gettimeofday should NEVER be used to measure time.
This is causing bugs all over the place. Please don't add more bugs.
Your curtime variable holds the number of seconds since the epoch. If you get one before and one after, the later one minus the earlier one is the elapsed time in seconds. You can subtract time_t values just fine.
