What is the use of CLOCK_REALTIME?

I am reading about the difference between CLOCK_REALTIME and CLOCK_MONOTONIC:
Difference between CLOCK_REALTIME and CLOCK_MONOTONIC?
CLOCK_REALTIME has discontinuities in time: it can jump forwards as well as backwards. Is that a bug in this clock? How could a clock that gives inconsistent time be reliable?

Despite its imperfections, CLOCK_REALTIME should be the system's best estimate of the current UTC or civil time. It's the basis for the system's ability to display the same time you'd see if you looked at your watch, or a clock on the wall, or your cell phone, or listened to a time broadcast on a radio station, etc. (The display does involve a conversion from UTC to local time; more on this later.)
But if CLOCK_REALTIME is going to match the UTC time out there in the real world, there are at least two pretty significant issues:
What if someone accidentally sets the clock on your computer wrong? They're going to have to fix it, and the fix might involve a time jump. Pretty much no way around that, especially if the error is large (like, hours or days).
Most computers unfortunately have no way of representing leap seconds. Therefore, when there's a leap second out in the real world, most computer clocks have to jump a little.
So when you read that CLOCK_REALTIME might have discontinuities, might jump forwards as well as backwards, that's not a bug, it's a feature: CLOCK_REALTIME must have those possibilities, if it's to cope with the real world with leap seconds and occasionally-wrong clocks.
So if you're writing code which is supposed to work with times matching those in the real world, CLOCK_REALTIME is what you want, warts and all. Ideally, though, you'll write your code in such a way that it behaves reasonably gracefully (does not crash or do something bizarre) if, once in a while, the system clock jumps forwards or backwards for some reason.
As you probably know from the other question you referenced, CLOCK_MONOTONIC is guaranteed to always step forward at exactly one second per second, with no jumps or discontinuities, but the absolute value of the clock doesn't mean much. If the CLOCK_MONOTONIC value is 13:05, that doesn't mean it's just after one in the afternoon, it typically means that the computer has been up and running for 13 hours and 5 minutes.
So if all you're interested in is relative times, CLOCK_MONOTONIC is fine. In particular, if you want to time how long something took, by subtracting the start time from the end time, it's preferable to use CLOCK_MONOTONIC values for this, since they won't give you a wrong answer if there was some kind of a time jump (that would have affected CLOCK_REALTIME) in between.
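For example, a minimal elapsed-time measurement might look like this (a sketch; sleep(1) stands in for whatever work you're timing):

#include <stdio.h>
#include <time.h>
#include <unistd.h>

int main(void)
{
    struct timespec start, end;

    clock_gettime(CLOCK_MONOTONIC, &start);
    sleep(1);                               /* stand-in for the work being timed */
    clock_gettime(CLOCK_MONOTONIC, &end);

    /* Subtracting two CLOCK_MONOTONIC readings is immune to any
       CLOCK_REALTIME jump that happens in between. */
    double elapsed = (end.tv_sec - start.tv_sec)
                   + (end.tv_nsec - start.tv_nsec) / 1e9;
    printf("took %.6f seconds\n", elapsed);
    return 0;
}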
Or, in summary, as people said in the comments thread, CLOCK_REALTIME is what you need for absolute time, while CLOCK_MONOTONIC is better for relative time.
Now, a few more points.
As mentioned, CLOCK_REALTIME is not quite "wall time", because it actually deals in UTC. It uses the famous (infamous?) Unix/Posix representation of UTC seconds since 1970. For example, a CLOCK_REALTIME value of 1457852399 translates to 06:59:59 UTC on March 13, 2016. Where I live, five hours west of Greenwich, that corresponds to 01:59:59 local time. But one second later was not 2:00 in the morning for me! In fact, 1457852399 + 1 = 1457852400 corresponds to 03:00:00 Eastern time, because that's when Daylight Saving Time kicked in here.
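You can reproduce that arithmetic yourself (a sketch; the output described assumes the process's timezone is US Eastern, e.g. TZ=America/New_York):

#include <stdio.h>
#include <time.h>

int main(void)
{
    time_t t = 1457852399;                  /* 06:59:59 UTC, March 13, 2016 */
    char buf[64];

    /* Print this instant and the instant one second later in local
       time; in US Eastern time the output jumps from 01:59:59 EST
       straight to 03:00:00 EDT. */
    for (int i = 0; i < 2; i++, t++) {
        strftime(buf, sizeof buf, "%Y-%m-%d %H:%M:%S %Z", localtime(&t));
        printf("%ld -> %s\n", (long)t, buf);
    }
    return 0;
}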
I suggested that if your clock was wrong, a time jump was pretty much the only way to fix it, but that's not quite true. If your clock is only slightly off, it's possible to correct it by "slewing" the time gradually (by changing the clock frequency slightly) so that after a few minutes or hours it will have drifted to the correct time without a jump. That's what NTP tries to do, although depending on its configuration it may only be willing to do that for errors that are pretty small.
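On Linux and the BSDs the basic slewing interface is adjtime() (NTP daemons typically use the richer adjtimex()); a sketch, assuming we somehow know the clock is half a second behind (requires privileges, so expect EPERM as an ordinary user):

#define _DEFAULT_SOURCE
#include <stdio.h>
#include <sys/time.h>

int main(void)
{
    /* Ask the kernel to gradually add 0.5 s by running the clock
       slightly fast until the offset is amortized; no jump occurs. */
    struct timeval delta = { .tv_sec = 0, .tv_usec = 500000 };
    struct timeval olddelta;

    if (adjtime(&delta, &olddelta) != 0) {
        perror("adjtime");
        return 1;
    }
    printf("adjustment still pending before this call: %ld.%06ld s\n",
           (long)olddelta.tv_sec, (long)olddelta.tv_usec);
    return 0;
}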
I said that CLOCK_MONOTONIC was typically the time the computer has been up and running. That's not guaranteed by the standard; all the standard says is that CLOCK_MONOTONIC counts time since some arbitrary timepoint. On systems that do implement CLOCK_MONOTONIC as the time the system has been up, there can be two interpretations: is it time since boot, or the time the system has been up and running (that is, minus any time it was asleep or suspended)? On many systems, there's yet another clock, CLOCK_BOOTTIME, that counts time since boot (whether up or suspended), while CLOCK_MONOTONIC counts only time the system was up and running.
I said, "CLOCK_MONOTONIC is guaranteed to always step forward at exactly one second per second", but that may not be strictly correct, either. If your computer is in the middle of a time-slewing operation, trying to gradually correct an absolute time error, it may actually be the case that CLOCK_MONOTONIC is temporarily stepping at 1.001 seconds per second, or 0.999 seconds per second, or something like that. The discrepancy will usually be quite small, but in case it matters to you, some systems have yet other clock types you can use, such as CLOCK_MONOTONIC_RAW, that are supposed to be free of such perturbations.
Finally, if you want to track proper time, and you want to avoid jumps or discontinuities at leap seconds, you've got a problem, because of the poor handling of leap seconds in traditional Unix/Linux (and Windows, and all other) computer systems. Under recent (4.x?) Linux kernels, there's a CLOCK_TAI which may help. Some experimental systems may implement yet another clock, CLOCK_UTC, which handles UTC time with leap seconds properly. Both of those have some other costs, though, and you'd have to really know what you were doing to use them effectively, at least with today's level of support. See the LEAPSECS mailing list for more information.
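Reading CLOCK_TAI is just another clock_gettime() call (a sketch; it assumes a kernel and libc that know about CLOCK_TAI, and the offset it prints is only meaningful if something, usually an NTP daemon, has told the kernel the current TAI-UTC offset):

#include <stdio.h>
#include <time.h>

int main(void)
{
    struct timespec tai, utc;

    clock_gettime(CLOCK_TAI, &tai);
    clock_gettime(CLOCK_REALTIME, &utc);

    /* As of 2017, TAI is 37 seconds ahead of UTC; a 0 here usually
       means the kernel's TAI offset was never set. */
    printf("TAI - UTC = %ld s\n", (long)(tai.tv_sec - utc.tv_sec));
    return 0;
}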

Related

Timing/Clocks in the Linux Kernel

I am writing a device driver and want to benchmark a few pieces of code to get a feel for where I could be experiencing some bottlenecks. As a result, I want to time a few segments of code.
In userspace, I'm used to using clock_gettime() with CLOCK_MONOTONIC. Looking at the kernel sources (note that I am running kernel 4.4, but will be upgrading eventually), it appears I have a few choices:
getnstimeofday()
getrawmonotonic()
get_monotonic_coarse()
getboottime()
For convenience, I have written a function (see below) to get me the current time. I am currently using getrawmonotonic() because I figured this is what I wanted. My function returns the current time as a ktime_t, so then I can use ktime_sub() to get the elapsed time between two times.
static ktime_t get_time_now(void)
{
    struct timespec time_now;

    getrawmonotonic(&time_now);          /* in-kernel CLOCK_MONOTONIC_RAW */
    return timespec_to_ktime(time_now);
}
Given the available high-resolution clock functions (jiffies won't work for me), what is the best function for my application? More generally, I'm interested in any/all documentation about these functions and the underlying clocks. Primarily, I am curious whether the clocks are affected by any timing adjustments and what their epochs are.
Are you comparing measurements you're making in the kernel directly with measurements you've made in userspace? I'm wondering about your choice to use CLOCK_MONOTONIC_RAW as the timebase in the kernel, since you chose to use CLOCK_MONOTONIC in userspace. If you're looking for an analogous and non-coarse function in the kernel which returns CLOCK_MONOTONIC (and not CLOCK_MONOTONIC_RAW) time, look at ktime_get_ts().
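For reference, a sketch of the same helper built on ktime_get_ts() (kernel 4.4-era API; the function name is mine):

#include <linux/ktime.h>
#include <linux/timekeeping.h>

/* Same shape as the helper above, but returning CLOCK_MONOTONIC
   (slewed) time rather than CLOCK_MONOTONIC_RAW. */
static ktime_t get_time_now_monotonic(void)
{
    struct timespec time_now;

    ktime_get_ts(&time_now);
    return timespec_to_ktime(time_now);
}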
It's possible you could also use raw kernel ticks to measure what you're trying to measure (rather than jiffies, which represent multiple kernel ticks), but I don't know how to do that off the top of my head.
In general if you're trying to find documentation about Linux timekeeping, you can take a look at Documentation/timers/timekeeping.txt. Usually when I try to figure out kernel timekeeping I also unfortunately just spend a lot of time reading through the kernel source in time/ (time/timekeeping.c is where most of the functions you're thinking of using right now live... it's not super well-commented, but you can probably wrap your head around it with a little bit of time). And if you're feeling altruistic after learning, remember that updating documentation is a good way to contribute to the kernel :)
To your question at the end about how clocks are affected by timing adjustments and what epochs are used:
CLOCK_REALTIME always starts at midnight, Jan 01, 1970 (colloquially known as the Unix Epoch) if there is no RTC present or if it hasn't already been set by an application in userspace (or, I guess, a kernel module if you want to be weird). Usually the userspace application which sets this is an NTP daemon such as ntpd or chrony. Its value represents the number of seconds passed since 1970.
CLOCK_MONOTONIC represents the number of seconds passed since the device was booted up, and if the device is suspended at a CLOCK_MONOTONIC value of x, when it's resumed, it resumes with CLOCK_MONOTONIC set to x as well. It's not supported on ancient kernels.
CLOCK_BOOTTIME is like CLOCK_MONOTONIC, but has time added to it across suspend/resume -- so if you suspend at a CLOCK_BOOTTIME value of x, for 5 seconds, you'll come back with a CLOCK_BOOTTIME value of x+5. It's not supported on old kernels (its support came about after CLOCK_MONOTONIC).
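From userspace you can read all three side by side (a sketch; CLOCK_BOOTTIME needs a reasonably recent kernel and glibc):

#include <stdio.h>
#include <time.h>

static void show(const char *name, clockid_t id)
{
    struct timespec ts;
    clock_gettime(id, &ts);
    printf("%-16s %11ld.%09ld\n", name, (long)ts.tv_sec, ts.tv_nsec);
}

int main(void)
{
    show("CLOCK_REALTIME", CLOCK_REALTIME);    /* seconds since 1970 */
    show("CLOCK_MONOTONIC", CLOCK_MONOTONIC);  /* excludes suspend */
    show("CLOCK_BOOTTIME", CLOCK_BOOTTIME);    /* includes suspend */
    return 0;
}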
Fully-fledged NTP daemons (not SNTP daemons -- that's a more lightweight and less accurate protocol) set the system clock, CLOCK_REALTIME, in two ways: with settimeofday() for large adjustments ("steps" or "jumps"), which immediately changes the value of CLOCK_REALTIME, and with adjtime() for smaller adjustments ("slewing" or "skewing"), which changes the rate at which CLOCK_REALTIME moves forward per CPU clock cycle. I think for some architectures you can actually tune the CPU clock cycle through some means or other, and the kernel implements adjtime() this way if possible, but don't quote me on that. From both the bulk of the kernel's perspective and userspace's perspective, it doesn't actually matter.
CLOCK_MONOTONIC, CLOCK_BOOTTIME, and all other friends slew at the same rate as CLOCK_REALTIME, which is actually fairly convenient in most situations. They're not affected by steps in CLOCK_REALTIME, only by slews.
CLOCK_MONOTONIC_RAW, CLOCK_BOOTTIME_RAW, and friends do NOT slew at the same rate as CLOCK_REALTIME, CLOCK_MONOTONIC, and CLOCK_BOOTTIME. I guess this is useful sometimes.
Linux provides some process/thread-specific clocks to userspace (CLOCK_PROCESS_CPUTIME_ID, CLOCK_THREAD_CPUTIME_ID), which I know nothing about. I do not know if they're easily accessible in the kernel.
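For what it's worth, from userspace they're read like any other clock, except that they count CPU time consumed rather than wall time (a sketch):

#include <stdio.h>
#include <time.h>

int main(void)
{
    struct timespec ts;

    /* Burn a little CPU time so there is something to measure. */
    volatile unsigned long x = 0;
    for (unsigned long i = 0; i < 100000000UL; i++)
        x += i;

    clock_gettime(CLOCK_PROCESS_CPUTIME_ID, &ts);
    printf("process CPU time: %ld.%09ld s\n", (long)ts.tv_sec, ts.tv_nsec);
    return 0;
}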

Measuring Elapsed Time In Linux (CLOCK_MONOTONIC vs. CLOCK_MONOTONIC_RAW)

I am currently trying to talk to a piece of hardware in userspace (under the hood, everything is using the spidev kernel driver, but that's a different story).
The hardware tells me that a command has completed by setting a special value in a register that I am reading from. The hardware is also required to get back to me within a certain time; otherwise the command has failed. Different commands take different times.
As a result, I am implementing a way to set a timeout and then check for that timeout using clock_gettime(). In my "set" function, I take the current time and add the time interval I should wait for (usually anywhere from a few ms to a couple of seconds). I then store this value for safekeeping later.
In my "check" function, I once again, get the current time and then compare it against the time I have saved. This seems to work as I had hoped.
Given my use case, should I be using CLOCK_MONOTONIC or CLOCK_MONOTONIC_RAW? I'm assuming CLOCK_MONOTONIC_RAW is better suited, since I'm checking such short intervals. I am worried that a short interval might happen to coincide with a period in which NTP is doing a lot of adjusting. Note that my target system is only Linux kernels 4.4 and newer.
Thanks in advance for the help.
Edited to add: given my use case, I need "wall clock" time, not CPU time. That is, I am checking to see if the hardware has responded in some wall clock time interval.
References:
Rutgers Course Notes
What is the difference between CLOCK_MONOTONIC & CLOCK_MONOTONIC_RAW?
Elapsed Time in C Tutorial

Adjust CLOCK_REALTIME-based timer on settimeofday

I have a POSIX timer that should fire every time 3 AM rolls around.
The obvious implementation is to use timer_create with CLOCK_REALTIME, and set the timeout to 3 AM, either today or tomorrow depending on the current time.
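Concretely, a sketch of that implementation (next_3am() is a hypothetical helper computing the next 3 AM as a time_t; error checking omitted):

#include <signal.h>
#include <time.h>

extern time_t next_3am(void);   /* hypothetical: next 3 AM, today or tomorrow */

static timer_t make_3am_timer(void)
{
    timer_t timerid;
    struct sigevent sev = { .sigev_notify = SIGEV_SIGNAL,
                            .sigev_signo  = SIGALRM };
    struct itimerspec its = { 0 };

    timer_create(CLOCK_REALTIME, &sev, &timerid);

    /* Absolute expiry at the next 3 AM, then every 24 h (a fixed
       interval ignores DST shifts; a real implementation would
       recompute the target after each expiry). */
    its.it_value.tv_sec    = next_3am();
    its.it_interval.tv_sec = 24 * 60 * 60;
    timer_settime(timerid, TIMER_ABSTIME, &its, NULL);
    return timerid;
}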
That works well for all cases except when the system time is corrected backwards. If the user sets the clock one year into the future and then back to the correct date, the timer will have been re-armed for a date one year ahead, so it will not fire for an entire year.
As a workaround, I can probably determine, for each repeating timer, when the previous timeout should have occurred, and if I ever see a system time before that, correct all existing timers; however, that still requires a separate wake-up reason.
Is there a way to be notified if the system clock is set backwards, either in general or before a given time, or is there a better way to handle this?
First of all, I think you're overthinking this - it's 2017; a system's clock never needs to be set (rather than just drift-corrected) except at boot, and the clock should never be wrong by more than a few milliseconds (ideally microseconds), all kept in proper shape by ntp, ptp, or similar. In my book, it's perfectly acceptable to simply document that your software will not behave as expected under discontinuous clock adjustments.
If that's not acceptable to you, your options are limited:
You could use CLOCK_MONOTONIC instead of CLOCK_REALTIME, and set the timer without TIMER_ABSTIME, using the difference between 3am and the current time reported by CLOCK_REALTIME as the interval (see the sketch below). This has the advantage that, even with arbitrary clock resets, you will never miss the 3am event by more than 24 hours, but you can still miss it if the time within the day is adjusted.
Use a timer with a short expiry and poll on each timer event whether you've just crossed the 3am boundary. This will never miss the event by more than your timer interval, but it wastes considerable CPU resources (and prevents deep sleep) polling.
There are other variants of these methods but in terms of pros and cons they're essentially equivalent.
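For reference, a sketch of the first option (it assumes timerid was created with timer_create(CLOCK_MONOTONIC, ...), that this function is called again after each expiry, and that next_3am() is the same hypothetical helper as in the sketch above):

#include <time.h>

extern time_t next_3am(void);   /* hypothetical: next 3 AM as a time_t */

static void arm_relative(timer_t timerid)
{
    struct itimerspec its = { 0 };

    /* Relative expiry: the interval is computed from CLOCK_REALTIME,
       but the timer runs on CLOCK_MONOTONIC, so a later clock reset
       can delay the event by at most one day. */
    its.it_value.tv_sec = next_3am() - time(NULL);
    timer_settime(timerid, 0, &its, NULL);   /* note: no TIMER_ABSTIME */
}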

Internal working of timer

I know QueryPerformanceCounter() can be used for timing functions. I want to know:
1 - Can I increase the resolution of the timer by overclocking the CPU (so that it ticks faster)?
2 - Basically, what makes some timers more precise than others (e.g., QueryPerformanceCounter() is more precise than GetTickCount())? If there is a single crystal oscillator on the motherboard, why are some timers slower than others?
QueryPerformanceCounter has very high resolution - normally less than one nanosecond. I don't see why you'd want to increase it. Overclocking will increase it, but that seems like a very weak reason for overclocking.
QueryPerformanceCounter is very accurate, but somewhat expensive and not very convenient.
a. It's expensive because it uses the costly rdtsc instruction. Faster timers can just read an integer from memory. This integer needs to be updated, and we don't want to do that too often (1000 times a second is reasonable), so we get a very cheap timer, with low precision. That's basically GetTickCount.
b. It's inconvenient because it uses units which change between computers. Sometimes it will be nanoseconds, sometimes half-nano, or other values. It makes it harder to calculate with.
c. Another source of inconvenience is that it returns very large numbers, which may overflow when you try to do math with them, so you need to be careful.
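The usual idiom deals with b and c by converting tick counts to seconds in floating point via QueryPerformanceFrequency (a sketch):

#include <windows.h>
#include <stdio.h>

int main(void)
{
    LARGE_INTEGER freq, start, end;

    QueryPerformanceFrequency(&freq);   /* counts per second, machine-dependent */
    QueryPerformanceCounter(&start);
    Sleep(100);                         /* stand-in for the code being timed */
    QueryPerformanceCounter(&end);

    /* Dividing in floating point sidesteps both the unit problem
       and the overflow problem mentioned above. */
    double seconds = (double)(end.QuadPart - start.QuadPart) / freq.QuadPart;
    printf("elapsed: %.6f s\n", seconds);
    return 0;
}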
The timing source for QPC is machine dependent. It is typically picked up from a frequency available somewhere in the chipset. Whether overclocking the cpu is going to affect it is highly dependent on your motherboard design. The simplest way is to just try it, use QueryPerformanceFrequency to see the effect.
GetTickCount is driven from an entirely different timer source, the signal that also generates the clock interrupt. It is not very precise, normally 1/64 of a second, but it is highly accurate. Your machine contacts a time server from time to time to recalibrate the clock and adjust the clock correction factor, which makes it accurate to about a second over an entire year. QPC is very precise, but not nearly as accurate. Use it only to time short intervals.
1 - Yes. Internally, one of the better timers is rdtsc, which gives you the CPU's clock-cycle count. Combining this with frequency information from the cpuid instruction gives you time.
2 - The other timers rely on various timing sources, such as the 8253 timer.
QPC is a wrapper added by Microsoft on top of what rdtsc provides. Read this article for more info:
http://www.strchr.com/performance_measurements_with_rdtsc

How does GetThreadTimes report time spent hibernating?

I am currently using GetThreadTimes to tell me how much time I spend in my application's event loop.
I wonder how this will be affected by hibernating. Is hibernation time reported at all? Or perhaps as system time? Is the behavior the same on all versions of Windows?
Note I asked the same question for Posix here.
No, hibernation time is not reported.
How could it be?
When hibernated, the computer is actually turned off; no timer is ticking and no counting is taking place.
The timings you see with GetThreadTimes are computed at each tick (i.e. interrupt) of the system timer¹.
I've made a small C program to test this. It logs the timings every 1000 ms; I ran it and hibernated my laptop (at around second 9).
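The original program wasn't posted; a minimal reconstruction of that kind of test might look like this:

#include <windows.h>
#include <stdio.h>

/* Kernel + user thread time in milliseconds (FILETIME is in 100-ns units). */
static unsigned long long thread_time_ms(void)
{
    FILETIME ft_create, ft_exit, ft_kernel, ft_user;
    ULARGE_INTEGER k, u;

    GetThreadTimes(GetCurrentThread(), &ft_create, &ft_exit, &ft_kernel, &ft_user);
    k.LowPart = ft_kernel.dwLowDateTime; k.HighPart = ft_kernel.dwHighDateTime;
    u.LowPart = ft_user.dwLowDateTime;   u.HighPart = ft_user.dwHighDateTime;
    return (k.QuadPart + u.QuadPart) / 10000;
}

int main(void)
{
    unsigned long long last = thread_time_ms();

    /* Busy-loop so the thread keeps accumulating CPU time, and log a
       line each time ~1000 ms of thread time has been consumed; if
       hibernation were counted, it would show up as a sudden jump. */
    for (;;) {
        unsigned long long now = thread_time_ms();
        if (now - last >= 1000) {
            printf("%llu ms\n", now);
            last = now;
        }
    }
}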
No gaps showed up in the counting; as expected, each line is about 1000 ms from the previous one.
So hibernation time doesn't count as either system time or user time.
I don't know if this is consistent with all old versions of Windows, especially the ones with a different meaning of hibernation, but you can expect it to be consistent among all the important versions.
¹ HPET, LAPIC timer, or the good old PIT, whichever is available.
