I have a POSIX timer that should fire every time 3 AM rolls around.
The obvious implementation is to use timer_create with CLOCK_REALTIME, and set the timeout to 3 AM, either today or tomorrow depending on the current time.
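A minimal sketch of that obvious approach might look like this (timer creation and the notification handler are assumed to be set up elsewhere; the helper name and error handling are mine):

#include <time.h>

/* Sketch only: arm an absolute CLOCK_REALTIME timer for the next 3 AM local
 * time.  Timer creation and the notification handler are assumed to be set
 * up elsewhere; error checking is omitted. */
static void arm_for_next_3am(timer_t timerid)
{
    struct timespec now;
    clock_gettime(CLOCK_REALTIME, &now);

    struct tm tm_local;
    localtime_r(&now.tv_sec, &tm_local);
    tm_local.tm_hour  = 3;
    tm_local.tm_min   = 0;
    tm_local.tm_sec   = 0;
    tm_local.tm_isdst = -1;           /* let mktime decide DST */
    time_t target = mktime(&tm_local);
    if (target <= now.tv_sec) {       /* already past 3 AM today */
        tm_local.tm_mday += 1;        /* let mktime normalize to tomorrow */
        tm_local.tm_isdst = -1;
        target = mktime(&tm_local);
    }

    struct itimerspec its = {0};      /* one-shot; re-arm from the handler */
    its.it_value.tv_sec = target;     /* absolute CLOCK_REALTIME expiry */
    timer_settime(timerid, TIMER_ABSTIME, &its, NULL);
}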
That works well in all cases except when the system time is corrected backwards. If the user sets the clock one year into the future and then back to the correct date, the timer ends up armed for 3 AM a year from now, so it will not expire for an entire year.
As a workaround, I could determine, for each repeating timer, when the previous timeout should have occurred, and if I ever see a system time earlier than that, correct all existing timers. However, that still requires a separate wake-up reason.
Is there a way to be notified if the system clock is set backwards, either in general or before a given time, or is there a better way to handle this?
First of all, I think you're overthinking this - it's 2017, a system's clock never needs to be set (rather than just drift-corrected) except at boot, and the clock should never be wrong by more than a few milliseconds (ideally microseconds), all kept in proper shape by ntp, ptp, or similar. In my book, it's perfectly acceptable to simply document that your software will not behave as expected under discontinuous clock adjustments.
If that's not acceptable to you, your options are limited:
You could use CLOCK_MONOTONIC instead of CLOCK_REALTIME and set the timer without TIMER_ABSTIME, using the difference between 3 AM and the current time reported by CLOCK_REALTIME as the interval (sketched below). This has the advantage that, even with arbitrary clock resets, you will never miss the 3 AM event by more than 24 hours, but you can still miss it if the time within the day is adjusted.
Use a timer with a short expiry and poll, on each timer event, whether you have just crossed the 3 AM boundary. This will never miss the event by more than your timer interval, but it wastes considerable CPU resources (and prevents deep sleep) polling.
There are other variants of these methods but in terms of pros and cons they're essentially equivalent.
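For illustration, here is a rough sketch of the first option (the CLOCK_MONOTONIC variant). mono_timer is assumed to have been created with timer_create(CLOCK_MONOTONIC, ...), and next_3am is the absolute CLOCK_REALTIME second computed as in the question:

#include <time.h>

/* Sketch of the CLOCK_MONOTONIC variant: convert the absolute 3 AM target
 * into a relative delay measured against CLOCK_REALTIME "now". */
static void arm_relative_to_3am(timer_t mono_timer, time_t next_3am)
{
    struct timespec now_rt;
    clock_gettime(CLOCK_REALTIME, &now_rt);

    struct itimerspec its = {0};
    its.it_value.tv_sec = next_3am - now_rt.tv_sec;  /* relative delay */
    if (its.it_value.tv_sec < 1)
        its.it_value.tv_sec = 1;                     /* clamp, just in case */
    timer_settime(mono_timer, 0, &its, NULL);        /* note: no TIMER_ABSTIME */
}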
I am currently trying to talk to a piece of hardware in userspace (underneath the hood, everything is using the spidev kernel driver, but that's a different story).
The hardware tells me that a command has completed by placing a special value in a register that I am reading. The hardware is also required to get back to me within a certain time; otherwise the command has failed. Different commands take different times.
As a result, I am implementing a way to set a timeout and then check for that timeout using clock_gettime(). In my "set" function, I take the current time and add the time interval I should wait for (usually anywhere from a few ms to a couple of seconds). I then store this value for later use.
In my "check" function, I once again get the current time and then compare it against the time I have saved. This seems to work as I had hoped.
Given my use case, should I be using CLOCK_MONOTONIC or CLOCK_MONOTONIC_RAW? I'm assuming CLOCK_MONOTONIC_RAW is better suited, since the intervals I am checking are short. I am worried that such a short interval might coincide with a period in which NTP is doing a lot of adjusting. Note that my target system is only Linux kernels 4.4 and newer.
Thanks in advance for the help.
Edited to add: given my use case, I need "wall clock" time, not CPU time. That is, I am checking to see if the hardware has responded in some wall clock time interval.
References:
Rutgers Course Notes
What is the difference between CLOCK_MONOTONIC & CLOCK_MONOTONIC_RAW?
Elapsed Time in C Tutorial
I am reading about the difference between CLOCK_REALTIME and CLOCK_MONOTONIC
Difference between CLOCK_REALTIME and CLOCK_MONOTONIC?
CLOCK_REALTIME has discontinuities in time; it can jump forwards as well as backwards. Is that a bug in this clock? How could a clock that gives inconsistent time be reliable?
Despite its imperfections, CLOCK_REALTIME should be the system's best estimate of the current UTC or civil time. It's the basis for the system's ability to display the same time you'd see if you looked at your watch, or a clock on the wall, or your cell phone, or listened to a time broadcast on a radio station, etc. (The display does involve a conversion from UTC to local time; more on this later.)
But if CLOCK_REALTIME is going to match the UTC time out there in the real world, there are at least two pretty significant issues:
What if someone accidentally sets the clock on your computer wrong? They're going to have to fix it, and the fix might involve a time jump. Pretty much no way around that, especially if the error is large (like, hours or days).
Most computers unfortunately have no way of representing leap seconds. Therefore, when there's a leap second out in the real world, most computer clocks have to jump a little.
So when you read that CLOCK_REALTIME might have discontinuities, might jump forwards as well as backwards, that's not a bug, it's a feature: CLOCK_REALTIME must have those possibilities, if it's to cope with the real world with leap seconds and occasionally-wrong clocks.
So if you're writing code which is supposed to work with times matching those in the real world, CLOCK_REALTIME is what you want, warts and all. Ideally, though, you'll write your code in such a way that it behaves reasonably gracefully (does not crash or do something bizarre) if, once in a while, the system clock jumps forwards or backwards for some reason.
As you probably know from the other question you referenced, CLOCK_MONOTONIC is guaranteed to always step forward at exactly one second per second, with no jumps or discontinuities, but the absolute value of the clock doesn't mean much. If the CLOCK_MONOTONIC value is 13:05, that doesn't mean it's just after one in the afternoon, it typically means that the computer has been up and running for 13 hours and 5 minutes.
So if all you're interested in is relative times, CLOCK_MONOTONIC is fine. In particular, if you want to time how long something took, by subtracting the start time from the end time, it's preferable to use CLOCK_MONOTONIC values for this, since they won't give you a wrong answer if there was some kind of a time jump (that would have affected CLOCK_REALTIME) in between.
Or, in summary, as people said in the comments thread, CLOCK_REALTIME is what you need for absolute time, while CLOCK_MONOTONIC is better for relative time.
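As a concrete illustration of the relative-time case, a sketch of the usual elapsed-time measurement looks like this (the work being timed is left as a placeholder):

#include <stdio.h>
#include <time.h>

/* Sketch: timing an operation with CLOCK_MONOTONIC, so that a clock jump
 * (which would affect CLOCK_REALTIME) cannot skew the measurement. */
int main(void)
{
    struct timespec start, end;

    clock_gettime(CLOCK_MONOTONIC, &start);
    /* ... the work being timed goes here ... */
    clock_gettime(CLOCK_MONOTONIC, &end);

    double elapsed = (end.tv_sec - start.tv_sec)
                   + (end.tv_nsec - start.tv_nsec) / 1e9;
    printf("took %.6f s\n", elapsed);
    return 0;
}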
Now, a few more points.
As mentioned, CLOCK_REALTIME is not quite "wall time", because it actually deals in UTC. It uses the famous (infamous?) Unix/Posix representation of UTC seconds since 1970. For example, a CLOCK_REALTIME value of 1457852399 translates to 06:59:59 UTC on March 13, 2016. Where I live, five hours west of Greenwich, that corresponds to 01:59:59 local time. But one second later was not 2:00 in the morning for me! In fact, 1457852399 + 1 = 1457852400 corresponds to 03:00:00 Eastern time, because that's when Daylight Saving Time kicked in here.
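If you want to see this for yourself, a small sketch like the following prints that same value as UTC and as local time (the local-time output in the comments assumes a US Eastern time zone):

#include <stdio.h>
#include <time.h>

/* Sketch: the same time_t rendered as UTC and as local time. */
int main(void)
{
    time_t t = 1457852399;
    char buf[64];

    strftime(buf, sizeof buf, "%Y-%m-%d %H:%M:%S %Z", gmtime(&t));
    printf("UTC:   %s\n", buf);          /* 2016-03-13 06:59:59 UTC */

    strftime(buf, sizeof buf, "%Y-%m-%d %H:%M:%S %Z", localtime(&t));
    printf("local: %s\n", buf);          /* 2016-03-13 01:59:59 EST */

    t += 1;                              /* one second later... */
    strftime(buf, sizeof buf, "%Y-%m-%d %H:%M:%S %Z", localtime(&t));
    printf("local: %s\n", buf);          /* 2016-03-13 03:00:00 EDT (DST) */
    return 0;
}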
I suggested that if your clock was wrong, a time jump was pretty much the only way to fix it, but that's not quite true. If your clock is only slightly off, it's possible to correct it by "slewing" the time gradually (by changing the clock frequency slightly) so that after a few minutes or hours it will have drifted to the correct time without a jump. That's what NTP tries to do, although depending on its configuration it may only be willing to do that for errors that are pretty small.
I said that CLOCK_MONOTONIC was typically the time the computer has been up and running. That's not guaranteed by the standard; all the standard says is that CLOCK_MONOTONIC counts time since some arbitrary timepoint. On systems that do implement CLOCK_MONOTONIC as the time the system has been up, there can be two interpretations: is it time since boot, or the time the system has been up and running (that is, minus any time it was asleep or suspended)? On many systems, there's yet another clock, CLOCK_BOOTTIME, that counts time since boot (whether up or suspended), while CLOCK_MONOTONIC counts only time the system was up and running.
I said, "CLOCK_MONOTONIC is guaranteed to always step forward at exactly one second per second", but that may not be strictly correct, either. If your computer is in the middle of a time-slewing operation, trying to gradually correct an absolute time error, it may actually be the case that CLOCK_MONOTONIC is temporarily stepping at 1.001 seconds per second, or 0.999 seconds per second, or something like that. The discrepancy will usually be quite small, but in case it matters to you, some systems have yet other clock types you can use, such as CLOCK_MONOTONIC_RAW, that are supposed to be free of such perturbations.
Finally, if you want to track proper time, and you want to avoid jumps or discontinuities at leap seconds, you've got a problem, because of the poor handling of leap seconds in traditional Unix/Linux (and Windows, and all other) computer systems. Under recent (4.x?) Linux kernels, there's a CLOCK_TAI which may help. Some experimental systems may implement yet another clock, CLOCK_UTC, which handles UTC time with leap seconds properly. Both of those have some other costs, though, and you'd have to really know what you were doing to use them effectively, at least with today's level of support. See the LEAPSECS mailing list for more information.
Short question: how do I get the seconds since reset on an STM32L051T6 microcontroller?
My effort and detailed issue:
I am using an STM32L051T6 series microcontroller. I need to count seconds since power-on. I am also using low-power mode, so I wrote code that uses the wakeup timer interrupt of the microcontroller's internal RTC: a 1-second wakeup interval clocked from the external 32768 Hz LSE. I observed the accumulated seconds since power-on (SSPO) after 3 days and found it had fallen behind by 115 seconds compared to the actual elapsed time. My guess is that this drift comes from the interrupt latency of the wakeup timer interrupt. How can I remove this 115-second drift? Or is there a better method than a wakeup interrupt to count seconds since power-on?
UPDATE:
I tried using SysTick with the HAL_GetTick() function as seconds since power-on, but even SysTick falls behind over time.
If you want to measure time accurately over a longer period, an RTC is the way to go. Since you mentioned that you have an RTC, you can use the method below.
At startup, load the RTC with zero.
Then you can read the seconds elapsed when required without the errors above.
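A rough sketch of that using the STM32 HAL might look like the following (hrtc is assumed to be an RTC handle already initialized elsewhere, e.g. by CubeMX; day rollover via the date registers is omitted for brevity, so as written this only covers uptimes under 24 hours):

#include "stm32l0xx_hal.h"

extern RTC_HandleTypeDef hrtc;   /* assumed: RTC already initialized elsewhere */

/* At startup: zero the RTC time so it counts up from power-on. */
void sspo_init(void)
{
    RTC_TimeTypeDef t = {0};
    HAL_RTC_SetTime(&hrtc, &t, RTC_FORMAT_BIN);
}

/* Later: read the elapsed seconds (day rollover via the date registers
 * is omitted here for brevity). */
uint32_t sspo_seconds(void)
{
    RTC_TimeTypeDef t;
    RTC_DateTypeDef d;
    HAL_RTC_GetTime(&hrtc, &t, RTC_FORMAT_BIN);
    HAL_RTC_GetDate(&hrtc, &d, RTC_FORMAT_BIN);  /* required to unlock the shadow registers */
    return (uint32_t)t.Hours * 3600u + (uint32_t)t.Minutes * 60u + t.Seconds;
}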
Edit: as per the comment, the RTC can be changed by the user. In that case:
If you can modify the RTC write function called by the user, then when the user calls it, update a global variable, VarA, with the time set by the user. The elapsed time is then the time read from the RTC minus VarA.
If the RTC is accurate, you should use the RTC by storing its value at boot time and later comparing against that saved value. But you said that the RTC can be reset by the user, so I can see two ways to cope with that:
if you have enough control over the system, replace the command or HMI the user uses to reset the clock with a wrapper that informs your module and lets it read the RTC before and after the reset;
if you do not have enough control, or cannot wrap the user's reset (because it uses a direct system call, etc.), use a timer to check the RTC value every second.
Either way, you should define a threshold on the delta in the RTC value. If the delta is small, it is likely just an adjustment, because unless your system uses an atomic clock, even an RTC drifts over time. In that case I would not care, because you can hardly know whether it drifted since the last reboot or not. If you want a cleverer algorithm, you can make the threshold depend on the time since the last reboot: the longer the system has been up, the higher the probability that it has drifted since then.
Conversely, a large delta is likely a correction because the RTC was blatantly wrong, the backup battery is dead, or something similar. In that case you should compute a new start RTC value that yields the same elapsed duration with the new RTC value.
As a rule of thumb, I would use a threshold of about 1 or 2 seconds per day of uptime without an RTC adjustment (ref) - meaning I would also store the time of the last RTC adjustment, initially the boot time.
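A sketch of that threshold logic might look like the following (all names and the 2-seconds-per-day figure are illustrative; rtc_now() is assumed to return the RTC as seconds since some epoch):

#include <stdint.h>

/* Sketch of the threshold idea above.  rtc_now() is a placeholder for
 * however your system reads the RTC as a second count. */
extern uint32_t rtc_now(void);

static uint32_t boot_rtc;         /* RTC value saved at boot */
static uint32_t last_adjust_rtc;  /* RTC value at the last accepted correction */

void uptime_init(void)
{
    boot_rtc = last_adjust_rtc = rtc_now();
}

uint32_t uptime_seconds(void)
{
    return rtc_now() - boot_rtc;
}

/* Call this when a user change of the RTC is detected (via the wrapper or
 * the one-second poll described above). */
void on_rtc_changed(uint32_t old_rtc, uint32_t new_rtc)
{
    uint32_t days_up   = (old_rtc - last_adjust_rtc) / 86400u;
    uint32_t threshold = 2u * (days_up + 1u);        /* ~2 s per uptime day */
    int32_t  delta     = (int32_t)(new_rtc - old_rtc);

    if ((uint32_t)(delta < 0 ? -delta : delta) > threshold) {
        boot_rtc += (uint32_t)delta;   /* large jump: shift the stored start */
        last_adjust_rtc = new_rtc;     /* so the computed uptime stays consistent */
    }
    /* small jump: treat it as drift correction and ignore it */
}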
I'm using a PIC18 with Fosc = 10MHz. So if I use Delay10KTCYx(250), I get 10,000 x 250 x 4 x (1/10e6) = 1 second.
How do I use the delay functions in the C18 for very long delays, say 20 seconds? I was thinking of just using twenty lines of Delay10KTCYx(250). Is there another more efficient and elegant way?
Thanks in advance!
It is strongly recommended that you avoid using the built-in delay functions such as Delay10KTCYx()
Why, you might ask?
These delay functions are very inaccurate, and they may cause your code to be compiled in unexpected ways. Here's one such example where using the Delay10KTCYx() function can cause problems.
Let's say that you have a PIC18 microprocessor that has only two hardware timer interrupts. (Usually they have more but let's just say there are only two).
Now let's say you manually set up the first hardware timer interrupt to fire exactly once per second, to drive a heartbeat LED. And let's say you set up the second hardware timer interrupt to fire every 50 milliseconds, because you want to take some sort of digital or analog reading exactly every 50 milliseconds.
Now, lastly, let's say that in your main program you want to delay 100,000 clock cycles. So you put a call to Delay10KTCYx(10) in your main program. What happens, do you suppose? How does the PIC18 magically count off 100,000 clock cycles?
One of two things will happen. It may "hijack" one of your other hardware timer interrupts to get exactly 100,000 clock cycles. This would either cause your heartbeat LED to not blink at exactly 1 second, or cause your digital or analog readings to happen at some time other than every 50 milliseconds.
Or, the delay function will just call a bunch of Nop() and claim that 1 Nop() = 1 clock cycle. What isn't accounted for is the overhead within the Delay10KTCYx(10) function itself. It has to increment a counter to keep track of things, and surely it takes more than 1 clock cycle to increment that counter. As Delay10KTCYx(10) loops around and around, it is just not capable of giving you exactly 100,000 clock cycles. Depending on a lot of factors you may get way more, or way less, clock cycles than you expected.
Delay10KTCYx() should only be used if you need an "approximate" amount of time, and pre-canned delay functions shouldn't be used at all if you are already using the hardware timer interrupts for other purposes. The compiler may not even compile successfully when Delay10KTCYx() is used for very long delays.
I would highly recommend that you set up one of your timer interrupts to interrupt your hardware at a known interval, say every 50,000 clock cycles. Then, each time the hardware interrupts, within your ISR code for that timer, increment a counter and reset the timer back to 0. When enough 50,000-cycle interrupts have elapsed to equal 20 seconds (in your example, 1000 timer interrupts, since 20 s x 2,500,000 instruction cycles/s / 50,000 cycles per interrupt = 1000), reset your counter. Basically my advice is that you should always manually handle time on a PIC and not rely on pre-canned delay functions - rather, build your own delay functions that integrate with the chip's hardware timers. Yes, it's going to be extra work - "but why can't I just use this easy and nifty built-in delay function, why would they even put it there if it's gonna muck up my program?" - but this should become second nature. Just like you should be manually configuring EVERY SINGLE REGISTER in your PIC18 upon boot-up, whether you are using it or not, to prevent unexpected things from happening.
You'll get way more accurate timing - and way more predictable behavior from your PIC18. Using pre-canned Delay functions is a recipe for disaster... it may work... it may work on several projects... but sooner or later your code will go all buggy on you and you'll be left wondering why and I guarantee the culprit will be the pre-canned delay function.
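To make that concrete, here is a rough C18 sketch of the ISR-counter approach (register names are from the PIC18 family datasheet; the reload value ignores ISR overhead, so trim it if you need cycle-exact timing):

#include <p18cxxx.h>

/* Rough sketch for C18 on a PIC18 with Fosc = 10 MHz, i.e. 2,500,000
 * instruction cycles per second.  Timer0 is reloaded for roughly
 * 50,000 cycles per interrupt. */

#define TMR0_RELOAD (65536u - 50000u)   /* overflow after ~50,000 cycles */

static volatile unsigned char ticks;    /* 0..49 within the current second */
static volatile unsigned long seconds;  /* free-running seconds counter */

void timer0_isr(void);

#pragma code high_vector = 0x08
void high_vector(void)
{
    _asm GOTO timer0_isr _endasm
}
#pragma code

#pragma interrupt timer0_isr
void timer0_isr(void)
{
    INTCONbits.TMR0IF = 0;              /* clear the Timer0 overflow flag */
    TMR0H = TMR0_RELOAD >> 8;           /* write the high byte first */
    TMR0L = (unsigned char)TMR0_RELOAD;
    if (++ticks >= 50) {                /* 50 x 50,000 cycles = 1 second */
        ticks = 0;
        seconds++;
    }
}

void timer0_init(void)
{
    T0CON = 0x08;                       /* 16-bit, internal clock, no prescaler */
    TMR0H = TMR0_RELOAD >> 8;
    TMR0L = (unsigned char)TMR0_RELOAD;
    INTCONbits.TMR0IF = 0;
    INTCONbits.TMR0IE = 1;              /* enable the Timer0 interrupt */
    INTCONbits.GIE = 1;                 /* enable global interrupts */
    T0CONbits.TMR0ON = 1;               /* start the timer */
}

A 20-second delay then reduces to waiting in the main loop until seconds has advanced by 20 (i.e. 1000 interrupts).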
To create a very long delay, use an internal timer. This helps avoid blocking your application, and you can check the running time as you go. Please refer to the PIC data sheet for how to set up a timer and its interrupt.
If you want a high-precision 1 s time base, I also suggest considering an external RTC device, or the internal RTC if the micro has one.
I'm trying to simulate a key-down and key-up action, separated by a precise delay.
For example: 2638 milliseconds.
SendMessage(hWnd, WM_KEYDOWN, keyCode, 0);   /* press the key */
Sleep(2638);                                 /* hold for the desired delay */
SendMessage(hWnd, WM_KEYUP, keyCode, 0);     /* release the key */
How would you know if it really worked?
You wouldn't with this code, since accurately measuring the time that code takes to execute is a difficult task.
To get to the question posed by your title (you should really ask one question at a time...): the accuracy of said functions is dictated by the operating system. On Linux, the classic system clock granularity was 10 ms (a HZ=100 scheduler tick), so timed process suspension via nanosleep() was only guaranteed to be accurate to about 10 ms, and even then it's not guaranteed to sleep for exactly the time you specify. (See below.)
On Windows, the clock granularity can be changed to accommodate power management needs (e.g. decrease the granularity to conserve battery power). See MSDN's documentation on the Sleep function.
Note that with Sleep()/nanosleep(), the OS only guarantees that the process suspension will last for at least as long as you specify. The execution of other processes can always delay resumption of your process.
Therefore, the key-up event sent by your code above will be sent at least 2.638 seconds later than the key-down event, and not a millisecond sooner. But it would be possible for the event to be sent 2.7, 2.8, or even 3 seconds later. (Or much later if a realtime process grabbed hold of the CPU and didn't relinquish control for some time.)
Sleep works in terms of standard Windows thread scheduling. It is accurate only to about 20-50 milliseconds.
So it's OK for things that merely need to feel right to the user, but it's absolutely inappropriate for real-time work.
Besides this, there are much better ways to simulate keyboard/mouse events. Please see SendInput.
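For reference, a minimal SendInput sketch might look like this (the helper name and parameters are mine; the Sleep in the middle is still subject to the accuracy limits discussed above):

#include <windows.h>

/* Sketch: pressing and releasing a key via SendInput instead of SendMessage. */
void press_key_for(WORD vk, DWORD hold_ms)
{
    INPUT in = {0};
    in.type = INPUT_KEYBOARD;
    in.ki.wVk = vk;
    SendInput(1, &in, sizeof in);       /* key down */

    Sleep(hold_ms);                     /* still subject to Sleep's accuracy */

    in.ki.dwFlags = KEYEVENTF_KEYUP;
    SendInput(1, &in, sizeof in);       /* key up */
}

Usage would be something like press_key_for(VK_SPACE, 2638);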
The Sleep() function will return before the desired delay when the requested delay is shorter than the time remaining until the next timer interrupt. But that only means you are asking for a shorter sleep than your system currently supports. It is advisable to set the multimedia timer resource to a higher interrupt frequency to obtain a better match between the observed sleep delay and the desired one (see the sketch after the links below).
See the comments in the following threads:
How to get an accurate 1ms Timer Tick under WinXP
Sleep Less Than One Millisecond
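For example, a minimal sketch of raising the timer resolution around a sleep, using the winmm timeBeginPeriod/timeEndPeriod pair, could look like this:

#include <windows.h>
#pragma comment(lib, "winmm.lib")       /* timeBeginPeriod/timeEndPeriod (MSVC) */

/* Sketch: request ~1 ms timer resolution around a timing-sensitive sleep. */
void sleep_with_1ms_resolution(DWORD ms)
{
    timeBeginPeriod(1);                 /* raise the system timer resolution */
    Sleep(ms);
    timeEndPeriod(1);                   /* always pair with timeBeginPeriod */
}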
Sleep() ensures that the thread is suspended for at least the amount of time given as its argument; the operating system does not guarantee that it wakes up exactly then. For a detailed discussion you can refer to the post below:
how is sleep implemented at OS level?