Is there a way to ensure an NTP-synced clock never moves backwards?

Let's say I have code that uses the clock to generate IDs. Or I have code that calculates the time elapsed since an event happened. Or any other logic that expects system time to only move forwards, never backwards. If time does move backwards and the program notices it, let's say it crashes or hangs.
I'd like to use an NTP service with such programs. Is there a way that NTP can be configured so that it is guaranteed never to adjust time backwards? Slowing down the system clock would be fine.
So a second could be longer or shorter, but system time should never move backwards.

With standard ntpd you may have a look at the tinker configuration command.
It lets you specify "never step the clock backwards" by setting stepback 0 and, if you also do not want the clock stepped forward, stepfwd 0 (or even plain step 0 to disable stepping entirely).
Otherwise ntpd will step the clock (backwards, if the local clock is ahead) once the offset reaches 128 ms,
and it will terminate if the offset exceeds 1000 s (the panic threshold).
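As a sketch (assuming a reasonably recent ntpd; stepback and stepfwd are not present in older 4.2.x releases), the relevant lines in ntp.conf would look something like this:

# tinker options in /etc/ntp.conf
tinker stepback 0 stepfwd 0   # never step backwards (and, optionally, never step forwards)
# tinker step 0               # alternative: disable stepping in both directions
# tinker panic 0              # optional: don't exit when the offset exceeds 1000 s

Keep in mind that with stepping disabled ntpd can only slew, and it slews at no more than 500 ppm, so correcting a large offset can take a very long time.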

Related

Does FreeRTOS guarantee the first timer tick to be exactly 1ms?

I am implementing software watchdogs to ensure that a 1kHz task is executing within its allotted deadline (i.e. 1ms). But I am wondering if there is exactly 1ms between the 1kHz task starting and tick 1.
From my understanding, this is what happens when FreeRTOS starts
vPortSetupTimerInterrupt(); // Tick 0 starts
...
prvPortStartFirstTick(); // Context switch
// After the context switch, the 1kHz task starts
Between tick 0 and tick 1, the 1kHz task doesn't get a full 1ms to do useful work, because some time is spent getting from vPortSetupTimerInterrupt() to prvPortStartFirstTick(). Is this correct? And if so, is this a cause for concern? Or is the extra delay so short that it is negligible?
I am developing on ARM Cortex M4 (STM32F302 series).
You are correct in that executing from vPortSetupTimerInterrupt() to prvPortStartFirstTick() (not sure what that function is, it is not one supplied by FreeRTOS) does take some time - as does executing any instruction. Whether it is a concern depends on your application - but if it is a concern, then you are probably never going to get exactly 1ms to the first tick. Think about doing it the other way around - say the first task starts before the timer is started - then the first task will have to be the one that starts the timer, so again you are going to spend some time in the task doing something other than what you want the task to be doing.
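If the underlying goal is just to verify that each iteration of the 1kHz task fits its 1ms budget, one option (not something the answer proposes, just a sketch) is to measure with the Cortex-M4 DWT cycle counter instead of the tick, which sidesteps the first-tick question entirely. It assumes CMSIS device headers (stm32f3xx.h here) and FreeRTOS; CORE_CLOCK_HZ and HandleOneCycle() are placeholder names:

#include "stm32f3xx.h"   /* CMSIS device header: CoreDebug, DWT */
#include "FreeRTOS.h"
#include "task.h"

#define CORE_CLOCK_HZ    72000000UL                  /* placeholder: set to your actual core clock */
#define DEADLINE_CYCLES  (CORE_CLOCK_HZ / 1000UL)    /* cycles in 1 ms */

extern void HandleOneCycle(void);                    /* placeholder for the task's real work */

static void vOneKHzTask(void *pvParameters)
{
    TickType_t xLastWake = xTaskGetTickCount();
    (void) pvParameters;

    /* Enable the DWT cycle counter once. */
    CoreDebug->DEMCR |= CoreDebug_DEMCR_TRCENA_Msk;
    DWT->CYCCNT = 0;
    DWT->CTRL  |= DWT_CTRL_CYCCNTENA_Msk;

    for (;;)
    {
        uint32_t ulStart = DWT->CYCCNT;

        HandleOneCycle();                            /* the 1kHz work */

        uint32_t ulElapsed = DWT->CYCCNT - ulStart;  /* unsigned subtraction handles counter wrap */
        if (ulElapsed > DEADLINE_CYCLES)
        {
            /* Deadline overrun: log it, assert, or notify a watchdog task. */
        }

        vTaskDelayUntil(&xLastWake, pdMS_TO_TICKS(1));
    }
}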

Measuring Elapsed Time In Linux (CLOCK_MONOTONIC vs. CLOCK_MONOTONIC_RAW)

I am currently trying to talk to a piece of hardware from userspace (under the hood, everything is using the spidev kernel driver, but that's a different story).
The hardware will tell me that a command has been completed by indicating so with a special value in a register that I am reading from. The hardware also has a requirement to get back to me within a certain time; otherwise the command has failed. Different commands take different times.
As a result, I am implementing a way to set a timeout and then check for that timeout using clock_gettime(). In my "set" function, I take the current time and add the time interval I should wait for (usually this is anywhere from a few ms to a couple of seconds). I then store this value for safekeeping.
In my "check" function, I once again, get the current time and then compare it against the time I have saved. This seems to work as I had hoped.
Given my use case, should I be using CLOCK_MONOTONIC or CLOCK_MONOTONIC_RAW? I'm assuming CLOCK_MONOTONIC_RAW is better suited, since the intervals I am checking are short. I am worried that such a short interval might land in a system-wide outlier period in which NTP is doing a lot of adjusting. Note that my target system is only Linux kernels 4.4 and newer.
Thanks in advance for the help.
Edited to add: given my use case, I need "wall clock" time, not CPU time. That is, I am checking to see if the hardware has responded in some wall clock time interval.
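For what it's worth, the set/check pattern described above might look roughly like this with CLOCK_MONOTONIC (illustrative names, not the actual code from the question):

#include <stdbool.h>
#include <time.h>

static struct timespec deadline;                 /* set by timeout_set(), read by timeout_expired() */

/* Record "now + timeout_ms" as the deadline. */
void timeout_set(long timeout_ms)
{
    clock_gettime(CLOCK_MONOTONIC, &deadline);
    deadline.tv_sec  += timeout_ms / 1000;
    deadline.tv_nsec += (timeout_ms % 1000) * 1000000L;
    if (deadline.tv_nsec >= 1000000000L) {       /* carry the nanoseconds */
        deadline.tv_sec  += 1;
        deadline.tv_nsec -= 1000000000L;
    }
}

/* Return true once the deadline has passed. */
bool timeout_expired(void)
{
    struct timespec now;
    clock_gettime(CLOCK_MONOTONIC, &now);
    return now.tv_sec > deadline.tv_sec ||
           (now.tv_sec == deadline.tv_sec && now.tv_nsec >= deadline.tv_nsec);
}

As a design note: CLOCK_MONOTONIC is subject to NTP slew (gradual rate adjustment) but never to steps, while CLOCK_MONOTONIC_RAW ignores even the slew and runs at the hardware's uncorrected rate.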
References:
Rutgers Course Notes
What is the difference between CLOCK_MONOTONIC & CLOCK_MONOTONIC_RAW?
Elapsed Time in C Tutorial

Determining cause of delay/pause - kernel scheduler etc

System is an embedded Linux/Busybox core on a small embedded board with a web server (Boa) running.
We are seeing some high latency in responses from the web server - sometimes >500ms for no good reason, so I've been digging...
After liberally scattering debug prints throughout the code, it seems to come down to the entire process just... stopping for a bit, in a way which I can only assume is the process/thread being interrupted by another process.
Using print statements and clock_gettime() to measure the time taken to process a request, I can see the code reach the bottom of a while() loop (parsing input) and print something like "Time so far: 5ms", and then the next print at the top of the loop reads "Time so far: 350ms" - yet all the code does between the bottom of the loop and that first print back at the top is a basic check along the lines of while(position < end); there is nothing complicated that could hold it up.
There's no IO blocking, the data it's parsing has all arrived already, and it's not making any external calls or wandering off into complex functions.
I then looked into whether the kernel scheduler (CFS in our case) might be holding things up. Adding calls to clock() (processor time rather than wall clock) and again calculating the time differences, I can see that the wall-clock delay may run beyond 300ms from one loop iteration to the next, but the reported processor time used (which seems to have a ~10ms resolution) is more like 50ms.
So that suggests the task scheduler is holding the process up for hundreds of milliseconds at a time. I've checked the scheduler granularity and maximum delay and they are nowhere near 100ms; the scheduler latency is set to 6ms, for example.
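A stripped-down sketch of that wall-clock vs. processor-time comparison (illustrative only, not the actual Boa code) looks something like this; a large wall-clock delta paired with a much smaller CPU-time delta means the process spent the gap off the CPU:

#include <stdio.h>
#include <time.h>

static double wall_ms_since(const struct timespec *start)
{
    struct timespec now;
    clock_gettime(CLOCK_MONOTONIC, &now);
    return (now.tv_sec - start->tv_sec) * 1000.0 +
           (now.tv_nsec - start->tv_nsec) / 1.0e6;
}

void parse_request(const char *position, const char *end)
{
    struct timespec wall_start;
    clock_t cpu_start = clock();                     /* processor time, coarse resolution */
    clock_gettime(CLOCK_MONOTONIC, &wall_start);     /* wall-clock time */

    while (position < end) {
        /* ... parse the byte at *position ... */
        position++;
    }

    printf("wall: %.1f ms, cpu: %.1f ms\n",
           wall_ms_since(&wall_start),
           (clock() - cpu_start) * 1000.0 / CLOCKS_PER_SEC);
}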
Any advice on what I can do now to try and track down the problem - identifying processes which could hog the CPU for >100ms, measuring/tracking what the scheduler is doing, etc.?
First you should try running your program under strace to see if there are any system calls holding things up.
If that is ambiguous or does not help, I would suggest you try profiling the kernel. You could try OProfile.
This will create a call graph that you can analyse to see what is happening.
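For the strace route, an invocation along these lines (just an example, not from the answer) adds per-syscall durations and microsecond timestamps, which makes long gaps easy to spot in the log:

# -f follows threads/forks, -tt prints microsecond timestamps,
# -T shows the time spent inside each syscall, -p attaches to the running server
strace -f -tt -T -o /tmp/boa.trace -p <boa_pid>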

Generic Microcontroller Delay Function

Can someone please tell me how this function works? I'm using it in code and have an idea how it works, but I'm not 100% sure exactly. I understand the concept of an input variable N counting down, but how the heck does it work? Also, if I am using it repeatedly in my main() for different delays (different inputs for N), do I have to "zero" the function if I used it somewhere else?
Reference: MILLISEC is a constant defined by Fcy/10000, or system clock/10000.
Thanks in advance.
// DelayNmSec() gives a 1 ms to 65.5 second delay
/* Note that FCY is used in the computation. Please make the necessary
   changes (PLLx4 or PLLx8 etc.) to compute the right FCY as in the define
   statement above. */
void DelayNmSec(unsigned int N)
{
    unsigned int j;

    while (N--)                          // one pass of the outer loop per millisecond
        for (j = 0; j < MILLISEC; j++)   // burn roughly one millisecond of CPU cycles
            ;                            // empty body: pure busy-wait
}
This is referred to as busy waiting, a technique that just burns CPU cycles, thus "waiting" by keeping the CPU "busy" running empty loops. You don't need to reset the function; it does the same thing every time it is called.
If you call it with N=3, it will repeat the while loop 3 times, each time counting with j from 0 to MILLISEC, which is presumably a constant that depends on the CPU clock.
The original author of the code has timed it and looked at the generated assembler to get the exact number of instructions executed per millisecond, and has configured the constant MILLISEC to match that, so the for loop acts as the busy-wait.
The input parameter N is then simply the number of milliseconds the caller wants to wait, i.e. the number of times the for loop is executed.
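To put rough numbers on it: if Fcy were, say, 16 MHz, then MILLISEC = 16,000,000 / 10,000 = 1,600, and since one millisecond at 16 MHz is 16,000 cycles, each pass of the empty for loop would need to cost about 10 cycles for the delay to come out to 1 ms; the divisor 10,000 was presumably chosen with that per-iteration cost in mind.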
The code will break if:
it is used on a different or faster microcontroller (depending on how Fcy is maintained), or
the optimization level of the C compiler is changed (see the sketch below), or
the C compiler version is changed (as it may generate different code),
so, if the person who wrote it was clever, there may be a calibration program which defines and configures the MILLISEC constant.
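On the optimization point, a slightly more robust variant (a sketch, not from the original post) marks the counter volatile so the compiler cannot delete the empty loop outright; the calibration of MILLISEC against the real clock still has to be done separately:

/* Variant of DelayNmSec() with a volatile counter (assumes MILLISEC is
   defined from Fcy as in the original reference). */
void DelayNmSec(unsigned int N)
{
    volatile unsigned int j;        /* volatile: keeps the empty loop from being optimized away */

    while (N--)
        for (j = 0; j < MILLISEC; j++)
            ;                       /* busy-wait roughly one millisecond */
}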
This is what is known as a busy wait in which the time taken for a particular computation is used as a counter to cause a delay.
This approach does have problems in that on different processors with different speeds, the computation needs to be adjusted. Old games used this approach and I remember a simulation using this busy wait approach that targeted an old 8086 type of processor to cause an animation to move smoothly. When the game was used on a Pentium processor PC, instead of the rocket majestically rising up the screen over several seconds, the entire animation flashed before your eyes so fast that it was difficult to see what the animation was.
This sort of busy wait means that in the thread running, the thread is sitting in a computation loop counting down for the number of milliseconds. The result is that the thread does not do anything else other than counting down.
If the operating system is not a preemptive multitasking OS, then nothing else will run until the countdown completes, which may cause problems for other threads and tasks.
If the operating system is a preemptive multitasking one, the resulting delays will vary, as control is switched to some other thread for a period of time before switching back.
This approach is normally used for small pieces of software on dedicated processors where a computation has a known amount of time and where having the processor dedicated to the countdown does not impact other parts of the software. An example might be a small sensor that performs a reading to collect a data sample then does this kind of busy loop before doing the next read to collect the next data sample.

How does GetThreadTimes report time spent hibernating?

I am currently using GetThreadTimes to tell me how much time I spend in my application's event loop.
I wonder how this will be affected by hibernating. Is hibernation time reported at all? Or perhaps as system time? Is the behavior the same on all versions of Windows?
Note that I asked the same question for POSIX here.
No, hibernation time is not reported.
How could it be?
When hibernated, the computer is actually turned off: no timer is ticking and no counting is taking place.
The timings you see with GetThreadTimes are computed at each tick (i.e. interrupt) of the system timer¹.
I've made a small C program to test this.
It logs the timings every 1000 ms; I ran it and hibernated my laptop (at around second 9).
No gaps show up in the counting; as expected, each line is about 1000 ms from the previous one.
So hibernation time doesn't count either as system time or user time.
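A minimal test along those lines (a sketch, not the original program from the answer) could look like this: the thread busy-spins so its CPU time keeps accumulating, and a line is printed every time another ~1000 ms of user time has accrued.

#include <windows.h>
#include <stdio.h>

static unsigned long long FileTimeToMs(FILETIME ft)
{
    ULARGE_INTEGER u;
    u.LowPart  = ft.dwLowDateTime;
    u.HighPart = ft.dwHighDateTime;
    return u.QuadPart / 10000ULL;      /* FILETIME counts 100 ns units */
}

int main(void)
{
    FILETIME ftCreate, ftExit, ftKernel, ftUser;
    unsigned long long lastMs = 0;

    for (;;)
    {
        if (!GetThreadTimes(GetCurrentThread(),
                            &ftCreate, &ftExit, &ftKernel, &ftUser))
            return 1;

        unsigned long long userMs = FileTimeToMs(ftUser);
        if (userMs >= lastMs + 1000)   /* another second of user time has accumulated */
        {
            printf("user: %llu ms  kernel: %llu ms\n",
                   userMs, FileTimeToMs(ftKernel));
            lastMs = userMs;
        }
    }
}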
I don't know if this is consistent with all old versions of Windows, especially the ones with a different meaning of hibernation, but you can expect it to be consistent among all the important versions.
¹ HPET, LAPIC timer, or the good old PIT, whichever is available.
