adjtime(x) adjusts time very slowly - c

I'm trying to use adjtime() to correct the clock (which is, for example, 15 minutes in the future) by passing the appropriate offset to it. But as I understand it, adjtime() applies the correction very slowly, so when I call the function and then check the time, it appears unchanged.
How can I make small adjustments (anywhere from 1 second to 1 hour) so that the effect appears immediately?
Thanks.

The whole point of adjtime() is to adjust the clock gradually and continuously, in a way that does not break running applications which expect time to be continuous (and, more importantly, monotone). If you use it correctly (e.g. via NTP), your clock should never drift by enough to require large adjustments like 15 minutes.
If you really do want to make a one-time large adjustment, use clock_settime(). Just be aware that it could cause running applications to misbehave.
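A minimal sketch of the two options, assuming root privileges (CAP_SYS_TIME on Linux); the 15-minute offset is just the example from the question, and in practice you would pick only one of the two calls:

    /* A sketch of both approaches; both require privileges (root /
     * CAP_SYS_TIME on Linux). They are shown together only for comparison;
     * in practice you would pick one. Error handling is minimal. */
    #include <stdio.h>
    #include <sys/time.h>   /* adjtime() */
    #include <time.h>       /* clock_gettime(), clock_settime() */

    int main(void)
    {
        /* Option 1: gradual -- the kernel slews the clock toward the target
         * over a long period, so time stays monotone for running programs. */
        struct timeval delta = { .tv_sec = -15 * 60, .tv_usec = 0 };
        if (adjtime(&delta, NULL) == -1)
            perror("adjtime");           /* e.g. EINVAL if offset too large */

        /* Option 2: immediate -- step CLOCK_REALTIME in a single jump.
         * Running applications may observe time going backwards. */
        struct timespec now;
        if (clock_gettime(CLOCK_REALTIME, &now) == 0) {
            now.tv_sec -= 15 * 60;       /* 15 minutes back, per the question */
            if (clock_settime(CLOCK_REALTIME, &now) == -1)
                perror("clock_settime"); /* EPERM without CAP_SYS_TIME */
        }
        return 0;
    }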

Related

clock_settime backward correction

I'm trying to do a couple of tests where I need to set the computer time backward or forward depending on some external values. I know that I can do this using clock_settime() in time.h.
The problem I've encountered is that when I need to set the time backward, the operation fails.
The documentation for clock_settime states that
Only the CLOCK_REALTIME clock can be set, and only the superuser may do so. If the system securelevel is greater than 1 (see init(8)), the time may only be advanced. This limitation is imposed to prevent a malicious superuser from setting arbitrary time stamps on files. The system time can still be adjusted backwards using the adjtime(2) system call even when the system is secure.
I require nanosecond precision, and adjtime(), as far as I understand, does not offer nanosecond precision. The other problem with adjtime() is that it does not set the clock outright; rather, it slows the clock down until it catches up with the requested value.
I've done some reading on init(8), but I'm not sure how to lower the securelevel, and frankly I'd rather not be forced to do this; however, if there's no other way, I'm willing to try it.
Thanks in advance
Update 1
I started looking into altering the securelevel and now I'm not even sure that this is something that can be done on Ubuntu. Around the web I have come across mentions of editing /etc/init/rc-sysinit.conf, /etc/init/rc.conf, or /etc/sysctl.conf, and, again, I'm not sure what needs to be added in order to lower the securelevel, if in fact this can be done at all, especially since I could not find an 'rc.securelevel' file.

Fastest way to get approximate time difference

I have a method that returns the current time as a string. This method is called millions of times per second, so I have optimized it in several ways (statically allocated buffers for the time string, etc.).
For this application it is perfectly fine to approximate the time. For example, I use a resolution of 10 milliseconds; within that window the same time string is returned.
However, when profiling the code, the clock() call consumes the vast majority of the time.
What other, faster options do I have for approximating the time difference with millisecond resolution?
To answer my own question: the solution was to limit calls to clock(), or to any time function for that matter. The whole test case now runs 22x faster.
I think I can give some general advice after profiling this quite extensively: if you can live with a lower time resolution and you really need to optimize your code for speed, restructure the problem around a single global timer and avoid a costly time call on every invocation.
I now have a simple thread that sleeps for the desired resolution interval and increments an atomic int ticker variable on each loop iteration.
In the function I needed to optimize, I then just compare two ints (the last tick and the current tick); if they differ, it's time for an update.
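A rough sketch of this approach with C11 atomics and POSIX threads (the 10 ms interval and names like g_tick are illustrative, not the actual code):

    /* Sketch of the ticker approach: a background thread bumps an atomic
     * counter every ~10 ms; the hot path compares two ints and refreshes a
     * cached time string only when the tick has changed. */
    #include <pthread.h>
    #include <stdatomic.h>
    #include <stdio.h>
    #include <time.h>
    #include <unistd.h>

    static atomic_int g_tick;            /* illustrative name */

    static void *ticker(void *arg)
    {
        (void)arg;
        for (;;) {
            usleep(10 * 1000);           /* ~10 ms resolution */
            atomic_fetch_add(&g_tick, 1);
        }
        return NULL;
    }

    /* Hot path (single caller assumed): only calls time()/strftime() when
     * the tick has advanced since the last call. */
    static const char *current_time_string(void)
    {
        static int last_tick = -1;
        static char cached[64];

        int now_tick = atomic_load(&g_tick);
        if (now_tick != last_tick) {
            time_t t = time(NULL);
            strftime(cached, sizeof cached, "%H:%M:%S", localtime(&t));
            last_tick = now_tick;
        }
        return cached;
    }

    int main(void)
    {
        pthread_t th;
        pthread_create(&th, NULL, ticker, NULL);

        for (int i = 0; i < 5; i++) {
            printf("%s\n", current_time_string());
            usleep(3 * 1000);
        }
        return 0;
    }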

Getting reliable performance measurements for short bits of code

I'm trying to profile a set of functions which implement different versions of the same algorithm in different ways. I've increased the number of times each function is run so that the total time spent in a single function is roughly 1 minute (to reveal performance differences).
Now, running the test several times produces baffling results. There is huge variability (±50%) between executions of the same function, which makes it nearly impossible to determine which function is the fastest (the whole goal of the test).
Is there something special I should take care of before running the tests so that I get smoother measurements? Failing that, is running the test several times and computing the average for each function the way to go?
There are lots of things to check!
First, make sure your functions are actually CPU-bound. If so, make sure you have all CPU throttling, turbo modes, and power-saving modes disabled (in the BIOS) for the test. If you still have trouble, try pinning your process to a single core. Perhaps disable hyper-threading as well.
The goal of all this is to make sure you get your code running hot on a single core without much interruption. If you're on Linux, you can remove a single core from the OS list of available cores and use that (so there is no chance of interference on that core).
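On Linux, pinning can also be done from inside the program; a sketch using sched_setaffinity (the core number 3 is arbitrary):

    /* Sketch: pin the current process to one core (core 3 here, chosen
     * arbitrarily) before running the benchmark. Linux-specific. */
    #define _GNU_SOURCE
    #include <sched.h>
    #include <stdio.h>

    static int pin_to_core(int core)
    {
        cpu_set_t set;
        CPU_ZERO(&set);
        CPU_SET(core, &set);
        if (sched_setaffinity(0, sizeof set, &set) == -1) { /* 0 = this process */
            perror("sched_setaffinity");
            return -1;
        }
        return 0;
    }

    int main(void)
    {
        if (pin_to_core(3) == 0)
            printf("pinned; run the benchmark from here\n");
        return 0;
    }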
Running the test several times is a good idea, but using the average (arithmetic mean) is not. Instead, use the median or minimum or some other measurement which won't be influenced by outliers. Usually, the occasional long test run can be thrown out entirely (unless you're building a real-time system!).
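A sketch of that measurement strategy, using CLOCK_MONOTONIC and reporting the median (and minimum) over repeated runs; the run count and the dummy workload are placeholders:

    /* Sketch: time the function under test several times with CLOCK_MONOTONIC
     * and report the median (and minimum) instead of the mean. RUNS and the
     * dummy workload are placeholders. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    #define RUNS 31

    static void function_under_test(void)
    {
        volatile double x = 0.0;
        for (int i = 0; i < 1000000; i++)
            x += i * 0.5;
    }

    static int cmp_double(const void *a, const void *b)
    {
        double d = *(const double *)a - *(const double *)b;
        return (d > 0) - (d < 0);
    }

    int main(void)
    {
        double samples[RUNS];

        for (int r = 0; r < RUNS; r++) {
            struct timespec t0, t1;
            clock_gettime(CLOCK_MONOTONIC, &t0);
            function_under_test();
            clock_gettime(CLOCK_MONOTONIC, &t1);
            samples[r] = (t1.tv_sec - t0.tv_sec) +
                         (t1.tv_nsec - t0.tv_nsec) / 1e9;
        }

        qsort(samples, RUNS, sizeof samples[0], cmp_double);
        printf("median: %.6f s   min: %.6f s\n", samples[RUNS / 2], samples[0]);
        return 0;
    }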

time() and context switching

I am more or less wondering how time() is implemented in the C standard library and what would happen in the situation described below. Although the time involved is most likely negligible, consider a situation where you have a hard limit on time and no control over the CPU scheduler (assume that it is a "good" scheduler for a general-purpose CPU).
Now, if I use time() to measure the execution time of a particular section of code, and subtract that time from some maximum bound to determine some other time-dependent variable, how would this variable be skewed by context switches? I know we could use nice and other tools (e.g. a custom scheduler) to be sure we get full CPU usage when we need it; however, I am wondering how this works in general in situations like this and what side effects exist due to the system's scheduling choices.
time() is supposed to measure wall time; i.e., it gives the current time, regardless of how much (or how little) your process has run.
If you want to measure CPU time, you should use clock() instead (though some vendors, such as Microsoft, implement it incorrectly, so that it measures wall time as well).
Of course, there are also other tools to retrieve CPU usage, such as times() on Unix-like systems or GetProcessTimes on Windows. Most people find these more useful despite the reduced portability.
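A small sketch of the difference: sleeping advances the wall clock reported by time() but barely moves the CPU time reported by clock() (assuming a conforming clock() implementation):

    /* Sketch: wall time vs. CPU time around a two-second sleep. time() keeps
     * advancing; clock() barely moves because the process uses almost no CPU
     * while sleeping (assumes a conforming clock() implementation). */
    #include <stdio.h>
    #include <time.h>
    #include <unistd.h>

    int main(void)
    {
        time_t  wall0 = time(NULL);
        clock_t cpu0  = clock();

        sleep(2);                        /* no CPU consumed here */

        time_t  wall1 = time(NULL);
        clock_t cpu1  = clock();

        printf("wall time: %ld s\n", (long)(wall1 - wall0));
        printf("cpu time:  %.3f s\n", (double)(cpu1 - cpu0) / CLOCKS_PER_SEC);
        return 0;
    }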

Are high resolution calls to get the system time wrong by the time the function returns?

Given a C process running at the highest priority that requests the current time, is the time returned adjusted for the amount of time the call takes to return to user space? Is it already out of date when you get it? As a measurement, taking the execution time of a known number of assembly instructions in a loop and asking for the time before and after it could give you an approximation of the error. I assume this must be an issue in scientific applications, though I don't plan to write software involving any super colliders any time in the near future. I have read a few articles on the subject, but they do not indicate that any correction is made to make the time given to you slightly ahead of the time the system actually read. Should I lose sleep over other things?
Yes, they are almost definitely "wrong".
On Windows, the timing functions do not take into account the time it takes to transition back to user mode. Even if this were taken into account, they could not correct for the case where the function returns and your code hits a page fault, gets swapped out, etc., before capturing the return value.
In general, when timing things, you should snap a start and an end time around a large number of iterations to weed out this sort of uncertainty.
No, you should not lose sleep over this. No amount of adjustment or other software trickery will yield perfect results on a system with a pipelined processor with multi-layered memory access running a multi-tasking operating system with memory management, devices, interrupt handlers... Not even if your process has the highest priority.
Plus, taking the difference of two such times will cancel out the constant overhead, anyway.
Edit: I mean yes, you should lose sleep over other things :).
Yes, the answer you get will be off by a certain (smallish) amount; I have never heard of a timer function compensating for the average return time, because such a thing is nearly impossible to predict well. Such things are usually implemented by simply reading a register in the hardware and returning the value, or a version of it scaled to the appropriate timescale.
That said, I wouldn't lose sleep over this. The accepted way of keeping this overhead from affecting your measurements in any significant way is not to use these timers for short events. Usually, you will time several hundred, thousand, or million executions of the same thing, and divide by the number of executions to estimate the average time. Such a thing is usually more useful than timing a single instance, as it takes into account average cache behavior, OS effects, and so forth.
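A sketch of that amortizing approach: read the timer once around N iterations and divide, rather than timing each call individually (N and the loop body are placeholders):

    /* Sketch: amortize timer overhead by reading the clock once around N
     * iterations and dividing. N and the loop body are placeholders. */
    #include <stdio.h>
    #include <time.h>

    #define N 1000000L

    int main(void)
    {
        struct timespec t0, t1;
        volatile long sink = 0;

        clock_gettime(CLOCK_MONOTONIC, &t0);
        for (long i = 0; i < N; i++)
            sink += i;                   /* the thing being measured */
        clock_gettime(CLOCK_MONOTONIC, &t1);

        double total = (t1.tv_sec - t0.tv_sec) +
                       (t1.tv_nsec - t0.tv_nsec) / 1e9;
        printf("average per iteration: %.2f ns\n", total * 1e9 / N);
        return 0;
    }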
Most real-world uses of high-resolution timers are for profiling, where the time is read once at START and once more at FINISH. In most cases almost the same delay is involved in both the START and FINISH readings, and hence it works fine.
Now, for something like a nuclear reactor, Windows (or, for that matter, many other operating systems with generic timing functions) may not be suitable. I would guess such systems use real-time operating systems, which might give more accurate time values than desktop operating systems.
