clock_settime backward correction

I'm trying to do a couple of tests where I need to set the computer time backward or forward depending on some external values. I know that I can do this using clock_settime() in time.h.
The problem I've encountered is that when I need to set the time backward, the operation fails.
The documentation for clock_settime() states:
Only the CLOCK_REALTIME clock can be set, and only the superuser may do so. If the system securelevel is greater than 1 (see init(8)), the time may only be advanced. This limitation is imposed to prevent a malicious superuser from setting arbitrary time stamps on files. The system time can still be adjusted backwards using the adjtime(2) system call even when the system is secure.
I require nanosecond precision, and adjtime(), as far as I understand, does not offer nanosecond precision. The other problem with adjtime() is that it does not set the clock outright; rather, it speeds the clock up or slows it down slightly until the requested offset has been applied.
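For concreteness, this is roughly the kind of call involved, as a minimal sketch (the real code would compute the target time from the external values): clock_settime() takes a struct timespec, so nanosecond resolution is available, and the call needs superuser privileges (it fails with EPERM otherwise).

    /* Minimal sketch: step CLOCK_REALTIME backward by one second.
     * Requires superuser privileges (CAP_SYS_TIME on Linux). */
    #include <stdio.h>
    #include <time.h>

    int main(void)
    {
        struct timespec now;

        if (clock_gettime(CLOCK_REALTIME, &now) == -1) {
            perror("clock_gettime");
            return 1;
        }

        now.tv_sec -= 1;                 /* one second into the past */

        if (clock_settime(CLOCK_REALTIME, &now) == -1) {
            perror("clock_settime");     /* fails here when setting the time backward */
            return 1;
        }
        return 0;
    }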
I've done some reading on init(8), but I'm not sure how to lower the securelevel. Frankly, I'd rather not be forced to do this; however, if there's no other way, I'm willing to try it.
Thanks in advance
Update 1
I started looking into altering securelevel and now I'm not even sure that it's something that can be done on Ubuntu. Around the web I have come across mentions of editing /etc/init/rc-sysinit.conf, /etc/init/rc.conf, or /etc/sysctl.conf, and again I'm not sure what needs to be added in order to lower the securelevel, if in fact this is something that can be done, especially since I could not find an 'rc.securelevel' file.

Related

adjtime(x) adjusts time very slowly

I'm trying to use adjtime() to correct the clock (which is, for example, 15 minutes in the future) by supplying the corresponding offset to the function. But, as I understand it, adjtime() applies the correction very slowly, so when I call the function and then check the time, it remains the same.
How can I make small adjustments (e.g. 1 s to 1 hr) so that the effect appears immediately?
Thanks.
The whole point of adjtime is to apply the adjustment gradually and continuously, in a way that does not break running applications that expect time to be continuous (and, more importantly, monotonic). If you use it correctly (e.g. via ntp), your clock should never drift far enough to require large adjustments like 15 minutes.
If you really do want to do a one-time large adjustment, use clock_settime. Just be aware that it could cause running applications to misbehave.
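To make the gradual behaviour concrete, here is a minimal adjtime() sketch (Linux/glibc assumed; the call needs superuser privileges and _DEFAULT_SOURCE). It returns immediately, the kernel then slews the clock in the background, and the second argument reports how much of any previous adjustment is still pending:

    /* Minimal sketch: ask the kernel to slew the clock forward by 2 s.
     * The correction is applied gradually; returning from adjtime()
     * does not mean the clock has already moved. */
    #define _DEFAULT_SOURCE
    #include <stdio.h>
    #include <sys/time.h>

    int main(void)
    {
        struct timeval delta = { .tv_sec = 2, .tv_usec = 0 };
        struct timeval pending;

        if (adjtime(&delta, &pending) == -1) {
            perror("adjtime");           /* EPERM without privilege */
            return 1;
        }

        printf("previous adjustment still pending: %ld.%06ld s\n",
               (long)pending.tv_sec, (long)pending.tv_usec);
        return 0;
    }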

Implementing general timeouts

I'm porting some code from C# to C. In the C# code there are three timers that fire if particular events take too long; they set flags that are checked the next time a thread runs a bit of housekeeping.
The C is pure C, not C++, and will eventually be used both on Linux and on embedded targets, so I can't use any OS-specific facilities: just simple soft timers. I started off with just an "enabled" flag and a due time for each timer, in ms, and when I call the housekeeping function I pass the current ms timer value to it. Then I started thinking about the wraparound issue and decided I wanted the start time as well, so that if the present time isn't between the start time and the due time I know the timer has expired. I also want the default duration in there, so it ends up being worth making a structure to represent a timer, and then functions that work with pointers to these structures. And then it started me thinking I may be reinventing the wheel.
I don't see anything in the standard libraries that looks like this. Am I missing something? Or is this just something that's easier to write than to go looking for? :)
Ta for commenting. That's the way I went; I just wanted to make sure I wasn't wasting work. Yeah, embedded stuff tends to have a timer interrupt, but three is probably asking a bit much and adds hardware dependencies; I'm just passing the current ms timer value to my code, so it doesn't have to care where that value comes from. – Craig Graham
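For reference, a minimal sketch of that kind of timer module (hypothetical names): the caller supplies the current millisecond tick, and using unsigned subtraction for the expiry test makes it safe across tick wraparound without needing a separate start/due comparison.

    #include <stdbool.h>
    #include <stdint.h>

    typedef struct {
        bool     enabled;
        uint32_t start_ms;      /* tick value when the timer was armed */
        uint32_t duration_ms;   /* how long until it fires             */
    } soft_timer;

    static void timer_start(soft_timer *t, uint32_t now_ms, uint32_t duration_ms)
    {
        t->enabled     = true;
        t->start_ms    = now_ms;
        t->duration_ms = duration_ms;
    }

    static void timer_stop(soft_timer *t)
    {
        t->enabled = false;
    }

    /* True exactly once, when the timer is enabled and has run out.
     * Unsigned subtraction handles tick wraparound automatically. */
    static bool timer_expired(soft_timer *t, uint32_t now_ms)
    {
        if (!t->enabled)
            return false;
        if ((uint32_t)(now_ms - t->start_ms) >= t->duration_ms) {
            t->enabled = false;
            return true;
        }
        return false;
    }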

time() and context switching

I am more or less wondering how time() is implemented in the C standard library and what would happen in the situation described below. Although this time is most likely negligible, consider a situation where you have a hard limit on time and no control over the CPU scheduler (assume it is a "good" scheduler for a general-purpose CPU).
Now, if I use time() to measure the execution time of a particular section of code, and subtract this time from some maximum bound to determine some other time-dependent variable, how would this variable be skewed by context switches? I know we could use nice and other tools (e.g. a custom scheduler) to be certain we get full CPU usage when we need it, but I am wondering how this works in general for situations like this and what side effects exist due to the system's choices.
time is supposed to measure wall time. That is, it gives the current time, regardless of how much or how little your process has run.
If you want to measure CPU time, you should use clock instead (though some vendors such as MS implement it incorrectly, so it reports wall time as well).
Of course, there are also other tools to retrieve CPU usage, such as times on Unix-like systems or GetProcessTimes on Windows. Most people find these more useful despite the reduced portability.
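To make the distinction concrete, here is a small sketch (Unix-like system assumed): sleeping advances time() but leaves clock() essentially untouched, because a blocked process consumes no CPU.

    #include <stdio.h>
    #include <time.h>
    #include <unistd.h>

    int main(void)
    {
        time_t  wall_start = time(NULL);
        clock_t cpu_start  = clock();

        sleep(2);                      /* blocked: no CPU consumed */

        time_t  wall_end = time(NULL);
        clock_t cpu_end  = clock();

        printf("wall time elapsed: %.0f s\n", difftime(wall_end, wall_start));
        printf("CPU time consumed: %.3f s\n",
               (double)(cpu_end - cpu_start) / CLOCKS_PER_SEC);
        return 0;
    }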

How to retrieve the current processor time in Linux?

I am using the C language and Linux as my programming platform on an embedded device.
My question is: how do I correctly retrieve the current processor time (tick)? I am using the clock() function from time.h, and I seem to be getting inconsistent values.
Thanks.
The clock() function measures the CPU time consumed by your process. It doesn't increment while your process is sleeping or blocked.
If you want a high resolution clock that advances continually, use clock_gettime(CLOCK_MONOTONIC, ..).
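For example, a minimal sketch of a steadily advancing nanosecond tick built on CLOCK_MONOTONIC (on older glibc you may need to link with -lrt):

    #include <stdint.h>
    #include <stdio.h>
    #include <time.h>

    /* Nanoseconds since some arbitrary but fixed starting point;
     * unaffected by anyone setting the wall-clock time. */
    static uint64_t monotonic_ns(void)
    {
        struct timespec ts;
        clock_gettime(CLOCK_MONOTONIC, &ts);
        return (uint64_t)ts.tv_sec * 1000000000ull + (uint64_t)ts.tv_nsec;
    }

    int main(void)
    {
        uint64_t t0 = monotonic_ns();
        /* ... work to be timed ... */
        uint64_t t1 = monotonic_ns();

        printf("elapsed: %llu ns\n", (unsigned long long)(t1 - t0));
        return 0;
    }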
I am not really clear on what, specifically, you are asking. If you want another way to measure the time your process is using, I often use getitimer()/setitimer() with ITIMER_PROF rather than ITIMER_REAL. I find that it can be a bit quirky, however.
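A minimal sketch of that approach, in case it helps (assumptions: Linux, and a 10 ms profiling interval chosen arbitrarily): ITIMER_PROF counts down only while the process is consuming CPU time (user plus system) and delivers SIGPROF when it expires.

    #include <signal.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/time.h>

    static volatile sig_atomic_t prof_ticks;   /* 10 ms units of CPU time */

    static void on_sigprof(int sig)
    {
        (void)sig;
        prof_ticks++;
    }

    int main(void)
    {
        struct sigaction sa;
        memset(&sa, 0, sizeof sa);
        sa.sa_handler = on_sigprof;
        sigaction(SIGPROF, &sa, NULL);

        /* Fire every 10 ms of consumed CPU time, repeating. */
        struct itimerval iv = {
            .it_interval = { .tv_sec = 0, .tv_usec = 10000 },
            .it_value    = { .tv_sec = 0, .tv_usec = 10000 },
        };
        setitimer(ITIMER_PROF, &iv, NULL);

        for (volatile long i = 0; i < 100000000L; i++)
            ;                                  /* burn some CPU */

        printf("CPU time used: roughly %ld ms\n", (long)prof_ticks * 10);
        return 0;
    }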
You may be interested in the LWN article "The trouble with TSC" and the attached comments. While gettimeofday and clock_gettime seem like the right things to reach for, there is a lot to consider: performance may vary, there may be consistency issues between different CPUs in multithreaded or multiprocess programs, and the presence of e.g. NTP can change the clock value (CLOCK_MONOTONIC will not be stepped by NTP, but other clocks may be).
Be careful, and make sure you read up on whatever you pursue to confirm it fits your requirements. If you're lucky, you're on a fixed hardware and library platform, or you can afford some amount of inaccuracy or imprecision.

Are high resolution calls to get the system time wrong by the time the function returns?

Given a C process running at the highest priority that requests the current time, is the time returned adjusted for the time the call takes to return to user space? Is it already out of date when you get it? As a measurement, timing the execution of a known number of assembly instructions in a loop, and asking for the time before and after it, could give you an approximation of the error. I know this must be an issue in scientific applications, although I don't plan to write software involving any super colliders in the near future. I have read a few articles on the subject, but they do not indicate that any correction is made to make the time given to you slightly ahead of the time the system actually read. Should I lose sleep over other things?
Yes, they are almost definitely "wrong".
For Windows, the timing functions do not take into account the time it takes to transition back to user mode. Even if that were taken into account, it couldn't correct for the case where your code hits a page fault, gets swapped out, etc., between the function returning and your code capturing the return value.
In general, when timing things you should snap a start and an end time around a large number of iterations to weed out these sort of uncertainties.
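Something like this minimal sketch (CLOCK_MONOTONIC and an arbitrary iteration count assumed): the fixed per-call overhead, mode switches and all, is amortized across the iterations and mostly cancels out of the per-iteration estimate.

    #include <stdio.h>
    #include <time.h>

    #define ITERATIONS 1000000L

    static void work_under_test(void)
    {
        /* ... the code being measured ... */
    }

    int main(void)
    {
        struct timespec start, end;

        clock_gettime(CLOCK_MONOTONIC, &start);
        for (long i = 0; i < ITERATIONS; i++)
            work_under_test();
        clock_gettime(CLOCK_MONOTONIC, &end);

        double total_ns = (end.tv_sec - start.tv_sec) * 1e9
                        + (end.tv_nsec - start.tv_nsec);
        printf("average per iteration: %.1f ns\n", total_ns / ITERATIONS);
        return 0;
    }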
No, you should not lose sleep over this. No amount of adjustment or other software trickery will yield perfect results on a system with a pipelined processor with multi-layered memory access running a multi-tasking operating system with memory management, devices, interrupt handlers... Not even if your process has the highest priority.
Plus, taking the difference of two such times will cancel out the constant overhead, anyway.
Edit: I mean yes, you should lose sleep over other things :).
Yes, the answer you get will be off by a certain (smallish) amount; I have never heard of a timer function compensating for the average return time, because such a thing is nearly impossible to predict well. Such things are usually implemented by simply reading a register in the hardware and returning the value, or a version of it scaled to the appropriate timescale.
That said, I wouldn't lose sleep over this. The accepted way of keeping this overhead from affecting your measurements in any significant way is not to use these timers for short events. Usually, you will time several hundred, thousand, or million executions of the same thing, and divide by the number of executions to estimate the average time. Such a thing is usually more useful than timing a single instance, as it takes into account average cache behavior, OS effects, and so forth.
Most real-world uses of high-resolution timers are for profiling, where the time is read once at START and once more at FINISH. In most cases almost the same amount of delay is involved in both the START and FINISH readings, and so it works out fine.
Now, for nuclear reactors, Windows or any other general-purpose operating system with generic timing functions may not be suitable. I would guess they use real-time operating systems, which might give more accurate time values than desktop operating systems.
