I've read in chapter 7 of 'Linux Device Drivers' (which can be found here) that time can be measured in 'jiffies'. The problem with the stock jiffies variable is that it wraps around quite frequently (especially if CONFIG_HZ is set to 1000).
In my kernel module I'm saving a jiffies value that is set to some time in the future and comparing it at a later time with the current 'jiffies' value. I've already learned that there are macros that take the 32-bit jiffy wraparound into consideration, so to compare two values I'm using this:
if (time_after(jiffies, some_future_jiffies_value))
{
    // we've already passed the saved value
}
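(As far as I understand, these helpers handle the wrap because the difference is computed in unsigned arithmetic and then interpreted as a signed value; roughly, simplified from include/linux/jiffies.h with the typecheck() guards omitted:)

#define time_after(a, b)    ((long)((b) - (a)) < 0)
#define time_before(a, b)   time_after(b, a)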
Here comes my question. Now I want to set 'some_future_jiffies_value' to "now + 10 ms", which can easily be accomplished like this:
some_future_jiffies_value = jiffies + msecs_to_jiffies(10);
Is this correct? What happens if the current jiffies is near MAX_JIFFY_OFFSET and the resulting value of msecs_to_jiffies(10) puts some_future_jiffies_value past that offset? Does it wrap around automatically or should I add some code to check for this? Are there functions that save me from having to deal with this?
Update:
To avoid wraparound issues I've rewritten my sleep loop:
// Sleep for the appropriate time
while (time_after(some_future_jiffies_value, jiffies))
{
    set_current_state(TASK_INTERRUPTIBLE);
    schedule_timeout(1);
}
I assume this is more portable, right?
Update 2:
Thank you very much 'ctuffli' for taking the time to come back to this question and providing some feedback on my comments as well. My kernel driver is working fine now and it is a lot less ugly compared to the situation before you provided me with all these tips. Thanks!
What you are implementing here is essentially msleep_interruptible() (linux/kernel/timer.c)
/**
* msleep_interruptible - sleep waiting for signals
* @msecs: Time in milliseconds to sleep for
*/
unsigned long msleep_interruptible(unsigned int msecs)
This function has the advantage that the delay is specified in milliseconds and the details of jiffies wrapping are hidden internally. Be sure to check the return value, as this call returns the remaining time. Zero means the call slept for the specified number of milliseconds, while a non-zero value indicates the call was interrupted and woke up that many milliseconds early.
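For example, a sketch of how the loop from the question could collapse into a single call (kernel context; the deadline bookkeeping and wrap handling disappear):

unsigned long remaining;

remaining = msleep_interruptible(10);   /* sleep for roughly 10 ms */
if (remaining) {
    /* a signal arrived; we woke up 'remaining' ms early */
}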
With regard to wrapping, see section 6.2.1.2 for a description of jiffies and wrapping. Also, this post tries to describe wrapping in the abstract.
Related
I have the following code:
#include <stdio.h>
#include <time.h>

int main(){
    clock_t timerS;
    int i=1, targetTime=2;
    scanf("%d", &targetTime);
    while(i!=0){
        timerS = clock();
        while((double)((clock() - timerS) / CLOCKS_PER_SEC) < targetTime){
            //do something
        }
        //do another thing but delayed by the given time
        if(targetTime>=0.5)
            targetTime-=0.02;
        else i=0;
    }
    return 0;
}
What I want to do is have a loop which does something for (initially) an inputted number of seconds, and which also does another thing after targetTime seconds have passed.
But after the first iteration, the delay with which these operations are made should change (more specifically, decrease by 0.02 seconds each time in this case).
An example would be getting multiple inputs from the user for 2 seconds, and displaying all the inputs made in those 2 seconds afterwards.
The first problem is:
If the initially given time is smaller than 1 second (for example 0.6), the other thing isn't delayed by 0.6 seconds, but is done immediately.
The second problem is:
Similar to the first: if I subtract 0.02 seconds (in this case) from targetTime, it again does the other thing immediately and not in targetTime - 0.02 seconds as I intend it to.
I'm new to this "clock" and "time" topic in C so I guess I'm doing something wrong regarding how these operations should be done. Also, please don't give an overly-complicated explanation/solution because of the above-mentioned reason.
Thanks!
Don't use the clock(3) call for this, as on POSIX systems it measures processor time rather than elapsed wall-clock time, and it has been superseded by machine-independent replacements.
You can use, if your system supports it, clock_gettime(2), which will give you up to nanosecond precision (depending on the platform, but at least on Linux on Intel architectures this is almost guaranteed) or, if you cannot use it, at least you'll have gettimeofday(2), which is derived from BSD systems and provides you with a clock with microsecond resolution.
If you want to stop your program for some delay, you also have sleep(3) (second based), usleep(3) (microsecond based) or even nanosleep(2) (nanosecond based).
Anyway, none of these calls has a tick that is based on the system heartbeat, and their resolution is uniform and not system dependent.
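For instance, a minimal sketch of timing a delay with clock_gettime() (assuming CLOCK_MONOTONIC is available; on older glibc you may need to link with -lrt):

#define _POSIX_C_SOURCE 199309L
#include <stdio.h>
#include <time.h>

int main(void){
    struct timespec start, now;
    double elapsed = 0.0;

    clock_gettime(CLOCK_MONOTONIC, &start);
    while (elapsed < 0.6) {                  /* wait 0.6 seconds */
        clock_gettime(CLOCK_MONOTONIC, &now);
        elapsed = (now.tv_sec - start.tv_sec)
                + (now.tv_nsec - start.tv_nsec) / 1e9;
    }
    printf("waited %.3f s\n", elapsed);
    return 0;
}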
I mistakenly declared targetTime as int instead of double. Changing it to double solves the issue easily. Sorry!
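For completeness, a sketch of the corrected fragment (I've also moved the cast so the division itself happens in floating point; otherwise the elapsed time is truncated to whole seconds — that part goes beyond the one-line fix above):

double targetTime = 2.0;                    /* was: int targetTime = 2;       */
scanf("%lf", &targetTime);                  /* %lf instead of %d for a double */
...
while ((double)(clock() - timerS) / CLOCKS_PER_SEC < targetTime) {
    //do something
}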
Does anybody have an idea how the time() function works?
I was looking online for implementations out of pure curiosity, but could only find the NetBSD implementation of difftime()
Also is there anything that describes the process of calculating the time (non system specific or system specific)?
Note: I am not looking for answers on how to use time() but how it actually works behind the scenes when I call it.
Somewhere deep down in your computer, typically in hardware, there's a clock oscillator running at some frequency f. For the purposes of this example let's say that it's operating at 1 kHz, or 1,000 cycles per second. Things are set up so that every cycle of the oscillator triggers a CPU interrupt.
There's also a low-level counter c. Every time the clock interrupt is triggered, the OS increments the counter. For the moment we'll imagine it increments it by 1, although this won't usually be the case in practice.
The OS also checks the value of the counter as it's incremented. When c equals 1,000, this means that exactly one second has gone by. At this point the OS does two things:
It increments another counter variable, the one that's keeping track of the actual time of day in seconds. We'll call this other counter t. (It's going to be a big number, so it'll be at least a 32-bit variable, or these days, 64 bits if possible.)
It resets c to 0.
Finally, when you call time(), the kernel simply returns you the current value of t. It's pretty simple, really!
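In other words, conceptually (a toy sketch only, with made-up names, not real kernel code):

volatile unsigned long c = 0;   /* low-level tick counter                */
volatile long long     t = 0;   /* seconds since some starting point     */

void timer_interrupt(void)      /* hypothetical handler, fired at 1 kHz  */
{
    c++;
    if (c == 1000) {            /* 1,000 ticks == one second has passed  */
        t++;
        c = 0;
    }
}

long long toy_time(void)        /* what time() boils down to             */
{
    return t;
}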
Well, actually, it's somewhat more complicated than that. I've overlooked the details of how the value of the counter t gets set up initially, and how the OS makes sure that the oscillator is running at the right frequency, and a few other things.
When the OS boots, and if it's on a PC or workstation or mainframe or other "big" computer, it's typically got a battery-backed real-time clock it can use to set the initial value of t from. (If the CPU we're talking about is an embedded microcontroller, on the other hand, it may not have any kind of clock, and all of this is moot, and time() is not implemented at all.)
Also, when you (as root) call settimeofday, you're basically just supplying a value to jam into the kernel's t counter.
Also, of course, on a networked system, something like NTP is busy keeping the system's time up-to-date.
NTP can do that in two ways:
If it notices that t is way off, it can just set it to a new value, more or less as settimeofday() does.
If it notices that t is just a little bit off, or if it notices that the underlying oscillator isn't counting at quite the right frequency, it can try to adjust that frequency.
Adjusting the frequency sounds straightforward enough, but the details can get pretty complicated. You can imagine that the frequency f of the underlying oscillator is adjusted slightly. Or, you can imagine that f is left the same, but when the time interrupt fires, the numeric increment that's added to c is adjusted slightly.
In particular, it won't usually be the case that the kernel adds 1 to c on each timer interrupt, and that when c reaches 1,000, that's the indication that one second has gone by. It's more likely that the kernel will add a number like 1,000,000 to c on each timer interrupt, meaning that it will wait until c has reached 1,000,000,000 before deciding that one second has gone by. That way, the kernel can make more fine-grained adjustments to the clock rate: if things are running just a little slow, it can change its mind, and add 1,000,001 to c on each timer interrupt, and this will make things run just a tiny bit faster. (Something like one part per million, as you can pretty easily see.)
One more thing I overlooked is that time() isn't the only way of asking what the system time is. You can also make calls like gettimeofday(), which gives you a sub-second time stamp represented as seconds+microseconds (struct timeval), or clock_gettime(), which gives you a sub-second time stamp represented as seconds+nanoseconds (struct timespec). How are those implemented? Well, instead of just reading out the value of t, the kernel can also peek at c to see how far into the next second it is. In particular, if c is counting up to 1,000,000,000, then the kernel can give you microseconds by dividing c by 1,000, and it can give you nanoseconds by returning c directly.
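Continuing the toy sketch from above, if c now counts nanoseconds up to 1,000,000,000 per second, the sub-second readouts are just:

struct toy_timespec { long long tv_sec; long tv_nsec; };
struct toy_timeval  { long long tv_sec; long tv_usec; };

struct toy_timespec toy_clock_gettime(void)
{
    struct toy_timespec ts = { t, c };        /* nanoseconds come straight from c */
    return ts;
}

struct toy_timeval toy_gettimeofday(void)
{
    struct toy_timeval tv = { t, c / 1000 };  /* microseconds = c / 1,000         */
    return tv;
}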
Two footnotes:
(1) If we've adjusted the frequency, and we're adding 1,000,001 to c on each low-level timer tick, c will usually not hit 1,000,000,000 exactly, so the test when deciding whether to increment t will have to involve a greater-than-or-equal-to condition, and we'll have to subtract 1,000,000,000 from c, not just clear it. In other words, the code will look something like
if(c >= 1000000000) {
    t++;
    c -= 1000000000;
}
(2) Since time() and gettimeofday() are two of the simplest system calls around, and since programs calling them may (by definition) be particularly sensitive to any latency due to system call overhead, these are the calls that are most likely to be implemented based on the vDSO mechanism, if it's in use.
The C specification does not say anything about how library functions work. It only states the observable behavior. The internal workings is both compiler and platform dependent.
Synopsis
#include <time.h>
time_t time(time_t *timer);
Description
The time function determines the current calendar time. The encoding of the value is unspecified.
Returns
The time function returns the implementation's best approximation to the current calendar time. The value (time_t)(-1) is returned if the calendar time is not available. If timer is not a null pointer, the return value is also assigned to the object it points to.
https://port70.net/~nsz/c/c11/n1570.html
Here is one implementation (note that this particular file is glibc's generic stub, used when a port provides no way to read the clock; it simply fails with ENOSYS, and real ports override it):
time_t
time (timer)
     time_t *timer;
{
  __set_errno (ENOSYS);

  if (timer != NULL)
    *timer = (time_t) -1;
  return (time_t) -1;
}
https://github.com/lattera/glibc/blob/master/time/time.c
I am currently trying to talk to a piece of hardware in userspace (underneath the hood, everything is using the spidev kernel driver, but that's a different story).
The hardware will tell me that a command has been completed by indicating so with a special value in a register, that I am reading from. The hardware also has a requirement to get back to me in a certain time, otherwise the command has failed. Different commands take different times.
As a result, I am implementing a way to set a timeout and then check for that timeout using clock_gettime(). In my "set" function, I take the current time and add the time interval I should wait for (usually this is anywhere from a few ms to a couple of seconds). I then store this value for safekeeping until later.
In my "check" function, I once again, get the current time and then compare it against the time I have saved. This seems to work as I had hoped.
Given my use case, should I be using CLOCK_MONOTONIC or CLOCK_MONOTONIC_RAW? I'm assuming CLOCK_MONOTONIC_RAW is better suited, since I have short intervals that I am checking. I am worried that such a short interval might represent a system-wide outlier, in which NTP was doing a lot of adjusting. Note that my target system is only Linux kernels 4.4 and newer.
Thanks in advance for the help.
Edited to add: given my use case, I need "wall clock" time, not CPU time. That is, I am checking to see if the hardware has responded in some wall clock time interval.
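For reference, here is a minimal sketch of the set/check pattern I described, using CLOCK_MONOTONIC (the function names are made up for illustration, not from my actual driver code):

#include <stdbool.h>
#include <time.h>

static struct timespec deadline;

void timeout_set(long interval_ms)
{
    clock_gettime(CLOCK_MONOTONIC, &deadline);
    deadline.tv_sec  += interval_ms / 1000;
    deadline.tv_nsec += (interval_ms % 1000) * 1000000L;
    if (deadline.tv_nsec >= 1000000000L) {   /* carry into whole seconds */
        deadline.tv_sec++;
        deadline.tv_nsec -= 1000000000L;
    }
}

bool timeout_expired(void)
{
    struct timespec now;

    clock_gettime(CLOCK_MONOTONIC, &now);
    return now.tv_sec > deadline.tv_sec ||
           (now.tv_sec == deadline.tv_sec && now.tv_nsec >= deadline.tv_nsec);
}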
References:
Rutgers Course Notes
What is the difference between CLOCK_MONOTONIC & CLOCK_MONOTONIC_RAW?
Elapsed Time in C Tutorial
Can someone please tell me how this function works? I'm using it in code and have an idea how it works, but I'm not 100% sure exactly. I understand the concept of an input variable N counting down, but how the heck does it work? Also, if I am using it repeatedly in my main() for different delays (different inputs for N), then do I have to "zero" the function if I use it somewhere else?
Reference: MILLISEC is a constant defined by Fcy/10000, or system clock/10000.
Thanks in advance.
// DelayNmSec() gives a 1 ms to 65.5 seconds delay
/* Note that FCY is used in the computation. Please make the necessary
   changes (PLLx4 or PLLx8 etc.) to compute the right FCY as in the define
   statement above. */
void DelayNmSec(unsigned int N)
{
    unsigned int j;

    while(N--)
        for(j=0; j < MILLISEC; j++);
}
This is referred to as busy waiting, a concept that just burns some CPU cycles thus "waiting" by keeping the CPU "busy" doing empty loops. You don't need to reset the function, it will do the same if called repeatedly.
If you call it with N=3, it will repeat the while loop 3 times, every time counting with j from 0 to MILLISEC, which is supposedly a constant that depends on the CPU clock.
The original author of the code has timed it and looked at the generated assembly to get the exact number of instructions executed per millisecond, and has configured the constant MILLISEC to match that for the busy-wait for loop.
The input parameter N is then simply the number of milliseconds the caller wants to wait, i.e. the number of times the for loop is executed.
The code will break if
it is used on a different or faster microcontroller (depending on how Fcy is maintained), or
the optimization level of the C compiler is changed, or
the C compiler version is changed (as it may generate different code),
so, if the guy who wrote it is clever, there may be a calibration program which defines and configures the MILLISEC constant.
This is what is known as a busy wait in which the time taken for a particular computation is used as a counter to cause a delay.
This approach does have problems in that the computation needs to be adjusted for processors with different speeds. Old games used this approach, and I remember a simulation using such a busy wait, targeting an old 8086-class processor, to make an animation move smoothly. When the game was run on a Pentium PC, instead of the rocket majestically rising up the screen over several seconds, the entire animation flashed before your eyes so fast that it was difficult to see what it was.
This sort of busy wait means that in the thread running, the thread is sitting in a computation loop counting down for the number of milliseconds. The result is that the thread does not do anything else other than counting down.
If the operating system is not a preemptive multi-tasking OS, then nothing else will run until the count down completes which may cause problems in other threads and tasks.
If the operating system is preemptive multi-tasking the resulting delays will have a variability as control is switched to some other thread for some period of time before switching back.
This approach is normally used for small pieces of software on dedicated processors where a computation has a known amount of time and where having the processor dedicated to the countdown does not impact other parts of the software. An example might be a small sensor that performs a reading to collect a data sample then does this kind of busy loop before doing the next read to collect the next data sample.
I am writing a Gif animator in C.
I have two threads running in parallel. The first allows the user to alter the speed of the animation. The second draws the current frame, and then calls Sleep(Constant * 100 / CurrentSpeed), where CurrentSpeed is a percentage amount, ranging from 1 to 200.
The problem is that if you quickly change the speed from 100%, to 1%, and then back to the first, the second thread will execute the following:
Sleep(Constant * 100)
This will draw frame A, wait many seconds (even though the speed was changed by the user), and only then draw B and the following frames at the default speed.
It seems to me that Sleep is a poor choice of mine in this case. What can I do to solve this problem?
EDIT:
The code I currently have (Simplified):
while (1) {
    InvalidateRect(Handle, &ImageRect, FALSE);
    if (shouldDispose) {
        break;
    }
    if (DelayTime)
        Sleep(DelayTime * 100 / CurrentSpeed);
    SelectNextImage();
}
Instead of calling Sleep() with the desired frame delay, why don't you call it with a constant interval of 1 ms, for example, and use a variable as a counter?
For example, let C be a global variable (counter) which is loaded with a number of 'ticks' of 1ms. Then, write the loop:
while(1) { //Main loop of the player thread
    if (C > 0) C--;
    if (C == 0) nextframe(); //if counter reaches 0, load next frame.
    Sleep(1);
}
The control thread would load C with a number of 1 ms ticks (i.e. the desired frame delay), and the player thread will never be blocked for more than 1 ms at a time. The use of 1 ms as the base rate is arbitrary: use the largest tick that still supports your maximum frame rate, in order to load the CPU as little as possible.
EDIT
After some hot comments (arguing is good, after all), I'd like to point out that this solution is sub-optimal, i.e., it doesn't use any OS mechanism for signaling threads or any other API for preventing the thread from wasting CPU time. The solution shown here is generic: it may be used on any system (even on embedded systems without any running OS). But above all, it is based on the original code posted by the user who asked the question: "using Sleep(), how can I achieve my purpose?" I gave my humble answer to that. Anyway, I encourage other people to write sample code using the appropriate API for achieving the same goal. With no hard feelings, special thanks to Martin James.
Find a synchronization API on your OS that allows a wait with a timeout, e.g. WaitForSingleObject() on Windows. If you want to change the delay, change the timeout and signal the event upon which the WFSO is waiting to make it return 'early' and restart the wait with the new timeout.
Polling with Sleep(1) loops is rarely justifiable.
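For example, a sketch of that idea applied to the loop from the question (wakeEvent is a new, hypothetical auto-reset event; the other names come from the simplified code above):

HANDLE wakeEvent;   /* created once during init:
                       wakeEvent = CreateEvent(NULL, FALSE, FALSE, NULL); */

/* Player thread */
while (1) {
    DWORD r;

    InvalidateRect(Handle, &ImageRect, FALSE);
    if (shouldDispose)
        break;
    /* Wait for the frame delay, but wake up early if signalled. */
    r = WaitForSingleObject(wakeEvent, DelayTime * 100 / CurrentSpeed);
    if (r == WAIT_OBJECT_0)
        continue;               /* speed changed: redraw and recompute the timeout */
    SelectNextImage();          /* full delay elapsed: advance to the next frame   */
}

/* Control thread, after the user changes the speed:
       CurrentSpeed = newSpeed;
       SetEvent(wakeEvent);                            */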
Create a waitable timer. When you set the timer, you can specify a callback function that will run in the setting thread's context. This means you can do it with two threads, but it actually works just fine with only a single thread as well.
The main advantage of a waitable timer is, however, that it is more accurate and more reliable than Sleep. A timer is conceptually quite different from Sleep insofar as Sleep only gives up control; when the time is up, the scheduler merely marks the thread as ready to run, the next time the scheduler happens to run anyway. It doesn't do anything beyond that, which means that the thread will eventually be scheduled to run again, like any other thread that is ready.
A thread that is waiting on a timer (or other waitable object) causes the scheduler to run when the timer is up and has its priority temporarily boosted. It therefore runs not only more reliably and more closely to the desired time, but also earlier than all other threads with the same base priority. Which does not give a realtime guarantee but at least gives a sort of "soft guarantee".
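A minimal sketch of the waitable-timer variant (no callback, just waiting on the timer handle; the one-second due time is only an example):

#include <windows.h>

int main(void)
{
    HANDLE timer = CreateWaitableTimer(NULL, TRUE, NULL);  /* manual-reset timer */
    LARGE_INTEGER due;

    due.QuadPart = -10000000LL;            /* relative due time: 1 s in 100 ns units */
    SetWaitableTimer(timer, &due, 0, NULL, NULL, FALSE);

    WaitForSingleObject(timer, INFINITE);  /* returns close to the requested time    */
    CloseHandle(timer);
    return 0;
}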
If you still want to use Sleep, use SleepEx instead which you can alert, either by queueing an APC, or by calling the undocumented NtAlertThread function.
In any case, Sleep is troublesome not only because it is unreliable, but also because it is based on the granularity of the system-wide timer. You can, of course, set that to as low as 1 ms (or less on some systems), but doing so will cause a lot of unnecessary interrupts.