A timer is used for CPU protection. What mechanism does the PC use to compute the actual current time?
Almost all modern computers have a built-in real-time clock (RTC), which keeps rough track of the time even while the computer is off. The computer can simply read the RTC to get the current time.
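For a concrete picture, here is a minimal sketch of reading the RTC on a PC, assuming the classic MC146818 CMOS register layout (address port 0x70, data port 0x71, seconds in register 0x00); outb()/inb() stand for whatever port-I/O primitives your environment provides, and the helper names are made up for the example:

    #include <stdint.h>

    #define CMOS_ADDR 0x70   /* register select port */
    #define CMOS_DATA 0x71   /* register data port   */

    extern void    outb(uint16_t port, uint8_t val);  /* platform-provided */
    extern uint8_t inb(uint16_t port);                /* platform-provided */

    /* Read one RTC register (0x00 = seconds, 0x02 = minutes, 0x04 = hours). */
    static uint8_t rtc_read(uint8_t reg)
    {
        outb(CMOS_ADDR, reg);
        return inb(CMOS_DATA);
    }

    /* The RTC usually reports values in BCD: 0x42 means 42 seconds. */
    static uint8_t bcd_to_bin(uint8_t v)
    {
        return (uint8_t)((v & 0x0f) + (v >> 4) * 10);
    }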
Based on my understanding, the CPU has a "hardware timer" that fires an interrupt when its interval expires.
The kernel uses this hardware timer to implement process scheduling: if the hardware timer fires an interrupt with, say, number 123, the kernel maps that interrupt number to an interrupt handler that executes the scheduler code (which decides which process to run next).
I have two questions:
Can the kernel set the interval of the hardware timer, or is the interval a fixed number that can't be changed programmatically?
Does the CPU have a dedicated hardware timer for scheduling, or are there many hardware timers from which the kernel can choose whichever it wants to use for scheduling?
Edit: The hardware architecture I am most interested in is the PC, but I would like to know whether other architectures (for example, a mobile phone or a Raspberry Pi) work in a similar way.
Details are hardware-specific (they may differ across motherboards, chipsets, and processors; read about the southbridge). Read about the High Precision Event Timer (HPET) and the APIC.
See also the OSDev wiki, notably its Programmable Interval Timer article.
(So the answer to both questions is usually yes.)
From early on, IBM-compatible PCs had PITs (Programmable Interval Timers): the IBM PC and the IBM PC XT had the Intel 8253; the IBM PC AT introduced the Intel 8254.
From the IBM PC Technical Reference from April 1984, page 1-11:
System Timers
Three programmable timer/counters are used by the system as follows: Channel 0 is a general-purpose timer providing a constant time base for implementing a time-of-day clock, Channel 1 times and requests refresh cycles from the Direct Memory Access (DMA) channel, and Channel 2 supports the tone generation for the speaker. [...]
Channel 0 is exactly the "constant time base," the "interval" you are asking about. And, to answer your first question, it is changeable; it is the Programmable Interval Timer.
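For illustration, here is a minimal sketch of reprogramming channel 0, following the documented 8253/8254 port interface (command port 0x43, channel 0 data port 0x40); the outb() helper stands for whatever port-output primitive your environment provides:

    #include <stdint.h>

    #define PIT_FREQ_HZ  1193182u  /* 8253/8254 input clock, ~1.193182 MHz */
    #define PIT_CMD      0x43      /* mode/command register */
    #define PIT_CH0_DATA 0x40      /* channel 0 data port   */

    extern void outb(uint16_t port, uint8_t val);  /* platform-provided */

    /* Set the channel 0 interrupt rate; hz must be high enough that the
     * 16-bit reload value fits (roughly 19 Hz or more). */
    void pit_set_frequency(uint32_t hz)
    {
        uint16_t reload = (uint16_t)(PIT_FREQ_HZ / hz);

        outb(PIT_CMD, 0x36);                        /* ch 0, lo/hi byte, mode 3 */
        outb(PIT_CH0_DATA, reload & 0xff);          /* low byte first */
        outb(PIT_CH0_DATA, (reload >> 8) & 0xff);   /* then high byte */
    }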
However, the CPU built into the original IBM PC was the Intel 8088, basically an Intel 8086 with an 8-bit data bus. Real Mode was the state of the art back then; Protected Mode was introduced some years later with the Intel 80286, so effective multitasking, let alone preemptive multitasking or multithreading, was of no concern in those days when DOS ruled the market.
Fast-forwarding to the IBM PC AT, the world was blessed with a Protected Mode-capable CPU, the Intel 80286, and the Intel 8254 was introduced, a "[...] superset of the 8253" (from the 8254 PIT datasheet). If you really want an in-depth understanding of the PITs, read the 8253/8254 datasheets linked at the bottom. It might also be worth looking at Linux. Since the latest kernels are far too complicated to understand the relevant parts in a matter of twenty minutes, I suggest you look at Linux 0.01, the very first release. _timer_interrupt in kernel/system_call.s might be interesting, and from there you can go wherever you want.
Regarding your second question: there are multiple timer sources, but only one is suitable for interval timing, namely channel 0. IBM compatibles still comply with the system-timer layout shown above: they retain the same functionality but may add more on top of it, or change how the hardware works and how it is packaged. Nowadays additional timers do exist, such as high-resolution timers, but using them for interval timing instead would break compatibility.
Intel 8253 Datasheet
Intel 8254 Datasheet
IBM PC Technical Reference
IBM PC AT Technical Reference
Can the kernel set the interval of the hardware timer, or is the interval a fixed number that can't be changed programmatically?
Your questions are entirely processor-specific. Some processors have controllable timers; others have timers that fire at fixed intervals. Most processors you are likely to encounter have adjustable timers, however.
Does the CPU have a dedicated hardware timer for scheduling, or are there many hardware timers from which the kernel can choose whichever it wants to use for scheduling?
Some processors have only one timer. Most processors these days have multiple timers.
I have been asked a question, but I am not sure whether I answered it correctly.
"Is it possible to rely only on software timer?"
My answer was "yes, in theory".
But then I added:
"Just relying on hardware timer at the kernel loading (rtc) and then
software only is a mess to manage since we must be able to know
how many cpu cycles each instruction took + eventual cache miss +
branching cost + memory speed and put a counter after each one or
group (good luck with out-of-order cpu).
And do the calculation to derivate the current cpu cycle. That is
insane.
Not talking about the overall performance drop.
The best we could have is a brittle approximation of the time which
become more wrong over time. Even possibly on short laps."
It seems logical to me, but did my reasoning go wrong somewhere?
Thanks
On current processors and hardware (e.g., Intel, AMD, or ARM in laptops, desktops, or tablets) running common operating systems (Linux, Windows, FreeBSD, MacOSX, Android, iOS, ...), processes are scheduled at unpredictable times, so cache behavior is non-deterministic. Hence instruction timing is not reproducible, and you need some hardware time measurement.
A typical desktop or laptop gets hundreds or thousands of interrupts every second, most of them timer-related. Try running cat /proc/interrupts on a Linux machine twice, a few seconds apart.
I guess that even with a single-tasking, MS-DOS-like operating system, you would still get non-deterministic behavior (e.g., induced by ACPI or SMM). On some laptops the processor frequency is throttled according to its temperature, which depends on the CPU load and the ambient temperature...
In practice, you really want to use a timer provided by the operating system. For Linux, read time(7).
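As a small illustration of an OS-provided timer on Linux (one of the interfaces time(7) points to), here is a sketch using timerfd_create(2) to fire every 100 ms; the period is an arbitrary choice for the example:

    #include <stdint.h>
    #include <stdio.h>
    #include <sys/timerfd.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = timerfd_create(CLOCK_MONOTONIC, 0);
        struct itimerspec its = {
            .it_value    = { .tv_sec = 0, .tv_nsec = 100000000L },  /* first expiry  */
            .it_interval = { .tv_sec = 0, .tv_nsec = 100000000L },  /* 100 ms period */
        };
        uint64_t expirations;

        timerfd_settime(fd, 0, &its, NULL);
        for (int i = 0; i < 5; i++) {
            read(fd, &expirations, sizeof expirations);  /* blocks until expiry */
            printf("tick (%llu expirations)\n", (unsigned long long)expirations);
        }
        close(fd);
        return 0;
    }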
So you practically cannot rely on a purely software timer. However, the processor has internal timers, so even in principle you cannot avoid timers on current processors.
You might be able to get some determinism if you could put your hardware in a very controlled (thermostatic) environment and run a very limited piece of software (a freestanding, OS-like thing) sitting entirely in the processor cache. But in practice, current laptop, desktop, or tablet hardware is non-deterministic, and you cannot predict the time a given small machine routine will need.
Timers are extremely useful in interesting (non-trivial) software; see, e.g., J. Pitrat's blog entry CAIA, a sleeping beauty for an interesting perspective. Also look at the many uses of watchdog timers in software (e.g., in the Parma Polyhedra Library).
Read also about Worst Case Execution Time (WCET).
So I would say that even in theory it is not possible to rely on a purely software timer (unless, of course, that software uses the processor's timers, which are hardware circuits). In the previous century (up to the 1980s or 1990s) hardware was much more deterministic, and the number of clock cycles or microseconds needed for each machine instruction was documented (though some instructions, e.g. division, needed a variable amount of time depending on the actual data!).
I have a device that runs VxWorks, and I would like to know how to retrieve the total CPU load. I know about spyLib, but unfortunately it is not supported on my system.
Is there any way to measure/calculate/retrieve the total load on my single-core CPU?
It depends on your system and its design.
One way is to create a task at the lowest priority that is nothing more than an idle loop.
You could then use taskSwitchHookAdd to detect whenever you switch in and out of this idle task and calculate the time delta between the two.
The problem with this is that now your CPU is never idle, since the task at priority 255 consumes all the spare CPU cycles. This may or may not be an issue for your system.
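A minimal sketch of the idea, assuming classic VxWorks 5.x APIs (taskSpawn, taskSwitchHookAdd, taskTcb, tickGet); the names idleTask, switchHook, and cpuLoadInit are made up for the example, and error handling is mostly omitted:

    #include <vxWorks.h>
    #include <taskLib.h>
    #include <taskHookLib.h>
    #include <tickLib.h>

    static WIND_TCB *idleTcb;          /* TCB of the artificial idle task */
    static ULONG     idleEnteredTick;  /* tick when we switched into idle */
    static ULONG     idleTicks;        /* accumulated ticks spent idle    */

    static void idleTask(void)
    {
        for (;;)
            ;                          /* burn all spare CPU cycles */
    }

    static void switchHook(WIND_TCB *pOldTcb, WIND_TCB *pNewTcb)
    {
        if (pNewTcb == idleTcb)
            idleEnteredTick = tickGet();                 /* entering idle */
        else if (pOldTcb == idleTcb)
            idleTicks += tickGet() - idleEnteredTick;    /* leaving idle  */
    }

    STATUS cpuLoadInit(void)
    {
        int tid = taskSpawn("tIdleMeter", 255, 0, 4096,  /* lowest priority */
                            (FUNCPTR)idleTask, 0,0,0,0,0,0,0,0,0,0);
        if (tid == ERROR)
            return ERROR;
        idleTcb = taskTcb(tid);
        return taskSwitchHookAdd((FUNCPTR)switchHook);
    }

    /* Load over an interval: 100 * (1 - deltaIdleTicks / deltaTotalTicks).
     * Tick resolution is coarse; a real implementation might read a
     * free-running hardware timestamp in the hook instead. */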
I am planning to write some software direct to an FPGA network card, to catch incoming customised network packets.
Eventually I believe I will send the data obtained either to the kernel or to a user application. This is for a latency-critical trading research project.
What kind of nanosecond-resolution timing instruments could I use, given the accuracy required and the fact that I am timing the interval between reception at the PCI-E network card and receipt in the kernel?
This will be on Linux, with "driver" code (I may put the user application at this level to cut latency) written in C.
On Linux, access to the CPU clock tick is through the TSC, the equivalent of Windows' QueryPerformanceCounter.
clock_gettime uses HPET if available, which is simple and as good and as reliable as you can get.
If HPET is not available, you have no reliable timer at that scale anyway, so unluckily the resolution of clock_gettime will be worse, but that's just what it is, and there's not much you can do about it.
Any other source, including tsc, is either lower resolution or unreliable or both.
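For completeness, a minimal sketch of timing a code section with clock_gettime(2) and checking the reported resolution with clock_getres(2) (on older glibc you may need to link with -lrt):

    #include <stdio.h>
    #include <time.h>

    int main(void)
    {
        struct timespec res, t0, t1;
        long long ns;

        clock_getres(CLOCK_MONOTONIC, &res);   /* reported resolution */
        printf("resolution: %ld ns\n", res.tv_nsec);

        clock_gettime(CLOCK_MONOTONIC, &t0);
        /* ... section to measure ... */
        clock_gettime(CLOCK_MONOTONIC, &t1);

        ns = (t1.tv_sec - t0.tv_sec) * 1000000000LL
           + (t1.tv_nsec - t0.tv_nsec);
        printf("elapsed: %lld ns\n", ns);
        return 0;
    }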
In software, everything happens in multiples of the system clock. I think you can use any time-measurement function that returns the number of elapsed clock ticks; clock(), for example, should give you enough accuracy.
I have a kernel module I've built that requires at least 1 ms time resolution. I currently use do_gettimeofday(), but I'm concerned that this won't work once I move my module to an embedded device. The device has a 180 MHz MIPS processor, and the default HZ value in the kernel is 100. Thus using jiffies will give me at best 10 ms resolution, which won't cut it.
What I'd like to know is if do_gettimeofday() is based on the timer interrupt (HZ). Can it be guaranteed to provide at least 1 ms of resolution?
Thanks!
ms is not microseconds; it is milliseconds. Without knowing more about your choice of device, no one can answer such an implementation-dependent question as whether gettimeofday is based on the timer interrupt. If you have chosen a device, which your mention of the instruction set and clock speed suggests, why don't you look at the implementation of that particular kernel to find out?
On an embedded device it can't be guaranteed. Since it's MIPS-based, it's probably OK; most MIPS machines have cycle counters. But you're going to have to read the source of that part of the kernel to see what it does on your platform.
Yes, but you need to enable CONFIG_HIGH_RES_TIMERS in your kernel and make sure that your platform registers a clock_event_device. This is the mechanism that exposes high-resolution timers to userspace. You can check the resolution of your timers by calling clock_getres() in userspace.
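As an illustration of what that buys you inside the kernel, here is a minimal sketch of a 1 ms periodic hrtimer, assuming CONFIG_HIGH_RES_TIMERS is enabled and using the long-standing hrtimer API (module boilerplate only, error handling omitted):

    #include <linux/module.h>
    #include <linux/hrtimer.h>
    #include <linux/ktime.h>

    static struct hrtimer my_timer;
    static ktime_t period;

    static enum hrtimer_restart my_timer_fn(struct hrtimer *t)
    {
        /* ... 1 ms periodic work ... */
        hrtimer_forward_now(t, period);     /* re-arm for the next period */
        return HRTIMER_RESTART;
    }

    static int __init my_init(void)
    {
        period = ktime_set(0, 1000000);     /* 1 ms = 1,000,000 ns */
        hrtimer_init(&my_timer, CLOCK_MONOTONIC, HRTIMER_MODE_REL);
        my_timer.function = my_timer_fn;
        hrtimer_start(&my_timer, period, HRTIMER_MODE_REL);
        return 0;
    }

    static void __exit my_exit(void)
    {
        hrtimer_cancel(&my_timer);
    }

    module_init(my_init);
    module_exit(my_exit);
    MODULE_LICENSE("GPL");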