I'm trying to hijack the timer interrupt. A colleague told me that interrupt 0x08 in the IDT (Interrupt Descriptor Table) is the timer. Of course I checked, and saw two possible answers: this, which says that 8 is the real-time clock timer, and this, saying it's the Double Fault exception. I decided to believe him and not waste time checking further. After finally getting control over the IDT and replacing interrupt 8, nothing is happening.
So what is going on?
Did this interrupt change its purpose over time from timer to Double Fault?
Does this interrupt have different purposes on ARM/Intel/etc.?
My code is a kernel module that hijacks interrupt 8 and simply does a printk every time the interrupt arrives. I ran it for about 25 minutes - no output in dmesg.
In case it matters: I run Linux Mint with kernel 3.8 on a VM. The host has an Intel i5.
You can find out which interrupt is the timer by using this command: cat /proc/interrupts
Following is a sample output on a 6 core machine:
cat /proc/interrupts | egrep "timer|rtc"
  0:  320745126          0          0          0          0          0   IO-APIC-edge   timer
  8:          1          0          0          0          0          0   IO-APIC-edge   rtc0
LOC:  115447297  304097630  194770704  212244137   63864376   69243268   Local timer interrupts
Note that timer and rtc are different. Also, there has been only one rtc interrupt so far (versus lots of timer interrupts). Here is the uptime output:
uptime
14:14:20 up 13 days, 3:58, 9 users, load average: 0.47, 1.68, 1.38
I think you should check this before you hack the IDT. Also, you probably want to hook interrupt 0 (the timer), not 8 (the RTC).
You have found two descriptions for the same number because in protected mode the vector range 0x00 - 0x1F is reserved for the CPU's own exceptions (vector 8 is the Double Fault), while the legacy PIC delivers its IRQs starting at vector 8 by default, so the timer IRQ collides with the exception range.
You have to remap the IRQs to another vector range without conflicts; this article explains it with all the source code needed:
https://alfaexploit.com/readArticle/416
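For reference, here is a minimal sketch of the classic 8259 PIC remap sequence like the one the article walks through (bare-metal C; the vector bases 0x20/0x28 are the conventional choice, and outb is a hand-rolled port-output helper, not a library call):

#include <stdint.h>

/* Write one byte to an x86 I/O port (assumes GCC-style inline asm). */
static inline void outb(uint16_t port, uint8_t val)
{
    __asm__ volatile ("outb %0, %1" : : "a"(val), "Nd"(port));
}

/* Remap the two 8259 PICs so IRQs 0-7 land on vectors 0x20-0x27 and
 * IRQs 8-15 on 0x28-0x2F, clearing the clash with the CPU exception
 * vectors 0x00-0x1F (where vector 8 is the Double Fault). */
static void pic_remap(void)
{
    outb(0x20, 0x11);  /* ICW1: master PIC, begin init, expect ICW4 */
    outb(0xA0, 0x11);  /* ICW1: slave PIC,  begin init, expect ICW4 */
    outb(0x21, 0x20);  /* ICW2: master vector base = 0x20           */
    outb(0xA1, 0x28);  /* ICW2: slave vector base  = 0x28           */
    outb(0x21, 0x04);  /* ICW3: slave attached to master IRQ 2      */
    outb(0xA1, 0x02);  /* ICW3: slave cascade identity              */
    outb(0x21, 0x01);  /* ICW4: 8086/88 mode                        */
    outb(0xA1, 0x01);
    outb(0x21, 0x00);  /* unmask all master IRQs                    */
    outb(0xA1, 0x00);  /* unmask all slave IRQs                     */
}

After this, the PIT timer (IRQ 0) arrives at vector 0x20. Note that a running Linux kernel has already remapped the PIC at boot, which is another reason a module hooking vector 8 sees nothing from the timer.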
Related
I'm having a problem with a simple periodic Clock function running on TI's AM572x EVM (main A15 core). The Clock is set up to use any available timer (Timer_ANY), which I assume is either TIMER2 or TIMER10, as I've seen them associated with A15_0 in a GEL script. When I pause and resume emulation with the XDS200 debugger (under CCS9), I see the Clock Swi executing many more times than it should, preceded by the Hwi posting the Clock Swi, as shown here:
Many Hwi executions following unpause
Many Swi executions following many Hwi executions
I've checked the TIOCP_CFG EMUFREE bits for TIMER2 and TIMER10 and they're set to 0, indicating that the timers should be frozen in emulation mode. However, I also always see 0 in the TCRR register for both timers, which I understand to be the counter register. This suggests that these timers aren't actually counting at all and that a different timer is being used for the TI-RTOS Clock, but I'm not sure which timer that would be or how to configure it to freeze during a debugger pause.
Does anyone have any insight into how to properly freeze TI-RTOS Clocks while debugging?
Why does each of the following register writes cause my program to halt?
slcr.DDR_CLK_CTRL[DDR_2XCLKACT] = 0
slcr.DDR_CLK_CTRL[DDR_3XCLKACT] = 0
slcr.DDR_PLL_CTRL[PLL_BYPASS_FORCE] = 1
slcr.DDR_PLL_CTRL[PLL_PWRDWN] = 1
I'm new to embedded development, and I'm trying to implement some bare-bones C code to put the Zynq-7000 into sleep mode per page 674 of the Technical Reference Manual.
All of the sleep-mode steps execute without issue except the ones listed above, all of which relate to DDR and all of which halt execution. Leaving the DDR steps out, the code works, but I'm not sure I'm reaching the lowest power state.
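In C, those steps look roughly like this (a sketch; the base address, register offsets, bit positions, and unlock key are my reading of the TRM's SLCR register map, so treat them as assumptions to verify):

#include <stdint.h>

/* Assumed SLCR map (verify against the TRM): base 0xF8000000,
 * SLCR_UNLOCK at 0x008, DDR_PLL_CTRL at 0x104, DDR_CLK_CTRL at 0x124. */
#define SLCR_BASE     0xF8000000u
#define SLCR_UNLOCK   (*(volatile uint32_t *)(SLCR_BASE + 0x008))
#define DDR_PLL_CTRL  (*(volatile uint32_t *)(SLCR_BASE + 0x104))
#define DDR_CLK_CTRL  (*(volatile uint32_t *)(SLCR_BASE + 0x124))

/* Assumed bit positions, also from the TRM register descriptions. */
#define DDR_3XCLKACT      (1u << 0)
#define DDR_2XCLKACT      (1u << 1)
#define PLL_PWRDWN        (1u << 1)
#define PLL_BYPASS_FORCE  (1u << 4)

/* The DDR-related sleep steps that halt execution. */
static void ddr_sleep_steps(void)
{
    SLCR_UNLOCK = 0xDF0Du;             /* unlock SLCR writes       */
    DDR_CLK_CTRL &= ~DDR_2XCLKACT;     /* stop the DDR 2x clock    */
    DDR_CLK_CTRL &= ~DDR_3XCLKACT;     /* stop the DDR 3x clock    */
    DDR_PLL_CTRL |= PLL_BYPASS_FORCE;  /* force-bypass the DDR PLL */
    DDR_PLL_CTRL |= PLL_PWRDWN;        /* power the PLL down       */
}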
I'm using the on-board button of my Cora Z7-07S development board as an interrupt source. The interrupt handler executes the power-down function on button-down and the wake function on button-up.
I followed this tutorial (video here) on my Cora Z7-07S to get the interrupt functioning. Does using the AXI GPIOs as an interrupt source create some dependence on DDR? Is there a way to set up the PL to avoid this and still allow a GPIO interrupt?
TL;DR: Using a real-time Linux kernel with NO_HZ_FULL, I need to isolate a process in order to get deterministic results, but /proc/interrupts tells me there are still local timer interrupts (among others). How do I disable them?
Long version:
I want to make sure my program is not being interrupted, so I'm trying to use a real-time Linux kernel.
I'm using the real-time version of Arch Linux (linux-rt on the AUR), and I modified the kernel configuration to select the following options:
CONFIG_NO_HZ_FULL=y
CONFIG_NO_HZ_FULL_ALL=y
CONFIG_RCU_NOCB_CPU=y
CONFIG_RCU_NOCB_CPU_ALL=y
Then I rebooted my computer into this real-time kernel with the following boot parameters:
nmi_watchdog=0
rcu_nocbs=1
nohz_full=1
isolcpus=1
I also disabled the following options in the BIOS:
C-states
Intel SpeedStep
Turbo mode
VT-x
VT-d
Hyper-Threading
My CPU (i7-6700 @ 3.40 GHz) has 4 cores (8 logical CPUs with Hyper-Threading).
I can see CPU0, CPU1, CPU2 and CPU3 in the /proc/interrupts file.
CPU1 is isolated by the isolcpus kernel parameter, and I want to disable the local timer interrupts on this CPU.
I thought a real-time kernel with CONFIG_NO_HZ_FULL plus CPU isolation (isolcpus) would be enough to do it, and I tried to verify by running these commands:
cat /proc/interrupts | grep LOC > ~/tmp/log/overload_cpu1
taskset -c 1 ./overload
cat /proc/interrupts | grep LOC >> ~/tmp/log/overload_cpu1
where the overload process is:
overload.c:
/* Busy-loop to keep one CPU fully utilized; volatile stops the
   compiler from optimizing the empty inner loop away. */
int main(void)
{
    for (int i = 0; i < 100; ++i)
        for (volatile int j = 0; j < 100000000; ++j)
            ;
    return 0;
}
The file overload_cpu1 contains the result:
LOC:  234328     488   12091   11299   Local timer interrupts
LOC:  239072     651   12215   11323   Local timer interrupts
meaning 651 - 488 = 163 interrupts from the local timer, not 0...
For comparison, I ran the same experiment but changed the core my overload process runs on (still watching the interrupts on CPU1):
taskset -c 0 : 8 interrupts
taskset -c 1 : 163 interrupts
taskset -c 2 : 7 interrupts
taskset -c 3 : 8 interrupts
One of my questions is: why are there not 0 interrupts? And why is the number of interrupts bigger when my process runs on CPU1? (I thought NO_HZ_FULL would prevent interrupts when my process is alone: "The CONFIG_NO_HZ_FULL=y Kconfig option causes the kernel to avoid sending scheduling-clock interrupts to CPUs with a single runnable task" (https://www.kernel.org/doc/Documentation/timers/NO_HZ.txt).)
Maybe the explanation is that other processes are running on CPU1.
I checked using the ps command:
CLS  CPUID  RTPRIO  PRI   NI  CMD              PID
TS       1       -   19    0  [cpuhp/1]         18
FF       1      99  139    -  [migration/1]     20
TS       1       -   19    0  [rcuc/1]          21
FF       1       1   41    -  [ktimersoftd/1]   22
TS       1       -   19    0  [ksoftirqd/1]     23
TS       1       -   19    0  [kworker/1:0]     24
TS       1       -   39  -20  [kworker/1:0H]    25
FF       1       1   41    -  [posixcputmr/1]   28
TS       1       -   19    0  [kworker/1:1]    247
TS       1       -   39  -20  [kworker/1:1H]   501
As you can see, there are threads on CPU1.
Is it possible to disable these processes? I guess it must be, because otherwise NO_HZ_FULL would never work, right?
Tasks in class TS don't bother me, because they have lower priority than SCHED_FIFO tasks and I can set that policy for my program.
The same goes for tasks in class FF with priority lower than 99.
However, you can see migration/1, which is SCHED_FIFO with priority 99.
Maybe these processes cause interrupts when they run. That would explain the few interrupts when my process is on CPU0, CPU2 or CPU3 (8, 7 and 8 interrupts respectively), but it also means these processes run rarely, so it doesn't explain why there are so many interrupts when my process runs on CPU1 (163 interrupts).
I also ran the same experiment, but with my overload process under SCHED_FIFO, and I got:
taskset -c 0 : 1
taskset -c 1 : 4063
taskset -c 2 : 1
taskset -c 3 : 0
In this configuration there are more interrupts when my process uses the SCHED_FIFO policy on CPU1, and fewer on the other CPUs. Do you know why?
The thing is that a full-tickless CPU (a.k.a. adaptive-ticks, configured with nohz_full=) still receives some ticks.
Most notably, the scheduler requires a timer on an isolated, full-tickless CPU for updating some state every second or so.
This is a documented limitation (as of 2019):
Some process-handling operations still require the occasional
scheduling-clock tick. These operations include calculating CPU
load, maintaining sched average, computing CFS entity vruntime,
computing avenrun, and carrying out load balancing. They are
currently accommodated by scheduling-clock tick every second
or so. On-going work will eliminate the need even for these
infrequent scheduling-clock ticks.
(source: Documentation/timers/NO_HZ.txt, cf. the LWN article (Nearly) full tickless operation in 3.10 from 2013 for some background)
A more accurate method to measure the local timer interrupts (LOC row in /proc/interrupts) is to use perf. For example:
$ perf stat -a -A -e irq_vectors:local_timer_entry ./my_binary
where my_binary has threads pinned to the isolated CPUs that utilize the CPU non-stop without invoking syscalls - for, say, 2 minutes.
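A minimal sketch of such a binary, assuming pthreads on Linux (pin to one isolated CPU, then spin without any syscalls; compile with gcc -O2 -pthread):

#define _GNU_SOURCE
#include <pthread.h>
#include <sched.h>
#include <stdio.h>

int main(void)
{
    /* Pin the main thread to the isolated CPU (CPU 1 here). */
    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(1, &set);
    int rc = pthread_setaffinity_np(pthread_self(), sizeof(set), &set);
    if (rc) {
        fprintf(stderr, "pthread_setaffinity_np failed: %d\n", rc);
        return 1;
    }

    /* Busy-loop without syscalls; volatile keeps the loop from being
       optimized away. Tune the bound for the desired run time. */
    for (volatile unsigned long long i = 0; i < (1ULL << 34); ++i)
        ;
    return 0;
}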
There are other sources of additional local timer ticks (when there is just 1 runnable task).
For example, the collection of VM stats - by default they are collected every second. Thus, I can decrease my LOC interrupts by setting a higher interval, e.g.:
# sysctl vm.stat_interval=60
Another source is the periodic check for whether the TSCs on different CPUs drift apart - you can disable those checks with the following kernel option:
tsc=reliable
(Only apply this option if you really know that your TSCs don't drift.)
You might find other sources by recording traces with ftrace (while your test binary is running).
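For example, a starting point (a sketch, assuming tracefs is mounted under /sys/kernel/debug; the hrtimer_expire_entry tracepoint shows which callback each timer expiry ran):

# echo 1 > /sys/kernel/debug/tracing/events/timer/hrtimer_expire_entry/enable
# cat /sys/kernel/debug/tracing/trace_pipe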
Since it came up in the comments: yes, the SMI is fully transparent to the kernel. It doesn't show up as an NMI. You can only detect an SMI indirectly.
As far as I remember, the PIT (the timer whose IRQ is 0) emits an interrupt 16 or 18 times per second. This frequency (16 or 18 Hz) is just what I need for my application (it should emulate some physical device). But as far as I know, IRQ 0 is used by the task scheduler, and it is triggered much more frequently than 18 Hz.
So, my question is: which is right, 18 Hz or much more frequent? Another question is: is it OK to set my own IRQ 0 handler to be called after the task scheduler's (setting the handler with the request_irq function)?
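Here is roughly what I mean (a sketch of my idea; I don't know whether request_irq(0, ...) with IRQF_SHARED can even succeed alongside the kernel's existing timer handler):

#include <linux/module.h>
#include <linux/interrupt.h>

static irqreturn_t my_irq0_handler(int irq, void *dev_id)
{
    /* On a shared line this would run in addition to the kernel's
       own handler, in registration order. */
    printk(KERN_INFO "IRQ 0 fired\n");
    return IRQ_NONE;  /* not our device; let other handlers claim it */
}

static int __init tap_init(void)
{
    /* A shared handler must pass a unique non-NULL dev_id so that
       free_irq() can identify it later. */
    return request_irq(0, my_irq0_handler, IRQF_SHARED,
                       "irq0_tap", (void *)my_irq0_handler);
}

static void __exit tap_exit(void)
{
    free_irq(0, (void *)my_irq0_handler);
}

module_init(tap_init);
module_exit(tap_exit);
MODULE_LICENSE("GPL");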
I have read that the postscaler of a timer specifies how many times the counter has to overflow in order to get an interrupt.
But I have a doubt here.
What I understand is: if I load 0x55 and start the timer with a postscaler of 2, then the timer will count from 0x55 to 0xFF, then again from 0x55 to 0xFF, and only then generate an interrupt.
Consider a case where I start the timer from an external interrupt. My requirement may be to get the time gap between two interrupts: I start the timer in the first interrupt, then read the timer in the next interrupt.
But if I have enabled the postscaler, then I will get the wrong time, right?
I just used this as an example to make my question clear.
Edit: So will there be any issue if the timer value is read while the postscaler is turned on?
Usage context: getting the time difference between two interrupts.
No. Postscalers and prescalers divide the timer's clock input/output so you can count at lower frequencies or over longer intervals, for applications where you need more range than the counter provides. Say you have an 8 MHz crystal with a 1:8 prescaler (found on many PICs): you won't count at 8 MHz but at 1 MHz.
Adding a prescaler or postscaler will change the time between your two interrupts, surely. But it won't affect the reading of the counter value, assuming you count a variable each time one of the two interrupts fires. You will simply count slower or faster, depending on which timer you are using (most of them only have a prescaler option).
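To make the arithmetic concrete, here is a small sketch (hypothetical numbers: 8 MHz timer clock, 1:8 prescaler, 8-bit counter) converting a captured count into time. The postscaler never enters the formula, because it only divides the overflow-interrupt rate, not the count you read:

#include <stdint.h>

#define F_CLK_HZ  8000000UL  /* assumed timer input clock */
#define PRESCALE  8UL        /* assumed 1:8 prescaler     */

/* One counter tick lasts PRESCALE / F_CLK_HZ seconds (1 us here).
   The postscaler only gates how often the overflow interrupt fires;
   it does not change the counter value you read back. */
static uint32_t ticks_to_us(uint8_t ticks)
{
    return (uint32_t)ticks * PRESCALE * 1000000UL / F_CLK_HZ;
}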