I'm writing a kernel for x86 as a school project (using QEMU to simulate it) and I ran into a weird problem.
Even though I have set the interrupt flag in the EFLAGS register, I'm still not getting any clock interrupts (I checked with QEMU's info registers command and I see EFL=0x292, which means the flag is set).
To be precise: when I run a spin test (a while(1); program) in user mode, I get one clock interrupt, but after that one it stops; QEMU does not seem to deliver any more! Has this happened to anyone else? Is there another mechanism that can affect interrupts?
Anyone have a clue?
Shai.
Apparently on x86 you have to acknowledge clock interrupts after each one, i.e. one must send an acknowledgment (end-of-interrupt, EOI) to the local APIC after every clock interrupt.
If you are expecting interrupts from the RTC, you must also acknowledge each previous interrupt by reading status register C (CMOS register 0x0C); otherwise the RTC will not raise the next one.
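For illustration, here is a minimal sketch of both acknowledgments in C. It assumes the local APIC is identity-mapped at its default physical base 0xFEE00000, and that your kernel has the usual inb/outb port-I/O helpers (both are assumptions; adjust for your setup):

#include <stdint.h>

/* Assumed port-I/O helpers wrapping the inb/outb instructions. */
extern uint8_t inb(uint16_t port);
extern void outb(uint16_t port, uint8_t value);

#define LAPIC_BASE 0xFEE00000u /* default LAPIC base; yours may be remapped */
#define LAPIC_EOI  0xB0u       /* end-of-interrupt register offset */

/* Call this at the end of every LAPIC-delivered interrupt handler;
 * the written value is ignored, the write itself is the acknowledgment. */
static inline void lapic_send_eoi(void)
{
    *(volatile uint32_t *)(LAPIC_BASE + LAPIC_EOI) = 0;
}

/* Call this after every RTC interrupt; if register C is not read,
 * the RTC will not raise another IRQ 8. */
static inline void rtc_ack(void)
{
    outb(0x70, 0x0C);  /* select CMOS status register C */
    (void)inb(0x71);   /* reading it clears the pending-interrupt flags */
}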
Related
I'm working with an EFM microcontroller (Silicon Labs).
I need to make a beep every x seconds while the device is in EM3 mode.
I have tried many approaches without success.
Please help me with a code example (I'm a HW man, not a SW guy, haha).
Thanks,
Gal.
Refer to page 8 of the datasheet (thanks to @user694733):
EM3 mode description:
still full CPU and RAM retention, as well as Power-on Reset, Pin reset and Brown-out Detection, with a consumption of only 0.6 μA. The low-power ACMP, asynchronous external interrupt, PCNT, and I2C can wake-up the device.
So these are your options. One of these things can wake up the microcontroller. All of them are external inputs. So the microcontroller cannot wake itself up in this mode. This makes sense because all clocks are stopped.
If you had an outside clock connected to the PCNT you could use that to wake it up.
If you want the microcontroller to wake itself up, then you need EM2 mode or less:
In EM2 the high frequency oscillator is turned off, but with the 32.768 kHz
oscillator running, selected low energy peripherals (LCD, RTC, LETIMER,
PCNT, LEUART, I2C, WDOG and ACMP) are still available
In EM2 mode the microcontroller may wake itself up using the RTC (real-time clock), LETIMER (low-energy timer), WDOG (watchdog timer) or PCNT (pulse counter, which can be set to count pulses of the 32.768 kHz clock).
The datasheet recommends using the Real-Time Clock or Low Energy Timer (RTC or LETIMER) modules.
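If EM2 is acceptable, a rough emlib sketch of an RTC wake-up might look like this (the function and register names are from em_rtc.h/em_cmu.h for the Gecko family; treat them as assumptions and check your part's headers):

#include "em_cmu.h"
#include "em_rtc.h"

/* Configure the RTC to interrupt every `seconds` seconds off the
 * 32.768 kHz LFRCO, which keeps running in EM2. */
void rtc_wakeup_init(uint32_t seconds)
{
    CMU_ClockSelectSet(cmuClock_LFA, cmuSelect_LFRCO);
    CMU_ClockEnable(cmuClock_CORELE, true);   /* low-energy peripheral bus */
    CMU_ClockEnable(cmuClock_RTC, true);

    RTC_CompareSet(0, seconds * 32768u);      /* COMP0 match after N seconds */
    RTC_IntEnable(RTC_IEN_COMP0);
    NVIC_EnableIRQ(RTC_IRQn);

    RTC_Init_TypeDef init = RTC_INIT_DEFAULT; /* wraps on COMP0 by default */
    RTC_Init(&init);
}

void RTC_IRQHandler(void)
{
    RTC_IntClear(RTC_IFC_COMP0);
    /* beep here, then drop back into EM2 from the main loop */
}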
... however, if we pay attention, we see the datasheet mentions something called the ULFRCO, Ultra-Low-Frequency RC Oscillator, which runs at approximately 1000 Hz. By searching for the keyword ULFRCO, we see that it does still run in EM3 mode, and it can be used as input for the WDOG. On page 89 we see this listed as a feature of EM3 mode.
So you can configure the WDOG to reset the system after a few seconds. When the microcontroller resets due to the watchdog timeout, it wakes up. You should not be afraid of using a system reset: the RMU_RSTCAUSE register lets you see that the system was reset by the watchdog timer (rather than being powered on for the first time or reset via the pin). Memory contents are probably still intact, but all peripherals are reset. As long as you can deal with the peripherals being reset, you can probably make this work. You might even be able to use a little bit of assembly to jump back to exactly the point where the program left off.
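As a rough sketch of the watchdog trick, again with emlib-style names (em_wdog.h/em_rmu.h/em_emu.h) that you should verify against your part; the 2k period constant is an assumption giving roughly two seconds at the ~1 kHz ULFRCO:

#include "em_wdog.h"
#include "em_rmu.h"
#include "em_emu.h"

static void beep(void)
{
    /* drive your buzzer here */
}

int main(void)
{
    /* Did the watchdog wake (reset) us, as opposed to a power-on
     * or pin reset? If so, this is our periodic wake-up: beep. */
    uint32_t cause = RMU_ResetCauseGet();
    RMU_ResetCauseClear();
    if (cause & RMU_RSTCAUSE_WDOGRST)
        beep();

    /* Arm the watchdog from the ~1 kHz ULFRCO, which keeps counting
     * in EM3; ~2048 cycles is roughly a 2 s timeout. */
    WDOG_Init_TypeDef init = WDOG_INIT_DEFAULT;
    init.em3Run = true;             /* keep running in EM3 */
    init.clkSel = wdogClkSelULFRCO; /* the only oscillator alive in EM3 */
    init.perSel = wdogPeriod_2k;    /* ~2 s at ~1 kHz */
    WDOG_Init(&init);

    EMU_EnterEM3(true);             /* sleep until the WDOG resets us */
    for (;;)
        ;                           /* never reached */
}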
When I execute a system call (to do a write or something else), the ISR corresponding to the exception executes in interrupt mode (on the Cortex-M3, the IPSR register holds a non-zero value, 0xb). And what I have learned is that when code executes in interrupt mode, it cannot sleep and cannot use functions that might block...
My question is: is there any mechanism by which the ISR could keep executing in interrupt mode and at the same time use functions that might block, or is there some kind of trick that gets used?
Caveat: This is more of a comment than an answer but is too big to fit in a comment or series of comments.
TL;DR: Needing to sleep or execute a blocking operation from an ISR is a fundamental misdesign. This seems like an XY problem: https://meta.stackexchange.com/questions/66377/what-is-the-xy-problem
Doing sleep [even in application code] is generally a code smell. Why do you feel you need to sleep [vs. some event/completion driven mechanism]?
Please edit your question and add clarification [i.e. don't just add comments]
When I am executing a system call to do write or something else
What is your application doing? A write to what device? What "something else"?
What is the architecture/board, kernel/distro? (e.g. Raspberry Pi running Raspbian? nvidia Jetson? Beaglebone? Xilinx FPGA with petalinux?)
What is the H/W device and where did the device driver come from? Did you write the device driver yourself or is it a standard one that comes with the kernel/distro? If you wrote it, please post it in your question.
Is the device configured properly? (e.g.) Are the DTB entries correct?
Is the device a block device, such as a disk controller? Or, is it a character device, such as a UART? Does the device transfer data via DMA? Or, does it transfer data by reading/writing to/from an IO port?
What do you mean by "exception"? Generally, exception is an abnormal condition (e.g. segfault, bus error, etc.). Please describe the exact context/scenario for which this occurs.
Generally, an ISR does little things. (e.g.) Grab [and save] status from the device. Clear/rearm the interrupt in the interrupt controller. Start the next queued transfer request. Wake up the sleeping base level task (usually the task that executed the syscall [waiting on a completion event in kernel mode]).
More elaborate actions are generally deferred and handled in the interrupt's "bottom half" handler and/or tasklet. Or, the base level is woken up and it handles the remaining processing.
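As a sketch of that split, here is what it might look like with the kernel's threaded-IRQ API (one common bottom-half mechanism); the device, its register offsets, and the mydev names are all hypothetical:

#include <linux/interrupt.h>
#include <linux/io.h>
#include <linux/types.h>

struct mydev {
    void __iomem *regs; /* hypothetical MMIO region */
    u32 saved_status;
};

/* Top half: runs in interrupt context, so it only does the small things. */
static irqreturn_t mydev_hardirq(int irq, void *data)
{
    struct mydev *dev = data;

    /* Grab [and save] status, then clear/rearm the device interrupt. */
    dev->saved_status = readl(dev->regs + 0x04); /* hypothetical status reg */
    if (!dev->saved_status)
        return IRQ_NONE;                         /* shared line, not ours */
    writel(dev->saved_status, dev->regs + 0x08); /* hypothetical ack reg */

    return IRQ_WAKE_THREAD;                      /* defer the real work */
}

/* Bottom half: runs in process context, so blocking calls are allowed. */
static irqreturn_t mydev_thread(int irq, void *data)
{
    struct mydev *dev = data;
    /* ... elaborate processing, may sleep ... */
    (void)dev;
    return IRQ_HANDLED;
}

/* Registered from probe() with something like:
 *   request_threaded_irq(irq, mydev_hardirq, mydev_thread, IRQF_SHARED,
 *                        "mydev", dev);
 */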
What kernel subsystems are involved? Are you using platform drivers? Are you interfacing from within the DMA device driver framework? Are message buses involved (e.g. I2C, SPI, etc.)?
Interrupt and device handling in the linux kernel is somewhat different than what one might do in a "bare metal" system or RTOS (e.g. FreeRTOS). So, if you're coming from those environments, you'll need to think about restructuring your driver code [and/or application code].
What are your requirements for device throughput and latency?
You may wish to consult a good book on linux device driver design. And, you may wish to consult the kernel's Documentation subdirectory.
If you're able to provide more information, I may be able to help you further.
UPDATE:
A system call is not really in the same class as a hardware interrupt as far as the kernel is concerned, even if the CPU hardware uses the same sort of exception vector mechanisms for handling both hardware and software interrupts. The kernel treats the system call as a transition from user mode to kernel mode. – Ian Abbott
This is a succinct/great explanation. The "mode" or "context" has little to do with how we got/get there from a H/W mechanism.
The CPU doesn't really "understand" interrupt mode [as defined by the kernel]. It understands "supervisor" vs "user" privilege level [sometimes called "mode"].
When executing at user privilege level, an interrupt/exception will cause a transition from "user" level to "supervisor" level. The CPU may have a special register that specifies the address of the [initial] supervisor stack pointer. Atomically, it swaps in that value, pushing the user SP onto the new kernel stack.
If the interrupt is interrupting a CPU that is already at supervisor level, the existing [supervisor] SP will be used unchanged.
Note that x86 has privilege "ring" levels. User mode is ring 3 and the highest [most privileged] level is ring 0. For arm, some arches can have a "hypervisor" privilege level [which is higher privilege than "supervisor" privilege].
The setup of the mode/context is handled in arch/arm/kernel/entry-*.S code.
An svc is a synchronous interrupt [generated by a special CPU instruction]. The resulting context is the context of the currently executing thread. It is analogous to "call function in kernel mode". The resulting context is "kernel thread mode". At that point, it's not terribly useful to think of it as an "interrupt" anymore.
In fact, on some arches, the syscall instruction/mechanism doesn't use the interrupt vector table. It may have a fixed address or use a "call gate" mechanism (e.g. x86).
Each thread has its own stack which is different than the initial/interrupt stack.
Thus, once the entry code has established the context/mode, it is not executing in "interrupt mode". So, the full range of kernel functions is available to it.
An interrupt from a H/W device is asynchronous [may occur at any time the CPU's internal interrupt enable flag is set]. It may interrupt a userspace application [executing in application mode] OR kernel thread mode OR an existing ISR executing in interrupt mode [from another interrupt]. The resulting ISR is executing in "interrupt mode" or "ISR mode".
Because the ISR can interrupt a kernel thread, it may not do certain things. For example, if the CPU were in [kernel] thread mode, and it was in the middle of a kmalloc call [GFP_KERNEL], the ISR would see partial state and any action that tried to adjust the kernel's heap would result in corruption of the heap.
This is a design choice by linux for speed.
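This is also why code that may run in interrupt context must use the kernel's non-sleeping allocation flag; a minimal sketch (the size and names here are arbitrary):

#include <linux/interrupt.h>
#include <linux/slab.h>

static irqreturn_t mydev_isr(int irq, void *data)
{
    /* GFP_ATOMIC never sleeps, so it is legal here, but it may fail
     * and must be checked. GFP_KERNEL may block to reclaim memory and
     * is only legal in process (thread) context. */
    void *buf = kmalloc(64, GFP_ATOMIC);
    if (!buf)
        return IRQ_HANDLED; /* drop (or count) the event */

    /* ... fill buf and hand it off to a bottom half ... */
    kfree(buf);
    return IRQ_HANDLED;
}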
Kernel ISRs may be classified as "fast interrupts". The ISR executes with the CPU interrupt enable [IE] flag cleared. No normal H/W interrupt may interrupt the ISR.
So, if another H/W device asserts its interrupt request line [in the external interrupt controller], that request will be "pending". That is, the request line has been asserted but the CPU has not acknowledged it [and the CPU has not jumped via the interrupt table].
The request will remain pending until the CPU allows further interrupts by asserting IE. Or, the CPU may clear the pending interrupt without taking action by clearing the pending interrupt in the interrupt controller.
For a "slow" interrupt ISR, the interrupt entry code will clear the interrupt in the external interrupt controller. It will then rearm interrupts by setting IE and call the ISR. This ISR can be interrupted by other [higher priority] interrupts. The result is a "stacked" interrupt.
I have been searching all over the place, and the conclusion I have come to is that interrupts have a higher priority than some exceptions in the Linux kernel.
An exception [synchronous interrupt] can be interrupted if the IE flag is enabled. An svc is treated differently but after the entry code is executed, the IE flag is set, so the actual syscall code [executing in kernel thread mode] can be interrupted by a H/W interrupt.
Or, in limited circumstances, the kernel code can generate an exception (e.g. a page fault caused by a kernel action [which is usually deemed fatal]).
but I am still looking at how exactly the context switch happens when executing an exception, letting the processor execute in thread mode while the SVCall exception is pending (it was preempted and has not returned yet)... I think when I understand that, it will be clearer to me.
I think you have to be very careful with the terminology. In particular, when combining terms from disparate sources. Although user mode, kernel thread mode, or interrupt mode can be considered a context [in the dictionary sense of the word], context switching usually means that the current thread is suspended, the scheduler selects a new thread to run and resumes it. That is separate from the user-to-kernel transition.
And if there are any recommended resources about that for the ARM Cortex-M3/4, that would be nice.
Here is something: https://interrupt.memfault.com/blog/arm-cortex-m-exceptions-and-nvic But, be very careful in applying the terminology therein. What it considers "pending" only exists in the kernel during the entry code. What is more relevant is what the kernel does to set up mode/context and the terms are not equivalent.
So, from the kernel's standpoint, it's probably better to not consider an svc as "pending".
I have an ATSAMD21E18A micro that I am using with semihosting. In order for the semihosting to work, GDB needs to be "attached" before the first bkpt instruction. On the other hand, I have inexplicably found that the SysTick interrupt will not fire if GDB was already attached when I configured it. If I want the SysTick interrupt to fire, I have to perform a reset (power-off via a button), tell GDB to continue while it has not yet configured the micro (that is, it hasn't sent breakpoints over or anything else), and then hit Ctrl-C to enter debugging mode after the SysTick configuration but before we get to initialise_monitor_handles.
I have verified that the start function is only copying over the relocatable data segment, zeroing the zero segment, and setting the right initial stack pointer value. We are writing our code without libraries like CMSIS.
Also I can confirm that I have no issues when the debugger is not attached (JLinkGDBServer through an Atmel SAM-ICE), besides needing to remove the semi-hosting stuff.
Also, the SysTick COUNT does still count correctly even when the interrupts themselves don't fire, and the SysTick pending-interrupt bit PENDSTSET in ICSR is in fact set when this happens.
My code follows:
int main()
{
    // enable system timer interrupt
    SYS_TICK->STATUS = 0;     // (CSR)
    SYS_TICK->PERIOD = 48000; // (RVR) fire at 1 kHz for a 48 MHz clock
    SYS_TICK->STATUS = 0b111; // use processor clock, w/ interrupt, and enabled
    SYS_TICK->COUNT = 1;      // (CVR) avoid high unknown value
    // dumb busy loop
    util_idle_ms(2000); // <<< I hit Ctrl-C to break here!
    initialise_monitor_handles();
    // ... more system initialization and everything else
}
I have seen some similar seeming questions here on StackOverflow, but they seemed to be too vague to get good answers.
Edit:
Here are possibly relevant register values taken during the busy loop for the run that doesn't call the SysTick handler (no hard reset, GDB attached before SysTick configured):
SYS_TICK_CSR/STATUS: 0x10007
SYS_TICK_RVR/PERIOD: 48000
SYS_TICK_CVR/COUNT: 5245 (varies of course)
NVIC_ISER: 0 (and we expect this since SysTick is considered an exception, and not an NVIC interrupt)
DHCSR: 0x30003/0x1030003 (C_MASKINTS is not set; I've seen both values show up)
ICSR: 0x400f00f (it really wants to run the SysTick handler)
PRIMASK: 0
xPSR: 0x2100000f (IPSR is 0x0f/SysTick)
And for the run that calls the SysTick handler just fine (hard reset with GDB attaching after SysTick configuration):
SYS_TICK_CSR/STATUS: 0x10007
SYS_TICK_RVR/PERIOD: 48000
SYS_TICK_CVR/COUNT: 16892 (varies of course)
NVIC_ISER: 0
DHCSR: 0x10003/0x1030003 (I've seen both values show up)
ICSR: 0 (SysTick handler already run)
PRIMASK: 0
xPSR: 0x2100000f
So the register values here do not yet seem to reveal anything new to me... Please help inform me of other potentially relevant registers to check!
Just for interest, the reason this is important to me is because I have gotten gprof to work on this chip, based on https://mcuoneclipse.com/2015/08/23/tutorial-using-gnu-profiling-gprof-with-arm-cortex-m/
And although I do have to hit Ctrl-C at just the right time after a hard reset, it does work like this!
Edit
I have found that I had a misunderstanding: I thought running load in GDB performed a soft reset. I have since found that although it returns execution to the reset vector, various peripherals and other registers are not in fact reset. If I perform a soft reset in GDB with monitor reset, then I don't need to Ctrl-C during a delay to attach GDB, and both SysTick and semihosting work.
The problem occurs when SysTick is configured and then load is run in GDB, without an explicit hard or soft reset. In this case, SysTick does not fire interrupts. Most of my debugging went like this: loading new code and immediately expecting it to work so I could evaluate it. Just running monitor reset is a better workaround than before, but I would still prefer to know the reason for SysTick's misbehavior!
I would visit the ARM® v6-M Architecture Reference Manual and see if you can get some direction from that. https://static.docs.arm.com/ddi0419/d/DDI0419D_armv6m_arm.pdf
Observe the state of the registers related to SysTick that you didn't include in your question. If you can't figure out the problem based on those registers, edit your question and post the register values here (the NVIC ISER, all registers related to SysTick configuration, the DHCSR, and any others you think are related). They will be the key to getting more feedback.
The Debug Halting Control and Status Register (DHCSR) has the ability to mask interrupts, including SysTick. Maybe this is being set by the debugger?
Bit 3 of the DHCSR (C_MASKINTS) looks relevant.
I would also check that the SYST_RVR (SysTick reload value register) is being set to something sane.
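If it helps, here is a quick way to snapshot those registers without CMSIS; the addresses are the architectural ones from the ARMv6-M manual, and the function name is just a suggestion:

#include <stdint.h>

#define REG32(addr) (*(volatile uint32_t *)(addr))

#define SYST_CSR  REG32(0xE000E010) /* SysTick control/status */
#define SYST_RVR  REG32(0xE000E014) /* reload value */
#define SYST_CVR  REG32(0xE000E018) /* current value */
#define ICSR      REG32(0xE000ED04) /* interrupt control/state */
#define DHCSR     REG32(0xE000EDF0) /* debug halting control/status */

void dump_systick_state(void)
{
    /* DHCSR bit 3 is C_MASKINTS: when set (along with C_DEBUGEN), the
     * debugger is masking PendSV, SysTick and external interrupts. */
    volatile uint32_t csr   = SYST_CSR;
    volatile uint32_t rvr   = SYST_RVR;
    volatile uint32_t cvr   = SYST_CVR;
    volatile uint32_t icsr  = ICSR;
    volatile uint32_t dhcsr = DHCSR;
    (void)csr; (void)rvr; (void)cvr; (void)icsr; (void)dhcsr;
    /* Set a breakpoint here and inspect the locals from GDB. */
}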
I don't have the rep to comment on your question, but I'm hoping this can get you going in a productive direction :)
I'm developing a bare-metal project on an STM32L4 and I'm starting from an existing code base.
The ISRs have been implemented the following way:
read interrupt status in the peripheral to know what event(s) provoked the interrupt
do something
clear the flags that were read at the beginning.
Is this the right way to clear the flags? Shouldn't the flags be cleared at the very beginning of the ISR? My understanding is that if the same peripheral event happens a second time during step 2, it will not provoke a second IRQ, so it would be lost. On the other hand, if you clear the flag as soon as you can, that second event would pulse the interrupt, whose state in the CPU would change to "pending and active", and a second IRQ would happen.
PS: From STM32 Processor Programming Manual I read: "STM32 interrupts are both level-sensitive and pulse-sensitive".
Definitely at the beginning (unless you have special reasons in the program logic), as some time is needed for the actual write to the flag-clear register to propagate through the buses.
If you decide for some reason to put it at the end of the interrupt handler, you should leave some instructions afterwards, place a barrier instruction, or read back the register before the interrupt routine returns, to make sure that the clear operation has propagated across the buses. Otherwise you may get "phantom" duplicate routine calls.
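As an illustration of the clear-early pattern, here is a sketch using STM32L4 USART names purely as an example (substitute your own peripheral and flags; __DSB() is the CMSIS barrier intrinsic):

#include "stm32l4xx.h" /* device header, brings in CMSIS */

void USART1_IRQHandler(void)
{
    /* 1. Snapshot the status so we know every event that has fired. */
    uint32_t isr = USART1->ISR;

    /* 2. Clear the handled flags immediately: a new event arriving while
     *    we work then re-pends the interrupt instead of being lost. */
    USART1->ICR = isr & (USART_ICR_TCCF | USART_ICR_IDLECF);

    /* 3. Ensure the clear has propagated over the bus before returning,
     *    otherwise the handler may be re-entered spuriously. */
    __DSB(); /* a read-back of a peripheral register also works */

    /* 4. Do the actual work from the snapshot. */
    if (isr & USART_ISR_TC) {
        /* ... transmission complete ... */
    }
    if (isr & USART_ISR_IDLE) {
        /* ... idle line detected ... */
    }
}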
Can someone please explain to me what happens inside an interrupt service routine? (Although it depends on the specific routine, a general explanation is enough.) This has always been a black box for me.
There is a good wikipedia page on interrupt handlers.
"An interrupt handler, also known as an interrupt service routine (ISR), is a callback subroutine in an operating system or device driver whose execution is triggered by the reception of an interrupt. Interrupt handlers have a multitude of functions, which vary based on the reason the interrupt was generated and the speed at which the Interrupt Handler completes its task."
Basically, when a piece of hardware (a hardware interrupt) or some OS task (a software interrupt) needs to run, it triggers an interrupt. If these interrupts aren't masked (ignored), the OS will stop what it's doing and call some special code to handle the new event.
One good example is reading from a hard drive. The drive is slow and you don't want your OS to wait for the data to come back; you want the OS to go and do other things. So you set up the system so that when the disk has the data requested, it raises an interrupt. In the interrupt service routine for the disk the CPU will take the data that is now ready and will return it to the requester.
ISRs often need to happen quickly as the hardware can have a limited buffer, which will be overwritten by new data if the older data is not pulled off quickly enough.
It's also important to have your ISR complete quickly: while the CPU is servicing one ISR, other interrupts will be masked, which means that if the CPU can't get to them quickly enough, data can be lost.
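As a toy sketch of this "get in, get out" pattern, here is a receive ISR that only moves a byte into a ring buffer and leaves the slow work to the main loop (the UART_DATA register and its address are hypothetical):

#include <stdint.h>

#define BUF_SIZE 256u /* power of two for cheap index wrapping */
static volatile uint8_t  buf[BUF_SIZE];
static volatile uint32_t head, tail;

/* Hypothetical memory-mapped UART receive data register. */
#define UART_DATA (*(volatile uint8_t *)0x40001000u)

void uart_rx_isr(void) /* keep this short! */
{
    /* Reading the data register also clears the IRQ on many UARTs. */
    buf[head & (BUF_SIZE - 1)] = UART_DATA;
    head++;
}

void main_loop_poll(void) /* runs outside interrupt context */
{
    while (tail != head) {
        uint8_t byte = buf[tail & (BUF_SIZE - 1)];
        tail++;
        /* ... slow processing of byte happens here, not in the ISR ... */
        (void)byte;
    }
}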
Minimal 16-bit example
The best way to understand is to make some minimal examples yourself.
First learn how to create a minimal bootloader OS and run it on QEMU and real hardware as I've explained here: https://stackoverflow.com/a/32483545/895245
Now you can run in 16-bit real mode:
    /* Install handler 0: IVT entry 0 holds its offset at 0x00
     * and its segment at 0x02. */
    movw $handler0, 0x00
    mov %cs, 0x02
    /* Install handler 1: IVT entry 1 starts at 0x04. */
    movw $handler1, 0x04
    mov %cs, 0x06
    /* Fire both interrupts, then halt. */
    int $0
    int $1
    hlt
handler0:
    /* Do 0. */
    iret
handler1:
    /* Do 1. */
    iret
This would do in order:
Do 0.
Do 1.
hlt: stop executing
Note how the processor looks for the first handler at address 0 and the second one at address 4: that is a table of handler pointers called the IVT (Interrupt Vector Table), in which each entry takes 4 bytes (a 2-byte offset followed by a 2-byte code segment, which is why the code above writes %cs to 0x02 and 0x06).
Minimal example that does some IO to make handlers visible.
Protected mode
Modern operating systems run in the so called protected mode.
The handling has more options in this mode, so it is more complex, but the spirit is the same.
See also
Related question: What does "int 0x80" mean in assembly code?
While the 8086 is executing a program, an interrupt breaks the normal sequence of execution and diverts execution to another routine called the interrupt service routine (ISR). After the ISR executes, control returns back to the main program.
An interrupt causes a temporary halt in the execution of the program. The microprocessor responds by running the interrupt service routine, which is a short program or subroutine that instructs the microprocessor on how to handle the interrupt.