The basic world switch flow is:
Set FIQ to be taken in monitor mode.
Normal world -> FIQ triggered
-> enter monitor mode (do the switch to the Secure world, restore the Secure world context)
-> now in Secure world SYS mode
-> the FIQ is still not cleared, so we enter the FIQ handler in the Secure world
Regarding steps 3 and 4: after we restore the target context, the ARM core will take the still-pending exception and enter its handler. Is this behavior correct (if we don't branch to the FIQ handler from the monitor mode vector table)?
We need a flow like the one below (the no-world-switch case: we just enter monitor mode to check whether a world switch is needed, and enter the IRQ exception from monitor mode directly; we need this because of a hardware limitation, as we only have IRQ on our chip):
Set IRQ to be taken in monitor mode.
Normal world user mode -> IRQ triggered
-> enter monitor mode, run whatever hook we want, check whether we need a world switch, prepare SPSR/LR for IRQ mode
-> enter normal world IRQ mode, handle the IRQ
-> IRQ done, return to user mode
For the non-world-switch case, we would like the normal world OS to be unaware of monitor mode; it should look as though it entered IRQ mode directly and returned from IRQ mode.
For the world-switch case, we just do the switch in monitor mode.
Or should the IRQ handling just be done in monitor mode? E.g.:
normal world OS user mode -> IRQ -> user mode
normal world OS user mode -> monitor -> IRQ handler -> user mode
Is this flow possible and well designed?
Is this flow possible and well designed?
It is possible. 'Well designed' is subjective. It has several failings or non-ideal aspects. I guess your system doesn't have a GIC, which is a TrustZone-aware interrupt controller. The GIC has banked registers which allow the normal world OS to use it (almost) as if it were in the secure world.
It is not clear from your question whether you want the secure world to have interrupts; I guess so from the statement 'for the non-world-switch case...'. If interrupts are only handled by the normal world, things are simple: don't branch to monitor mode on an IRQ (or FIQ). There is a register to set this behaviour (SCR, the Secure Configuration Register).
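For reference, a minimal sketch of what that looks like (assuming an ARMv7-A core running in a secure privileged mode; SCR is CP15 c1, c1, 0, with NS = bit 0, IRQ = bit 1, FIQ = bit 2):

    /* Sketch: keep IRQ/FIQ in the current world instead of trapping to monitor.
       SCR.IRQ/SCR.FIQ = 0 means the exception is taken locally;
       = 1 means it branches to the monitor vector table. */
    static inline unsigned int read_scr(void)
    {
        unsigned int scr;
        __asm__ volatile("mrc p15, 0, %0, c1, c1, 0" : "=r"(scr));
        return scr;
    }

    static inline void write_scr(unsigned int scr)
    {
        __asm__ volatile("mcr p15, 0, %0, c1, c1, 0" : : "r"(scr));
    }

    void keep_irq_fiq_local(void)
    {
        unsigned int scr = read_scr();
        scr &= ~((1u << 1) | (1u << 2));   /* clear SCR.IRQ and SCR.FIQ */
        write_scr(scr);
    }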
For the dual-world interrupt case, you have two issues:
You need to trust the normal world OS.
Interrupt latency will be increased.
You must always take the interrupt in monitor mode. The monitor must check the interrupt controller source to see which world the interrupt belongs to. It may need to do a world switch depending on the world, which will increase interrupt latency. As well, both the normal and secure world will be dealing with the same interrupt controller registers, so you have malicious security concerns and non-malicious race conditions with multiple interrupt drivers trying to manipulate the same registers (read-modify-write). Generally, if your chip doesn't have a GIC but the CPU supports TrustZone, then your system hasn't been well thought through for TrustZone use. The L1/L2 cache controllers must also be TrustZone aware, and you possibly have issues there as well.
If you have Linux (or some other open-source OS) in the normal world, it would be better to replace the normal world interrupt driver with a 'virtual' interrupt driver. The normal world virtual IRQ code would use the SMC instruction to set virtual registers and register IRQ routines for specific interrupts. The secure world/monitor IRQ code would then branch directly to the decoded IRQ routine.
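As an illustration only (the SMC function ID and names below are made up, not a real API), the normal world side of such a virtual interrupt driver might look roughly like this:

    /* Hypothetical sketch: the normal world registers a handler for an
       interrupt by asking the secure world via SMC. The function ID is
       an assumption for illustration. */
    #define SMC_REGISTER_IRQ  0x83000001u   /* assumed function ID */

    typedef void (*irq_fn)(void);

    static int smc_register_irq(unsigned int irq, irq_fn handler)
    {
        register unsigned int r0 __asm__("r0") = SMC_REGISTER_IRQ;
        register unsigned int r1 __asm__("r1") = irq;
        register unsigned int r2 __asm__("r2") = (unsigned int)handler;

        __asm__ volatile(".arch_extension sec\n\t"
                         "smc #0"
                         : "+r"(r0)
                         : "r"(r1), "r"(r2)
                         : "memory");
        return (int)r0;   /* assume the secure side returns a status in r0 */
    }

The monitor/secure side would record the (irq, handler) pair and, when that interrupt fires, return to the normal world at the registered routine.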
With a GIC, set the group 0 (secure world) interrupts as FIQ and group 1 (normal world) as IRQ using the GICC_CTLR bit FIQEn. I.e., you classify the interrupts with the distributor (GICD) in the GIC as either secure or normal (and therefore FIQ/IRQ).
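A rough sketch of that setup for a GICv2 (the base addresses are board-specific assumptions; only the register offsets and bit positions come from the GIC spec):

    /* Sketch: mark one interrupt as Group 0 (secure -> FIQ) and another as
       Group 1 (normal -> IRQ), then set GICC_CTLR.FIQEn so Group 0 interrupts
       are signalled as FIQ. Must run from the secure world. */
    #define GICD_BASE  0x2C001000u   /* assumed distributor base */
    #define GICC_BASE  0x2C002000u   /* assumed CPU interface base */
    #define GICD_IGROUPR(n) (*(volatile unsigned int *)(GICD_BASE + 0x080 + 4 * (n)))
    #define GICC_CTLR       (*(volatile unsigned int *)(GICC_BASE + 0x000))

    static void gic_classify(unsigned int secure_irq, unsigned int normal_irq)
    {
        /* One bit per interrupt: 0 = Group 0 (secure), 1 = Group 1 (normal). */
        GICD_IGROUPR(secure_irq / 32) &= ~(1u << (secure_irq % 32));
        GICD_IGROUPR(normal_irq / 32) |=  (1u << (normal_irq % 32));

        /* Secure view of GICC_CTLR: bit 0 = EnableGrp0, bit 1 = EnableGrp1,
           bit 3 = FIQEn (signal Group 0 as FIQ). */
        GICC_CTLR |= (1u << 3) | (1u << 1) | (1u << 0);
    }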
You have to work through scheduling issues and how you want the different OSes to pre-empt each other. The normal (easiest) approach is to always let the secure OS run, but this means that some Linux (normal world) interrupts may be delayed considerably by the secure world (RTOS) main-line code.
Related
I'm trying to find out how I can disable and enable interrupts on the STM32L4x6RG Nucleo.
After a bit of googling I found the macros __disable_irq() and __enable_irq(), but I'm not convinced these are disabling interrupts.
After more investigation, it seems that the cpsid instruction this macro maps to only has an effect when it runs in a supervisor context. So the question becomes: how do I move to supervisor mode to disable interrupts and back again?
I found the macros __disable_irq() and __enable_irq() but I'm not convinced these are disabling interrupts.
They do, unless you (or the OS you are using) explicitly leave privileged mode with the MSR CONTROL, Rn instruction, or with the __set_CONTROL() function, which does the same.
So the question becomes: how do I move to supervisor mode to disable interrupts and back again?
The processor is in privileged mode after reset, and stays in it unless you tell it otherwise. It also enters privileged mode temporarily when executing an exception handler.
You can use the SVC instruction to call the SVC exception handler from user code and run some code in privileged mode. There is a problem, though: the SVC handler invocation would be blocked too by __disable_irq(), so there would be no way to re-enable interrupts afterwards. Instead of __disable_irq(), you can adjust the BASEPRI register to selectively disable lower-priority interrupts, and set the SVC priority higher so that it is not blocked.
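A minimal CMSIS-based sketch of that approach (the header name and priority values are assumptions; adjust for your part's __NVIC_PRIO_BITS):

    /* Sketch: mask interrupts with BASEPRI instead of PRIMASK so that SVC,
       given a numerically lower (i.e. higher) priority, still gets through. */
    #include "stm32l4xx.h"                 /* assumed CMSIS device header */

    void irq_mask_init(void)
    {
        NVIC_SetPriority(SVCall_IRQn, 1);  /* SVC above the mask level below */
    }

    static inline void mask_low_prio_irqs(void)
    {
        /* BASEPRI holds the priority in the top __NVIC_PRIO_BITS bits. */
        __set_BASEPRI(2 << (8 - __NVIC_PRIO_BITS));
    }

    static inline void unmask_irqs(void)
    {
        __set_BASEPRI(0);                  /* 0 turns BASEPRI masking off */
    }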
The processor boots in privileged mode, so unless you are running your application on top of an operating system or have switched to unprivileged mode yourself, you should already be in privileged mode. If you are running your application on top of an OS, you should use its services to handle interrupts, and if no such service exists you should leave the interrupts alone.
If you have switched to unprivileged mode yourself, you can use the svc instruction to trigger an SVC exception, and the handler for the exception executes in privileged mode.
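A small sketch of that mechanism (GCC syntax; the SVC number and handler body are illustrative, and note the caveat in the other answer about re-enabling once interrupts are masked):

    /* Sketch: unprivileged code triggers SVC #0; the handler runs in
       privileged (handler) mode and can execute cpsid on its behalf.
       The handler name follows the usual CMSIS startup convention. */
    static inline void request_irq_disable(void)
    {
        __asm__ volatile("svc #0");
    }

    void SVC_Handler(void)
    {
        __asm__ volatile("cpsid i");   /* PRIMASK persists after returning */
    }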
I have to check whether the ARM processor is in SVC or IRQ mode during kernel boot-up. That is, I want to insert some code into the start_kernel() function to check the ARM mode, both before interrupts are enabled and after interrupts are enabled.
I know that I need the SPSR or CPSR values (bits) to check the mode on ARM, but how can I write code for that in the start_kernel() function, since the code for reading bits of the CPSR/SPSR is in assembly? Where do I put the assembly code to check the ARM mode during boot-up? Is there any way to dump the SPSR/CPSR values?
I don't dare imagine why this should be a concern, but fortunately there's an easy answer:
It's in SVC mode.
The very first thing* the kernel entrypoint does is to forcibly switch into SVC mode and mask interrupts. To somehow be in the wrong mode by the time you reach C code in start_kernel way later would need the system to be unimaginably horribly broken. The only way I can conceive of that even being possible is if there is some Secure firmware running asynchronously (e.g. off a Secure timer interrupt) which deliberately corrupts the Non-secure state, and that's sufficiently ridiculous to discount.
* OK, the second thing if the kernel is built with KVM support and entered in HYP, but hey...
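If you still want to verify it, a small sketch you could drop into start_kernel() (ARM state, GCC inline asm; the printk format is just for illustration):

    /* Sketch: read the CPSR and print the mode field (bits [4:0]).
       0x10 = USR, 0x12 = IRQ, 0x13 = SVC, 0x16 = MON, 0x1F = SYS. */
    unsigned long cpsr;

    asm volatile("mrs %0, cpsr" : "=r"(cpsr));
    printk(KERN_INFO "CPSR = 0x%08lx, mode = 0x%02lx\n", cpsr, cpsr & 0x1f);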
After boot, the ARM processor is in secure SVC mode. In this mode you can read the CPSR (the mode field, bits [4:0]) to check which mode you are in. However, when the kernel is running you are in non-secure user mode, and in that non-secure mode you can't access the CPSR and SPSR registers, nor the coprocessor 15 registers.
The only way is to write code that generates an exception to switch to monitor mode (using the SMC assembly instruction) to jump into secure monitor mode. In that mode you clear the NS bit, then you generate another exception to switch to SVC mode (an SVC assembly instruction call). Now you are in secure supervisor mode and you can access the CPSR and SPSR registers.
I was learning about interrupts and came here to see if someone can help me!
A) I understand that an interrupt is an electrical signal sent by some external hardware to the processor, via one of the input ports.
B) I understand that in some MCUs more than one input port is "attached" to only one interrupt.
Can a useful input port exist in an MCU that is not linked to any interrupt at all?
A) I understand that an interrupt is an electrical signal sent by some external hardware to the processor, via one of the input ports.
That is surely one class of interrupt, as long as you understand that 'external hardware to the processor' can mean 'internal to the controller chip': many MCUs have extensive integrated peripherals.
B) I understand that in some MCUs more than one input port is "attached" to only one interrupt.
Yes - that is not uncommon. The interrupt handler then has to poll the port to find out which GPIO/whatever pin generated the interrupt.
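For illustration only (the register names and addresses are made up for a generic GPIO block), such a shared handler might look like:

    /* Sketch: one interrupt line shared by a whole GPIO port. The handler
       reads a pending/status register to find the pin(s) that fired. */
    #define GPIO_PENDING (*(volatile unsigned int *)0x40020010u)  /* assumed */
    #define GPIO_CLEAR   (*(volatile unsigned int *)0x40020014u)  /* assumed */

    extern void handle_pin_event(unsigned int pin);  /* per-pin handler */

    void gpio_port_irq_handler(void)
    {
        unsigned int pending = GPIO_PENDING;
        unsigned int pin;

        for (pin = 0; pin < 32; pin++) {
            if (pending & (1u << pin)) {
                handle_pin_event(pin);
                GPIO_CLEAR = 1u << pin;   /* acknowledge that source */
            }
        }
    }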
Can a useful input port exist in an MCU that is not linked to any interrupt at all?
Sure, especially on 'trivial' controllers that do not require high-performance I/O and have no RTOS.
Even higher-performance MCU apps may poll for sundry reasons. One common example is reading keypads. The input rate is very low and the mechanical switches need to be debounced. Attaching every keyboard read line to an interrupt line may cause unwanted multiple interrupts. Such inputs are better polled, though even then a timer interrupt often handles the polling.
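As a sketch of that pattern (the names and the debounce count are illustrative):

    /* Sketch: a periodic timer ISR polls a key line and debounces it by
       requiring several identical samples before reporting a change. */
    #define DEBOUNCE_SAMPLES 5

    extern unsigned int read_key_line(void);      /* assumed board-specific read */
    extern void key_changed(unsigned int state);  /* assumed callback */

    void timer_tick_isr(void)
    {
        static unsigned int last_raw, stable, count;
        unsigned int raw = read_key_line();

        if (raw != last_raw) {
            last_raw = raw;                       /* input bouncing: restart count */
            count = 0;
        } else if (count < DEBOUNCE_SAMPLES) {
            if (++count == DEBOUNCE_SAMPLES && raw != stable) {
                stable = raw;                     /* stable for long enough */
                key_changed(stable);
            }
        }
    }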
The answer is probably "yes," but it depends on the microcontroller architecture. There's no guarantee that one vendor's MCU will behave the same as any other (with respect to interrupts, ports, or anything else). If you're tasked with learning a particular MCU, then learn it, live it.
Your house may have only one doorbell button, but pretty much anyone can use it for whatever reason: UPS with a package, the neighbor kid wanting to play with your kid, someone trying to sell something, and so on. A processor is no different. To reduce latency, newer designs may have multiple interrupt signals on the core so that the handler doesn't have to do as much (if any) work to figure out who caused the interrupt. It's like having ringtones for every person on your phone, so you can tell who is calling without looking, versus one ringtone for everyone so you have to look.
Do not confuse external GPIO ports on the chip with interrupt lines; they are not interrupt lines, they are general-purpose I/O. They might have a way to be used as interrupts or not; it depends on the design of the chip. Again, as with the doorbell on your house, there are many things, technically all of them within the chip (microcontroller), that create interrupts. Because software has to set up handlers before it can...handle...interrupts, all sources of interrupts are disabled at first, and only the ones software has enabled have the ability to actually reach the core and cause an interrupt. There is logic in the chip for this. So you may have an interrupt signal tied to the UART receiver, and you might enable that. You might have one for the TX buffer, a when-it-is-empty interrupt. But you have to enable those before the processor can get an interrupt. There is a small section of logic that does fire an interrupt every time one of those events occurs, but that signal is gated and cannot reach the core, blocked by logic you control.
You can have timers in the MCU that interrupt you when they roll over or count to zero. But not only do you have to set up the timer to do that with software, you also have to enable the interrupt so it can make it across the chip from the timer to the processor core.
And yes, sometimes the GPIO peripheral has a way to interrupt the processor as well. As with everything else, you have to set up the peripheral with software, define what interrupts you want, and enable them across the chip.
There are more different ways of doing this than there are companies making chips, as they don't always do it the same way across their product lines. But generally, at a minimum, there is an interrupt enable at the peripheral end, one or many depending on the peripheral and its features, that you have to enable in order for that signal to leave the peripheral on its way to the core. And there is often an interrupt controller peripheral, or something built into the core or near it, that takes all the dozens or hundreds of individual interrupt connections in the chip, prioritizes them and ORs them into the one or few interrupt lines going into the core. You generally also have to enable the corresponding interrupt that matches the signal coming out of your peripheral in order for it to reach the processor core. And then there is sometimes an interrupt enable in the core itself, so that even if you have the peripheral enabled and the interrupt controller enabled for that one peripheral's interrupt, you still cannot interrupt the processor unless the interrupt enable in the processor core is enabled. That is the simple case; it can get more complicated if there are more layers of interrupt controllers along the way. Well, the simple case is something like a Cortex-M with dozens or hundreds of individual interrupt signals: you still have interrupt enables on both ends and in the core, it is just easier to manage because you have dozens to hundreds of interrupt handlers instead of one mega-handler for everything.
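On a Cortex-M-style chip, those layers look roughly like this (the UART register, bit and interrupt number are made-up placeholders; the NVIC address is the architectural one):

    /* Sketch of the three enable points described above, for a made-up UART:
       1. enable the RX interrupt in the peripheral itself,
       2. enable that interrupt number in the interrupt controller (NVIC),
       3. enable interrupts in the core. */
    #define UART_CTRL       (*(volatile unsigned int *)0x40011000u)  /* assumed */
    #define UART_RXIE       (1u << 5)                                /* assumed bit */
    #define UART_IRQ_NUMBER 37                                       /* assumed */

    #define NVIC_ISER(n)    (*(volatile unsigned int *)(0xE000E100u + 4 * (n)))

    void uart_rx_irq_enable(void)
    {
        UART_CTRL |= UART_RXIE;                               /* peripheral end */
        NVIC_ISER(UART_IRQ_NUMBER / 32) =
            1u << (UART_IRQ_NUMBER % 32);                     /* interrupt controller */
        __asm__ volatile("cpsie i");                          /* core (PRIMASK) */
    }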
So don't confuse the pins on the chip with interrupts. On older dedicated processors, like the 8088/86, sure, there was the one interrupt pin. But general-purpose I/O, sometimes called GPIO, sometimes called ports, is just a peripheral; they are just pins you can make go high or low, they are not there to be interrupts, although there may be a feature in that peripheral for that (or maybe there isn't). And again, interrupt signals go through logic gates and have to be enabled, by software, at a minimum on both ends of that signal: at the peripheral and at the interrupt controller.
Can someone please explain: after the CPU is taken to the secure world (the monitor program sets NS = 0), how does the secure OS get scheduled?
Is it that, now that the CPU is in the secure world, the timer tick interrupt will be handled by the secure OS and not by the non-secure world?
Monitor mode setting NS=0 selects the CP15 registers visible from monitor mode. See: monitor mode IFAR/IFSR... When monitor mode switches to another mode with NS=0, that mode is the secure world version, meaning the banked CP15 registers are the secure versions. Also, the NS bit is clear on bus cycles.
If NS=1 is set, then when monitor mode switches, the banked CP15 registers are the normal world versions; mainly, the normal world MMU will be active. Also, the NS bit is set on bus cycles. See also: TZ vs hypervisor.
How does the secure OS get scheduled?
Monitor mode does this. The SCR (CP15 c1, c1, 0) has bits which determine whether an exception uses the monitor vector table or the current CPU world's (secure or normal) vector table. If you are in the normal world and you want a timer to interrupt that world, you need monitor mode to handle it.
You can set up monitor mode in two possible ways:
Have all secure interrupts as FIQ.
Trap all interrupts to monitor.
The first choice is recommended. In this mode, the monitor code must ensure that SCR#FIQ (bit 2) is set in the normal world but clear in the secure world. SCR#IRQ (bit 1) will be set when running the secure OS (if you want normal interrupts to interrupt the secure OS) and clear in the normal world.
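As a sketch of how the monitor might program the SCR for each world under that first option (bit positions from the ARMv7-A SCR: NS = 0, IRQ = 1, FIQ = 2):

    /* Sketch: values the monitor writes to the SCR before resuming each world.
       Normal world: FIQ traps to monitor so secure interrupts can pre-empt it.
       Secure world: FIQ handled locally; IRQ traps to monitor only if you want
       normal interrupts to pre-empt the secure OS. */
    #define SCR_NS   (1u << 0)
    #define SCR_IRQ  (1u << 1)
    #define SCR_FIQ  (1u << 2)

    static inline void write_scr(unsigned int scr)
    {
        __asm__ volatile("mcr p15, 0, %0, c1, c1, 0" : : "r"(scr));
    }

    void scr_for_normal_world(void) { write_scr(SCR_NS | SCR_FIQ); }
    void scr_for_secure_world(void) { write_scr(SCR_IRQ); }  /* IRQ trap optional */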
So when the secure timer raises an FIQ, it traps to monitor mode, which does a world switch (see Ref1 below) and runs the secure OS timer code. This secure timer may cause the secure world to reschedule. The way the normal and secure world schedulers interact is up to software; i.e., there is no generic answer. It depends on:
Monitor mode.
The secure OS.
The normal world OS.
Mainly, ARM TrustZone does not handle secure OS scheduling by itself. You need to write software that uses the primitives provided to implement this. ARM TrustZone only facilitates different ways of implementing it. See the TrustZone Whitepaper.
See: How to develop programs for TrustZone for some alternative setups.
Ref1: A world switch saves/restores all general-purpose CPU registers for all used modes. I.e., on a normal-to-secure world switch, R0-R15 (and all banked copies) plus possibly the NEON/VFP registers must be saved to a normal world store. Similarly, the registers must be reloaded for the secure world. The monitor mode sp provides a good anchor for accessing these world contexts; it should be set up during secure boot, before the normal world initializes. This is much like a traditional OS context switch. SCR#NS (bit 0) is set appropriately; you may do this before or after the register switching, depending on how you save the registers (i.e., by mode switch or by srs).
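For a flavour of what Ref1 describes, a heavily simplified sketch (the context layout is an assumption; the actual save/restore of R0-R15, the banked mode copies and VFP/NEON is done in monitor-mode assembly around this and is omitted here):

    /* Sketch: per-world context store anchored from monitor mode, plus the
       SCR.NS flip that selects the world being resumed. */
    struct world_ctx {
        unsigned int r[13];           /* r0-r12 of the interrupted world */
        unsigned int sp, lr, spsr;    /* simplified: one interrupted mode only */
    };

    struct world_ctx normal_ctx, secure_ctx;   /* reachable via monitor sp */

    static inline void set_scr_ns(unsigned int ns)
    {
        unsigned int scr;
        __asm__ volatile("mrc p15, 0, %0, c1, c1, 0" : "=r"(scr));
        scr = (scr & ~1u) | (ns & 1u);          /* bit 0 = NS */
        __asm__ volatile("mcr p15, 0, %0, c1, c1, 0" : : "r"(scr));
    }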
I am using a mini2440 ARM board, and GPIO to control the hardware connected to it. I am using the BSP that ships on the CD with the board. I have only enabled the functionality I will need for running the hardware.
I have disabled audio, Ethernet and other unnecessary stuff in the kernel so that they don't cause interrupts and hence demand CPU attention. But the problem is that sometimes some interrupt occurs on the GPIO and the hardware malfunctions. I know I can see all interrupts via cat /proc/interrupts, but how should I know which interrupt occurred on the GPIO and from which device?
I am running my application with the highest nice priority (-20), but still sometimes an external interrupt occurs.
When I send data on the GPIO, only the timer tick of the s3c2440 interrupts, which is fine since it is required, but nothing else should. Please tell me how to find which interrupt occurs (I know I can check via cat /proc/interrupts) and how to disable an interrupt from the kernel (like disabling the Ethernet interrupt via ifconfig eth0 down). I need some expert solution; I have tried the suggestions I got from people, but I still need an expert solution.
Disabling devices in the kernel has no real effect on interrupts (generated by the hardware); it just affects how software handles them. If the device isn't present, no interrupts get generated. And Linux was written by absolute performance freaks; barring misbehaving hardware, the interrupt handling is nearly as good/fast as it could be.
What exactly are you trying to do? Are you sure you aren't trying to get performance that your machine just can't deliver?