How do I access the user-mode R13 and R14, which are banked out when supervisor mode is entered? I am using an ARM7TDMI.
I.e. I do not want to access the supervisor R14, which now contains the return address to user mode; I want the value of user mode's link register. This is part of a debugger I am writing.
Are there special aliases for these registers?
Thanks
I'll describe the answer for your specific question but the same approach applies to other modes as well.
You'll need to change the processor mode to system mode by changing the mode bits in the CPSR. This will give you access to user mode's SP/LR (R13 and R14). Remember that system mode is privileged, but its R13 and R14 are the same as user mode's R13 and R14.
Once you're in system mode, read R13 and R14 and put them where you want. Then just switch the mode bits back to your previous mode (I believe that was supervisor mode in your example) and you're good to go.
Note that we did not switch from supervisor to user mode. If you switched from supervisor to user, you couldn't get back to supervisor mode. (Otherwise there would be no protection from user code escalating privilege). That's why we used system mode -- system mode is privileged, but the registers are the same as user mode.
You can switch between any of the privileged modes at will by manipulating the mode bits in the CPSR. I think they're the lower 5 bits? I'm on the road & don't have the info at my fingertips. Otherwise I would have provided you with the assembly code for what I've described above. Actually, if you want to put some hair on your chest, take what I've given you above, implement it, test it, and post it back here. :-D
(One thing I should add for the "general case" (yours is very specific) -- you can examine the SPSR to see "where you came from" -- and use that to determine which mode you need to switch to.)
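An untested sketch of that sequence (assuming you start in supervisor mode; r0-r2 are just arbitrary scratch registers):
mrs r2, cpsr          ;remember the current mode (supervisor)
bic r0, r2, #0x1F     ;clear the mode field, CPSR[4:0]
orr r0, r0, #0x1F     ;0b11111 = system mode
msr cpsr_c, r0        ;switch to system mode
mov r0, r13           ;r0 = user/system SP
mov r1, r14           ;r1 = user/system LR
msr cpsr_c, r2        ;switch back to the mode we came from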
By the way, I just did this recently for one of my customers.... small world, I guess.
I've discovered a better way:
When doing an STM, if r15 isn't one of the operands then ^ gives access to the user-mode registers. However, base write-back can't be used with that form, so the stack pointer has to be adjusted separately, and a nop is required afterwards if you want to access the banked registers.
Something like
stmfd r13, {r13-r14}^ ;store r13 and r14 usermode
nop
sub r13, r13, #8 ;update stack pointer
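If you then want the values back in registers rather than just parked on the stack, you can pop them straight off again (r0/r1 are arbitrary choices):
ldmfd r13!, {r0-r1}   ;r0 = user r13, r1 = user r14 (and r13 is restored)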
I am trying to figure out how to start cores other than core0 on a quad-core Allwinner H5. The C_RST_CTRL register (a.k.a. CPU2 Reset Control Register) has four bits at the bottom that look like four per-core reset controls. The lsbit is one and the other three are zero, implying that setting those bits releases the other cores from reset, but I don't see that happening (nothing is running the code I have left at address zero). At the same time, zeroing that lsbit does stop core0, implying that it is indeed a reset control. So I assume there are clock gates somewhere, but I cannot find them.
The PRCM registers, which are not documented in the H5 docs but are on a sunxi wiki page for older Allwinners, do show what seem to be real PLL settings, but the CPU enable registers are marked as A31-only and the cpu0 register(s) are not set up, so that would imply this is not how you enable any CPU, including core0, on this chip.
What am I missing?
For a pure bare metal solution look at sunxi_cpu_ops.c from the plat/sun50iw1p1 directory of https://github.com/apritzel/arm-trusted-firmware.git
You need to deactivate various power clamps as well as clock gates.
Alternatively, include the Arm Trusted Firmware code and enable a core by an SMC call:
ldr x2,=entry_point      // x2 = address the core should start executing at
mov x1,#corenumber       // x1 = MPIDR of the target core (0-3 on the H5)
mov x0,#0x03
movk x0,#0x8400,lsl #16  // x0 = 0x84000003, the PSCI CPU_ON function ID
smc #0                   // trap to the secure firmware, which powers the core on
I've now confirmed this works on an H5.
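What entry_point does is up to you; a hypothetical minimal stub for the newly released core (stack_top, the 4 KiB-per-core stack carve-out and secondary_main are assumptions, not anything provided by the firmware) could look like:
entry_point:
    mrs  x0, mpidr_el1        // which core am I?
    and  x0, x0, #0xff        // Aff0 = core number within the cluster
    ldr  x1, =stack_top
    lsl  x2, x0, #12          // 4 KiB of stack per core
    sub  sp, x1, x2
    bl   secondary_main       // per-core C entry point
1:  wfe                       // if it ever returns, just sleep
    b    1b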
Does C_CPU_STATUS STANDBY_WFI=0x0E suggest that the secondary cores are sitting in WFI?
This is not an answer (I don't have enough rep to comment), but I'm just starting the same exercise myself.
As an aside, how did you put code at address 0? Isn't that BROM? I was going to play with the RVBARADDR registers.
The problem I have currently is that ever since I enabled interrupts, the program is stuck in an endless loop. If I disable the interrupts then the program executes normally.
I even made sure that I protected the registers (variables) in the functions by pushing them onto the stack and popping them off upon exit of the function, and that did not help.
I even went as far as replacing the interrupt routines with nothing but a reti (to return from the interrupt), and I still face the same problem.
The only way for me to avoid the problem right now is to not execute mov TCON,#50h (which starts the timers). This makes me think that the interrupt vector addresses published on the internet are not correct, and that in reality screwed-up code is being executed instead.
The microcontroller I'm using is an AT89S52.
Am I off with my vector addressing here? I need some advice, because the code below does not work when the timer interrupts are enabled.
org 000h
;entry point when program first runs
ljmp sysinit ;seems to execute
sjmp $
org 00Bh
;Timer 0 interrupt executes every 65536 machine cycles even if timer 1 interrupt executes
ljmp crit
sjmp $
org 01Bh
;Timer 1 interrupt executes every 256 machine cycles
ljmp processkey
sjmp $
org 030h
start:
;rest of program goes here.
sysinit:
mov TL0,#0h
mov TH0,#0h
mov TH1,#0h
mov TL1,#0h
mov PSW,#0h
mov R0,#7Fh
;make all ram addresses 0h to 7Fh contain value 0
sysreset:
CLR A
mov @R0,A
djnz R0,sysreset
mov T2CON,#0h ;8052 register
mov AUXR,#0h ;8052 register
mov AUXR1,#0h ;8052 register
mov PCON,#80h ;Make baud divisor=192
mov TMOD,#21h ;Timer1=8-bit auto-reload (0-255), Timer0=16-bit (0-65535)
mov IP,#02h ;priority to timer 0
mov TCON,#50h ;timers on
mov SP,#050h ;stack=50h
mov IE,#8Ah ;ints enabled
ljmp start
Are you sure that the interrupt service routines actually clear the interrupts? If not, the core will continuously try to service them, hence the apparent lockup.
Check out the datasheet pg. 17:
Timer 2 interrupt is generated by the logical OR of bits TF2 and EXF2 in register T2CON. Neither of these flags is cleared by hardware when the service routine is vectored to. In fact, the service routine may have to determine whether it was TF2 or EXF2 that generated the interrupt, and that bit will have to be cleared in software.
So your interrupt will keep firing if the flags aren't reset.
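To make that concrete (a sketch only - this assumes an 8052-style Timer 2 interrupt and that your assembler's include file defines TF2/EXF2):
org 02Bh ;Timer 2 vector on 8052-class parts
ljmp t2isr

t2isr:
clr TF2 ;hardware does not clear this flag, so do it here
clr EXF2 ;likewise for the external flag
;rest of the handler goes here
reti ;return with reti, not ret, so the interrupt logic is re-armed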
Why are you writing your code in assembly anyway? Surely, for this functionality you could use plain C instead.
I have written code to make the performance monitoring registers user-accessible by setting the enable bit to 1. I am getting ARM_BAD_INSTRUCTION at the MCR instruction, while the MRC goes fine.
I am using ARMv7 (Cortex-A5).
.cfi_startproc
MRC p15, 0, r0, c9, c14, 0 @ Read PMUSERENR Register
ORR r0, r0, #0x01 @ Set EN bit (bit 0)
MCR p15, 0, r0, c9, c14, 0 @ Write PMUSERENR Register
ISB @ Synchronize context
BX lr
.cfi_endproc
As per the documentation, PMUSERENR is only writeable from privileged modes, thus an attempt to write to it from unprivileged userspace will indeed raise an undefined instruction exception.
If you want to enable userspace access, you need to do it from the kernel (or possibly from a hypervisor/firmware in the case of a kernel which doesn't know about PMUs itself).
If you don't have control of any such privileged software, then you're not getting direct access, because that's rather the point of the notion of privilege. What you might have, however, is some userspace API provided by the OS - such as perf events on Linux - to let you profile and measure things without the hassle of managing the hardware directly. Frankly, that's usually the better option even if you could enable direct access, because userspace still has no way to properly handle all the necessary event filtering, scheduling, overflow interrupts, etc. on a multitasking system.
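(For completeness: once privileged code has set PMUSERENR.EN, and the counters have actually been enabled via PMCR/PMCNTENSET, user code can read them directly, e.g. the cycle counter:)
MRC p15, 0, r0, c9, c13, 0 @ read PMCCNTR, the cycle counter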
Can you explain how the ARM mode gets changed in the case of system call handling?
I heard that an ARM mode change can happen only in a privileged mode, but in the case of a system call made while the ARM is in user mode (a non-privileged mode), how does the mode change?
Can anybody explain the whole flow for the user mode case, and also system call handling more generally (especially how the ARM mode changes)?
Thanks in advance.
In the case of system calls on ARM, normally the system call causes a SWI instruction to be executed. Anytime the processor executes a SWI (software interrupt) instruction, it goes into SVC mode, which is privileged, and jumps to the SWI exception handler. The SWI handler then looks at the cause of the interrupt (embedded in the instruction) and then does whatever the OS programmer decided it should do. The other exceptions - reset, undefined instruction, prefetch abort, data abort, interrupt, and fast interrupt - all also cause the processor to enter privileged modes.
How file handling works is entirely up to whoever wrote your operating system - there's nothing ARM specific about that at all.
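As a concrete flavour of this (the convention is OS-specific; the register assignments here are the ARM Linux EABI ones), a user-mode exit(0) boils down to:
mov r0, #0 @ first argument to the call: exit status
mov r7, #1 @ system call number (1 = exit on ARM Linux EABI)
swi #0     @ trap: the core enters SVC mode and runs the kernel's SWI handler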
You need to get a copy of the ARM ARM (Architectural Reference Manual).
http://infocenter.arm.com -> ARM Architecture -> Reference Manuals -> ARMv5 Architectural Reference Manual then download the pdf.
It used to be a single ARM ARM for the whole ARM world, but there are now too many cores and they are starting to diverge, so they split the old one off as the ARMv5 ARM and made new Architectural Reference Manuals for each of the major ARM processor families.
The Programmers Model chapter talks about the modes; it says that you can change freely among the modes other than user mode. ARM startup code will often go through a series of mode changes so that the stack pointers, etc., can be configured, then go back to system mode or user mode as needed.
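A typical startup fragment on an ARM7TDMI-class core might look like the following (the stack symbols are assumptions that would come from your linker script):
reset:
    msr cpsr_c, #0xD2        @ IRQ mode, IRQ and FIQ masked
    ldr sp, =irq_stack_top
    msr cpsr_c, #0xD1        @ FIQ mode
    ldr sp, =fiq_stack_top
    msr cpsr_c, #0xDF        @ system mode (shares its SP with user mode)
    ldr sp, =usr_stack_top
    msr cpsr_c, #0xD3        @ back to supervisor mode
    ldr sp, =svc_stack_top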
In that same chapter look at the Exceptions section, this describes the exceptions and what mode the processor switches to for each exception.
The software interrupt exception, which happens when an SWI instruction is executed, is one way to implement system calls. The processor is put in supervisor mode and, if it was in Thumb state, switches to ARM state.
There needs to be code to support that exception handler, of course. You need to check what the operating system you are running (if any) supports and what its calling convention is, etc.
Not all ARM processors work this way. The Cortex-M (ARMv7-M) does not have the same modes, the same exception table, etc. As any time you are using an ARM at this level, you need to get the ARM ARM for the family you are using and the TRM (Technical Reference Manual) for the core(s) you are using, ideally the exact revision. Even if ARM marks that TRM as having been replaced by a newer version, the chip manufacturer purchased and uses a specific revision of the core, and there can be enough differences between revisions that you will want the correct manual.
When an SVC (SWI) instruction is executed, the following behaviour takes place:
The current status (CPSR) is saved (to the supervisor SPSR)
The mode is switched to supervisor mode
Normal (IRQ) interrupts are disabled
ARM state is entered (if not already in use)
The address of the following instruction (the return address) is saved into the link register (R14) - it's worth noting that this is the link register that belongs to supervisor mode
The PC is altered to jump to address 0x00000008
An exception vector (just a branch instruction) should be at address 0x00000008, which will branch the program to another area of code used to determine which supervisor call has been made.
Determining which supervisor call has been made is usually accomplished by loading the SVC instruction itself into a register (from the LR minus one word, since the LR is pointing at the instruction after the supervisor call), clearing the top 8 bits, and using the value in the remaining 24 bits of the register to calculate an offset into a jump table, to branch to the corresponding SVC code.
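A rough sketch of that decode (label names are illustrative, and it assumes the caller was in ARM state):
swi_handler:
    ldr  r0, [lr, #-4]          @ the SVC/SWI instruction that got us here
    bic  r0, r0, #0xFF000000    @ keep the 24-bit call number
    adr  r1, svc_table
    ldr  pc, [r1, r0, lsl #2]   @ jump through the table (no bounds check shown)

svc_table:
    .word do_svc_0
    .word do_svc_1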
When the supervisor call code wishes to return to the user application, the processor needs to switch back into user mode and return to the address contained in the LR (which is only visible in supervisor mode, since that register is banked per mode). This problem is overcome using the MOVS instruction, as illustrated below:
(consider this to also be your explanation on how to change mode)
MRS R0, CPSR ; load CPSR into R0
BIC R0, R0, #&1F ; clear mode field
ORR R0, R0, #&10 ; user mode code
MSR SPSR, R0 ; store modified CPSR into SPSR
MOVS PC, LR ; context switch and branch
The MRS and MSR instructions are used to transfer content between an ARM register and the CPSR or SPSR.
The MOVS instruction is a special instruction, which operates as a standard MOV instruction, but also sets the CPSR equal to the SPSR upon branching. This allows the processor to branch back (since we're moving the LR into the PC) and change mode to the mode specified by the SPSR.
I quote from the ARM documentation available here:
http://infocenter.arm.com/help/index.jsp?topic=/com.arm.doc.dui0471c/BABDCIEH.html
When an exception is generated, the processor performs the following actions:
1. Copies the CPSR into the appropriate SPSR. This saves the current mode, interrupt mask, and condition flags.
2. Switches state automatically if the current state does not match the instruction set used in the exception vector table.
3. Changes the appropriate CPSR mode bits to:
   change to the appropriate mode, and map in the appropriate banked out registers for that mode;
   disable interrupts. IRQs are disabled when any exception occurs. FIQs are disabled when an FIQ occurs and on reset.
4. Sets the appropriate LR to the return address.
5. Sets the PC to the vector address for the exception.
Here CPSR refers to the Current Program Status Register and SPSR to the Saved Program Status Register, used to restore the state of the process that was interrupted. Thus, as seen in point 3, the processor circuitry is designed so that the hardware itself changes the mode when user mode executes a supervisor call instruction.
"I heard ARM mode change can happen only in privileged mode". You are partly right here. By partly I mean the control field of the CPSR register can be manually modified (manually modified means through code) in the privileged modes only not in the unprivileged mode (i.e. user mode). When a system call happens in the user mode it happens because of SWI instruction. An SWI instruction has inbuilt mechanism to change the mode to supervisor mode.
So to conclude, there are two ways to change the mode:
1) Explicitly through code. Only allowed in a privileged mode.
2) Implicitly through IRQ, FIQ, SWI, RESET, undefined instruction encountered, data abort, prefetch abort. This is allowed in all the modes.
Does anyone know how to enable ARM FIQ?
Other than enabling or disabling the IRQ/FIQ while you're in supervisor mode, there's no special setup you should have to do on the ARM to use it, unless the system (that the ARM chip is running in) has disabled it in hardware (based on your comment, this is not the case since you're seeing the FIQ input pin driven correctly).
For those unaware of the acronyms, FIQ is simply the last interrupt vector in the table, which means its handler is not limited to a single branch instruction as the other interrupts are. That means it can execute faster than the other IRQ handlers.
Normal IRQs are limited to a branch instruction since their code has to fit into a single word in the vector table. FIQ, because it won't overwrite any other vectors, can just run its code directly from the vector without a branch instruction (hence the "fast").
The FIQ input line is just a way for external elements to kick the ARM chip into FIQ mode and start executing the correct exception. There's nothing on the ARM itself which prevents that from happening except the CPSR.
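For illustration (the address is the classic ARM vector-table FIQ slot; the handler body is up to you), the FIQ handler can simply live at the vector:
    .org 0x1C               @ the FIQ vector, last in the table
fiq_handler:
    @ r8-r12 are banked in FIQ mode, so they can be used without saving anything
    @ ...acknowledge the source and do the minimal work here...
    subs pc, lr, #4         @ return from FIQ and restore the CPSR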
To enable FIQ in supervisor mode:
MRS r1, cpsr ; get the cpsr.
BIC r1, r1, #0x40 ; enable FIQ (ORR to disable).
MSR cpsr_c, r1 ; copy it back, control field bit update.
A similar thing can be done for normal IRQs, but using #0x80 instead of #0x40.
The FIQ can be closed off to you by the chip manufacturer using the TrustZone extensions.
TrustZone creates a Secure world and a Normal world. The Secure world has its own supervisor, user and memory space. The idea is for secure operations to be routed so they never leave the chip and cannot be traced even if you scan the pins on the bus. I think in OMAP it is used for some cryptography operations.
On reset the core starts in secure mode. It sets up the secure monitor (the gateway between the Secure and Normal worlds), and at this time FIQ can be set up to be routed to the monitor. I think it is the SCR.FIQ bit that may be set; then all FIQs ignore the value of CPSR.F and go to monitor mode. Check out the ARM ARM, but if I remember correctly, if this is happening there is no way for you to know from non-secure OS code. The monitor will then reset the Normal world registers and do an exception return with the PC set to the reset exception vector.
The core will take an interrupt to monitor mode, do its thing and return.
Sorry I can't answer you in the comments since I don't have enough reputation (you could always fix that ;) ), but I hope you see this.