I have a BIOS function I need to call from time to time on an embedded system, and using LRMI I was able to call it successfully from a user space program. Now I want to do the same from a loadable kernel module.
Is there any way to do this? Some other library maybe?
It has to do with the mode the processor is in. Once the BIOS has initialized all of the resources, the system switches into protected mode, and BIOS interrupts are no longer directly callable. To use BIOS interrupts again, you have to use v8086 mode, in which the processor emulates a 16-bit real-mode machine. You can then set your registers and invoke the interrupt from a virtual-mode program.
Here's how to get into virtual mode: http://www.brokenthorn.com/Resources/OSDev23.html
You could also try switching back into real mode, but that historically involved resetting the processor, and I don't know of a practical way to do it from a running protected-mode kernel.
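For comparison, a user-space call through LRMI, of the kind the question describes, looks roughly like the sketch below. It queries the current video mode via INT 10h/AH=0Fh; the field names follow the versions of lrmi.h I have seen and may differ slightly in yours.

```c
/* Minimal sketch of a user-space BIOS call through LRMI (run as root;
 * lrmi.h field names may differ between versions).
 * Queries the current video mode via INT 10h, AH=0Fh. */
#include <stdio.h>
#include <string.h>
#include <sys/io.h>
#include "lrmi.h"

int main(void)
{
    struct LRMI_regs r;

    /* LRMI needs access to the low I/O ports */
    ioperm(0, 1024, 1);
    iopl(3);

    if (!LRMI_init()) {
        fprintf(stderr, "LRMI_init failed\n");
        return 1;
    }

    memset(&r, 0, sizeof(r));
    r.eax = 0x0F00;                 /* AH=0Fh: get current video mode */
    if (!LRMI_int(0x10, &r)) {
        fprintf(stderr, "BIOS call failed\n");
        return 1;
    }
    printf("video mode %02x, %u columns\n",
           r.eax & 0xFF, (r.eax >> 8) & 0xFF);
    return 0;
}
```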
I am trying to write a Qemu TCG plugin to recognize if an instruction is executed in user mode or in kernel mode for a basic program.
I understand that full-system emulation is needed for this. I wrote a TCG plugin, registering two QEMU callbacks, qemu_plugin_register_vcpu_syscall_cb and qemu_plugin_register_vcpu_syscall_ret_cb. I am using the arm-softmmu target for full-system emulation (and then the corresponding qemu-system-arm binary). I can see the svc calls made in user mode, but I never get any kernel-mode instructions. Any idea what might be happening?
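For reference, the skeleton of my plugin looks roughly like this (a sketch; the callback signatures follow the qemu-plugin.h shipped with recent QEMU versions and may differ in yours):

```c
/* Sketch of the plugin setup described above. Build as a shared
 * object and load with: qemu-system-arm ... -plugin ./libsyscalls.so */
#include <stdio.h>
#include <inttypes.h>
#include <qemu-plugin.h>

QEMU_PLUGIN_EXPORT int qemu_plugin_version = QEMU_PLUGIN_VERSION;

static void vcpu_syscall(qemu_plugin_id_t id, unsigned int vcpu_index,
                         int64_t num, uint64_t a1, uint64_t a2,
                         uint64_t a3, uint64_t a4, uint64_t a5,
                         uint64_t a6, uint64_t a7, uint64_t a8)
{
    fprintf(stderr, "vcpu %u: syscall %" PRIi64 "\n", vcpu_index, num);
}

static void vcpu_syscall_ret(qemu_plugin_id_t id, unsigned int vcpu_index,
                             int64_t num, int64_t ret)
{
    fprintf(stderr, "vcpu %u: syscall %" PRIi64 " -> %" PRIi64 "\n",
            vcpu_index, num, ret);
}

QEMU_PLUGIN_EXPORT int qemu_plugin_install(qemu_plugin_id_t id,
                                           const qemu_info_t *info,
                                           int argc, char **argv)
{
    qemu_plugin_register_vcpu_syscall_cb(id, vcpu_syscall);
    qemu_plugin_register_vcpu_syscall_ret_cb(id, vcpu_syscall_ret);
    return 0;
}
```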
I have a question I've been asking myself while taking an operating systems course.
If I type some C code into my text editor or IDE and build it, the compiler translates the code into machine code.
Then, as I understand it, when I run the program the OS allocates memory for it, which is done by kernel code.
And if my code performs I/O, kernel code executes to handle the interrupt.
So... which part is the user mode code, then?
In the ordinary course of events, any code you write is 'user mode code'. Kernel mode code is executed only when you make a system call and control jumps from your user code to the operating system.
Obviously, if you're writing kernel code or loadable kernel modules, then things are different: that code will be kernel mode code. But most people, most of the time, are only writing user mode code.
Kernel mode versus user mode actually reflects the mode the processor is running in.
With modern operating systems, code runs on the processor in kernel mode only if it is trusted by the operating system; all other code runs in user mode.
The functional difference, under modern operating systems, is that kernel mode code runs in a single (virtual) address space that represents all system resources, so all functions in kernel mode can affect each other directly. For example, any action by a kernel mode driver can directly affect the functioning of the operating system itself and of any other kernel mode driver. (The specific implementation details vary somewhat between operating systems, for example between Windows, Linux, BSD, etc., but the basic principles are the same.)
Which means that, if you are writing code that will execute within the internal workings of the operating system or within a kernel mode driver, it might be said to be kernel mode code. Otherwise, it will be user mode code. Code that attempts some action that can only be performed in kernel mode will be prevented from doing so by the processor itself, unless the processor is in kernel mode. The operating system mediates when the processor enters kernel mode, which is why code needs to be recognised by the operating system (or installed, in the case of kernel mode drivers) in order to do things that can only be done in kernel mode. User mode code can't arbitrarily escalate the processor to kernel mode without the help of some code that is already recognised by the operating system.
Practically, modern operating systems also supply a set of functions (e.g. in an API) that can be called from user mode. A lot of those functions are, themselves, executed solely in user mode. Some, however, result in the processor being switched into kernel mode to perform some specific actions, and then the processor is switched back to user mode by the time control returns to the caller. Which code within the OS itself executes in user mode or kernel mode depends both on the design of the operating system and on administrative settings (e.g. only suitably privileged users, a.k.a. administrators, can install kernel mode drivers).
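To make the boundary concrete, here is a minimal sketch (Linux assumed): strlen() runs entirely in user mode, while write() is one of those API functions that switches the processor into kernel mode and back.

```c
#include <string.h>
#include <unistd.h>

int main(void)
{
    const char *msg = "hello\n";

    /* Pure user-mode work: strlen() is ordinary library code;
     * the processor never leaves user mode here. */
    size_t len = strlen(msg);

    /* write() is a thin wrapper around the write system call: the
     * processor switches to kernel mode, the kernel performs the I/O,
     * and the processor is back in user mode when write() returns. */
    write(STDOUT_FILENO, msg, len);
    return 0;
}
```

Running the program under strace shows exactly which calls cross into the kernel: write() appears in the trace, strlen() does not.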
My question is a bit theoretical. I want to add disk r/w support to my operating system. I know how to do it in protected mode, but implementing ATAPI + ATA + FDC drivers (so my OS can boot from any device) would take too long. I'm considering two options: make my OS bootable only from a pendrive (so I only have to handle pendrive disk r/w, which would take far less time), or jump to real mode, read a sector, and jump back to protected mode. But AFAIK the code doing this has to sit in conventional memory (IP can be at most 0xFFFF), so the CPU would crash, since my kernel resides near the 1 MB mark. I use GRUB as the bootloader and test my code in VMware/VirtualBox.
My question is: should I implement a USB mass storage driver, or do it the way DOS extenders do? If the latter, could you show me some C code that, for example, drops back to real mode, waits for a keypress, then returns to protected mode? I can't pack the data into the kernel. At the moment my OS supports the keyboard, advanced terminal functions, and some utilities from the C standard library.
I am working on a project where I have a router with an ARMv7 processor (Cortex-A15) running OpenWRT. I have a shell on the router and can load kernel modules with insmod.
My goal is to write a kernel module in C that changes the HVBAR register and then executes the hvc instruction to get the processor into hyp mode.
This is a scientific project in which I want to check whether I can place my own hypervisor on a running system. But before I start writing my own hypervisor, I want to check if and how I can bring the processor into hyp mode.
According to this picture taken from the ARMv7-A manual (section B.9.3.4), the system must be in the non-secure state, must not be in user mode, and the SCR.HCE bit must be set to 1.
My question is how I can prepare the processor from a C kernel module with inline assembly and then execute the hvc instruction. I want to do this from a kernel module because then I start in PL1. This pseudocode describes what I want to achieve (a rough kernel-module sketch follows below):
call smc // to get into monitor mode
set SCR.HCE to 1 // to enable the hvc instruction
set SCR.NS to 1 // to switch the system to non-secure
call hvc #0 // call the hvc instruction to produce a hypervisor exception
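In kernel-module form, a literal translation of that pseudocode would look roughly like the sketch below. (Names and structure are only illustrative; the SCR writes can only succeed from secure PL1, which the smc call only reaches if the secure monitor firmware cooperates.)

```c
/* Hypothetical sketch only: a literal translation of the pseudocode.
 * Writing SCR requires secure PL1; whether smc gets you there at all
 * is decided by the secure monitor firmware, so on most systems this
 * will simply hang or fault. */
#include <linux/module.h>
#include <linux/init.h>

static int __init hyp_probe_init(void)
{
	unsigned long scr;

	/* Trap to the secure monitor; what happens next is firmware-defined */
	asm volatile(".arch_extension sec\n\tsmc #0" ::: "memory");

	/* Only meaningful if we came back in secure PL1:
	 * set SCR.NS (bit 0) and SCR.HCE (bit 8) */
	asm volatile("mrc p15, 0, %0, c1, c1, 0" : "=r"(scr));
	scr |= (1UL << 0) | (1UL << 8);
	asm volatile("mcr p15, 0, %0, c1, c1, 0" :: "r"(scr));

	/* Raise a hypervisor call, vectoring through HVBAR */
	asm volatile(".arch_extension virt\n\thvc #0" ::: "memory");

	return 0;
}

static void __exit hyp_probe_exit(void)
{
}

module_init(hyp_probe_init);
module_exit(hyp_probe_exit);
MODULE_LICENSE("GPL");
```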
The easiest way to elevate privilege is to start off in the needed privilege mode already: you have a root shell. Is the boot chain verified? Could you replace the bootloader or kernel, so that your code naturally runs in PL2 (HYP) mode? If so, that's probably the easiest way to do it.
If you can't replace the relevant part of the boot chain, the details of writing the rootkit depend a lot on information about your system that the question leaves out: In which mode is Linux started? Is KVM support enabled and active? Was PL2 initialized? Was it locked? Is there "secure" firmware you can exploit?
The objective is always the same: have HVBAR point at some code you control and execute an hvc. Depending on your environment, solutions may range from spraying as much RAM as possible with your code and hoping that (perhaps after some reboots) an uninitialized HVBAR points at an instruction you control, to inhibiting KVM from running and using the kernel's early hypervisor stub to install yourself instead.
Enumerating such exploits is a bit out of scope for a StackOverflow answer; this is rather dissertation material. Indeed, there's a doctoral thesis exactly on this topic:
Strengthening system security on the ARMv7 processor architecture with hypervisor-based security mechanisms
I'm planning to run an RTOS, e.g. NuttX, as a process of another RTOS, e.g. FreeRTOS, such that FreeRTOS tasks and NuttX (running as a FreeRTOS task) would co-exist.
Would this be a feasible implementation, given that the underlying hardware is an ARM Cortex-A8 single-core processor? What changes would be required if the implementation is not based on the VM concept?
Your requirement, in a nutshell, is to have a GUEST RTOS work completely within the realms of an underlying HOST RTOS. The first answer would be to use the virtualization extensions, but the Cortex-A8 does not have them, which rules this option out. Without virtualization extensions you have to resort to one of the following methods, both of which require a lot of code changes.
Option 1 - Port your GUEST OS APIs
Take all your GUEST OS APIs and replace their implementations so that they mimic the required API behavior by making use of the HOST OS's APIs. Technically your GUEST OS will no longer have a scheduler and will be reduced to a porting layer on top of your HOST OS. This method is used by companies that need their software solutions to work across multiple RTOSes: they write their software solution against one RTOS, and when a customer asks them to run the software on the customer's RTOS, they simply port the RTOS API implementations onto it.
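As an illustration of such a porting layer, here is a minimal sketch that implements a POSIX-style guest semaphore API (the kind a NuttX-like guest would expect) on top of FreeRTOS primitives; the guest_sem_* names are made up for illustration:

```c
/* Sketch: a POSIX-style guest semaphore API implemented on FreeRTOS
 * primitives. Requires configUSE_COUNTING_SEMAPHORES in
 * FreeRTOSConfig.h. */
#include "FreeRTOS.h"
#include "semphr.h"

typedef struct {
    SemaphoreHandle_t handle;   /* backing FreeRTOS semaphore */
} guest_sem_t;

int guest_sem_init(guest_sem_t *sem, unsigned int value)
{
    sem->handle = xSemaphoreCreateCounting((UBaseType_t)-1, value);
    return sem->handle != NULL ? 0 : -1;
}

int guest_sem_wait(guest_sem_t *sem)
{
    /* Blocking now happens on the HOST scheduler; the guest no
     * longer schedules anything itself. */
    return xSemaphoreTake(sem->handle, portMAX_DELAY) == pdTRUE ? 0 : -1;
}

int guest_sem_post(guest_sem_t *sem)
{
    return xSemaphoreGive(sem->handle) == pdTRUE ? 0 : -1;
}
```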
Option 2 - Para-virtualization
Your guest RTOS's user and kernel space should both work inside the user space of your host RTOS. Let us break the problem into a few parts.
Handling Privileged Instructions
When your guest OS, while executing in "kernel mode", tries to execute a privileged instruction, it will cause an undefined-instruction abort. You have to modify the undefined-instruction abort handler of your host kernel to trap/intercept these instructions and act on them. Every single privileged instruction has to be trapped/intercepted and 'simulated'. There are some instructions that don't trap but still need to be handled by modifying code. E.g., if your kernel code reads CPSR to confirm the execution mode, CPSR will say the mode is user mode. (This instruction doesn't cause an instruction abort, so you cannot follow the trap-and-simulate model; the only way is to identify, search for, and replace these instructions in your GUEST OS codebase.)
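A minimal sketch of that trap-and-emulate path is shown below; guest_regs_t and the virtual_sctlr field are made-up names, and only one instruction pattern (a user-mode read of SCTLR via MRC p15, which does trap as undefined) is decoded:

```c
#include <stdint.h>

/* Made-up view of the guest CPU held by the host; real layouts differ. */
typedef struct {
    uint32_t r[16];          /* saved guest general-purpose registers */
    uint32_t virtual_sctlr;  /* the SCTLR value the guest believes in */
} guest_regs_t;

/* Called from the HOST's undefined-instruction abort handler with the
 * faulting instruction word and the saved guest register frame. */
int host_emulate_undef(uint32_t insn, guest_regs_t *g)
{
    /* MRC p15, 0, Rt, c1, c0, 0 (read SCTLR) is privileged and traps
     * as an undefined instruction in user mode; emulate it here. */
    if ((insn & 0x0FFF0FFF) == 0x0E110F10) {
        uint32_t rt = (insn >> 12) & 0xF;
        g->r[rt] = g->virtual_sctlr;  /* hand back the guest's view */
        g->r[15] += 4;                /* step past the emulated insn */
        return 0;                     /* handled */
    }
    /* ...decode and emulate every other privileged instruction... */
    return -1;  /* not recognised: deliver an undef abort to the guest */
}
```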
Memory Management Unit
If a privilege violation happens, a data abort will be raised to your host OS; it has to be forwarded to your guest OS.
Interrupts
You would have to replace your GUEST OS's interrupt-controller driver with dummy SVC calls into your HOST OS that set up the interrupts (see the sketch below).
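A sketch of what such a dummy driver might reduce to is shown below; the SVC numbering and register convention are invented for illustration:

```c
/* Sketch: the guest's interrupt-controller driver reduced to SVC
 * calls into the host. Call numbers and the EABI-style register
 * convention (number in r7, argument in r0) are made up. */
#include <stdint.h>

#define HOST_SVC_IRQ_ENABLE   0x80
#define HOST_SVC_IRQ_DISABLE  0x81

static inline void host_hypercall(uint32_t num, uint32_t arg)
{
    register uint32_t r7 asm("r7") = num;   /* host call number */
    register uint32_t r0 asm("r0") = arg;   /* first argument */
    asm volatile("svc #0" : "+r"(r0) : "r"(r7) : "memory");
}

/* What used to poke the interrupt controller directly now asks the host */
void guest_irq_enable(uint32_t irq)  { host_hypercall(HOST_SVC_IRQ_ENABLE, irq); }
void guest_irq_disable(uint32_t irq) { host_hypercall(HOST_SVC_IRQ_DISABLE, irq); }
```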
Timers
You would have to modify your GUEST timer driver to account for the ticks 'lost' while HOST OS tasks were running (sketched below).
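Sketched below is one way the guest tick handler might resynchronize; host_get_ticks() and guest_advance_tick() are made-up names (on FreeRTOS, xTaskGetTickCount() could back the former):

```c
#include <stdint.h>

/* Assumed host/guest services: host_get_ticks() returns the host's
 * monotonic tick count; guest_advance_tick() is the guest kernel's
 * normal per-tick handler. */
extern uint32_t host_get_ticks(void);
extern void guest_advance_tick(void);

static uint32_t last_seen_tick;

/* Call this whenever the guest task is scheduled back in */
void guest_timer_resync(void)
{
    uint32_t now = host_get_ticks();

    /* Replay every tick that elapsed while host tasks were running,
     * so guest timeouts and delays keep advancing at the right rate */
    while (last_seen_tick != now) {
        guest_advance_tick();
        last_seen_tick++;
    }
}
```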
Hardware Drivers
All other hardware drivers used by your GUEST OS have to be modified to allow device sharing between GUEST and HOST.
Schedulers
Your GUEST OS scheduler now works inside (and thus at the mercy of) another scheduler, the HOST OS scheduler.
It is feasible.
You need to separate resources: memory, timers, IRQs, etc., so that the "host" OS (FreeRTOS) doesn't even "know" about the resources used by the "guest" OS (NuttX).
For the Cortex-A8 you may want to use IRQs for FreeRTOS and FIQs for the guest OS. That lets you avoid rewriting the IRQ controller driver (but again, make sure the host does not touch the FIQ once the guest OS has started).
Some changes may also be required for context switching: you need to distinguish host-to-host, host-to-guest (and guest-to-host), and guest-to-guest context switches.
Though this is not a direct answer to your question: address this problem at the design level. Separate out the code that depends on the hardware (create an API for it) and make the application-level code independent of the underlying OS or runtime, i.e., rather than depending on a particular implementation, let it depend on the API.
Wherever needed, port the hardware/OS-dependent code to the underlying OS/runtime.
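For example, such an abstraction API might look like the following sketch (all osal_* names invented for illustration), with one implementation file per OS/runtime:

```c
/* osal.h: application code depends only on these declarations; each
 * OS/runtime gets its own implementation (osal_freertos.c,
 * osal_nuttx.c, ...). */
#include <stddef.h>
#include <stdint.h>

typedef void *osal_task_t;
typedef void *osal_mutex_t;

osal_task_t  osal_task_create(void (*entry)(void *), void *arg,
                              size_t stack_bytes, int priority);
osal_mutex_t osal_mutex_create(void);
int          osal_mutex_lock(osal_mutex_t m);
int          osal_mutex_unlock(osal_mutex_t m);
void         osal_sleep_ms(uint32_t ms);
```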