I have a doubt with respect to the address space.
I thought that 4 GB of RAM is split into two halves: kernel space (1 GB) and user space (3 GB).
1] Does RAM also maintain stack, heap, code and data sections, like the hard disk does?
2] Isn't the running process given a boundary in RAM within which the stack, data, code and heap have to grow?
3] My thought was that the stack, heap, code and data segments would all be in the consecutive address space given to the process at the time of process creation.
4] How does the CPU get the correct address of the process to execute, given that processes are not contiguous in physical memory?
No, only the virtual memory address space is split in two. Physical memory, the RAM in the machine, contains an entirely random collection of blocks that map to virtual memory addresses, from both operating system pages and user program pages. Much like the image shows, although it is a bit misleading in showing the OS pages at the bottom.
That mapping constantly changes; a page fault is the essential mechanism for getting a virtual memory page mapped to RAM. It is triggered when a program accesses a virtual memory page that isn't present in RAM yet. As needed, RAM pages may be unmapped to make room; their content is either discarded or written to the pagefile. Code is usually discardable, since it can be read back from the executable file; data usually isn't.
Some pages in RAM are special: they contain code and data used by drivers, and they are page-locked. This is required when the driver handles device interrupts; the code and data used by the interrupt handler must be present in RAM so the interrupt can be handled, since a page fault can't be afforded at such a critical time. That is the probable reason the image was drawn like that.
What is virtual memory, and how does it differ from physical memory (RAM)? It says that physical memory is stored in something on the motherboard, while virtual memory is stored on disk.
Somewhere it also says that virtual space is used only when the physical memory is filled, which confused me a lot.
Then, why does Windows use virtual memory? Is it because RAM is small and not designed for big storage, so virtual memory is used to store bigger things?
The next thing is about the address. Since virtual memory is on disk, it shouldn't share the addresses of physical memory, so it has independent addresses. Is that right?
And,
When writing memory of another process, why recommend using VirtualAlloc instead of HeapAlloc?
Is it true that virtual memory is process-dependent and physical memory is shared across processes?
"Virtual memory" means there is a valid address space, which does not map to any particular physical memory or storage, hence virtual. In context of modern common operating systems, each process has its own virtual memory space, with overlapping virtual memory addresses.
This address space is divided into pages for easier management (example size 4 KB). Each valid page can be in 3 different states:
not stored physically (assumed to be all 0). If process writes to this kind of page, it needs to be given a page of physical memory (by OS, see below) so value can be stored.
Mapped to physical memory, meaning some page-size area in computers RAM stores the contents, and they can be directly used by the process.
Swapped out to disk (might be a swap file), in order to free physical RAM pages (done automatically by the operating system). If the process accesses the page (read or write), it needs to be loaded to page in RAM first (see above).
Only when virtual memory page is mapped to physical RAM page, is there something there. In other cases, if process accesses that page, there is a CPU exception, which transfers control to operating system. OS then needs to either map that virtual memory page to RAM (possibly needing to free some RAM first by swapping current data out to swap file, or terminating some application if out of all memory) and load the right data into it, or it can terminate the application (address was not in valid range, or is read-only but process tries to write).
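On Linux you can watch this demand-paging behavior directly: a fresh anonymous mapping costs almost no physical memory until it is written to. A minimal sketch (Linux-specific; it assumes the second field of /proc/self/statm is the resident set size in pages, and a 4 KB page size):

```c
/* Hedged sketch: demand paging in action on Linux.
 * Assumes /proc/self/statm, whose second field is the resident
 * set size in pages, and a 4 KB page size. */
#include <stdio.h>
#include <stdlib.h>
#include <sys/mman.h>

static long resident_pages(void)
{
    long size = 0, resident = 0;
    FILE *f = fopen("/proc/self/statm", "r");
    if (!f)
        return -1;
    fscanf(f, "%ld %ld", &size, &resident);
    fclose(f);
    return resident;
}

int main(void)
{
    size_t len = 64 * 1024 * 1024;          /* 64 MB of virtual address space */
    char *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (p == MAP_FAILED) {
        perror("mmap");
        return 1;
    }

    printf("resident pages after mmap:  %ld\n", resident_pages());

    /* Touch every page: each first write triggers a page fault and the
     * kernel maps a zeroed physical frame into the region. */
    for (size_t i = 0; i < len; i += 4096)
        p[i] = 1;

    printf("resident pages after touch: %ld\n", resident_pages());
    munmap(p, len);
    return 0;
}
```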
The same page of memory can also be mapped to several places at once, for example with shared memory, so the same data can be accessed by several processes at once (the virtual address is probably different in each process, so pointer variables can't be shared).
Another special case of virtual memory use is mapping a regular file on disk into virtual memory (the same thing that happens with the swap file, but now controlled by a normal application process). The OS then takes care of actually reading bytes (in page-sized chunks) from disk and writing changes back; the process can just access the memory like any other memory.
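As a small illustration of that last case, here is a minimal POSIX-style sketch of a file-backed mapping (the file name is just a placeholder and error handling is trimmed): the process reads the file's bytes as ordinary memory while the kernel pages them in on demand.

```c
/* Minimal sketch of mapping a regular file into memory on Linux/POSIX. */
#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void)
{
    int fd = open("/etc/hostname", O_RDONLY);   /* placeholder file */
    struct stat st;
    if (fd < 0 || fstat(fd, &st) < 0 || st.st_size == 0)
        return 1;

    /* The kernel creates a file-backed mapping; no data is read yet. */
    char *data = mmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
    if (data == MAP_FAILED)
        return 1;

    /* The first access faults each page in from the file. */
    fwrite(data, 1, st.st_size, stdout);

    munmap(data, st.st_size);
    close(fd);
    return 0;
}
```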
Every modern multi-tasking general-purpose operating system uses virtual memory, because the CPUs they run on support it, and because it solves a whole bunch of problems: memory fragmentation, transparent swapping to disk, memory protection... These could be solved differently, but virtual memory is the way it is done today.
Physical memory is shared between processes the same way the computer's power supply is shared, or the CPU is shared. It is part of the physical computer. A normal process never handles actual physical memory addresses; all it sees is virtual memory, which may be mapped to different physical locations.
The contents of virtual memory are not normally shared, except when they are (when using shared memory for example).
Not sure what you mean by "when writing memory of another process", so I can't answer that.
Virtual memory can essentially be thought of as a per-process virtual address space that is mapped to physical addresses. In the case of x86 there is a register, CR3, that points to the translation table for the current process. When allocating new memory the OS will allocate physical memory, which may not even be contiguous, and then set a free contiguous virtual region to point to that physical memory. Whenever the CPU accesses any virtual memory it uses the translation table pointed to by CR3 to convert the virtual address to the actual physical address.
More Info
https://en.m.wikipedia.org/wiki/Control_register#CR3
https://en.m.wikipedia.org/wiki/Page_table
To quote Wikipedia:
In computing, virtual memory (also virtual storage) is a memory management technique that provides an "idealized abstraction of the storage resources that are actually available on a given machine" which "creates the illusion to users of a very large (main) memory."
Because virtual memory is an illusory memory (so, non-existent), some other computer resource besides RAM is used. In this case, that resource is the disk, because it has a lot of space, more than RAM, where the OS can keep its VM data.
Somewhere it also says that virtual space is used only when the physical memory is filled, which confused me a lot.
It shouldn't. VM uses the disk, and I/O with the disk is much slower than I/O with RAM. This is why physical memory is preferred and VM is used only when physical memory is not enough.
Then, why does Windows use virtual memory? Is it because RAM is small and not designed for big storage, so virtual memory is used to store bigger things?
This is one of the main reasons, yes. In the past (the 70s), computer memory was very expensive, so workarounds had to be conceived.
Virtual memory is quite a complex topic for me. I am trying to understand it. Here is my understanding for a 32-bit system where, for example, the RAM is just 2 GB. I have tried reading many links, and I am not confident at the moment. I would like you to help me clear up my concepts. Please acknowledge my points, and please answer where you feel I am wrong. I also have a confused-points section. So, here starts the summary.
Every process thinks it is the only one running. It can access the 4 GB of memory - the virtual address space.
When a process accesses a virtual address, it is translated to a physical address via the MMU.
This MMU is part of the CPU - it is hardware.
When the MMU cannot translate the address to a physical one, it raises a page fault.
On a page fault, the kernel is notified. The kernel checks the VM area structs. If it can find the address - the data may be on disk - it will do some page-in/page-out and get this memory into RAM.
Now the MMU will try again and will succeed this time.
In case the kernel cannot find the address, it will raise a signal. For example, an invalid access will raise a SIGSEGV.
Confused points:
Is the page table maintained in the kernel? Does this VM area struct have a page table?
How can the MMU not find the address in physical RAM? Let's say it translates to some wrong address in RAM. The code would still execute, but at a bad address. How does the MMU ensure that it is reading the right data? Does it consult the kernel VM areas every time?
Is the mapping table - virtual to physical - inside the MMU? I have read that it is maintained by each individual process. If it is inside a process, why can't I see it?
Or, if it is in the MMU, how does the MMU generate the address - is it: segment + 12-bit shift -> page frame number, and then adding the offset (bits 0 to 11) -> gives the physical address?
Does it mean that, for a 32-bit architecture, with this calculation in mind, I can determine the physical address from a virtual address?
cat /proc/pid_value/maps shows me the current mappings of the VM areas. Basically, it reads the VM area structs and prints them. That means this is important, but I am not able to fit this piece into the complete picture. Is the VM area struct generated when the program is executed? Does the VM area come into the picture only when the MMU cannot translate the address, i.e. on a page fault? When I print the VM areas, they display the address range, permissions, the backing file, and an offset. I am sure this file is the one on the hard disk and the offset is into that file.
The high-mem concept is that the kernel cannot directly access memory regions greater than about 1 GB. Thus, it needs page table entries to map them indirectly, loading a temporary mapping for the address. Does HIGHMEM come into the picture every time? User space can directly translate addresses via the MMU, so in what scenario does the kernel really want to access high memory? I believe kernel drivers will mostly be using kmalloc, which returns a direct-mapped (physical base + offset) address; in that case no extra mapping is really required. So, the question is: in what scenario does the kernel need to access high memory?
Does the processor have to come with MMU support? Can processors without MMU support not run Linux?
Is the page table maintained in the kernel? Does this VM area struct have a page table?
Yes to the first; not exactly to the second: each process has an mm_struct, which contains a list of vm_area_structs (which represent abstract, processor-independent memory regions, aka mappings), and a field called pgd, which is a pointer to the processor-specific page table (which contains the current state of each page: valid, readable, writable, dirty, ...).
The page table doesn't need to be complete; the OS can generate each part of it from the VMAs.
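Very roughly, the relationship between those structures looks like the sketch below. These are not the real kernel definitions (those live in include/linux/mm_types.h) and the field set is heavily abbreviated; it is only meant to show how the VMAs and the page table pointer hang off each process:

```c
/* Simplified sketch of the Linux structures involved; NOT the real
 * kernel definitions (see include/linux/mm_types.h for those). */
typedef unsigned long pgd_t;         /* stand-in for the real pgd_t type   */
struct file;                         /* backing file, opaque here          */

struct vm_area_struct {
    unsigned long vm_start;          /* first address of the mapping       */
    unsigned long vm_end;            /* first address past the mapping     */
    unsigned long vm_flags;          /* VM_READ, VM_WRITE, VM_EXEC, ...    */
    struct file  *vm_file;           /* backing file, or NULL if anonymous */
    struct vm_area_struct *vm_next;  /* older kernels: next VMA in a list  */
};

struct mm_struct {
    struct vm_area_struct *mmap;     /* the VMAs (mappings) of this process */
    pgd_t *pgd;                      /* top of the hardware page table      */
};
```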
How can the MMU not find the address in physical RAM? Let's say it translates to some wrong address in RAM. The code would still execute, but at a bad address. How does the MMU ensure that it is reading the right data? Does it consult the kernel VM areas every time?
The translation fails, e.g. because the page was marked as invalid, or a write access was attempted against a read-only page.
Is the mapping table - virtual to physical - inside the MMU? I have read that it is maintained by each individual process. If it is inside a process, why can't I see it?
Or, if it is in the MMU, how does the MMU generate the address - is it: segment + 12-bit shift -> page frame number, and then adding the offset (bits 0 to 11) -> gives the physical address?
Does it mean that, for a 32-bit architecture, with this calculation in mind, I can determine the physical address from a virtual address?
There are two kinds of MMUs in common use. One of them only has a TLB (Translation Lookaside Buffer), which is a cache of the page table. When the TLB doesn't have a translation for an attempted access, a TLB miss is generated, the OS does a page table walk, and puts the translation in the TLB.
The other kind of MMU does the page table walk in hardware.
In either case, the OS maintains a page table per process; this maps virtual page numbers to physical frame numbers. This mapping can change at any moment: when a page is paged in, the physical frame it is mapped to depends on the availability of free memory.
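As a toy model of that per-process mapping (purely illustrative: a flat, single-level table in C, whereas real page tables are multi-level and also carry permission, dirty and accessed bits):

```c
/* Toy, single-level model of a per-process page table: maps a Virtual
 * Page Number (VPN) to a Physical Frame Number (PFN). Real page tables
 * are multi-level and store permissions, dirty/accessed bits, etc. */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define PAGE_SHIFT 12                      /* 4 KB pages                  */
#define NUM_PAGES  (1u << 20)              /* 32-bit VA space / 4 KB      */

struct pte {
    uint32_t pfn;                          /* physical frame number       */
    bool     present;                      /* is a physical frame mapped? */
};

/* One table per process; consulted on a TLB miss. */
static struct pte page_table[NUM_PAGES];

/* Translate, or report a page fault the OS must service. */
static bool translate(uint32_t vaddr, uint32_t *paddr)
{
    uint32_t vpn    = vaddr >> PAGE_SHIFT;
    uint32_t offset = vaddr & ((1u << PAGE_SHIFT) - 1);

    if (!page_table[vpn].present)
        return false;                      /* page fault */

    *paddr = (page_table[vpn].pfn << PAGE_SHIFT) | offset;
    return true;
}

int main(void)
{
    uint32_t pa;
    page_table[0x12345].present = true;    /* pretend the OS mapped a frame */
    page_table[0x12345].pfn = 0x00042;

    if (translate(0x12345678u, &pa))
        printf("0x12345678 -> physical 0x%08X\n", pa);
    else
        printf("page fault\n");
    return 0;
}
```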
cat /proc/pid_value/maps shows me the current mappings of the VM areas. Basically, it reads the VM area structs and prints them. That means this is important, but I am not able to fit this piece into the complete picture. Is the VM area struct generated when the program is executed? Does the VM area come into the picture only when the MMU cannot translate the address, i.e. on a page fault? When I print the VM areas, they display the address range, permissions, the backing file, and an offset. I am sure this file is the one on the hard disk and the offset is into that file.
To a first approximation, yes. Beyond that, there are many reasons why the kernel may decide to fiddle with a process's memory, e.g. if there is memory pressure it may decide to page out some rarely used pages from some random process. User space can also manipulate the mappings via mmap(), execve() and other system calls.
The high-mem concept is that the kernel cannot directly access memory regions greater than about 1 GB. Thus, it needs page table entries to map them indirectly, loading a temporary mapping for the address. Does HIGHMEM come into the picture every time? User space can directly translate addresses via the MMU, so in what scenario does the kernel really want to access high memory? I believe kernel drivers will mostly be using kmalloc, which returns a direct-mapped (physical base + offset) address; in that case no extra mapping is really required. So, the question is: in what scenario does the kernel need to access high memory?
Totally unrelated to the other questions. In summary, high memory is a hack to be able to access lots of memory on a machine with a limited address space.
Basically, the kernel has a limited address space reserved for it (on x86, a typical user/kernel split is 3 GB/1 GB; processes can run in user space or kernel space - a process runs in kernel space when a syscall is invoked - and to avoid having to switch the page table on every context switch, on x86 the address space is typically split between user space and kernel space). So the kernel can directly access up to ~1 GB of memory. To access more physical memory, some indirection is involved, which is what high memory is all about.
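When the kernel does need to touch a highmem page, it creates a temporary mapping. A rough kernel-side sketch of the classic interface (this will only build inside the kernel tree; newer kernels prefer kmap_local_page(), and the helper name below is made up for illustration):

```c
/* Kernel-side sketch only (will not build in userspace): temporarily
 * map a highmem page into the kernel's address space, use it, unmap. */
#include <linux/highmem.h>

static void copy_byte_from_highmem_page(struct page *page, unsigned char *out)
{
    void *vaddr = kmap(page);        /* create a temporary kernel mapping    */
    *out = *(unsigned char *)vaddr;  /* the page is now directly addressable */
    kunmap(page);                    /* tear the temporary mapping down      */
}
```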
Does the processor have to come with MMU support? Can processors without MMU support not run Linux?
Laptop/desktop processors come with an MMU. x86 has supported paging since the 386.
Linux, especially the variant called µClinux, supports processors without MMUs (!MMU). Many embedded systems (ADSL routers, ...) use processors without an MMU. There are some important restrictions, among them:
Some syscalls don't work at all, e.g. fork().
Some syscalls work with restrictions and non-POSIX-conforming behavior, e.g. mmap().
The executable file format is different, e.g. bFLT or ELF-FDPIC instead of ELF.
The stack cannot grow, and its size has to be set at link time.
When a program is loaded, the kernel will first set up the kernel VM areas for that process, is that right? These kernel VM areas actually record where the program sections are, in memory or on the HDD. Then the whole story of updating the CR3 register, and the page walk or TLB, comes into the picture, right? So, whenever there is a page fault, the kernel will update the page table by looking at the kernel virtual memory areas, is that right? But they say the kernel VM areas keep updating. How is that possible, since cat /proc/pid_value/maps also keeps updating - the map won't be constant from start to end? So the real information is available in the kernel VM area structs, is it? This is the actual information about where each section of the program lies - it could be on the HDD or in physical memory (RAM)? So this is filled during process loading, as the first job? The kernel does the page-in/page-out on a page fault, and will update the kernel VM areas, is that right? So it should also know the entire program's location on the HDD for page-in/page-out, right? Please correct me here. This is in continuation to my first question in the previous comment.
When the kernel loads a program, it will set up several VMAs (mappings), according to the segments in the executable file (which on ELF files you can see with readelf --segments): the text/code segment, data segment, etc. During the lifetime of the program, additional mappings may be created by the dynamic/runtime linker, by the memory allocator (malloc(), which may also extend the data segment via brk()), or directly by the program via mmap(), shm_open(), etc.
The VMAs contain the information necessary to generate the page table, e.g. they tell whether that memory is backed by a file or by swap (anonymous memory). So yes, the kernel will update the page table by looking at the VMAs. The kernel will page memory in in response to page faults, and will page memory out in response to memory pressure.
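Since /proc/<pid>/maps keeps coming up, here is a small Linux-only sketch that shows a brand-new VMA appearing after an mmap() call; it simply dumps /proc/self/maps before and after the mapping is created:

```c
/* Sketch: show that mmap() adds a new entry (a new VMA) to
 * /proc/self/maps. Linux-specific, minimal error handling. */
#include <stdio.h>
#include <sys/mman.h>

static void dump_maps(const char *label)
{
    char line[512];
    FILE *f = fopen("/proc/self/maps", "r");
    printf("---- %s ----\n", label);
    while (f && fgets(line, sizeof line, f))
        fputs(line, stdout);
    if (f)
        fclose(f);
}

int main(void)
{
    dump_maps("before mmap");

    void *p = mmap(NULL, 1 << 20, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (p == MAP_FAILED)
        return 1;

    dump_maps("after mmap");   /* one extra anonymous mapping shows up */
    munmap(p, 1 << 20);
    return 0;
}
```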
Using x86 without PAE as an example:
On x86 with no PAE, a linear address can be split into 3 parts: the top 10 bits index an entry in the page directory, and the middle 10 bits index an entry in the page table pointed to by that page directory entry. The page table entry may contain a valid physical frame number: the top 20 bits of a physical address. The bottom 12 bits of the virtual address are an offset into the page and go untranslated into the physical address.
Each time the kernel schedules a different process, the CR3 register is written with a pointer to the page directory of that process. Then, on each memory access, the MMU tries to find a translation cached in the TLB; if it doesn't find one, it looks for one by doing a page table walk starting from CR3. If it still doesn't find one, a page fault exception is raised, the CPU switches to ring 0 (kernel mode), and the kernel tries to find one in the VMAs.
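The bit-slicing for non-PAE x86 can be written out explicitly. This sketch only does the address arithmetic on an arbitrary example address; the actual walk through the page directory and page table is done by the MMU (user space cannot read the page tables directly):

```c
/* Sketch: how a 32-bit x86 (non-PAE) linear address splits into page
 * directory index, page table index, and page offset. This is only the
 * arithmetic; the real walk is done by the MMU (or the kernel). */
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint32_t vaddr = 0xB7F12ABCu;                 /* arbitrary example address */

    uint32_t pd_index = (vaddr >> 22) & 0x3FF;    /* top 10 bits    */
    uint32_t pt_index = (vaddr >> 12) & 0x3FF;    /* middle 10 bits */
    uint32_t offset   =  vaddr        & 0xFFF;    /* bottom 12 bits */

    printf("vaddr    = 0x%08X\n", vaddr);
    printf("PD index = %u\n", pd_index);
    printf("PT index = %u\n", pt_index);
    printf("offset   = 0x%03X\n", offset);

    /* If the walk yielded physical frame number `pfn`, the physical
     * address would be (pfn << 12) | offset. */
    return 0;
}
```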
Also, I believe this reading from CR3, page directory -> page table -> page frame number -> memory address, is all done by the MMU. Am I correct?
On x86, yes, the MMU does the page table walk. On other systems (e.g. MIPS), the MMU is little more than the TLB, and on TLB miss exceptions the kernel does the page table walk in software.
Though this is not going to be the best answer, I would like to share my thoughts on the confused points.
1. Is the page table maintained in the kernel?...
Yes, the kernel maintains the page tables. In fact it maintains nested page tables, and the top of the page tables is stored in top_pmd (pmd, I suppose, stands for page mapping directory). You can traverse all the page tables using this structure.
2. How can the MMU not find the address in physical RAM?...
I am not sure I understood the question. But if, because of some problem, a faulty instruction is executed or something outside the instruction area is accessed, you generally get an undefined instruction exception resulting in an undefined exception abort. If you look at the crash dumps, you can see it in the kernel log.
3. Is the mapping table - virtual to physical - inside the MMU?...
Yes. The MMU is SW + HW. The HW is things like the TLB, and the mapping tables are stored there. For instructions, that is for the code section, I always converted between physical and virtual addresses and they always matched. And almost all the time it matches for data sections as well.
4. cat /proc/pid_value/maps shows me the current mappings of the VM areas...
This is used more for analyzing the virtual addresses of user-space stacks. As you know, virtually all user-space programs can have 4 GB of virtual address space, so unlike in the kernel, if I say 0xc0100234, you cannot directly go and point at the instruction. You need this mapping and the virtual address to locate the instruction, based on the data you have.
5. The high-mem concept is that the kernel cannot directly access memory...
High mem corresponds to user-space memory (someone correct me if I am wrong). When the kernel wants to read some data from an address in user space, you will be accessing HIGHMEM.
6. Does the processor have to come with MMU support? Can processors without MMU support not run Linux?
The MMU, as I mentioned, is HW + SW. So mostly it comes with the chipset, and the SW is generally architecture-dependent. You can disable the MMU in the kernel config and build it; I have never tried it though. Mostly, these days, all chipsets have it, but small boards, I think, disable the MMU. I am not entirely sure though.
As all these are conceptual questions, I may be lacking some knowledge and be wrong in places. If so, others please correct me.
I have been struggling with similar questions indirectly in my other posts. Now my understanding is better, and thus my questions are better. So I want to summarize the facts here. This example is based on an x86 32-bit system.
Please say yes/no to my points. If no, then please explain.
The MMU will look into the CR3 register to find the process's page directory base address.
The CR3 register is set by the kernel.
Now the MMU, after reading the page directory base address, will offset to the page table index (calculated from the VA); from there it will read the page frame number, and then it will apply the offset (from the VA) within that page frame. This gives the physical memory address. All of this is done in the MMU, right? I don't know, when the MMU is disabled, who does all this circus? If it is software, it will be slow, right?
I know that a page fault occurs when the MMU cannot resolve the address. The kernel is informed. The kernel will update the page table based on reading the kernel virtual memory area structs. Am I correct?
Keeping point 4 in mind: does it mean that before executing any process, perhaps during process loading, the kernel first fills the kernel virtual memory area structs? For example, where each section of memory will be - BSS, code, DS, etc. It could be that some sections are in RAM and some are on the storage device. When sections of the program are moved from storage to main memory, I am assuming the kernel updates the kernel virtual memory area structs. Am I correct here? So it is the kernel that keeps close track of a program's location - whether on the storage device or in RAM - via the device's inode number and the file offset.
Sequence-wise: during process loading (maybe by a loader program), the kernel will populate the data in the kernel virtual memory area structs. It will also set the CR3 register. Now the process starts executing, and it will initially get frequent page faults. Now the VM area structs will be updated (if required) and then the page table. Now the MMU will succeed in translating addresses. So, when I say a process is accessing memory, it is the MMU which is accessing the memory on behalf of the process. This is all about user space. Kernel space is entirely different: kernel space doesn't need the MMU, it can map directly to physical addresses - low mem. For high mem (to access user space from kernel space), it will do a temporary page table update internally; this is a separate, temporary page table for the kernel. The kernel space doesn't need the MMU. Am I correct?
I don't know, when the MMU is disabled, who does all this circus?
Nobody. All this circus is intended to do two things: translate the virtual address you gave it into a real address, and, if it can't do that, abort the instruction entirely and start executing a routine addressed from an architecturally pre-defined address - see "page fault" there for the basic one.
When the MMU is shut off, no translation is done, and the address you gave it is fed directly down the CPU's address-processing pipe, just as any address the MMU might have translated it to would have been.
So, when I say a process is accessing memory, it is the MMU which is accessing the memory on behalf of the process.
You're on the right track here: the MMU is mediating the access, but it isn't doing the access. It's doing only what you described before, translating it. What's generally called the load/store unit gets it next, and it's the one that handles talking to whatever holds the closest good copy of the data at that address - it "does the access".
The kernel space doesn't need the MMU, it can map directly to physical addresses
That depends on how you define "need". It can certainly shut it off, but it almost never does. First, it has to talk to user space, and the MMU has to be running to translate what user space has into addresses the load/store unit can use. Second, the flexibility and protection provided by the MMU are very valuable; they're not discarded without a really compelling reason. I know at least one OS will (or would, it's been a while) run some bulk copies MMU-off, but that's about it.
I am getting a bit confused about the address space mappings of a program in memory.
Here is a link:
http://duartes.org/gustavo/blog/post/anatomy-of-a-program-in-memory
It deals with the virtual address mapping of a program in execution, and marks 1 GB for kernel mode and 3 GB for user mode,
and mentions the use of page tables for access to memory segments.
Linux uses pages which are only 4 KB on x86 architectures, if I am not wrong; so how can a process use 3 GB of mapped address space (as given in the link, each process gets 4 GB)? This is a bit counter-intuitive to me, or is my understanding wrong somewhere?
I don't know if I understand you correctly or not. It seems you think the processor uses the 4 KB page to manage the mapped file.
In fact, this 4 KB is just a page-sized memory buffer. When the system tries to read data from any file into memory, it will first read the content into the page buffer and move the content from the buffer to the real stack or heap later. This 4 KB page buffer is not used to limit the mapped memory address space.
The mapped address is something like a HANDLE, I think. It is stored in one part of memory. When you use this handle to get the data, the system will translate it to a real location on disk. Then the system will read the mapped data into the page buffer and finally move it into memory.
For the details, please check the following link:
http://duartes.org/gustavo/blog/post/page-cache-the-affair-between-memory-and-files
This blog also has several other good articles about memory management. Complicated, but really nice.
I have a doubt: when each process has its own separate page table, why is a system-wide page table required? Also, if the page table maps virtual addresses to physical addresses, then I think two processes may map to the same physical address, because all processes have the same virtual address space. Any good link on system-wide page tables would also solve my problem.
Each process has its own independent virtual address space - two processes can have virtpage 1 map to different physpages. Processes can participate in shared memory, in which case they each have some virtpage mapping to the same physpage.
The virtual address space of a process can be used to map virtpages to physpages, to memory mapped files, devices, etc. Virtpages don't have to be wired to RAM. A process could memory-map an entire 1GB file - in which case, its physical memory usage might only be a couple megs, but its virtual address space usage would be 1GB or more. Many processes could do this, in which case the sum of virtual address space usage across all processes might be, say, 40 GB, while the total physical memory usage might be only, say, 100 megs; this is very easy to do on 32-bit systems.
Since lots of processes load the same libraries, the OS typically puts the libs in one set of read-only executable pages, and then loads mappings in the virtpage space for each process to point to that one set of pages, to save on physical memory.
Processes may have virtpage mappings that don't point to anything, for instance if part of the process's memory got written to the page file - the process will try to access that page, the CPU will trigger a page fault, and the OS will see the page fault and handle it by suspending the process, reading the pages back into RAM from the page file and then resuming the process.
There are typically 3 types of page faults. The first type is when the CPU does not have the virtual-physical mapping in the TLB - the processor invokes the page-fault handler in the OS, the OS puts the mapping into the processor for that process, then the process re-runs the offending instruction. These happen thousands of times a second.
The second type is when the OS has no mapping because, say, the memory for the process has been swapped to disk, as explained above. These happen infrequently on a lightly loaded machine, but more often as memory pressure increases, up to hundreds or thousands of times per second, maybe even more.
The third type is when the OS has no mapping because the mapping does not exist - the process is trying to access memory that does not belong to it. This generates a segfault, and typically the process is killed. These aren't supposed to happen often; they depend solely on how well written the software on the machine is, and have nothing to do with scheduling or machine load.
Even if you already knew all that, I figured I'd throw it in for the community.
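To make the third case concrete, here is a tiny, deliberately broken C program (Linux/POSIX; printing from a signal handler is not strictly safe, it is only for the demo). Accessing an address with no mapping raises a page fault that the kernel turns into SIGSEGV, and the handler reports the faulting address via siginfo:

```c
/* Sketch: trigger the third kind of fault on purpose. Accessing an
 * address with no mapping makes the kernel deliver SIGSEGV; the handler
 * prints the faulting address and exits. */
#include <signal.h>
#include <stdio.h>
#include <unistd.h>

static void on_segv(int sig, siginfo_t *info, void *ctx)
{
    (void)sig; (void)ctx;
    /* printf is not async-signal-safe; fine for a demo, not for real code. */
    printf("SIGSEGV at address %p\n", info->si_addr);
    _exit(1);
}

int main(void)
{
    struct sigaction sa = { 0 };
    sa.sa_sigaction = on_segv;
    sa.sa_flags = SA_SIGINFO;
    sigaction(SIGSEGV, &sa, NULL);

    volatile int *bad = (int *)0xDEADBEEF;  /* no VMA covers this address */
    *bad = 42;                              /* page fault -> SIGSEGV      */
    return 0;
}
```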