Confusion between virtual and physical addresses in the MIPS TLB (C)

I have some C code running in RTL simulation; I use the MIPS I6400 CPU. The C code is just simple code that reads and writes some subsystem. For example,
I tried to write to this address: 0x001e400000 (a physical address). When the CPU accessed this address I got a TLB exception, because this address lies in a mapped area. After a lot of research I found that I need to translate this address from a physical address to a virtual one, so I replaced it with 0xffffffffbe400000 (kseg1). Now I am able to write a value to this address, but when I try to read from this address (after the write) or from another address, I get an exception in kseg2.
Do you have an idea why the write step passes correctly but the read step generates an exception?

Related

How is a C program divided into pages and then put inside RAM frames according to this scenario?

Look at this scheme and then answer the following questions:
As you see, there's a simple C program roughly converted into assembly instructions. For the sake of simplicity, let's suppose each instruction is 3 bytes long and that the page and frame size is also 3 bytes.
Is this flow of procedures right?
Is the program really scattered into pages and then put in RAM frames like that?
If so, how is the system capable of associating specific pages with the specific segments they belong to?
I read in an OS book that segmentation and paging can coexist. Is that related to this scenario?
Please answer all these questions and also try to clarify the confusion regarding the coexistence of segmentation and paging. Good reference material about this subject is also very much appreciated, specifically a concrete real example with a C program and not abstract stuff.
Is this flow of procedures right?
Is the program really scattered into pages and then put in RAM frames like that?
In principle: Yes.
Please note that the sections ("Code", "Data", "Stack" and "Heap") are only names of ranges in the memory. For the CPU, there is no "border" between "Data" and "Stack".
If so, how is the system capable of associating specific pages ...
I'm simplifying a lot in the next lines:
Reading memory works the following way: The CPU sends some address to the memory (e.g. RAM) and it sends a signal that data shall be read. The memory then sends the data stored at this address back to the CPU.
Until the early 1990s, there were computers whose CPU and so-called MMU were two separate chips. The MMU was put between the CPU and the memory. When the CPU sends an address to the memory, the address (cpu_address) is actually sent to the MMU. The MMU now performs an operation that is similar to the following C code operation:
int page_table[];
new_address = (page_table[cpu_address / PAGE_SIZE] * PAGE_SIZE)
              + (cpu_address % PAGE_SIZE);
... the MMU sends the address new_address to the memory.
The MMU is built in a way that the operating system can write to the page_table.
Let's say the instruction LOAD R1,[XXX1] is stored at the "virtual address" 1234. This means: The CPU thinks that the instruction is stored at address 1234. The CPU has executed the instruction STORE [XXX2],5 and wants to execute the next instruction. For this reason it must read the next instruction from the memory, so it sends the address 1234 to the memory.
However, the address is not received by the memory but by the MMU. The MMU now calculates: 1234/3 = 411, 1234%3 = 1. Now the MMU looks up the page table; let's say that entry #411 is 56. Then it calculates: 56 * 3 + 1 = 169. It sends the address 169 to the memory so the memory will not return the data stored in address 1234, but the data stored in address 169.
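To make the arithmetic concrete, here is a small sketch (my own illustration, using the toy 3-byte page size from the example above) of the translation the MMU performs:
#define PAGE_SIZE 3

unsigned translate(unsigned cpu_address, const unsigned page_table[])
{
    unsigned page   = cpu_address / PAGE_SIZE;    /* 1234 / 3 = 411 */
    unsigned offset = cpu_address % PAGE_SIZE;    /* 1234 % 3 = 1   */
    return page_table[page] * PAGE_SIZE + offset; /* 56 * 3 + 1 = 169 */
}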
Modern desktop CPUs (but not small microcontrollers) have the CPU and the MMU in one chip.
You may ask: Why is that done?
Let's say a program is compiled in a way that it must be loaded to address 1230 and you have another program that must also be loaded to address 1230. Without MMU it would not be possible to load both programs at the same time.
With an MMU, the operating system may load one program to address 100 and the other one to address 200. The OS will then modify the data in the page_table in a way the CPU accesses the currently running program when accessing address 1230.
... to the specific segments they belong to?
When compiling the program, the linker decides at which address some item (variable or function) is found. In the simplest case, it simply places the first function to a fixed address (for example always 1230) and appends all other functions and variables.
So if the code of your program is 100 bytes long, the first variable is found at address 1230 + 100 = 1330.
The executable file (created by the linker) contains some information that says that the code must be loaded to address 1230.
When the operating system loads the program, it checks its length. Let's say your program is 120 bytes long. It calculates 120/3=40, so 40 pages are needed.
The OS searches for 40 pages of RAM which are not used. Then it modifies the page_table in a way that address 1230 actually accesses the first free page, 1233 accesses the second free page, 1236 accesses the third free page and so on.
Now the OS loads the program to virtual address 1230-1349 (addresses sent from the CPU to the MMU); because of the MMU, the data will be written to the free pages in RAM.
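A hedged sketch of that loading step, reusing the toy numbers (page size 3, load address 1230, 40 pages); page_table and free_frames are made-up names for illustration only:
#define PAGE_SIZE    3
#define LOAD_ADDRESS 1230
#define NUM_PAGES    40

extern unsigned page_table[];            /* virtual page -> physical frame */
extern unsigned free_frames[NUM_PAGES];  /* unused frames found by the OS  */

void map_program(void)
{
    for (unsigned i = 0; i < NUM_PAGES; i++)
        /* virtual page 410 + i (i.e. address 1230 + 3*i) -> i-th free frame */
        page_table[LOAD_ADDRESS / PAGE_SIZE + i] = free_frames[i];
}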
I read in an OS book that segmentation and paging can coexist. Is that related to this scenario?
In this case, the word "segmentation" describes a special feature of 16- and 32-bit x86 CPUs. At least I don't know of other CPUs supporting that feature. In modern x86 operating systems, this feature is no longer used.
Your example seems not to use segmentation (your drawing may be interpreted differently).
If you run a 16-bit Windows program on an old 32-bit Windows version (recent versions do not support 16-bit programs any more), paging and segmentation are used at the same time.
specifically a concrete real example with a C program and not abstract stuff
Unfortunately, a concrete real example would be even more difficult to understand because the page sizes are much larger than 3 bytes (4 kilobytes on x86 CPUs) ...

Confusion about virtual memory

Consider the sample below.
char* p = (char*)malloc(4096);
p[0] = 'a';
p[1] = 'b';
The 4 KB of memory is allocated by calling malloc(). The OS handles the memory request from the user program in user space. First, the OS requests a memory allocation from RAM, then RAM gives a physical memory address to the OS. Once the OS receives the physical address, it maps the physical address to a virtual address and then returns the virtual address - which is the address of p - to the user program.
I wrote some values ('a' and 'b') to a virtual address and they were really written into main memory (RAM). I'm confused that I wrote the values to a virtual address, not a physical address, but they were really written to main memory (RAM) even though I didn't take care of that myself.
What happens behind the scenes? What does the OS do for me? I couldn't find relevant material in some books (OS, system programming).
Could you give some explanation? (Please omit the details about caches, for easier understanding.)
A detailed answer to your question will be very long - and too long to fit here at StackOverflow.
Here is a very simplified answer to a little part of your question.
You write:
I'm confused that I wrote the values to a virtual address, not a physical address, but they were really written to main memory
Seems you have a very fundamental misunderstanding here.
There is no memory directly "behind" a virtual address. Whenever you access a virtual address in your program, it is automatically translated to a physical address and the physical address is then used for access in main memory.
The translation happens in HW, i.e. inside the processor in a block called "MMU - Memory management unit" (see https://en.wikipedia.org/wiki/Memory_management_unit).
The MMU holds a small but very fast look-up table that tells how a virtual address is to be translated into a physical address. The OS configures this table but after that, the translation happens without any SW being involved and - just to repeat - it happens whenever you access a virtual memory address.
The MMU also takes some kind of process ID as input in order to do the translation. This is needed because two different processes may use the same virtual address, but they will need translation to two different physical addresses.
As mentioned above, the MMU look-up table (TLB) is small, so the MMU can't hold all the translations for a complete system. When the MMU can't do a translation, it raises an exception of some kind so that some OS software can be triggered. The OS will then re-program the MMU so that the missing translation gets into the MMU and the process execution can continue. Note: some processors can do this in HW, i.e. without involving the OS.
You have to understand that virtual memory is virtual: it can be more extensive than physical RAM, so virtual addresses are mapped onto physical ones rather than being identical to them.
Your programs use virtual memory addresses, and it is your OS that decides what to keep in RAM. If RAM fills up, the OS will use some space on the hard drive to continue working.
But the hard drive is slower than RAM, which is why your OS uses a page-replacement algorithm (for example FIFO, LRU or clock) to exchange pages of memory between the hard drive and RAM, depending on the work being done, so that the data most likely to be used stays in fast memory. To swap pages back and forth, the OS does not need to modify virtual memory addresses.
A summary that glosses over a lot of things
You want to understand how virtual memory works. There are lots of online resources about this; here's one I found that seems to do a fair job of explaining it without getting too crazy about technical details, but also doesn't gloss over important terms.
https://searchstorage.techtarget.com/definition/virtual-memory
For Linux on x86 platforms, the assembly equivalent of asking for memory is basically a call into the kernel using int 0x80 with some parameters for the call set into some registers. The interrupt is set at boot by the OS to be able to answer for the request. It is set in the IDT.
An IDT descriptor for 32 bits systems looks like:
struct IDTDescr {
    uint16_t offset_1;  // offset bits 0..15
    uint16_t selector;  // a code segment selector in GDT or LDT
    uint8_t  zero;      // unused, set to 0
    uint8_t  type_attr; // type and attributes, see below
    uint16_t offset_2;  // offset bits 16..31
};
The offset is the address of the entry point of the handler for that interrupt. So interrupt 0x80 has an entry in the IDT, and this entry points to the address of the handler (also called an ISR). When you call malloc(), the C library will, when it needs more memory, issue a system call (such as brk or mmap) to the kernel. The system call returns the address of the allocated memory in some register. I'm pretty sure as well that this system call will actually use the sysenter x86 instruction to switch into kernel mode. This instruction is used alongside an MSR register to securely jump into kernel mode from user mode, at the address specified in the MSR (Model Specific Register).
Once in kernel mode, all instructions can be executed and access to all hardware is unlocked. To fulfill the request, the OS doesn't "ask RAM for memory". RAM isn't aware of what memory the OS uses; it just blindly answers to the pins asserted on its DIMM and stores information. At boot, the OS uses the ACPI tables built by the BIOS to determine how much RAM there is and what devices are connected to the computer, so that it avoids writing to some MMIO (memory-mapped I/O) region. Once the OS knows how much RAM is available (and what parts are usable), it uses algorithms to determine what parts of the available RAM every process should get.
When you compile C code, the compiler (and linker) will determine the address of everything right at compilation time. When you launch that executable the OS is aware of all memory the process will use. So it will set up the page tables for that process accordingly. When you ask for memory dynamically using malloc(), the OS determines what part of physical memory your process should get and changes (during runtime) the page tables accordingly.
As to paging itself, you can always read some articles. A short version is 32-bit paging. In 32-bit paging you have a CR3 register on each CPU core. This register contains the physical address of the base of the page global directory (PGD). The PGD contains the physical addresses of the bases of several page tables, which themselves contain the physical addresses of the bases of physical pages (https://wiki.osdev.org/Paging). A virtual address is split into 3 parts: the 12 bits to the right (LSB) are the offset within the physical page, the 10 bits in the middle are the index into the page table, and the 10 MSB are the index into the PGD.
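As a small illustration of that split (assuming plain 32-bit non-PAE paging; the helper names are mine, not kernel functions):
#include <stdint.h>

static inline uint32_t pgd_index(uint32_t va)   { return va >> 22; }          /* top 10 bits    */
static inline uint32_t pt_index(uint32_t va)    { return (va >> 12) & 0x3FF; } /* middle 10 bits */
static inline uint32_t page_offset(uint32_t va) { return va & 0xFFF; }         /* low 12 bits    */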
So when you write
char* p = (char*)malloc(4096);
p[0] = 'a';
p[1] = 'b';
you create a pointer of type char* and make a call that asks for 4096 bytes of memory. The OS puts the first address of that chunk of memory into a certain conventional register (which depends on the system and OS). You should not forget that the C language is just a convention. It is up to the operating system and its toolchain to implement that convention with a compatible compiler. That means the compiler knows what register and what interrupt number to use (for the system call) because it was written specifically for that OS. The compiler will thus take the address stored in this certain register and store it into this pointer of type char* during runtime. On the second line you tell the compiler that you want to take the char at that first address and make it an 'a'. On the third line you make the second char a 'b'. In the end, you could write an equivalent:
char* p = (char*)malloc(4096);
*p = 'a';
*(p + 1) = 'b';
p is a variable containing an address. The + operation on a pointer increments this address by the size of what the pointer points to. In this case, the pointer points to a char, so the + operation increments the pointer by one char (one byte). If it were pointing to an int, then it would be incremented by 4 bytes (32 bits). The size of the pointer itself depends on the system: if you have a 32-bit system, the pointer is 32 bits wide (because it contains an address); on a 64-bit system the pointer is 64 bits wide. A static memory equivalent of what you did is
char p[4096];
p[0] = 'a';
p[1] = 'b';
Now the compiler will know at compile time where this array will be. It is static memory. Even then, p can be used as a pointer to the first char of that array. It means you could write
char p[4096];
*p = 'a';
*(p + 1) = 'b';
It would have the same result.
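If it helps, here is a tiny self-contained example of that scaling (plain standard C, nothing OS-specific):
#include <stdio.h>

int main(void)
{
    int a[2];
    char *pc = (char *)a;
    int  *pi = a;

    /* "+ 1" advances a pointer by sizeof(*pointer) bytes */
    printf("char* step: %td byte(s)\n", (char *)(pc + 1) - (char *)pc); /* 1 */
    printf("int*  step: %td byte(s)\n", (char *)(pi + 1) - (char *)pi); /* typically 4 */
    return 0;
}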
First, the OS requests a memory allocation from RAM,…
The OS does not have to request memory. It has access to all of memory the moment it boots. It keeps its own database of which parts of that memory are in use for what purposes. When it wants to provide memory for a user process, it uses its own database to find some memory that is available (or does things to stop using memory for other purposes and then make it available). Once it chooses the memory to use, it updates its database to record that it is in use.
… then RAM gives a physical memory address to the OS.
RAM does not give addresses to the OS except that, when starting, the OS may have to interrogate the hardware to see what physical memory is available in the system.
Once the OS receives the physical address, it maps the physical address to a virtual address…
Virtual memory mapping is usually described as mapping virtual addresses to physical addresses. The OS has a database of the virtual memory addresses in the user process, and it has a database of physical memory. When it is fulfilling a request from the process to provide virtual memory and it decides to back that virtual memory with physical memory, the OS will inform the hardware of what mapping it chose. This depends on the hardware, but a typical method is that the OS updates some page table entries that describe what virtual addresses get translated to what physical addresses.
I wrote some values ('a' and 'b') to a virtual address and they were really written into main memory (RAM).
When your process writes to virtual memory that is mapped to physical memory, the processor will take the virtual memory address, look up the mapping information in the page table entries or other database, and replace the virtual memory address with a physical memory address. Then it will write the data to that physical memory.

What is the canonical way to execute code directly from a QEMU device?

I'm modeling a particular evaluation board, which has a leon3 processor and several banks of MRAM mapped to specific addresses. My goal is to start qemu-system-sparc using my bootloader ELF, and then jump to the base address of a MRAM bank to begin executing bare-metal programs therein. To this end, I have been able to successfully run my bootloader and jump to the first instruction, but QEMU immediately stops and exits without reporting any error/trap. I can also run the bare-metal programs in isolation by passing them in ELF format as a kernel to qemu-system-sparc.
Short version: Is there a canonical way to set up a device such that code can be executed from it directly? What steps do I need to take when compiling that code to allow it to execute correctly?
I modeled the MRAM as a device with a MemoryRegion, along with the appropriate read and write operations to expose a heap-allocated array with my program. In my board code (modified version of qemu/hw/sparc/leon3.c), writes to the MRAM address are mapped to the MemoryRegion of the device. Using printfs, I am reporting reads and writes in the style of the unimplemented device (qemu/hw/misc/unimp.c), and I have verified that I am reading and writing to the device correctly.
Unfortunately, this did not work with respect to running the code on the device. I can see the read immediately after the bootloader jumps to the base address of my device, but the instruction read doesn't actually do anything. The bootloader uses a void function pointer, which is tied to the address of the MRAM device to induce a jump.
Another approach I tried is creating an alias to my device starting from address 0; I thought perhaps that my binary has all its addresses set relative to zero, so by mapping writes from addresses [0, MRAM_SIZE) as an alias to my device base address, the code will end up reading the corresponding instructions in the device MemoryRegion.
This approach failed an assert in memory.c:
static void memory_region_add_subregion_common(MemoryRegion *mr,
                                                hwaddr offset,
                                                MemoryRegion *subregion)
{
    assert(!subregion->container);
    subregion->container = mr;
    subregion->addr = offset;
    memory_region_update_container_subregions(subregion);
}
What do I need to do to coerce QEMU to execute the code in my MRAM device? Do I need to produce a binary with absolute addresses?
Older versions of QEMU were simply unable to handle execution from anything other than RAM or ROM, and attempting to do so would give a "qemu: fatal: Trying to execute code outside RAM or ROM" error. QEMU 3.1 and later fixed this limitation, and now can execute code from anywhere -- though execution from a device will be much much slower than executing from RAM.
You mention that you "modeled the MRAM as a device with a MemoryRegion, along with the appropriate read and write operations to expose a heap-allocated array". This sounds like it is probably the wrong approach -- it will work but be very slow. If the MRAM appears to the guest as being like RAM, then model it as RAM (i.e. with a RAM MemoryRegion). If it's like RAM for reading but writes need to do something other than just-write-to-the-memory (or need to do that some of the time), then model it using a "romd" region, the same way the existing pflash devices do. Nonetheless, modelling it as a device with pure read and write functions should work, it'll just be horribly slow.
The assertion you've run into is the one that says "you can't put a memory region into two things at once" -- the 'subregion' you've passed in is already being used somewhere else, but you've tried to put it into a second container. If you have a MemoryRegion that you need to have appear in two places in the physical memory map, then you need to: create the MemoryRegion; create an alias MemoryRegion that aliases the real one; map the actual MemoryRegion into one place; map the alias into the other. There are plenty of examples of this in existing board models in QEMU.
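As a rough sketch of that pattern (the names mram, MRAM_BASE, MRAM_ALIAS_BASE and MRAM_SIZE are placeholders, not taken from your board code):
MemoryRegion *sysmem = get_system_memory();
MemoryRegion *mram = g_new(MemoryRegion, 1);
MemoryRegion *mram_alias = g_new(MemoryRegion, 1);

/* model the MRAM as RAM so the guest can execute from it at full speed */
memory_region_init_ram(mram, NULL, "mram", MRAM_SIZE, &error_fatal);
memory_region_add_subregion(sysmem, MRAM_BASE, mram);

/* second mapping: always map an alias, never the same region twice */
memory_region_init_alias(mram_alias, NULL, "mram.alias", mram, 0, MRAM_SIZE);
memory_region_add_subregion(sysmem, MRAM_ALIAS_BASE, mram_alias);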
More generally, you need to figure out what the evaluation board hardware actually is, and then model that. If the eval board has the MRAM visible at multiple physical addresses, then yes, use an alias MR. If it doesn't, then the problem is somewhere else and you need to figure out what's actually happening, not try to bodge around it with aliases that don't exist on the real hardware. QEMU's debug logging (various -d suboptions, plus -D file to log to a file) can be useful for checking what the emulated CPU is really doing in this early bootup phase -- but watch out as the logs can be quite large and they are sometimes tricky to interpret unless you know a little about QEMU internals.

Data transfer between host and endpoint using PCIe

For the last few days I have been trying to implement data transfer between the host and an endpoint, but I am unable to get it working. I have tried reading the configuration space using some calls (pci_read_long); it successfully reads data like vendor_id, device_id, etc.
In the configuration space the BAR (base address register) stores either a memory address or an I/O address, depending on bit 0. Coming to my problem: I read the register at offset 0x10. For example, let us consider the value 0xFE000000. What I do is clear the last four bits, then complement the bits and finally add 1; the result indicates the size of the region.
My problems are:
Whenever I write to a particular address location (0xFE000000) using pci_write_long, I get a segmentation fault.
Why am I getting a segmentation fault while writing? Can anyone please help me resolve this issue, and is my calculation of the memory size (above steps) correct?
About the BAR: does it represent the memory base address?
Coming to my code:
int c = pci_write_long(dev, 0x10, 0xFFFFFFFF); // write all 1's to that location
c = pci_read_long(dev, 0x10);                  // read the value back
printf("c = %x\n", c);
for (i = 0; i < 4; i++)                        // clear the last four bits
    c = c & ~(1 << i);
printf("c = %x\n", c);
c = ~c;                                        // 1's complement
c = c + 1;                                     // add one
printf("c = %x\n", c);                         // size of the region
int ch1 = pci_write_byte(dev, c, 0xf);         // I get the segmentation fault here
printf("ch1 = %x\n", ch1);
ch1 = pci_read_byte(dev, c);                   // read back the data at that location
printf("final read = %x\n", ch1);
Is this the correct way of implementing it or not? If it is not correct, can you provide any related information or links?
Whenever I write to a particular address location (0xFE000000) using pci_write_long, I get a segmentation fault.
pci_write_long() is used to access the endpoint's config space. The address you are trying to access is where the endpoint's internal registers are memory-mapped, and you can simply use pointers to do that. Just mmap() the address and access it like a normal buffer, provided you are aware of the device's internal register structure.
You can see that the first write of 0xFFFFFFFF was successful because you were writing to offset 0x10, which lies in the config space. The next write, which caused the segmentation fault, goes to offset 0xFE000000 from the config space base, which is invalid.
One more point, in case you are on ARM: does this memory, i.e. 0xFE000000, lie within the PCIe controller's memory range? If that is not the case, then you are accessing invalid memory.
Why am I getting a segmentation fault while writing? Can anyone please help me resolve this issue, and is my calculation of the memory size (above steps) correct?
The reason for the crash is given above. The size calculation algorithm looks correct. You can simply match your calculated size against the BAR size mentioned in the EP's datasheet to be sure.
About the BAR: does it represent the memory base address?
It represents where your Endpoint's or Device's internal registers/memory is mapped in your host address space region. You can use the same address from your host machine to access the device's internal registers as if you are accessing a local memory buffer. Just mmap() the address before accessing if your architecture has virtual memory in place.
EDIT 1 - I can see you are writing all 1's and then reading the value back. You are not storing any address first, which should be an address from your host address space. I believe there is some misunderstanding you need to clear up regarding BAR usage.
1 - Get the BAR size.
2 - Write a host address to the BAR where you want the EP's internal registers to be mapped. (For ARM, the address should be from the PCIe controller's address range; check the manual of your board.)
3 - Enable memory transaction access by writing 0x3 to the command register.
4 - From your C code, do an mmap() of that address (see the sketch below).
5 - Access that address now.
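For step 4, a hedged sketch of what the mmap() access could look like on Linux (BAR_PHYS and BAR_SIZE are placeholders for the values obtained above; on many systems mapping /sys/bus/pci/devices/<BDF>/resource0 is preferable to /dev/mem, and root privileges are required):
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

#define BAR_PHYS 0xFE000000UL  /* BAR value with the low flag bits masked off */
#define BAR_SIZE 0x1000000UL   /* size obtained from the sizing procedure     */

int main(void)
{
    int fd = open("/dev/mem", O_RDWR | O_SYNC);
    if (fd < 0) { perror("open"); return 1; }

    void *map = mmap(NULL, BAR_SIZE, PROT_READ | PROT_WRITE, MAP_SHARED, fd, BAR_PHYS);
    if (map == MAP_FAILED) { perror("mmap"); close(fd); return 1; }

    volatile uint32_t *regs = map;
    regs[0] = 0xf;                                     /* write the first 32-bit register */
    printf("read back: 0x%x\n", (unsigned)regs[0]);    /* read it back                    */

    munmap(map, BAR_SIZE);
    close(fd);
    return 0;
}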

Does the virtual memory area struct only come into the picture when there is a page fault?

Virtual memory is quite a complex topic for me. I am trying to understand it. Here is my understanding for a 32-bit system where, for example, RAM is just 2 GB. I have tried reading many links, and I am not confident at the moment. I would like you people to help me clear up my concepts. Please acknowledge my points, and also please answer where you feel I am wrong. I also have a section of confused points. So, here starts the summary.
Every process thinks it is the only one running. It can access the 4 GB of memory - the virtual address space.
When a process accesses a virtual address, it is translated to a physical address via the MMU.
The MMU is part of the CPU - it is hardware.
When the MMU cannot translate the address to a physical one, it raises a page fault.
On a page fault, the kernel is notified. The kernel checks the VM area structs. If it can find the address there - the data may be on disk - it will do some page-in/page-out and bring this memory into RAM.
Now the MMU will try again and will succeed this time.
In case the kernel cannot find the address, it will raise a signal; for example, an invalid access will raise SIGSEGV.
Confused points.
Is the page table maintained in the kernel? Does this VM area struct have a page table?
How can the MMU fail to find the address in physical RAM? Let's say it translates to some wrong address in RAM; the code will still execute, but at a bad address. How does the MMU ensure that it is reading the right data? Does it consult the kernel VM areas every time?
Is the mapping table - virtual to physical - inside the MMU? I have read that it is maintained by each individual process. If it is inside a process, why can't I see it?
Or, if it is in the MMU, how does the MMU generate the address - is it segment + 12-bit shift -> page frame number, and then adding the offset (the low-order bits) -> gives a physical address?
Does it mean that, for a 32-bit architecture, with this calculation in mind, I can determine the physical address from a virtual address?
cat /proc/pid_value/maps shows me the current VM area mappings. Basically, it reads the vm_area structs and prints them, so they must be important, but I am not able to fit this piece into the complete picture. Is the vm_area struct generated when the program is executed? Does the VM area only come into the picture when the MMU cannot translate an address, i.e. on a page fault? When I print the VM areas, they display the address range, permissions, the mapped file descriptor, and an offset. I am sure this file descriptor refers to a file on the hard disk and the offset is into that file.
The high-mem concept is that the kernel cannot directly access memory beyond roughly 1 GB, so it needs page table entries to map it indirectly; it will temporarily set up some page table entries to map such an address. Does high mem come into the picture every time? User space can translate its addresses directly via the MMU. In what scenario does the kernel really want to access high mem? I believe kernel drivers mostly use kmalloc, which gives a direct (base + offset) address, so no extra mapping is really required there. So, the question is: in what scenario does the kernel need to access high mem?
Does the processor specifically need to come with MMU support? Can processors that don't have an MMU not run Linux?
Is the page table maintained in the kernel? Does this VM area struct have a page table?
Yes to the first, not exactly to the second: each process has an mm_struct, which contains a list of vm_area_structs (which represent abstract, processor-independent memory regions, aka mappings), and a field called pgd, which is a pointer to the processor-specific page table (which contains the current state of each page: valid, readable, writable, dirty, ...).
The page table doesn't need to be complete, the OS can generate each part of it from the VMAs.
How can the MMU fail to find the address in physical RAM? Let's say it translates to some wrong address in RAM; the code will still execute, but at a bad address. How does the MMU ensure that it is reading the right data? Does it consult the kernel VM areas every time?
The translation fails, e.g. because the page was marked as invalid, or a write access was attempted against a readonly page.
Is the mapping table - virtual to physical - inside the MMU? I have read that it is maintained by each individual process. If it is inside a process, why can't I see it?
Or, if it is in the MMU, how does the MMU generate the address - is it segment + 12-bit shift -> page frame number, and then adding the offset (the low-order bits) -> gives a physical address?
Does it mean that, for a 32-bit architecture, with this calculation in mind, I can determine the physical address from a virtual address?
There are two kinds of MMUs in common use. One of them only has a TLB (Translation Lookaside Buffer), which is a cache of the page table. When the TLB doesn't have a translation for an attempted access, a TLB miss is generated, the OS does a page table walk, and puts the translation in the TLB.
The other kind of MMU does the page table walk in hardware.
In any case, the OS maintains a page table per process; this maps virtual page numbers to physical frame numbers. This mapping can change at any moment: when a page is paged in, the physical frame it is mapped to depends on the availability of free memory.
cat /proc/pid_value/maps shows me the current VM area mappings. Basically, it reads the vm_area structs and prints them, so they must be important, but I am not able to fit this piece into the complete picture. Is the vm_area struct generated when the program is executed? Does the VM area only come into the picture when the MMU cannot translate an address, i.e. on a page fault? When I print the VM areas, they display the address range, permissions, the mapped file descriptor, and an offset. I am sure this file descriptor refers to a file on the hard disk and the offset is into that file.
To a first approximation, yes. Beyond that, there are many reasons why the kernel may decide to fiddle with a process' memory, e.g: if there is memory pressure it may decide to page out some rarely used pages from some random process. User space can also manipulate the mappings via mmap(), execve() and other system calls.
The high-mem concept is that the kernel cannot directly access memory beyond roughly 1 GB, so it needs page table entries to map it indirectly; it will temporarily set up some page table entries to map such an address. Does high mem come into the picture every time? User space can translate its addresses directly via the MMU. In what scenario does the kernel really want to access high mem? I believe kernel drivers mostly use kmalloc, which gives a direct (base + offset) address, so no extra mapping is really required there. So, the question is: in what scenario does the kernel need to access high mem?
Totally unrelated to the other questions. In summary, high memory is a hack to be able to access lots of memory in a limited address space computer.
Basically, the kernel has a limited address space reserved to it (on x86, a typical user/kernel split is 3 GB/1 GB [processes can run in user space or kernel space; a process runs in kernel space when a syscall is invoked. To avoid having to switch the page table on every context switch, on x86 the address space is typically split between user space and kernel space]). So the kernel can directly access up to ~1 GB of memory. To access more physical memory, there is some indirection involved, which is what high memory is all about.
Does the processor specifically need to come with MMU support? Can processors that don't have an MMU not run Linux?
Laptop/desktop processors come with an MMU; x86 has supported paging since the 386.
Linux, specifically the variant called µClinux, supports processors without MMUs (!MMU). Many embedded systems (ADSL routers, ...) use processors without an MMU. There are some important restrictions, among them:
Some syscalls don't work at all: e.g. fork().
Some syscalls work with restrictions and non-POSIX-conforming behavior: e.g. mmap().
The executable file format is different: e.g. bFLT or ELF-FDPIC instead of ELF.
The stack cannot grow, and its size has to be set at link-time.
When a program is loaded, does the kernel first set up a kernel VM area for that process? Does this kernel VM area actually hold where the program sections are in memory or on the HDD? Then the whole story of updating the CR3 register and the page table walk or TLB comes into the picture, right? So, whenever there is a page fault, the kernel updates the page table by looking at the kernel virtual memory areas, is that it? But they say the kernel VM areas keep updating. How is this possible, since cat /proc/pid_value/maps keeps updating - the map won't be constant from start to end? So the real information is in the kernel VM area structs, is it? That is the actual information about where the sections of the program lie, whether on the HDD or in physical memory (RAM)? So this is filled in during process loading, as the first job? The kernel does the page-in/page-out on a page fault and then updates the kernel VM areas, is it? So it should also know the entire program's location on the HDD for the page-in/page-out, right? Please correct me here. This is in continuation of my first question from the previous comment.
When the kernel loads a program, it will setup several VMAs (mappings), according to the segments in the executable file (which on ELF files you can see with readelf --segments), which will be text/code segment, data segment, etc... During the lifetime of the program, additional mappings may be created by the dynamic/runtime linkers, by the memory allocator (malloc(), which may also extend the data segment via brk()), or directly by the program via mmap(),shm_open(), etc..
The VMAs contain the necessary information to generate the page table, e.g. they tell whether that memory is backed by a file or by swap (anonymous memory). So, yes, the kernel will update the page table by looking at the VMAs. The kernel will page in memory in response to page faults, and will page out memory in response to memory pressure.
Using x86 without PAE as an example:
On x86 with no PAE, a linear address can be split into 3 parts: the top 10 bits point to an entry in the page directory, and the middle 10 bits point to an entry in the page table pointed to by that page directory entry. The page table entry may contain a valid physical frame number: the top 20 bits of a physical address. The bottom 12 bits of the virtual address are an offset into the page and go untranslated into the physical address.
Each time the kernel schedules a different process, the CR3 register is written with a pointer to the page directory of the current process. Then, on each memory access, the MMU tries to find a translation cached in the TLB; if it doesn't find one, it performs a page table walk starting from CR3. If it still doesn't find one, a page fault is raised, the CPU switches to ring 0 (kernel mode), and the kernel tries to find a mapping in the VMAs.
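As a toy model of that walk (heavily simplified: every flag bit except a made-up PRESENT bit is ignored, and the directory and tables are pretended to be plain in-memory arrays):
#include <stdint.h>

#define PRESENT 0x1u

extern uint32_t page_directory[1024];     /* what CR3 points at           */
extern uint32_t page_tables[1024][1024];  /* one page table per PDE (toy) */

int walk(uint32_t va, uint32_t *pa)
{
    uint32_t pde = page_directory[va >> 22];
    if (!(pde & PRESENT))
        return -1;  /* page fault: the kernel consults the VMAs */

    uint32_t pte = page_tables[va >> 22][(va >> 12) & 0x3FF];
    if (!(pte & PRESENT))
        return -1;  /* page fault */

    *pa = (pte & 0xFFFFF000u) | (va & 0xFFFu);  /* frame bits + page offset */
    return 0;
}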
Also, I believe this reading from CR, page directory->page-table->Page frame number-memory address this all done by MMU. Am I correct?
On x86, yes, the MMU does the page table walk. On other systems (e.g: MIPS), the MMU is little more than the TLB, and on TLB miss exceptions the kernel does the page table walk by software.
Though this is not going to be the best answer, I would like to share my thoughts on the confused points.
1. Is the page table maintained...
Yes, the kernel maintains the page tables. In fact it maintains multi-level page tables, and the top of the page tables is stored in top_pmd (pmd stands for page middle directory). You can traverse all the page tables using this structure.
2. How can the MMU fail to find the address in physical RAM...
I am not sure I understood the question. But if, because of some problem, an instruction faults or something outside its instruction area is accessed, you generally get an undefined-instruction exception, resulting in an abort. If you look at the crash dumps, you can see it in the kernel log.
3. Is the mapping table - virtual to physical - inside the MMU...
Yes. The MMU is SW + HW; the HW part is the TLB and related logic, and the mapping tables are stored there. For instructions, that is for the code section, I always converted between physical and virtual addresses and they always matched. Almost all the time it matches for the data sections as well.
4. cat /proc/pid_value/maps shows me the current VM area mappings...
This is used more for analyzing the virtual addresses of user-space stacks. As you know, virtually all user-space programs can have 4 GB of virtual address space. So, unlike in the kernel, if I give you 0xc0100234 you cannot directly go and point to the instruction; you need this mapping and the virtual address to locate the instruction from the data you have.
5. The high-mem concept is that the kernel cannot directly access memory...
High mem corresponds to user-space memory (someone correct me if I am wrong). When the kernel wants to read some data from an address in user space, you will be accessing HIGHMEM.
6. Does the processor specifically need to come with MMU support? Can processors without an MMU not run Linux?
The MMU, as I mentioned, is HW + SW. The HW part mostly comes with the chipset, and the SW part is generally architecture dependent. You can disable the MMU in the kernel config and build; I have never tried it though. These days most chipsets have one, but on small boards I think they disable the MMU. I am not entirely sure though.
As all these are conceptual questions, I may be lacking some knowledge and be wrong in places. If so, others please correct me.

Resources