What happens to my pointers when I initialize paging? (C kernel dev)

I'm writing a kernel from scratch and am confused about what will happen once I initialize paging and map my kernel to a different location in virtual memory. My kernel is loaded to physical address 0x100000 on startup but I plan on mapping it to the virtual address 0xC0100000 so I can leave virtual address 0x100000 available for VM86 processes (more specifically, I plan on mapping physical addresses 0x100000 through 0x40000000 to virtual addresses 0xC0100000 through 0xFFFFF000). Anyway, I have a bitmap to keep track of my page frames located at physical address 0x108000 with the address stored in a uint32_t pointer. My question is, what will happen to this pointer when I initialize paging? Will it still point to my bitmap located at physical address 0x108000 or will it point to whatever the virtual address 0x108000 is mapped to in my page table? If the latter is true, how do I get around the problem that my pointers will not be correct once paging is enabled? Will I have to update my pointers or am I going about this completely wrong?
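A common way to handle this in a higher-half kernel is to keep the physical-to-virtual offset as a constant and translate the physical addresses you know at boot into their mapped virtual equivalents once paging is on. The following is only a minimal sketch, assuming a fixed +0xC0000000 offset between the kernel's physical and virtual addresses; the macro and variable names (PHYS_TO_VIRT, frame_bitmap, and so on) are made up for illustration:

#include <stdint.h>

/* Assumption: the kernel's physical load region is mapped at a fixed
   +0xC0000000 offset (physical 0x100000 -> virtual 0xC0100000). */
#define KERNEL_VIRT_OFFSET 0xC0000000u

#define PHYS_TO_VIRT(p) ((void *)((uintptr_t)(p) + KERNEL_VIRT_OFFSET))
#define VIRT_TO_PHYS(v) ((uintptr_t)(v) - KERNEL_VIRT_OFFSET)

/* Hypothetical frame bitmap placed at physical 0x108000 by the loader. */
static uint32_t *frame_bitmap;

void after_paging_enabled(void)
{
    /* Once paging is enabled, every address the CPU sees is virtual, so the
       bitmap must be reached through the address it is now mapped at. */
    frame_bitmap = (uint32_t *)PHYS_TO_VIRT(0x108000u);
}

Another common option is to keep the low physical range identity-mapped while the kernel switches over, so the old physical-valued pointers stay usable until everything has been updated.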

Related

Confusion about virtual memory

Consider a sample below.
char* p = (char*)malloc(4096);
p[0] = 'a';
p[1] = 'b';
The 4 KB of memory is allocated by calling malloc(). The OS handles the memory request from the user program in user space. First, the OS requests a memory allocation from RAM, then RAM gives a physical memory address to the OS. Once the OS receives the physical address, it maps the physical address to a virtual address and then returns that virtual address, which is the value of p, to the user program.
I wrote some values ('a' and 'b') to a virtual address and they were really written into main memory (RAM). What confuses me is that I wrote the values to a virtual address, not a physical address, yet they really ended up in main memory (RAM) even though I didn't do anything to make that happen.
What happens behind the scenes? What does the OS do for me? I couldn't find relevant material in the books I looked at (OS, systems programming).
Could you give some explanation? (Please leave out caches to keep it simple.)
A detailed answer to your question would be very long, too long to fit here on Stack Overflow.
Here is a very simplified answer to a little part of your question.
You write:
What confuses me is that I wrote the values to a virtual address, not a physical address, yet they really ended up in main memory
Seems you have a very fundamental misunderstanding here.
There is no memory directly "behind" a virtual address. Whenever you access a virtual address in your program, it is automatically translated to a physical address and the physical address is then used for access in main memory.
The translation happens in HW, i.e. inside the processor in a block called "MMU - Memory management unit" (see https://en.wikipedia.org/wiki/Memory_management_unit).
The MMU holds a small but very fast look-up table that tells how a virtual address is to be translated into a physical address. The OS configures this table but after that, the translation happens without any SW being involved and - just to repeat - it happens whenever you access a virtual memory address.
The MMU also takes some kind of process ID as input in order to do the translation. This is needed because two different processes may use the same virtual address, but each needs it translated to a different physical address.
As mentioned above, the MMU look-up table (the TLB) is small, so it can't hold all translations for a complete system. When the MMU can't do a translation, it raises an exception of some kind so that OS software is triggered. The OS then re-programs the MMU so that the missing translation is loaded and execution of the process can continue. Note: some processors can do this in HW, i.e. without involving the OS.
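As a purely conceptual model (this is not how real hardware is implemented; the names, sizes, and the fake page-table lookup below are made up for illustration), the MMU's job can be pictured like this:

#include <stdint.h>
#include <stddef.h>

/* Toy model of a TLB: a handful of cached virtual->physical page mappings. */
struct tlb_entry {
    uint32_t vpn;   /* virtual page number   */
    uint32_t pfn;   /* physical frame number */
    int      valid;
};

#define TLB_SIZE 8
static struct tlb_entry tlb[TLB_SIZE];

/* Stand-in for a page-table walk (or a trap into the OS) on a TLB miss.
   Here it just fakes an identity mapping so the sketch is self-contained. */
static uint32_t page_table_lookup(uint32_t vpn)
{
    return vpn;
}

uint32_t translate(uint32_t vaddr)
{
    uint32_t vpn    = vaddr >> 12;    /* 4 KiB pages */
    uint32_t offset = vaddr & 0xFFFu;

    for (size_t i = 0; i < TLB_SIZE; i++) {
        if (tlb[i].valid && tlb[i].vpn == vpn)
            return (tlb[i].pfn << 12) | offset;   /* TLB hit */
    }

    /* TLB miss: resolve the mapping, then cache it for next time. */
    uint32_t pfn = page_table_lookup(vpn);
    tlb[0] = (struct tlb_entry){ .vpn = vpn, .pfn = pfn, .valid = 1 };
    return (pfn << 12) | offset;
}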
You have to understand that virtual memory is just that, virtual: it can be larger than physical RAM, so virtual addresses are mapped onto physical memory rather than being physical addresses themselves, even though in the end the data lives in the same RAM.
Your programs use virtual memory addresses, and it is your OS that decides what gets kept in RAM. If RAM fills up, the OS will use some space on the hard drive (swap) to keep working.
But the hard drive is much slower than RAM, which is why your OS uses a page-replacement algorithm (such as LRU or a clock/second-chance scheme) to exchange pages of memory between the hard drive and RAM, depending on the work being done, trying to keep the data most likely to be used in fast memory. To swap pages back and forth, the OS does not need to change the virtual addresses themselves.
(This is a summary that overlooks a lot of things.)
You want to understand how virtual memory works. There are lots of online resources about this; here's one I found that seems to do a fair job of explaining it without going too deep into technical details, while not glossing over important terms.
https://searchstorage.techtarget.com/definition/virtual-memory
For Linux on x86 platforms, the assembly-level equivalent of asking for memory is basically a call into the kernel, traditionally using int 0x80, with the parameters for the call placed in registers. The handler for that interrupt is installed at boot by the OS so it can answer such requests; it is registered in the IDT.
An IDT descriptor for 32-bit systems looks like this:
#include <stdint.h>

struct IDTDescr {
    uint16_t offset_1;  // offset bits 0..15
    uint16_t selector;  // a code segment selector in GDT or LDT
    uint8_t  zero;      // unused, set to 0
    uint8_t  type_attr; // type and attributes
    uint16_t offset_2;  // offset bits 16..31
};
The offset is the address of the entry point of the handler for that interrupt. So interrupt 0x80 has an entry in the IDT, and that entry points to the address of the handler (also called an ISR). When you call malloc(), the compiler emits a call into the C library; when the library's allocator needs more memory from the OS, it makes a system call. The system call returns the address of the allocated memory in a register. I'm pretty sure that on modern systems this system call will actually use the sysenter x86 instruction rather than int 0x80 to switch into kernel mode. That instruction is used alongside an MSR (Model Specific Register) to securely jump from user mode into kernel mode at the address specified in the MSR.
Once in kernel mode, all instructions can be executed and access to all hardware is unlocked. To satisfy the request, the OS doesn't "ask RAM for memory". RAM isn't aware of what memory the OS uses; it just blindly answers to asserted pins on its DIMM and stores information. The OS simply checks at boot, using the tables provided by the firmware (the BIOS memory map and ACPI tables), how much RAM there is and which devices are connected to the computer, so that it avoids writing over MMIO (memory-mapped I/O) regions. Once the OS knows how much RAM is available (and which parts are usable), it uses its own algorithms to determine which parts of the available RAM every process should get.
When you compile C code, the compiler and linker determine the (virtual) addresses of the static parts of the program at build time. When you launch that executable, the OS knows what memory the process starts out using, so it sets up the page tables for that process accordingly. When you ask for memory dynamically using malloc(), the OS determines what part of physical memory your process should get and changes the page tables accordingly at runtime.
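For a concrete, Linux-specific illustration of asking the kernel for memory with a system call rather than through malloc(), here is a minimal sketch using mmap, one of the calls a typical malloc implementation falls back to for large requests:

#include <stdio.h>
#include <stdlib.h>
#include <sys/mman.h>

int main(void)
{
    /* Ask the kernel directly for one 4 KiB anonymous page; this is the kind
       of request a malloc implementation makes under the hood. */
    char *p = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (p == MAP_FAILED) {
        perror("mmap");
        return EXIT_FAILURE;
    }

    p[0] = 'a';   /* the first write faults; only then does the kernel back */
    p[1] = 'b';   /* this virtual page with an actual physical frame        */

    printf("virtual address returned by the kernel: %p\n", (void *)p);
    munmap(p, 4096);
    return 0;
}

With an anonymous mapping like this, the kernel typically assigns the physical frame lazily, on the first write, via a page fault.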
As to paging itself, you can always read some articles; the short version is 32-bit paging. In 32-bit paging, each CPU core has a CR3 register containing the physical address of the base of the page directory (which Linux calls the Page Global Directory, PGD). The page directory contains the physical base addresses of a number of page tables, which themselves contain the physical base addresses of physical pages (https://wiki.osdev.org/Paging). A virtual address is split into 3 parts: the 12 least significant bits are the offset within the physical page, the 10 bits in the middle are the index into the page table, and the 10 most significant bits are the index into the page directory.
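To make the 10/10/12 split concrete, here is a minimal sketch (the struct and function names are just for illustration) that decomposes a 32-bit virtual address the way the hardware does during a page walk:

#include <stdint.h>
#include <stdio.h>

/* Split a 32-bit virtual address according to x86 32-bit (non-PAE) paging:
   10 bits of page-directory index, 10 bits of page-table index, 12-bit offset. */
struct split_vaddr {
    uint32_t pgd_index;   /* bits 31..22: entry in the page directory       */
    uint32_t pt_index;    /* bits 21..12: entry in the page table           */
    uint32_t offset;      /* bits 11..0 : byte offset inside the 4 KiB page */
};

static struct split_vaddr split(uint32_t vaddr)
{
    struct split_vaddr s;
    s.pgd_index = (vaddr >> 22) & 0x3FFu;
    s.pt_index  = (vaddr >> 12) & 0x3FFu;
    s.offset    =  vaddr        & 0xFFFu;
    return s;
}

int main(void)
{
    struct split_vaddr s = split(0xC0100123u);
    printf("PGD index %u, PT index %u, offset 0x%03X\n",
           (unsigned)s.pgd_index, (unsigned)s.pt_index, (unsigned)s.offset);
    return 0;
}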
So when you write
char* p = (char*)malloc(4096);
p[0] = 'a';
p[1] = 'b';
you create a pointer of type char* and, through malloc(), ask for 4096 bytes of memory. The address of the first byte of that chunk is returned in a conventional register (which one depends on the system and OS). You should not forget that the C language is just a convention; it is up to the toolchain written for a particular operating system to implement that convention, which means the compiler and C library know which register and which system-call mechanism to use because they were written specifically for that OS. The address returned in that register is then stored into your char* pointer at runtime. On the second line you tell the compiler to take the char at that first address and make it an 'a'; on the third line you make the second char a 'b'. In the end, you could write the equivalent:
char* p = (char*)malloc(4096);
*p = 'a';
*(p + 1) = 'b';
p is a variable containing an address. The + operation on a pointer advances that address by the size of the pointed-to type. In this case the pointer points to a char, so + 1 advances it by one char (one byte); if it pointed to an int, it would advance by 4 bytes (32 bits). The size of the pointer itself depends on the system: on a 32-bit system a pointer is 32 bits wide (because it holds an address), and on a 64-bit system it is 64 bits wide (a small demonstration appears after the static example below). A static memory equivalent of what you did is
char p[4096];
p[0] = 'a';
p[1] = 'b';
Now the compiler knows at compile time what memory this array will get; it is static memory. Even then, p decays to a pointer to the first char of that array, which means you could write
char p[4096];
*p = 'a';
*(p + 1) = 'b';
It would have the same result.
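As promised above, here is a small demonstration of those two points, pointer arithmetic scaling with the pointed-to type and pointer width depending on the platform, assuming nothing beyond standard C:

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    char *cp = malloc(4096);
    int  *ip = malloc(4096);

    /* A pointer is just an address, so its size depends on the platform:
       4 bytes on a 32-bit system, 8 bytes on a 64-bit system. */
    printf("sizeof(char *) = %zu, sizeof(int *) = %zu\n", sizeof cp, sizeof ip);

    /* Pointer arithmetic scales by the size of the pointed-to type. */
    printf("cp = %p, cp + 1 = %p (1 byte further)\n",
           (void *)cp, (void *)(cp + 1));
    printf("ip = %p, ip + 1 = %p (sizeof(int) bytes further)\n",
           (void *)ip, (void *)(ip + 1));

    free(cp);
    free(ip);
    return 0;
}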
First, the OS requests a memory allocation from RAM, …
The OS does not have to request memory. It has access to all of memory the moment it boots. It keeps its own database of which parts of that memory are in use for what purposes. When it wants to provide memory for a user process, it uses its own database to find some memory that is available (or does things to stop using memory for other purposes and then make it available). Once it chooses the memory to use, it updates its database to record that it is in use.
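Conceptually, that "database" can be as simple as a bitmap with one bit per physical page frame. Here is a minimal sketch; the sizes and names are invented for illustration:

#include <stdint.h>

/* One bit per 4 KiB physical frame; enough here for 128 MiB of RAM. */
#define FRAME_COUNT (128u * 1024u * 1024u / 4096u)
static uint8_t frame_used[FRAME_COUNT / 8];

/* Find a free frame, mark it used, and return its physical address
   (returns 0 when exhausted; a real kernel reserves frame 0 anyway). */
uint32_t frame_alloc(void)
{
    for (uint32_t f = 0; f < FRAME_COUNT; f++) {
        if (!(frame_used[f / 8] & (1u << (f % 8)))) {
            frame_used[f / 8] |= (uint8_t)(1u << (f % 8));
            return f * 4096u;
        }
    }
    return 0;
}

/* Give a frame back to the pool. */
void frame_free(uint32_t phys)
{
    uint32_t f = phys / 4096u;
    frame_used[f / 8] &= (uint8_t)~(1u << (f % 8));
}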
… then RAM gives a physical memory address to the OS.
RAM does not give addresses to the OS except that, when starting, the OS may have to interrogate the hardware to see what physical memory is available in the system.
Once the OS receives the physical address, it maps the physical address to a virtual address…
Virtual memory mapping is usually described as mapping virtual addresses to physical addresses. The OS has a database of the virtual memory addresses in the user process, and it has a database of physical memory. When it is fulfilling a request from the process to provide virtual memory and it decides to back that virtual memory with physical memory, the OS will inform the hardware of what mapping it chose. This depends on the hardware, but a typical method is that the OS updates some page table entries that describe which virtual addresses get translated to which physical addresses.
I wrote some values ('a' and 'b') to a virtual address and they were really written into main memory (RAM).
When your process writes to virtual memory that is mapped to physical memory, the processor will take the virtual memory address, look up the mapping information in the page table entries or other database, and replace the virtual memory address with a physical memory address. Then it will write the data to that physical memory.

How does mapping a physically contiguous zone to contiguous virtual addresses improve performance?

Recently I have been reading the hugepage code in DPDK (dpdk.org). I see that the code deliberately makes the virtual addresses of a physically contiguous zone contiguous as well. Specifically, it first checks whether there is a physically contiguous zone among the hugepages and then maps that physically contiguous zone to contiguous virtual addresses. How does this improve performance?
The source code says:
To reserve a big contiguous amount of memory, we use the hugepage feature of linux. For that, we need to have hugetlbfs mounted. This code will create many files in this directory (one per page) and map them in virtual memory. For each page, we will retrieve its physical address and remap it in order to have a virtual contiguous zone as well as a physical contiguous zone.
Why is this remapping necessary?
maps that physically contiguous zone to contiguous virtual addresses. How does this improve performance?
DPDK needs both physical and virtual addresses. The virtual address is used normally, to load/store some data. The physical address is necessary for the userspace drivers to transfer data to/from devices.
For example, we allocate a few mbufs with virtual addresses 0x41000, 0x42000 and 0x43000. Then we fill them with some data and pass those virtual addresses to the PMD to transfer.
The driver has to convert those virtual addresses to physical ones. If physical pages are mapped into the virtual address space noncontiguously, then to convert virtual to physical addresses we need to search through all the mappings. For example, virtual address 0x41000 might correspond to physical 0x81000, 0x42000 to physical 0x16000, and 0x43000 to physical 0x64000.
The best case of such a search is one memory read; the worst case is several memory reads for each buffer.
But if we are sure that both the virtual and physical addresses of a memory zone are contiguous, we simply add an offset to the virtual address to get the physical one, and vice versa. For example, virtual 0x41000 corresponds to physical 0x81000, virtual 0x42000 to physical 0x82000, and 0x43000 to 0x83000.
We know that offset from the mapping, so the worst case of such a translation is a single memory read for all the buffers in a burst, which is a huge improvement.
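A minimal sketch of that constant-offset translation, using the example addresses above (DPDK's real code has its own helpers for this, so treat this only as an illustration of the arithmetic; the names are made up):

#include <stdint.h>

/* Base addresses of one memory zone that is contiguous both virtually and
   physically, e.g. virtual 0x41000 backed by physical 0x81000. */
static uintptr_t zone_virt_base = 0x41000;
static uint64_t  zone_phys_base = 0x81000;

/* With both mappings contiguous, translation is a single add or subtract. */
uint64_t zone_virt_to_phys(const void *va)
{
    return (uintptr_t)va - zone_virt_base + zone_phys_base;
}

void *zone_phys_to_virt(uint64_t pa)
{
    return (void *)(uintptr_t)(pa - zone_phys_base + zone_virt_base);
}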
Why is this remapping necessary?
To map a huge page into the virtual address space, the mmap system call is used. The call's API allows specifying a fixed virtual address at which the huge page should be mapped, which makes it possible to map huge pages one after another, creating a contiguous virtual memory zone. For example, we can mmap one huge page at virtual address 0x200000, the next one at virtual address 0x400000, and so on.
Unfortunately, we don't know the physical addresses of the huge pages until they are mapped. So at virtual address 0x200000 we might end up with physical address 0x800000, and at virtual address 0x400000 with physical 0x600000.
But once we have mapped those huge pages for the first time, we know both their physical and virtual addresses. So all we need to do is remap them in the correct order: at virtual address 0x1200000 we map physical 0x600000, and at 0x1400000 physical 0x800000.
Now we have a virtually and physically contiguous memory zone starting at virtual address 0x1200000 and physical address 0x600000. So to convert a virtual address in this zone to a physical one, we just subtract the constant difference between the two bases (0x1200000 - 0x600000 = 0xC00000) from the virtual address, as described previously.
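A rough sketch of such a remap using mmap with MAP_FIXED. Everything here is simplified: fd_low and fd_high stand for hypothetical, already-open hugetlbfs-backed files whose physical pages were found to be 0x600000 and 0x800000, 2 MiB huge pages are assumed, and error handling is omitted (real code must check each mmap against MAP_FAILED):

#include <sys/mman.h>

/* Place two already-mapped huge pages back to back at a chosen virtual base
   so the zone becomes virtually contiguous; MAP_FIXED forces the requested
   virtual address. */
void remap_zone(int fd_low, int fd_high)
{
    void *base = (void *)0x1200000;

    mmap(base, 2 * 1024 * 1024,
         PROT_READ | PROT_WRITE, MAP_SHARED | MAP_FIXED, fd_low, 0);

    mmap((char *)base + 2 * 1024 * 1024, 2 * 1024 * 1024,
         PROT_READ | PROT_WRITE, MAP_SHARED | MAP_FIXED, fd_high, 0);
}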
Hope this clarifies a bit the idea of contiguous memory zones and remapping.

What is the benefit of storing a virtual address in a pointer rather than a physical address?

I have gone through the link below, and it says that on most operating systems pointers store virtual addresses rather than physical addresses, but I am unable to see the benefit of storing a virtual address in a pointer.
In the end we can modify the contents of a particular memory location directly through a pointer, so what difference does it make whether it holds a virtual address or a physical address?
Also, while the code executes, the data segment will remain in memory most of the time, so we are dealing with physical memory locations anyway; how is the virtual address useful?
C pointers and the physical address
Security issues (as noted before) aside, there is another big advantage:
Globals and functions (and your stack) can always be found at fixed addresses (so the assembler can hardcode them), independently of where the instance of your program is loaded.
If you'd really like your code to run from any address, you have to make it position-independent (with gcc you'd use the -fPIC flag). This question might be an interesting read regarding -fPIC and virtual addressing: GCC -fPIC option
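As a tiny illustration (the file name and build command are just an example), a shared library is the typical case where the code cannot assume a fixed load address:

/* libcounter.c - build as a shared library, for example:
     gcc -fPIC -shared -o libcounter.so libcounter.c
   Position-independent code lets the loader place this library at a
   different virtual base address in every process that maps it. */
static int counter;   /* addressed relative to where the code was loaded */

int next_value(void)
{
    return ++counter;
}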
Actually, pointers normally hold LOGICAL ADDRESSES.
There are several reasons for using logical addressing, including:
Ease of memory management. The OS never has to allocate contiguous physical page frames to a process.
Security. Each process has access to its own logical address space and cannot mess with another process's address space.
Virtual Memory. Logical address translation is a prerequisite for implementing virtual memory.
Page protection. Aids security by limiting access to system pages to higher processor modes. Helps error (and virus) trapping by limiting the types of access to pages (e.g., not allowing writes to, or execution of, data pages).
This is not a complete list.
The same virtual address can point to different physical addresses at different moments.
If your physical memory is full, some of your data is swapped out from memory to your HDD. When your program wants to access that data again (it is currently not stored in memory), it is swapped back into memory, but often at a different location than before.
The Page Table, which stores the assignment of virtual to physical addresses, is updated with the new physical address. So your virtual address remains the same, while the physical address may change.

Linux process virtual address space's address range

I'm on a 32-bit machine. From what I understand, user space's addresses range from 0x00000000 to 0xbfffffff, and the kernel's range from 0xc0000000 to 0xffffffff.
But when I used pmap to look at a process's memory allocation, I saw that the libraries are loaded at around 0xf7777777. Does that mean those libraries are loaded in kernel space? And when I used mmap(), I got an address around 0xe0000000. So did mmap() get memory from kernel space?
I'm on a 32-bit machine. From what I understand, user space's addresses range from 0x00000000 to 0xbfffffff, and the kernel's range from 0xc0000000 to 0xffffffff.
Not exactly. Kernel memory space starts at 0xC0000000, but it doesn't have to fill the entire GB. In fact, it fills up to virtual address 0xF7FFFFFF. This covers 896MB of physical memory. Virtual addresses 0xF8000000 and up are used as a 128MB window for the kernel to map any region of physical memory beyond the 896MB limit.
All user processes share the same memory map for virtual addresses 0xC0000000 and beyond, so if the kernel does not use its entire GB of virtual space, it may reuse part of it to map commonly used shared libraries, so every process can see them.

How do I get the DRAM address instead of the virtual address?

I understand that if I try to print the address of an element of an array, it will be an address in virtual memory, not in real (physical) memory, i.e. DRAM.
printf("Addresses of A[5] and A[6] are %p and %p\n", (void *)&A[5], (void *)&A[6]);
I found the addresses were consecutive (the elements are chars). In reality they may not be consecutive, at least not in DRAM. I want to know the real addresses. How do I get them?
I need to know this for either Windows or Linux.
You can't get the physical address for a virtual address from user code; only the lowest levels of the kernel deal with physical addresses, and you'd have to intercept things there.
Note that the physical address behind a virtual address may not stay constant while the program runs: the page might be paged out from one physical address and paged back in at a different one. And if you make a system call, this remapping could even happen between the time the kernel identifies the physical address and the time the call completes, because the requesting program was unscheduled, partially paged out, and then paged in again.
The simple answer is that, in general, for user processes or threads in a multiprocessing OS such as Windows or Linux, it is not possible to find the address even of a static variable in the processor's memory address space, let alone the DRAM address.
There are a number of reasons for this:
The memory allocated to a process is virtual memory. The OS can remap this process memory from time to time from one physical address range to another, and there is no way to detect this remapping in the user process. That is, the physical address of a variable can change during the lifetime of a process.
There is no interface from userspace to kernel space that would allow a userspace process to walk through the kernel's process table and page tables in order to find the physical addresses backing the process. On Linux you can write a kernel module or driver that does this (a rough sketch appears at the end of this answer).
DRAM is often mapped into the processor's address space through a memory management unit (MMU) and memory cache. Although the MMU mapping of DRAM into the processor address space is usually done only once, during system boot, the processor's use of the cache can mean that values written to a variable might not be written through to the DRAM in all cases.
There are OS-specific ways to "pin" a block of allocated memory to a static physical location. This is often done by device drivers that use DMA. However, this requires a level of privilege not available to userspace processes, and, even if you have the physical address of such a block, there is no pragma or directive in the commonly used linkers that you could use to allocate the BSS for a process at such a physical address.
Even inside the Linux kernel, virtual to physical address translation is not possible in the general case, and requires knowledge about the means that were used to allocate the memory to which a particular virtual address refers.
Here is a link to an article called Translating Virtual to Physical Address on Windows: Physical Addresses that gives you a hint of the lengths you must go to in order to get physical addresses on Windows.
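For completeness, the kernel-module route mentioned above might look roughly like this on Linux. It is only a sketch, and virt_to_phys() is valid only for directly mapped kernel (lowmem) addresses such as those returned by kmalloc(); user-space and vmalloc() addresses need a page-table walk instead:

#include <linux/module.h>
#include <linux/slab.h>
#include <linux/io.h>    /* virt_to_phys() */

static void *buf;

static int __init physdemo_init(void)
{
    buf = kmalloc(4096, GFP_KERNEL);
    if (!buf)
        return -ENOMEM;

    /* Valid only because kmalloc() returns a directly mapped (lowmem)
       kernel address. */
    pr_info("virt %p -> phys 0x%llx\n",
            buf, (unsigned long long)virt_to_phys(buf));
    return 0;
}

static void __exit physdemo_exit(void)
{
    kfree(buf);
}

module_init(physdemo_init);
module_exit(physdemo_exit);
MODULE_LICENSE("GPL");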
