Why does the OS create virtual memory for a running process? - c

When a program starts, the OS creates a virtual address space for it, divided into stack, heap, data, and text segments, and the process runs inside it. I know that each segment serves a specific purpose, for example text holds the program's binary code and data holds static and global variables. My question is: why does the OS need to create virtual memory and divide it into segments? What if the OS just used physical memory and the process ran directly on physical memory? I think the answer may be related to running many processes at the same time and sharing memory between processes, but I am not sure. It would be kind if you could give me an example of the benefit of creating virtual memory and dividing it into segments.

In an environment with memory protection via a memory management unit, all memory is virtual (mapped via the MMU). It's possible to simply map each virtual address linearly to physical addresses while still using the protection capabilities of the MMU, but doing that makes no sense. There are many reasons to prefer that memory not be directly mapped, such as being able to share program instructions and shared library code between instances of the same program or different programs, being able to fork, etc.
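As a small illustration of the fork point (my own sketch, not part of the original answer): after fork(), parent and child print the same virtual address for a local variable, yet the child's write does not disturb the parent, because the kernel backs that shared virtual page with a separate physical page when it is written (copy-on-write).

/* Sketch: same virtual address in parent and child after fork(), but the
 * child's write does not affect the parent, thanks to copy-on-write. */
#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    int x = 42;
    pid_t pid = fork();

    if (pid < 0) {
        perror("fork");
        return EXIT_FAILURE;
    }

    if (pid == 0) {                      /* child */
        x = 99;                          /* triggers copy-on-write */
        printf("child:  &x = %p, x = %d\n", (void *)&x, x);
        return EXIT_SUCCESS;
    }

    wait(NULL);                          /* parent waits for the child */
    printf("parent: &x = %p, x = %d\n", (void *)&x, x);  /* still 42 */
    return EXIT_SUCCESS;
}

Both lines print the same pointer value, because each process has its own mapping from that virtual address to a physical page.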

Related

Operating system kernel and processes in main memory

Continuing my endeavors in OS development research, I have constructed an almost complete picture in my head. One thing still eludes me.
Here is the basic boot process, from my understanding:
1) BIOS/Bootloader perform necessary checks, initialize everything.
2) The kernel is loaded into RAM.
3) Kernel performs its initializations and starts scheduling tasks.
4) When a task is loaded, it is given a virtual address space in which it resides, including the .text, .data, .bss, the heap and the stack. This task "maintains" its own stack pointer, pointing to its own "virtual" stack.
5) Context switches merely push the register file (all CPU registers), the stack pointer and program counter into some kernel data structure and load another set belonging to another process.
In this abstraction, the kernel is a "mother" process inside of which all other processes are hosted. I tried to convey my best understanding in the following diagram:
Question is, first is this simple model correct?
Second, how is the executable program made aware of its virtual stack? Is it the OS job to calculate the virtual stack pointer and place it in the relevant CPU register? Is the rest of the stack bookkeeping done by CPU pop and push commands?
Does the kernel itself have its own main stack and heap?
Thanks.
Question is, first is this simple model correct?
Your model is extremely simplified but essentially correct - note that the last two parts of your model aren't really considered to be part of the boot process, and the kernel isn't a process. It can be useful to visualize it as one, but it doesn't fit the definition of a process and it doesn't behave like one.
Second, how is the executable program made aware of its virtual stack?
Is it the OS job to calculate the virtual stack pointer and place it
in the relevant CPU register? Is the rest of the stack bookkeeping
done by CPU pop and push commands?
An executable C program doesn't have to be "aware of its virtual stack." When a C program is compiled into an executable, local variables are usually referenced relative to the stack or frame pointer - for example, [ebp - 4].
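To see what "referenced relative to the stack or frame pointer" looks like in practice, here is a tiny illustrative function; the assembly in the comments is only indicative of what a 32-bit x86 compiler might emit, since the exact output depends on the compiler, optimization level, and ABI:

/* A local variable has no fixed absolute address in the executable; the
 * compiler addresses it relative to the frame/stack pointer.  The assembly
 * in the comments is indicative only. */
int square_plus_one(int n)
{
    int result;          /* typically lives at something like [ebp - 4]   */
    result = n * n + 1;  /* e.g.  mov eax, [ebp + 8]   ; load argument n  */
                         /*       imul eax, eax                           */
                         /*       add eax, 1                              */
                         /*       mov [ebp - 4], eax   ; store result     */
    return result;
}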
When Linux loads a new program for execution, it uses the start_thread macro (which is called from load_elf_binary) to initialize the CPU's registers. The macro contains the following line:
regs->esp = new_esp;
which will initialize the CPU's stack pointer register to the virtual address that the OS has assigned to the thread's stack.
As you said, once the stack pointer is loaded, assembly commands such as pop and push will change its value. The operating system is responsible for making sure that there are physical pages that correspond to the virtual stack addresses - in programs that use a lot of stack memory, the number of physical pages will grow as the program continues its execution. There is a limit for each process that you can find by using the ulimit -a command (on my machine the maximum stack size is 8 MB, i.e. 2048 pages of 4 KB).
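If you prefer to query that limit from C rather than from the shell, getrlimit() reports (in bytes) the same limit that ulimit -s shows in kilobytes. A minimal sketch:

/* Sketch: query the soft/hard stack-size limits that `ulimit -s` reports.
 * On a typical Linux desktop the soft limit is 8 MB (2048 pages of 4 KB). */
#include <stdio.h>
#include <sys/resource.h>

int main(void)
{
    struct rlimit rl;

    if (getrlimit(RLIMIT_STACK, &rl) != 0) {
        perror("getrlimit");
        return 1;
    }

    printf("soft stack limit: %llu bytes\n", (unsigned long long)rl.rlim_cur);
    printf("hard stack limit: %llu bytes\n", (unsigned long long)rl.rlim_max);
    return 0;
}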
Does the kernel itself have its own main stack and heap?
This is where visualizing the kernel as a process can become confusing. First of all, threads in Linux have a user stack and a kernel stack. They're essentially the same, differing only in protections and location (kernel stack is used when executing in Kernel Mode, and user stack when executing in User Mode).
The kernel itself does not have its own stack. Kernel code is always executed in the context of some thread, and each thread has its own fixed-size (usually 8KB) kernel stack. When a thread moves from User Mode to Kernel Mode, the CPU's stack pointer is updated accordingly. So when kernel code uses local variables, they are stored on the kernel stack of the thread in which they are executing.
During system startup, the start_kernel function initializes the kernel init thread, which will then create other kernel threads and begin initializing user programs. So after system startup the CPU's stack pointer will be initialized to point to init's kernel stack.
As far as the heap goes, you can dynamically allocate memory in the kernel using kmalloc, which will try to find a free page in memory - its internal implementation uses get_zeroed_page.
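For illustration, a kernel-space allocation typically looks like the sketch below (my own example, not tied to any particular kernel version; struct my_record and the helper names are made up). kmalloc() returns physically contiguous memory, and GFP_KERNEL allows the call to sleep while reclaiming pages:

/* Sketch of dynamic allocation inside the kernel. */
#include <linux/slab.h>

struct my_record {
    int id;
    char name[32];
};

static struct my_record *make_record(void)
{
    struct my_record *rec = kmalloc(sizeof(*rec), GFP_KERNEL);

    if (!rec)
        return NULL;          /* allocation can fail; always check */

    rec->id = 0;
    rec->name[0] = '\0';
    return rec;
}

static void destroy_record(struct my_record *rec)
{
    kfree(rec);               /* kfree(NULL) is a safe no-op */
}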
You forgot one important point: Virtual memory is enforced by hardware, typically known as the MMU (Memory Management Unit). It is the MMU that converts virtual addresses to physical addresses.
The kernel typically loads the address of the base of the page table for a specific process into a register in the MMU. This is what task-switches the virtual memory space from one process to another. On x86, this register is CR3.
Virtual memory protects processes' memory from each other. RAM for process A is simply not mapped into process B. (Except for e.g. shared libraries, where the same code memory is mapped into multiple processes, to save memory).
Virtual memory also protects kernel memory space from user-mode processes. Attributes on the pages covering the kernel address space are set so that, when the processor is running in user mode, it is not allowed to access them.
Note that, while the kernel may have threads of its own, which run entirely in kernel space, the kernel shouldn't really be thought of a "a mother process" that runs independently of your user-mode programs. The kernel basically is "the other half" of your user-mode program! Whenever you issue a system call, the CPU automatically transitions into kernel mode, and starts executing at a pre-defined location, dictated by the kernel. The kernel system call handler then executes on your behalf, in the kernel-mode context of your process. Time spent in the kernel handling your request is accounted for, and "charged to" your process.
Helpful ways of thinking about the kernel in relation to processes and threads
The model you provided is very simplified but correct in general.
At the same time, thinking of the kernel as a "mother process" isn't the best model, though it still makes some sense.
I would like to propose two better models.
Try to think of the kernel as a special kind of shared library.
Like a shared library, the kernel is shared between different processes.
A system call is performed in a way that is conceptually similar to a routine call into a shared library.
In both cases, after the call, you execute "foreign" code, but in the context of your native process.
And in both cases your code continues to perform computations based on a stack.
Note also that in both cases a call to "foreign" code blocks execution of your "native" code.
After returning from the call, execution continues from the same point in the code and with the same state of the stack from which the call was performed.
But why do we consider the kernel a "special" kind of shared library? Because:
a. The kernel is a "library" that is shared by every process in the system.
b. The kernel is a "library" that shares not only a section of code, but also a section of data.
c. The kernel is a specially protected "library". Your process can't access kernel code and data directly. Instead, it is forced to call the kernel in a controlled manner via special "call gates" (see the sketch after this list).
d. In the case of system calls, your application executes on a virtually contiguous stack. But in reality this stack consists of two separate parts. One part is used in user mode, and the second part is logically attached to the top of your user-mode stack on entering the kernel and detached on exit.
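To make the "call gate" point concrete, here is a minimal sketch (my own illustration, not from the original answer) contrasting an ordinary shared-library call with the same system call made through the libc wrapper and through the raw syscall(2) interface. In all three cases "foreign" code runs in the context of the calling process, but the system calls additionally switch the CPU into kernel mode through a controlled entry point and use the thread's kernel-mode stack while inside the kernel:

#define _GNU_SOURCE
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/syscall.h>

int main(void)
{
    size_t len = strlen("hello");          /* ordinary shared-library call */

    pid_t pid1 = getpid();                 /* system call via libc wrapper */
    long  pid2 = syscall(SYS_getpid);      /* same system call, raw entry  */

    printf("len=%zu pid1=%d pid2=%ld\n", len, (int)pid1, pid2);
    return 0;
}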
Another useful way of thinking about the organization of computation in your computer is to consider it as a network of "virtual" computers that do not support virtual memory.
You can consider a process as a virtual multiprocessor computer that executes only one program, which has access to all memory.
In this model each "virtual" processor is represented by a thread of execution.
Just as you can have a computer with multiple processors (or a multicore processor), you can have multiple concurrently running threads in your process.
Just as all processors in your computer share access to the pool of physical memory, all threads of your process share access to the same virtual address space.
And just as separate computers are physically isolated from each other, your processes are also isolated from each other, only logically rather than physically.
In this model the kernel is represented by a server with a direct connection to each computer in a star-topology network.
Like networking servers, the kernel has two main purposes:
a. The server joins all computers into a single network.
Similarly, the kernel provides a means of inter-process communication and synchronization. The kernel works as a man in the middle that mediates the entire communication process (transferring data, routing messages and requests, etc.).
b. Just as the server provides a set of services to each connected computer, the kernel provides a set of services to the processes. For example, just as a network file server allows computers to read and write files located on shared storage, your kernel allows processes to do the same things using local storage.
Note that, following the client-server communication paradigm, clients (processes) are the only active actors in the network. They issue requests to the server and to each other. The server, in its turn, is a reactive part of the system and never initiates communication. Instead, it only replies to incoming requests.
These models reflect the resource sharing/isolation relationships between the parts of the system and the client-server nature of communication between the kernel and processes.
How stack management is performed, and what role the kernel plays in that process
When a new process starts, the kernel, using hints from the executable image, decides where and how much virtual address space will be reserved for the user-mode stack of the process's initial thread.
Having made this decision, the kernel sets the initial values for the set of processor registers that the process's main thread will use just after execution starts.
This setup includes setting the initial value of the stack pointer.
After process execution actually starts, the process itself becomes responsible for its stack pointer.
A more interesting fact is that the process is responsible for initializing the stack pointer of each new thread it creates.
But note that the kernel is responsible for allocating and managing the kernel-mode stack for each and every thread in the system.
Note also that the kernel is responsible for allocating physical memory for the stack, and it usually performs this job lazily, on demand, using page faults as hints.
The stack pointer of a running thread is managed by the thread itself. In most cases stack-pointer management is performed by the compiler when it builds the executable image. The compiler usually tracks the stack pointer value and maintains its consistency by emitting and tracking all instructions that relate to the stack.
Such instructions are not limited to "push" and "pop". There are many CPU instructions that affect the stack, for example "call" and "ret", "sub esp" and "add esp", etc.
So as you can see, the actual policy of stack-pointer management is mostly static and known before process execution.
Sometimes programs have a special part of their logic that performs its own stack management.
For example, implementations of coroutines or long jumps in C.
In fact, you are allowed to do whatever you want with the stack pointer in your program.
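As a user-space illustration of such special stack management, setjmp()/longjmp() save and restore the stack pointer (among other registers) entirely in user space, with no kernel involvement. A minimal sketch:

/* Sketch: longjmp() restores the stack pointer and program counter saved by
 * setjmp(), unwinding the stack in user space without kernel involvement. */
#include <setjmp.h>
#include <stdio.h>

static jmp_buf env;

static void deep_inside_calls(void)
{
    printf("about to long-jump out\n");
    longjmp(env, 1);                  /* jump back to the setjmp() call site */
}

int main(void)
{
    if (setjmp(env) == 0) {           /* returns 0 on the initial call      */
        deep_inside_calls();
        printf("never reached\n");
    } else {                          /* returns non-zero after longjmp()   */
        printf("back in main after longjmp\n");
    }
    return 0;
}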
Kernel stack architectures
I'm aware of three approaches to this issue:
A separate kernel stack per thread in the system. This is the approach adopted by most well-known OSes based on a monolithic kernel, including Windows, Linux, Unix, and macOS.
While this approach leads to significant memory overhead and worsens cache utilization, it improves preemption of the kernel, which is critical for monolithic kernels with long-running system calls, especially in a multiprocessor environment.
Actually, a long time ago Linux had only one shared kernel stack, and the entire kernel was covered by the Big Kernel Lock, which limited the number of threads that could concurrently perform a system call to just one.
But Linux kernel developers quickly recognized that it is completely inefficient to block the execution of one process that, for instance, just wants to know its PID, because another process has already started sending a big packet over a very slow network.
One shared kernel stack.
The tradeoff is very different for microkernels.
A small kernel with short system calls allows microkernel designers to stick to a design with a single kernel stack.
Given proof that all system calls are extremely short, they can benefit from improved cache utilization and smaller memory overhead, while still keeping system responsiveness at a good level.
A kernel stack for each processor in the system.
Even in microkernel OSes, one shared kernel stack seriously affects the scalability of the entire operating system in a multiprocessor environment.
Due to this, designers frequently follow an approach that looks like a compromise between the two approaches described above, and keep one kernel stack per processor (processor core) in the system.
In that case they benefit from good cache utilization and small memory overhead, which are much better than with the stack-per-thread approach and only slightly worse than with the single-shared-stack approach.
And at the same time they benefit from good scalability and responsiveness of the system.
Thanks.

Read from any RAM address directly?

I'm doing some debugging on hardware with a Linux OS.
Right now there is no way for me to know if any of it works unless I can check the allocated RAM that I asked it to write to.
Is there some way I can check what is in that block of RAM from an external program running on the same OS?
If I could write a little program in C to do that, how would I go about it, since I can't just assign pointers to custom addresses?
Thanks
I think the best way to do what you are asking is to use a debugger. You cannot read another program's memory unless you execute your code in a privileged space (i.e. the kernel), privileged from the point of view of the CPU. This is because each program runs in its own virtual memory space (for security reasons); even the kernel runs in a virtual memory space, but it has the privilege to map any physical memory block into the virtual memory space it is currently running in. Anyway, I won't explain in more depth how a modern OS manages memory together with the underlying hardware; it would take too long.
You should really look at using a debugger. Once your environment with your debugger is ready, put a breakpoint after the memory block allocation so the debugger stops the program there, and then you can inspect the freshly allocated memory block as you wish. Depending on whether you use an IDE or not, it can be very easy to use a debugger ;)
/dev/mem could be of use here. It is a device file that is an image of physical memory (including non-RAM memory). It's generally used to read/write memory of peripheral devices.
By mmap()ing it, you can access physical memory.
See this linux documentation project page
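A minimal sketch of that approach (my own example): it requires root, and on kernels built with CONFIG_STRICT_DEVMEM access to ordinary RAM through /dev/mem may be blocked; PHYS_ADDR below is just a hypothetical placeholder for the address you care about.

/* Sketch: map one page of physical memory through /dev/mem and read it. */
#include <fcntl.h>
#include <stdio.h>
#include <stdint.h>
#include <sys/mman.h>
#include <unistd.h>

#define PHYS_ADDR 0x3f200000UL   /* placeholder physical address (page-aligned) */

int main(void)
{
    int fd = open("/dev/mem", O_RDONLY | O_SYNC);
    if (fd < 0) {
        perror("open /dev/mem");
        return 1;
    }

    long page = sysconf(_SC_PAGESIZE);
    volatile uint32_t *mem = mmap(NULL, page, PROT_READ, MAP_SHARED,
                                  fd, (off_t)PHYS_ADDR);
    if (mem == MAP_FAILED) {
        perror("mmap");
        close(fd);
        return 1;
    }

    printf("word at phys 0x%lx: 0x%08x\n", (unsigned long)PHYS_ADDR,
           (unsigned)mem[0]);

    munmap((void *)mem, page);
    close(fd);
    return 0;
}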
memedit is a handy utility to display and change memory content for testing purposes.
Its main purpose is to display SoC hardware registers, but it can also be used to display RAM. It is based on the mmap() mechanism. It could be a good starting point for writing a custom application.

C memory mapping

I know there are two types of addresses: virtual and physical. Printing the address of an integer variable will print its virtual address. Is there a function that will let me print the physical address of that variable?
Does virtual memory mean a section on hard disk that is treated as RAM by the OS?
No, there is no such (portable) function. In modern operating systems implementing memory protection, user-space (as opposed to kernel-space, i.e. parts of the OS) cannot access physical addresses directly, that's simply not allowed. So there would be little point.
No, virtual memory does not need to involve hard disks, that's "swapping" or "paging". You can implement that once you have virtual memory, since it gives the OS a chance to intervene and manage which pages are kept in physical memory, thus making it possible to "page out" memory to other storage media.
For a very in-depth look at how the Linux kernel manages memory, this blog post is fantastic.
This is a complicated subject.
Physical memory addresses point to a real location in a hardware memory device, whether it be your system memory, graphics card memory, or network card buffer.
Virtual memory is the memory model presented to user-mode processes. Most devices on the system have some virtual memory address space mapped to them, which the processor can write to. When these physical memory addresses are given a virtual memory address, the OS recognises that read/write requests to those addresses need to be serviced by a particular device, and delegates that request off to it.

Virtual Memory allocation without Physical Memory allocation

I'm working on a Linux kernel project and I need to find a way to allocate virtual memory without allocating physical memory. For example, if I use this:
char* buffer = my_virtual_mem_malloc(sizeof(char) * 512);
my_virtual_mem_malloc is a new SYSCALL implemented by my kernel module. All data written to this buffer is stored in a file or on another server via a socket (not in physical memory). So to complete this job, I need to request virtual memory and get access to the vm_area_struct structure to redefine the vm_ops struct.
Do you have any ideas about this?
thx
This is not architecturally possible. You can create vm areas that have a writeback routine that copies data somewhere, but at some level, you must allocate physical pages to be written to.
If you're okay with that, you can simply write a FUSE driver, mount it somewhere, and mmap a file from it. If you're not, then you'll have to just write(), because redirecting writes without allocating a physical page at all is not supported by the x86, at the very least.
There are a few approaches to this problem, but most of them require you to first write to an intermediate memory.
Network File System (NFS)
The easiest approach is simply to have the server expose some sort of shared file system such as NFS and to use mmap() to map a remote file to a memory address. Then, writing to that address will actually write to the OS's page cache, which will eventually be written back to the remote file when the page cache is full or after a predefined system timeout.
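A rough sketch of that idea (my own example; the mount path is hypothetical, and the file is assumed to already exist and be at least one page long):

/* Sketch: map a file on an NFS mount and write to it through memory.
 * Stores land in the page cache; the kernel writes them back to the remote
 * file later, or right away if we ask with msync(). */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

#define MAP_LEN 4096

int main(void)
{
    int fd = open("/mnt/nfs/shared/buffer.bin", O_RDWR);   /* hypothetical path */
    if (fd < 0) {
        perror("open");
        return 1;
    }

    char *buf = mmap(NULL, MAP_LEN, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (buf == MAP_FAILED) {
        perror("mmap");
        close(fd);
        return 1;
    }

    strcpy(buf, "hello over NFS");      /* write through the mapping */
    msync(buf, MAP_LEN, MS_SYNC);       /* force writeback to the remote file */

    munmap(buf, MAP_LEN);
    close(fd);
    return 0;
}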
Distributed Shared Memory (DSM)
An alternative approach is using DSM with a very small cache size.
In computer science, distributed shared memory (DSM) is a form of memory architecture where physically separated memories can be addressed as one logically shared address space.
[...] Software DSM systems can be implemented in an operating system, or as a programming library and can be thought of as extensions of the underlying virtual memory architecture. When implemented in the operating system, such systems are transparent to the developer; which means that the underlying distributed memory is completely hidden from the users.
It means that each virtual address is logically mapped to a virtual address on a remote machine, and writing to it will do the following: (a) receive the page from the remote machine and gain exclusive access, (b) update the page data, and (c) release the page and send it back to the remote machine when it reads it again.
In a typical DSM implementation, (c) will only happen when the remote machine reads the data again, but you might start from an existing DSM implementation and change the behavior so that the data is sent once the local machine's page cache is full.
I/O MMU
[...] the IOMMU maps device-visible virtual addresses (also called device addresses or I/O addresses in this context) to physical addresses.
This basically means writing directly to the network device's buffer, which amounts to implementing an alternative driver for that device.
Such an approach seems the most complicated, and I don't see much benefit in it.
This approach actually does not use any intermediate memory, but it is definitely not recommended unless the system has heavy realtime requirements.

Is there a way to dump all physical memory values?

I understand that each user process is given a virtual address space, and that this can be dumped. But is there a way to dump the physical address space? Suppose I have a 32-bit system with 4 GB of memory; can I write a program to print each physical memory location?
I understand this violates memory protection etc., but if it's possible, how can I convert this into a kernel process or lower-level process that allows me access to the entire memory?
I'd like to know how to write such code (if possible) on a Windows/Linux platform (or in the kernel), or, in case I have to use assembly or something like that, how to shift to that privilege level.
In Linux, you can open and map the device file /dev/mem (if you have read permission to it). This corresponds to physical memory.
Can I write a program to print each physical memory location?
I think no operating system gives the user access to physical memory locations, so you can't. Whatever you are seeing are virtual addresses produced by the operating system.
It is possible, on Windows, to access physical memory directly. Some of the things you can do:
Use the Device\PhysicalMemory object -- you can't access all physical memory, and user-mode access to it is restricted starting from Windows Server 2003 SP1.
Use Address Windowing Extensions -- you can control your own virtual-to-physical address mappings, so in a sense you are accessing physical memory directly, although still through page tables.
Write a kernel-mode driver -- there are kernel-mode APIs to access physical memory directly, to allocate physical memory pages, etc. One reason for that is DMA (Direct Memory Access).
None of these methods will give you easy, unrestricted access to any physical memory location.
If I may ask, what are you trying to accomplish?
I'm thinking you could probably do it with a kernel-mode driver, but the result would be gibberish: what was in the user section of RAM at the time you grabbed it would be whatever the OS had paged in, and it may be part of one application or a mishmash of a whole bunch. This previous SO question may also be helpful: How does a Windows Kernel mode Driver, access paged memory ?
Try this NTMIO - A WINDOWS COMMAND LINE TO ACCESS HARDWARE RESOURCES http://siliconkit.com/ocart/index.php?route=product/product&keyword=ntmio&category_id=0&product_id=285

Resources