I understand that if I try to print the address of an element of an array, it will be an address from virtual memory, not from real (physical) memory, i.e. DRAM.
printf("Addresses of A[5] and A[6] are %p and %p\n", (void *)&A[5], (void *)&A[6]);
I found that the addresses were consecutive (the elements are chars). In reality they may not be consecutive, at least not in DRAM. I want to know the real addresses. How do I get them?
I need to know this for either Windows or Linux.
You can't get the physical address for a virtual address from user code; only the lowest levels of the kernel deal with physical addresses, and you'd have to intercept things there.
Note that the physical address behind a virtual address need not stay constant while the program runs: the page might be paged out from one physical address and later paged back in at a different one. If you make a system call to ask for the mapping, the remapping could even happen between the moment the kernel identifies the physical address and the moment the call returns, because the requesting program was unscheduled, partially paged out, and then paged in again.
The simple answer is that, in general, for user processes or threads in a multiprocessing OS such as Windows or Linux, it is not possible to find the address even of a static variable in the processor's memory address space, let alone the DRAM address.
There are a number of reasons for this:
The memory allocated to a process is virtual memory. The OS can remap this process memory from time to time from one physical address range to another, and there is no way to detect this remapping in the user process. That is, the physical address of a variable can change during the lifetime of a process.
There is no interface from user space to kernel space that would let a user-space process walk through the kernel's process table and page cache in order to find the physical addresses of its memory. On Linux you can write a kernel module or driver that can do this.
The DRAM is often mapped into the processor address space through a memory management unit (MMU) and memory cache. Although the MMU mapping of DRAM to the processor address space is usually done only once, during system boot, the processor's use of the cache can mean that values written to a variable might not be written through to the DRAM in all cases.
There are OS-specific ways to "pin" a block of allocated memory to a static physical location. This is often done by device drivers that use DMA. However, this requires a level of privilege not available to userspace processes, and, even if you have the physical address of such a block, there is no pragma or directive in the commonly used linkers that you could use to allocate the BSS for a process at such a physical address.
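As an aside (not part of the answer above), the closest a normal user-space process on Linux can get to "pinning" is mlock(), which keeps pages resident in RAM (within the RLIMIT_MEMLOCK limit) but neither reveals nor fixes their physical addresses; a minimal sketch:
#include <stdio.h>
#include <stdlib.h>
#include <sys/mman.h>

int main(void)
{
    size_t len = 4096;
    char *buf = malloc(len);
    if (buf == NULL)
        return 1;

    /* Keep these pages resident in RAM (subject to RLIMIT_MEMLOCK). */
    if (mlock(buf, len) != 0) {
        perror("mlock");
        free(buf);
        return 1;
    }

    buf[0] = 'x';   /* this access cannot fault to disk while the lock is held */
    printf("virtual address (still not physical): %p\n", (void *)buf);

    munlock(buf, len);
    free(buf);
    return 0;
}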
Even inside the Linux kernel, virtual to physical address translation is not possible in the general case, and requires knowledge about the means that were used to allocate the memory to which a particular virtual address refers.
Here is a link to an article called Translating Virtual to Physical Address on Windows: Physical Addresses that gives you a hint of the lengths you must go to in order to get physical addresses on Windows.
Consider a sample below.
char* p = (char*)malloc(4096);
p[0] = 'a';
p[1] = 'b';
4 KB of memory is allocated by calling malloc(). The OS handles the memory request from the user program in user space. First, the OS requests memory allocation from RAM, then RAM gives a physical memory address to the OS. Once the OS receives the physical address, it maps the physical address to a virtual address, and then the OS returns the virtual address, which becomes the value of p, to the user program.
I wrote some values ('a' and 'b') to virtual addresses and they were really written into main memory (RAM). I'm confused: I wrote the values to a virtual address, not a physical address, but they really ended up in main memory (RAM) even though I did nothing to make that happen.
What happens behind the scenes? What does the OS do for me? I couldn't find relevant material in the books I looked at (on operating systems and systems programming).
Could you give some explanation? (Please leave out the details about caches, for easier understanding.)
A detailed answer to your question would be very long, and too long to fit here on Stack Overflow.
Here is a very simplified answer to a little part of your question.
You write:
I'm confused: I wrote the values to a virtual address, not a physical address, but they really ended up in main memory
Seems you have a very fundamental misunderstanding here.
There is no memory directly "behind" a virtual address. Whenever you access a virtual address in your program, it is automatically translated to a physical address and the physical address is then used for access in main memory.
The translation happens in HW, i.e. inside the processor in a block called "MMU - Memory management unit" (see https://en.wikipedia.org/wiki/Memory_management_unit).
The MMU holds a small but very fast look-up table that tells how a virtual address is to be translated into a physical address. The OS configures this table but after that, the translation happens without any SW being involved and - just to repeat - it happens whenever you access a virtual memory address.
The MMU also takes some kind of process ID as input in order to do the translation. This is needed because two different processes may use the same virtual address, but it must translate to two different physical addresses.
As mentioned above, the MMU look-up table (TLB) is small, so the MMU can't hold all the translations for a complete system. When the MMU can't do a translation, it raises an exception of some kind so that OS software is triggered. The OS will then re-program the MMU so that the missing translation gets into the MMU and the process execution can continue. Note: Some processors can do this in HW, i.e. without involving the OS.
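To make the mechanism concrete, here is a toy sketch of my own (purely illustrative, not how any real MMU or OS is written) of translating a virtual address through a single-level page table with 4 KB pages:
#include <stdint.h>
#include <stdio.h>

#define PAGE_SHIFT 12
#define PAGE_SIZE  (1u << PAGE_SHIFT)
#define NUM_PAGES  4                       /* a tiny toy address space */

/* page_table[vpn] holds the physical frame number, or -1 if unmapped. */
static const int page_table[NUM_PAGES] = { 3, 7, 1, -1 };

/* Translate a virtual address the way an MMU would: index plus offset. */
static int translate(uint32_t vaddr, uint32_t *paddr)
{
    uint32_t vpn    = vaddr >> PAGE_SHIFT;       /* virtual page number */
    uint32_t offset = vaddr & (PAGE_SIZE - 1);   /* offset within the page */

    if (vpn >= NUM_PAGES || page_table[vpn] < 0)
        return -1;                               /* would raise a page fault */

    *paddr = ((uint32_t)page_table[vpn] << PAGE_SHIFT) | offset;
    return 0;
}

int main(void)
{
    uint32_t paddr;
    if (translate(0x1234, &paddr) == 0)          /* page 1, offset 0x234 */
        printf("virtual 0x1234 -> physical 0x%x\n", (unsigned)paddr);
    return 0;
}
In real hardware the table walk is multi-level and the results are cached in the TLB, but the index-and-offset idea is the same.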
You have to understand that virtual memory is exactly that: virtual. It can be more extensive than physical memory (RAM), so it is mapped onto RAM (and disk) rather than corresponding to it one-to-one, even though to your program it all looks like ordinary memory.
Your programs use virtual memory addresses, and it is your OS that decides what gets kept in RAM. If RAM fills up, it will use some space on the hard drive to continue working.
But the hard drive is slower than RAM, so the OS uses a page-replacement algorithm, such as FIFO or an LRU approximation like the clock algorithm, to exchange pages of memory between the hard drive and RAM depending on the work being done, trying to ensure that the data most likely to be used stays in fast memory. To swap pages back and forth, the OS does not need to modify the virtual memory addresses.
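For the curious, here is a toy sketch of my own (not taken from any particular OS) of the second-chance / clock policy such a page-replacement algorithm might use to pick a victim frame:
#include <stdbool.h>
#include <stdio.h>

#define NUM_FRAMES 4

static int  frame_page[NUM_FRAMES] = { 10, 11, 12, 13 };         /* page held by each frame */
static bool referenced[NUM_FRAMES] = { true, false, true, true }; /* set by the MMU on access */
static int  hand = 0;                                             /* the "clock hand" */

/* Pick a frame to evict: recently referenced frames get a second chance. */
static int choose_victim(void)
{
    for (;;) {
        if (!referenced[hand]) {
            int victim = hand;
            hand = (hand + 1) % NUM_FRAMES;
            return victim;                 /* not recently used: evict it */
        }
        referenced[hand] = false;          /* clear the bit, skip it for now */
        hand = (hand + 1) % NUM_FRAMES;
    }
}

int main(void)
{
    int victim = choose_victim();
    printf("evict page %d from frame %d\n", frame_page[victim], victim);
    return 0;
}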
This is a summary that glosses over a lot of things.
You want to understand how virtual memory works. There are lots of online resources about this; here's one I found that seems to do a fair job of explaining it without getting too deep into the technical details, while not glossing over important terms.
https://searchstorage.techtarget.com/definition/virtual-memory
For Linux on 32-bit x86 platforms, the assembly-level equivalent of asking for memory is basically a call into the kernel using int 0x80, with the parameters for the call placed in registers. The interrupt handler is installed at boot by the OS so it can answer such requests; it is registered in the IDT.
An IDT descriptor for 32-bit systems looks like this:
struct IDTDescr {
uint16_t offset_1; // offset bits 0..15
uint16_t selector; // a code segment selector in GDT or LDT
uint8_t zero; // unused, set to 0
uint8_t type_attr; // type and attributes, see below
uint16_t offset_2; // offset bits 16..31
};
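Note that the handler address is split across offset_1 and offset_2; a small helper of my own (assuming the struct above) to reassemble it would look like this:
#include <stdint.h>

/* Rebuild the full 32-bit handler address from its two 16-bit halves. */
static uint32_t idt_handler_address(const struct IDTDescr *d)
{
    return ((uint32_t)d->offset_2 << 16) | d->offset_1;
}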
The offset is the address of the entry point of the handler for that interrupt. So interrupt 0x80 has an entry in the IDT, and this entry points to the address of the handler (also called the ISR). When you call malloc(), the C library's allocator may in turn issue a system call (such as brk or mmap) to get more memory from the kernel. The system call returns the address of the allocated memory in a register. I'm pretty sure this system call actually uses the sysenter x86 instruction, rather than int 0x80, to switch into kernel mode. This instruction is used alongside an MSR (Model Specific Register) to jump securely from user mode into kernel mode at the address specified in the MSR.
Once in kernel mode, all instructions can be executed and access to all hardware is unlocked. To satisfy the request, the OS doesn't "ask RAM for memory". RAM isn't aware of what memory the OS uses; RAM just blindly answers to asserted pins on its DIMM and stores information. The OS simply checks at boot, using the ACPI tables built by the BIOS, how much RAM there is and which devices are connected to the computer, so that it avoids writing into MMIO (memory-mapped I/O) regions. Once the OS knows how much RAM is available (and which parts are usable), it uses algorithms to determine what parts of the available RAM every process should get.
When you compile C code, the compiler and linker determine the (virtual) addresses of the static parts of the program at build time. When you launch that executable, the OS knows what memory the process will initially use, so it sets up the page tables for that process accordingly. When you ask for memory dynamically using malloc(), the OS determines what part of physical memory your process should get and changes the page tables accordingly during runtime.
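To see the kernel side of this without going through malloc(), you can request pages directly with the mmap system call; this is, roughly, what glibc's malloc does internally for large allocations (a hedged sketch, not the allocator's actual code):
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

int main(void)
{
    /* Ask the kernel for one anonymous, private, writable page. */
    void *p = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (p == MAP_FAILED) {
        perror("mmap");
        return 1;
    }

    strcpy(p, "hello");   /* first write: the kernel backs the page with a physical frame */
    printf("virtual address handed out by the kernel: %p (%s)\n", p, (char *)p);

    munmap(p, 4096);
    return 0;
}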
As for paging itself, you can always read further articles; here is the short version for 32-bit paging. In 32-bit paging, each CPU core has a CR3 register. This register contains the physical address of the base of the Page Global Directory (PGD). The PGD contains the physical addresses of the bases of several page tables, which themselves contain the physical addresses of the bases of physical pages (https://wiki.osdev.org/Paging). A virtual address is split into three parts: the 12 least significant bits are the offset within the physical page, the middle 10 bits are the index into a page table, and the 10 most significant bits are the index into the PGD.
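A small sketch of my own showing how a 32-bit virtual address breaks down under this scheme:
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint32_t vaddr = 0xC0ABC123;                  /* an arbitrary example address */

    uint32_t pgd_index = (vaddr >> 22) & 0x3FF;   /* top 10 bits: entry in the PGD        */
    uint32_t pt_index  = (vaddr >> 12) & 0x3FF;   /* middle 10 bits: entry in a page table */
    uint32_t offset    =  vaddr        & 0xFFF;   /* low 12 bits: offset within the page   */

    printf("PGD index %u, page-table index %u, offset 0x%x\n",
           (unsigned)pgd_index, (unsigned)pt_index, (unsigned)offset);
    return 0;
}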
So when you write
char* p = (char*)malloc(4096);
p[0] = 'a';
p[1] = 'b';
you create a pointer of type char* and call malloc() to ask for 4096 bytes of memory. The OS puts the first address of that chunk of memory into a conventional register (which one depends on the system and OS). You should not forget that the C language is just a convention; it is up to whoever targets the operating system to implement that convention by writing a compatible compiler. That means the compiler knows which register and which interrupt number to use for the system call because it was written specifically for that OS. The compiler thus takes the address stored in that register and stores it into the char* pointer at runtime. On the second line you tell the compiler to take the char at the first address and make it an 'a'. On the third line you make the second char a 'b'. In the end, you could write the equivalent:
char* p = (char*)malloc(4096);
*p = 'a';
*(p + 1) = 'b';
p is a variable containing an address. The + operation on a pointer increments this address by the size of the type it points to. In this case, the pointer points to a char, so the + operation increments the pointer by one char (one byte). If it were pointing to an int, it would be incremented by 4 bytes (32 bits). The size of the pointer itself depends on the system: on a 32-bit system the pointer is 32 bits wide (because it contains an address), and on a 64-bit system it is 64 bits wide. An equivalent using a statically sized array instead of dynamic allocation is
char p[4096];
p[0] = 'a';
p[1] = 'b';
Now the compiler knows at compile time what memory this array will get; it is static memory rather than dynamically allocated memory. Even then, p acts as a pointer to the first char of that array, which means you could write
char p[4096];
*p = 'a';
*(p + 1) = 'b';
It would have the same result.
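A short demonstration of my own (not part of the original answer) of how the + operation scales with the pointed-to type:
#include <stdio.h>

int main(void)
{
    char c[2];
    int  i[2];

    /* Adjacent chars are 1 byte apart; adjacent ints are sizeof(int) bytes apart. */
    printf("&c[0]=%p  &c[1]=%p  (step: %zu byte)\n",
           (void *)&c[0], (void *)&c[1], sizeof(char));
    printf("&i[0]=%p  &i[1]=%p  (step: %zu bytes)\n",
           (void *)&i[0], (void *)&i[1], sizeof(int));

    /* p[1] and *(p + 1) mean exactly the same thing. */
    char *p = c;
    *p = 'a';
    *(p + 1) = 'b';
    printf("p[0]=%c  p[1]=%c\n", p[0], p[1]);
    return 0;
}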
First, the OS requests memory allocation from RAM, …
The OS does not have to request memory. It has access to all of memory the moment it boots. It keeps its own database of which parts of that memory are in use for what purposes. When it wants to provide memory for a user process, it uses its own database to find some memory that is available (or does things to stop using memory for other purposes and then make it available). Once it chooses the memory to use, it updates its database to record that it is in use.
… then RAM gives a physical memory address to the OS.
RAM does not give addresses to the OS except that, when starting, the OS may have to interrogate the hardware to see what physical memory is available in the system.
Once the OS receives the physical address, it maps the physical address to a virtual address …
Virtual memory mapping is usually described as mapping virtual addresses to physical addresses. The OS has a database of the virtual memory addresses in the user process, and it has a database of physical memory. When it is fulfilling a request from the process to provide virtual memory and it decides to back that virtual memory with physical memory, the OS will inform the hardware of what mapping it chose. This depends on the hardware, but a typical method is that the OS updates some page table entries that describe what virtual addresses get translated to what physical addresses.
I wrote some values ('a' and 'b') to virtual addresses and they were really written into main memory (RAM).
When your process writes to virtual memory that is mapped to physical memory, the processor will take the virtual memory address, look up the mapping information in the page table entries or other database, and replace the virtual memory address with a physical memory address. Then it will write the data to that physical memory.
What is virtual memory, and how does it differ from physical memory (RAM)? Some sources say that physical memory lives in chips on the motherboard, while virtual memory is stored on disk.
Somewhere it also says that virtual memory is used only when physical memory is full, which confused me a lot.
Then why does Windows use virtual memory? Is it because RAM is small and not designed for bulk storage, so virtual memory is used to hold bigger things?
The next thing is about addresses. Since virtual memory is on disk, it shouldn't share addresses with physical memory, so they have independent addresses. Is that right?
And,
When writing to the memory of another process, why is VirtualAlloc recommended instead of HeapAlloc?
Is it true that virtual memory is per process while physical memory is shared across processes?
"Virtual memory" means there is a valid address space, which does not map to any particular physical memory or storage, hence virtual. In context of modern common operating systems, each process has its own virtual memory space, with overlapping virtual memory addresses.
This address space is divided into pages for easier management (a typical page size is 4 KB). Each valid page can be in three different states:
Not stored physically at all (its contents are assumed to be all zeros). If the process writes to this kind of page, it must first be given a page of physical memory (by the OS, see below) so the value can be stored.
Mapped to physical memory, meaning some page-sized area of the computer's RAM stores the contents, and the process can use them directly.
Swapped out to disk (perhaps to a swap file) in order to free physical RAM pages (done automatically by the operating system). If the process accesses the page (read or write), it first needs to be loaded back into a page in RAM (see above).
Only when a virtual memory page is mapped to a physical RAM page is there actually something there. In the other cases, if a process accesses the page, there is a CPU exception, which transfers control to the operating system. The OS then either maps that virtual page to RAM (possibly freeing some RAM first by swapping current data out to the swap file, or terminating some application if memory is completely exhausted) and loads the right data into it, or it terminates the application (the address was not in a valid range, or the page is read-only and the process tried to write).
The same page of physical memory can also be mapped into several places at once, for example with shared memory, so the same data can be accessed by several processes at once (the virtual address is probably different in each process, so you can't share pointer variables).
Another special case of virtual memory use is mapping a regular file on disk into virtual memory (the same thing that happens with the swap file, but now controlled by a normal application process). The OS then takes care of actually reading bytes (in page-sized chunks) from disk and writing changes back; the process can just access the memory like any other memory.
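A hedged sketch of that last case on Linux, mapping a file into memory with mmap() and reading it through an ordinary pointer (the file name here is only an example; any readable, non-empty file works):
#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void)
{
    int fd = open("/etc/hostname", O_RDONLY);     /* example file */
    if (fd < 0) { perror("open"); return 1; }

    struct stat st;
    if (fstat(fd, &st) < 0 || st.st_size == 0) { close(fd); return 1; }

    /* The OS sets up the page-table entries; pages are read from disk on first access. */
    char *data = mmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
    if (data == MAP_FAILED) { perror("mmap"); close(fd); return 1; }

    printf("first byte of the file: %c\n", data[0]);   /* plain memory access */

    munmap(data, st.st_size);
    close(fd);
    return 0;
}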
Every modern multi-tasking general-purpose operating system uses virtual memory, because the CPUs they run on support it, and because it solves a whole set of problems, for example memory fragmentation, transparent swapping to disk, and memory protection. These could be solved differently, but virtual memory is the way it is done today.
Physical memory is shared between processes the same way as computer power supply is shared, or CPU is shared. It is part of the physical computer. A normal process never handles actual physical memory addresses, all that it sees is virtual memory, which may be mapped to different physical locations.
The contents of virtual memory are not normally shared, except when they are (when using shared memory for example).
I'm not sure what you mean by "when collecting memory for another process", so I can't answer that.
Virtual memory can essentially be thought of as a per-process virtual address space that is mapped onto physical addresses. In the case of x86 there is a register, CR3, that points to the translation table for the current process. When allocating new memory the OS will allocate physical memory, which may not even be contiguous, and then set a free contiguous virtual region to point to that physical memory. Whenever the CPU accesses any virtual memory it uses the translation table referenced by CR3 to convert it to the actual physical address.
More Info
https://en.m.wikipedia.org/wiki/Control_register#CR3
https://en.m.wikipedia.org/wiki/Page_table
To quote Wikipedia:
In computing, virtual memory (also virtual storage) is a memory management technique that provides an "idealized abstraction of the storage resources that are actually available on a given machine" which "creates the illusion to users of a very large (main) memory."
Because virtual memory is an illusory memory (it is not a separate physical device), some other computer resource besides RAM is used to back it. In this case, the resource used is the disk, because it has a lot of space, more than RAM, where the OS can keep its virtual-memory data.
Somewhere it also says that virtual memory is used only when physical memory is full, which confused me a lot.
It shouldn't confuse you. Swapping to disk is involved, and I/O with the disk is much slower than I/O with RAM. This is why physical memory is preferred nowadays, and the disk is used only when physical memory is not enough.
Then why does Windows use virtual memory? Is it because RAM is small and not designed for bulk storage, so virtual memory is used to hold bigger things?
This is one of the main reasons, yes. In the past (the 70s), computer memory was very expensive so workarounds had to be conceived.
Given the following C code:
void *ptr;
ptr = malloc(100);
printf("Address: %p\n", ptr);
When compiling this code using GCC 4.9 on 64-bit Ubuntu and running it, the output is similar to this:
Address: 0x151ab10
The value 0x151ab10 seems a reasonable number since my machine has 8 GB of RAM, but when compiling the same code using GCC 4.9 on 64-bit Mac OS X and running it, it gives output similar to this:
Address: 0x7fb9cb43ed30
... which is strange because 0x7fb9cb43ed30 is well above the 8 GB of RAM. Is there some kind of bit masking that one has to do in Mac OS X so that the real address of ptr can be printed out?
When processes run in general-purpose operating systems, the operating system constructs a “virtual” address space for each process, using assistance from hardware.
Whenever a process with a virtual address space accesses memory, the hardware translates the virtual address (in the process' address space) to a physical address (in actual memory hardware), using special registers in the hardware and tables in system memory that describe how the translation should be done.¹ The operating system configures the registers and tables for each process.
Commonly, the operating system, or the loader (the software that loads programs into memory for execution) assigns various ranges of the virtual address space for various purposes. It may put the stack in one place, executable code in another, general space for allocatable memory in another, and special system data in another. These addresses may come from base locations set arbitrarily by human designers or from various calculations, or combinations of those.
So seeing a large virtual address is not unusual; it is simply a location that was assigned in an imaginary address space.
Footnote
¹ There are additional complications in translating virtual addresses to physical addresses. When the processor translates an address, the result may be that the desired location is not in physical memory at all. When this happens, the processor notifies the operating system. In response, the operating system can allocate some physical memory, read the necessary data from disk, update the memory map of the process so that the virtual address points to the newly allocated physical memory, and resume execution of the process. Then your process can continue as if the data were there all along. Additionally, when the system allocated physical memory, it may have had to make some memory available by writing data that was in memory to disk, and also removing it from the memory map of some process (possibly the same one). In this way, disk space becomes auxiliary memory, and processes can execute with more memory in their virtual address spaces than there is in actual physical memory.
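A small demonstration of my own of the point above: printing the addresses of objects that the loader places in different regions typically shows widely separated virtual addresses, none of which need to correspond to physical RAM locations (the exact values differ per run and per platform):
#include <stdio.h>
#include <stdlib.h>

int global_var;                        /* data segment */

int main(void)
{
    int   local_var;                   /* stack */
    void *heap_var = malloc(100);      /* heap (or an mmap'd region) */

    printf("read-only data: %p\n", (void *)"a string literal");
    printf("data:           %p\n", (void *)&global_var);
    printf("stack:          %p\n", (void *)&local_var);
    printf("heap:           %p\n", heap_var);

    free(heap_var);
    return 0;
}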
I have gone through the link below, and it says that on most operating systems pointers store virtual addresses rather than physical addresses, but I am unable to see the benefit of storing a virtual address in a pointer.
If, in the end, we can modify the contents of a particular memory location directly through a pointer, what does it matter whether it is a virtual address or a physical address?
Also, while the code executes, the data segment will mostly remain in memory, so we are dealing with physical memory locations anyway; how is the virtual address useful then?
C pointers and the physical address
Security issues (as noted before) aside, there is another big advantage:
Globals and functions (and your stack) can always be found at fixed addresses (so the assembler can hardcode them), independently of where the instance of your program is loaded.
If you'd really like your code to run from any address, you have to make it position independent (with gcc you'd use the -fPIC argument). This question might be an interesting read with respect to -fPIC and virtual addressing: GCC -fPIC option
Actually, pointers normally hold LOGICAL ADDRESSES.
There are several reasons for using logical addressing, including:
Ease of memory management. The OS never has to allocate contiguous physical page frames to a process.
Security. Each process has access to its own logical address space and cannot mess with another process's address space.
Virtual Memory. Logical address translation is a prerequisite for implementing virtual memory.
Page protection. Aids security by limiting access to system pages to higher processor modes. Helps error (and virus) trapping by limiting the types of access to pages (e.g., not allowing writes to or executes of data pages).
This is not a complete list.
The same virtual address can point to different physical addresses at different moments in time.
If your physical memory is full, some of your data is swapped out from memory to your HDD. When your program wants to access this data again (it is currently not stored in memory), it is swapped back into memory, but often at a different location than before.
The Page Table, which stores the assignment of virtual to physical addresses, is updated with the new physical address. So your virtual address remains the same, while the physical address may change.
I mean the physical memory, the RAM.
In C you can access any memory address, so how does the operating system prevent your program from changing memory addresses that are not in your program's memory space?
Does it set specific memory addresses as the beginning and end for each program? If so, how does it know how much is needed?
Your operating system kernel works closely with memory management unit (MMU) hardware, when the hardware and OS both support this, to make it impossible to access memory you have been disallowed access to.
Generally speaking, this also means the addresses you access are not physical addresses but rather are virtual addresses, and hardware performs the appropriate translation in order to perform the access.
This is what is called a memory protection. It may be implemented using different methods. I'd recommend you start with a Wikipedia article on this subject — http://en.wikipedia.org/wiki/Memory_protection
Actually, your program is allocated virtual memory, and that's what you work with. The OS gives you a part of the RAM; you can't access other processes' memory (unless it's shared memory; look it up).
It depends on the architecture, on some it's not even possible to prevent a program from crashing the system, but generally the platform provides some means to protect memory and separate address space of different processes.
This has to do with a thing called 'paging', which is provided by the CPU itself. In old operating systems, you had 'real mode', where you could directly access memory addresses. In contrast, paging gives you 'virtual memory', so that you are not accessing the raw memory itself, but rather, what appears to your program to be the entire memory map.
The operating system does "memory management", often coupled with TLBs (translation lookaside buffers) and virtual memory, which translate every address into pages that the operating system can tag as readable or executable in the current process's context.
The minimum requirement for a processor's MMU (memory management unit) is to restrict, in the current context, the accessible memory to a range that can only be set via processor registers in supervisor mode (as opposed to user mode).
The logical address is generated by the CPU and is mapped to the physical address by the memory management unit (MMU). Unlike the physical address space, the logical address space is not restricted by memory size, and you just get to work with the logical address space. The address binding is done by the MMU, so you never deal with the physical address directly.
Most computers (and all PCs since the 386) have something called the Memory Management Unit (or MMU). Its job is to translate local addresses used by a program into the physical addresses needed to fetch real bytes from real memory. It's the operating system's job to program the MMU.
As a result of this, programs can be loaded into any region of memory and appear, from the program's point of view while executing, to be at any other address. It's common to find that the code of all programs appears (locally) to be at the same address, and their data always appears (locally) to be at the same address, even though physically they are in different locations. With each memory access, the MMU transparently translates from the local address space to the physical one.
If a program tries to access a memory address that has not been mapped into its local address space, the hardware generates an exception, it typically gets flagged as a "segmentation violation", and the program is forcibly terminated. This protects processes from accessing each other's memory.
But that doesn't have to be the case! On systems with "virtual memory" and current resource demands on RAM that exceed the amount of physical memory, some pages (just blocks of memory of a common size, often on the order of 4-8kB) can be written out to disk and given as RAM to a program trying to allocate and use new memory. Later on, when that page is needed by whatever program owns it, the memory access causes an exception and the OS swaps out some other memory page and re-loads the needed one from disk. The program that "page-faulted" gets delayed while this happens but otherwise notices nothing.
There are lots of other tricks the MMU/OS can do as well, like sharing memory between processes, making a disk file appear to be directly memory-accessible, marking some pages as "NX" so they can't be executed as code, using arbitrary sections of the logical memory space regardless of how much physical RAM there is and at what addresses it sits, and more.
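Finally, a deliberately broken sketch of my own of the "segmentation violation" case described above (this program is expected to crash):
#include <stdio.h>

int main(void)
{
    int *p = (int *)0x1;   /* an address almost certainly not mapped into this process */

    printf("about to write through an unmapped pointer...\n");
    *p = 42;               /* the MMU faults; the OS delivers SIGSEGV and terminates us */

    printf("never reached\n");
    return 0;
}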