memory starting location in C [duplicate] - c

This question already has an answer here:
Why do virtual memory addresses for linux binaries start at 0x8048000?
I am looking into the memory layout of a given process. I noticed that the starting memory location of each process is not 0. On this website, TEXT starts at 0x08048000. One reason could be to distinguish the address from the NULL pointer. I am just wondering whether there are any other good reasons. Thanks.

The null pointer doesn't actually have to be 0. The C standard guarantees that when a constant 0 is used in pointer context, the compiler treats it as a null pointer.
But the 0 that you use in your source code is just syntactic sugar that has no relation to the actual physical address the null pointer value is "pointing" to.
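A minimal sketch of that guarantee (nothing below depends on the run-time representation of a null pointer):

#include <stdio.h>

int main(void)
{
    int *p = 0;     /* the constant 0 in pointer context yields a null pointer */
    int *q = NULL;  /* NULL expands to such a null pointer constant */

    if (p == q)     /* guaranteed to compare equal, whatever the bit pattern */
        printf("both are null pointers\n");
    return 0;
}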
For further details see:
Why is NULL/0 an illegal memory location for an object?
Why is address zero used for the null pointer?
An application on your operating system has its own address space, which it sees as one contiguous block of memory (the memory isn't physically contiguous, it's just the impression the operating system gives to every program).
For the most part, each process's virtual memory space is laid out in a similar and predictable manner (this is the memory layout of a Linux process, in 32-bit mode):
(image from Anatomy of a Program in Memory)
Look at the text segment (the default .text base on x86 is 0x08048000, chosen by the default linker script for static binding).
Why the magical 0x08048000? Likely because Linux borrowed that address from the System V i386 ABI.
... and why then did System V use 0x08048000?
The value was chosen to accommodate the stack below the .text section,
growing downward. The 0x48000 bytes could be mapped by the same page
table already required by the .text section (thus saving a page table
in most cases), while the remaining 0x08000000 would allow more room
for stack-hungry applications.
Is there anything below 0x08048000? There could be nothing (it's only 128M), but you can pretty much map anything you desire there, using the mmap() system call.
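For example, a hedged sketch of doing just that on Linux (the fixed address 0x01000000 is purely illustrative, and the kernel may refuse very low addresses, see /proc/sys/vm/mmap_min_addr):

#include <stdio.h>
#include <sys/mman.h>

int main(void)
{
    void *want = (void *)0x01000000;  /* an arbitrary address below the text base */
    void *got = mmap(want, 4096, PROT_READ | PROT_WRITE,
                     MAP_PRIVATE | MAP_ANONYMOUS | MAP_FIXED, -1, 0);
    if (got == MAP_FAILED)
        perror("mmap");               /* the kernel refused the mapping */
    else
        printf("mapped a page at %p\n", got);
    return 0;
}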
See also:
What's the memory before 0x08048000 used for in 32 bit machine?
Reorganizing the address space
mmap

I think this sums it up:
Each process has its own set of page tables, but there is a catch. Once virtual addresses are enabled, they apply to all software running in the machine, including the kernel itself. Thus a portion of the virtual address space must be reserved to the kernel.
So while the process gets its own address space, without reserving a block for the kernel it would not be able to address kernel code and data.
In this layout that reserved block is the first block of memory, and so it includes address 0. The user-mode space starts beyond it, and that is where both the stack and the heap reside.
Distinguishing from NULL pointer
Even if the user-mode space started at address 0, no data would be allocated at address 0, because objects live in the stack or the heap, neither of which starts at the very beginning of the user area. Therefore NULL (with the value 0) could still be used, and this is not the reason for the layout.
However, one benefit related to NULL when the first block is kernel memory is that any attempt to read or write through NULL raises a segmentation fault.

A loader loads a binary into memory in segments: text (code and constants), data, and so on. There is no need to start from 0, and since C programs tend to have bugs that access memory around null, for example through an out-of-range a[i], starting there would even be dangerous. Keeping the low addresses unmapped allows (on some processors) such accesses to be intercepted as segmentation faults.
It would be the C runtime's job to introduce a linear address space starting from 0. That might be imaginable where C is the operating system's implementation language, but it would serve no purpose to have the heap start from 0. The memory model is one of segments; a code segment might be protected against modification by some processors.
And within segments, allocation happens in memory blocks managed by the C runtime.
I might add that physical address 0 and upwards is often used by the operating system itself.

Related

How memory allocation of variables or data in a program are done by compiler and OS

I want to get an overview of a few things about how exactly the memory for a variable is allocated.
In C programming,
Taking the context of "auto" variables, which are allocated on the stack section, I have the following question:
Does the compiler generate a logical address for the variables? If yes, then how? Won't the compiler need OS permission to generate or assign such addresses? If no, then is there some sort of indication or instruction that the compiler puts in the code segment asking the OS to allocate memory when running the executable?
Now taking the context of heap allocated variables,
Is the heap of the same size for all programs? If not, then does the executable consist of a header or something that tells the OS how much heap space it needs for dynamic allocation?
I'd be grateful if someone provides the answer or shares any related content/links that explains this.
Memory for the stack (most implementations use a stack for automatic storage duration objects) and for static storage duration objects is allocated during program load and startup.
Does the compiler generate a logical address for the variables? If
yes, then how?
I do not know what a "logical address" is here, but compilers do "calculate" the references to automatic storage duration objects. How? The compiler simply knows how far from the stack pointer the automatic storage duration object is located (its offset).
Generally the same applies to static duration objects and to code: the compiler only calculates offsets from the start of their sections.
Is the heap of the same size for all programs?
It is implementation defined.
A method typically used in operating systems is that, when a program is starting, there is a piece (or collection) of software used that loads the program. The program loader reads the executable file and sets up memory for the program.
Part of the executable file says what size stack should be allocated for it. Most often, this is set by default when linking the program. (It is 8 MiB for macOS, 2 MiB for Linux, and 1 MiB for Windows.) However, it can be changed by asking the linker to set a different size.
The program loader calls operating system routines to request virtual memory be mapped. It does this for the stack and for other parts of the program, such as the code sections (the parts of memory that contain, mostly, the executable instructions of the program), and the initialized and uninitialized data. When it starts the program, it tells the program where the stack starts by putting that address into a designated register (or similar means).
One of the processor registers is used as a stack pointer; it points to address within the memory allocated for the stack that is the current top of stack. When the compiler arranges to use stack space for objects, it generates instructions that adjust the stack pointer. The addresses for the objects are calculated relative to the stack pointer. If a function needs 128 bytes of data, the compiler generates an instruction that subtracts 128 from the stack pointer. (This may occur in multiple steps, such as “call” and “push” instructions that make some changes to the stack pointer plus an additional “subtract” instruction that finishes the changes.) Then the addresses of all the objects in this stack frame are calculated as offsets from the value of the stack pointer. For example, by taking the stack pointer and adding 40, we calculate the address of the object that has been assigned to be 40 bytes higher than the top of the stack.
(There is some confusion about the wording of directions here because stacks commonly grow from high addresses to low addresses. The program loader may allocate some chunk of memory from, say, address 0x123000000 to 0x124000000. The stack pointer will start at 0x124000000. Subtracting 128 will make it 0x123FFFF80. Then 0x123FFFFA8 is an address that is 40 bytes "higher" than 0x123FFFF80 in the address space, but the "top of stack" is below that. That is because the term "top of stack" refers to the model of physically stacking things on top of each other, with the latest thing on top.)
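As a small demonstration of that downward growth (not guaranteed by the C standard, but typical on mainstream platforms), the address of a local in a callee is usually lower than one in its caller:

#include <stdio.h>

void callee(void)
{
    int inner;
    printf("callee's local: %p\n", (void *)&inner);
}

int main(void)
{
    int outer;
    printf("main's local:   %p\n", (void *)&outer);
    callee();  /* typically prints a lower address: the stack grew downward */
    return 0;
}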
The so-called "heap" is not the same size for all programs. In typical implementations, the memory management routines call system routines to request more virtual memory when they need it.
Note that "heap" is properly a word for a general data structure. Heaps may be used to organize things other than available memory, and the memory management routines may keep track of available memory using data structures other than heaps. When referring to memory allocated via the memory management routines, you can call it "dynamically allocated memory." It may also be shortened to "allocated memory," but that can be confusing in some situations, since all memory that has been reserved for some use is allocated memory.
Some background first
In C programming, Taking the context of "auto" variables, which are allocated on the stack section ...
To understand my answer, you first need to know how the stack works.
Imagine you write the following function:
int myFunction()
{
    return function1() + function2() + function3();
}
Unfortunately, you do not use C as your programming language; instead you use a programming language that supports neither local variables nor return values. (This is how most CPUs work internally.)
You may return a value from a function in a global variable:
int result;

void function1()
{
    result = 1234; // instead of: return 1234;
}
And your program may now look the following way if you use a global variable instead of local ones:
int a;

void myFunction()
{
    function1();
    a = result;
    function2();
    a += result;
    function3();
    result += a;
}
Unfortunately, one of the three functions (e.g. function3()) may call myFunction() (so the function is called recursively) and the variable a is overwritten when calling function3().
To solve this problem, you may define an array for local variables (myvars[]) and a variable named mypos. In the example, the elements 0...mypos in myvars[] are used; the elements (mypos+1)...(MAX_LOCALS-1) are free:
int myvars[MAX_LOCALS];
int mypos;
...
void myFunction()
{
    function1();
    mypos++;
    myvars[mypos] = result;
    function2();
    myvars[mypos] += result;
    function3();
    result += myvars[mypos];
    mypos--;
}
By changing the value of mypos from 10 to 11 (as an example), your program indicates that the element myvars[11] is now in use and that the functions being called shall store their data in elements myvars[x] with x >= 12.
This is exactly how the stack works.
Typically, the "variable" mypos is not a variable but a CPU register named "stack pointer". (However, there are a few historic CPUs where an ordinary variable was used for this!)
The actual answers
Does the compiler generate a logical address for the variables?
In the example above, the compiler will perform a mypos+=3 if there are 3 local variables. Let's say they are named a, b and c.
The compiler simply replaces a by myvars[mypos-2], b by myvars[mypos-1] and c by myvars[mypos].
On most CPUs, the stack pointer (named mypos in the example) is not an index into an array but already a pointer (comparable to int * mypos;), so the compiler would replace a by *(mypos-2) instead of myvars[mypos-2] in the example.
For global variables, the compiler simply counts the number of bytes needed for all global variables. In the simplest case, it chooses a range of memory of the same size (e.g. 0x10000...0x10123) and places the variables there.
Won't the compiler need OS permission to generate or assign such addresses?
No.
The "stack" (in the example this is the array myvars[]) is already provided by the OS and the stack pointer (mypos in the example) is also set to a correct value by the OS before the program is started.
Your program knows that the elements myvars[x] with x>mypos can be used.
For global variables, the information about the range used by global variables (e.g. 0x10000...0x10123) is stored in the executable file. The OS must ensure that this memory range can be used by the program. (For example by configuring the MMU accordingly.)
If this is not possible, the OS will simply refuse to start the program with an error message.
... asking the OS to allocate memory when running the executable?
For variables on the stack:
There may be operating systems where this is done.
However, in most cases, the program will simply crash with a "stack overflow" if too much stack is needed. In the example, this would mean: the program crashes if an element myvars[x] with x >= MAX_LOCALS is accessed.
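For illustration, a sketch that provokes exactly that crash (run it only where a crash is acceptable; an optimizing compiler may transform the recursion, so compile without optimizations):

void recurse(int depth)
{
    char buffer[1024];        /* each frame consumes stack space */
    buffer[0] = (char)depth;  /* touch the buffer so the frame isn't removed */
    recurse(depth + 1);       /* no base case: eventually overflows the stack */
}

int main(void)
{
    recurse(0);  /* typically dies with SIGSEGV ("stack overflow") */
    return 0;
}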
Now taking the context of heap allocated variables ...
Please first note that global variables are not stored on the heap.
The heap is used for data allocated using malloc() and similar functions (new in C++ for example).
Under many operating systems, malloc() calls an operating system function - so it is actually the operating system that allocates memory on the heap.
... and if there is not enough space, the OS (and malloc()) will return NULL.
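A minimal sketch of that contract as seen from C:

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    int *data = malloc(1000 * sizeof *data);  /* ask the allocator (and ultimately the OS) for heap space */
    if (data == NULL) {                       /* allocation failed: no space */
        fprintf(stderr, "out of memory\n");
        return 1;
    }
    data[0] = 42;
    free(data);
    return 0;
}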
Does the compiler generate a logical address for the variables? If yes, then how?
Yes, but the addresses are relative to the stack pointer at function entry time (which is normally saved as a constant base pointer, stored in a CPU register). This is because the function can be recursive, and you can have two calls to the function with different instances of that variable (each relative to a different copy of the base pointer). The compiler assigns the variable an offset from the base pointer, but the base pointer itself can differ, depending on the stack contents at function entry time.
Won't the compiler need OS permission to generate or assign such addresses?
Nope, the compiler just generates an executable in the format and form needed for the operating system to manage the process's memory. When the program starts, it is normally given three (or more) segments of memory:
Text segment. A normally read-only (or execute-only) segment to which the program has no write access. This is normally because the text segment is shared between all processes that are using the same executable at the same time. A program can demand exclusive read-write access to the text (to allow programs that modify their own executable code), but this happens only rarely. This is normally specified to the compiler, and the compiler writes a special flag in the text segment to inform the kernel of this requirement.
Data segment. A read-write segment that can be grown by means of a system call (sbrk(2)). This is used for global variables and the heap (although in modern systems the heap is allocated in a new segment acquired by calling the mmap(2) system call). Sometimes this segment is divided in two: a read-only data segment for constants (so the program receives a signal if you try to change the value of a constant) and a read-write segment, freely usable by the program. This is where global variables are stored.
Stack segment. A read-write segment that is allocated for the process to use as the stack. It can grow in one direction as the process starts using it: when the process accesses data one memory page below the start of the segment, it generates a page fault trap that results in a new page being appended to the segment, so its workings are transparent to the process. This is the memory we are talking about.
The process can ask the kernel explicitly for a new segment if it wants to (let's say it needs to map some file in memory, or it has to load a shared executable/library), and on some systems read-only variables (declared as const) are explicitly stored in the text segment or in a specific section called .rodata that demands from the system a special data segment that is read-only. The compiler doesn't normally request this kind of resource itself; it is normally encoded in the program being compiled.
The total memory is limited by system-imposed limits, so if you try to exceed them (around 8 MB of stack space by default, depending on the operating system) you will get a signal from the system and your program will be aborted.
As you see, the process memory is owned by the process, and it can make whatever use of it it is permitted to. The stack memory is read/write and allocated on demand; you can use up to 8 MB, but there's no check on how you use it.
If no, then is there some sort of indication or instruction that the compiler puts in the code segment asking the OS to allocate memory when running the executable?
The system will know the size of the text segment of the process from its size in the executable. The data segment is divided into two parts, normally based on which global variables are initialized and which default to zero (the memory allocated by the kernel to a process is initialized to zeros for security reasons), so the sizes of the initialized data and uninitialized data sections are added together to know how much memory to assign to the data segment. The stack segment is initially assigned just one page of memory, but it grows as the process runs and fills the stack, generating page faults on the so-called next page of the stack segment. As you see, there's no need for the compiler to embed in the code any instruction asking for more memory. Everything is read from the executable file.
The compiler runs as a normal program... it only generates all this information and writes it to a file (the executable file) for the kernel to know the resources needed to run the program. The compiler's communication with the kernel is just to ask it to open files, write to them, read from source code, and struggle its head to achieve its task. :)
In most POSIX systems, the kernel loads a program into memory by means of the exec*(2) system calls. The kernel reads the executable file pointed to by a parameter of the call and creates the segments mentioned above, based on the parameters stored in the file. It checks whether another instance of the same program is running in the system, to avoid loading the instructions from the file again and instead reference in this process the segment already opened by the other. The data segment contents are initialized to zeros, and the initialization data are read into the segment (so the first part holds the .data section of initialized global variables, while the .bss section, which has only a size, is used to calculate the total size of the data segment). Then the stack is normally allocated one or more pages, depending on the initial contents that the exec() call puts on the initial stack. The initial stack is filled with:
a structure of data containing references to the program parameter list, which was used on legacy systems to tell the kernel which command line parameters to show in the ps(1) command output (this is still generated for legacy purposes, but not used by the kernel, for obvious security reasons). Today, a special system call is used to tell the kernel the command line parameters to be shown in the ps(1) output.
a snippet of machine code to use in the return from a system call to allow the execution (in user mode) of any signal handler that should be executed (this is the reason for the requirement that all signal handlers are called when the kernel returns from kernel mode and switches back again to user mode, and not otherwise)
the environment of the process.
the array of pointers to environment strings.
the command line parameters.
the array of char pointers that point to the command line parameters.
the envp array reference passed to main().
the argv array reference to the command line parameters.
the argc counter of the number of command line parameters.
Once all these data is pushed to the stack, the program jumps to the start address (fixed by the linker, or by the user by a linker option) and is let to start running.
Before the program jumps to main(), the executed code is part of the C runtime, which loads a special shared executable (called /lib/ld.so or similar) that is responsible for searching for and loading all the shared libraries linked to the program. Not all programs have this feature (but almost all of them today are dynamically linked). IMHO this is out of scope for this question, as the program has already started and is running.

How Process Size is determined? [closed]

I am very new to these concepts, but I want to ask you all a question that I think is very basic; I am confused, so I am asking it.
The question is...
How is the size of a process determined by the OS?
Let me clarify first: suppose that I have written a C program and I want to know how much memory it is going to take. How can I determine that? Secondly, I know that there are many sections in a process, like the code section, data section, and BSS. Are the sizes of these predetermined? Also, how are the sizes of the stack and heap determined? Do the stack and heap sizes also matter when the total size of the process is calculated?
Again, we say that when we load the program, an address space is given to the process (that is done by the base and limit registers and controlled by the MMU, I guess), and when the process tries to access a memory location that is not in its address space we get a segmentation fault. How is it possible for a process to access memory that is not in its address space? According to my understanding, when some buffer overflow happens, an address gets corrupted. When the process then wants to access the corrupted location, we get the segmentation fault. Is there any other kind of address violation?
And thirdly, why does the stack grow downward and the heap upward? Is this the same in all operating systems? How does it affect performance? Why can't we have it the other way?
Please correct me, if I am wrong in any of the statement.
Thanks
Sohrab
When a process is started it gets its own virtual address space. The size of the virtual address space depends on your operating system. In general, 32-bit processes get 4 GiB (2^32 bytes) of addresses and 64-bit processes get 16 EiB (2^64 bytes).
You cannot in any way access anything that is not mapped into your virtual address space as by definition anything that is not mapped there does not have an address for you. You may try to access areas of your virtual address space that are currently not mapped to anything, in which case you get a segfault exception.
Not all of the address space is mapped to something at any given time; also, not all of it may be mappable at all (how much may be mapped depends on the processor and the operating system). On current-generation Intel processors up to 256 TiB of your address space may be mapped. Note that operating systems can limit that further. For example, for 32-bit processes (having up to 4 GiB of addresses) Windows by default reserves 2 GiB for the system and 2 GiB for the application (but there's a way to make it 1 GiB for the system and 3 GiB for the application).
How much of the address space is being used and how much is mapped changes while the application runs. Operating system specific tools will let you monitor what the currently allocated memory and virtual address space is for an application that is running.
Code section, data section, BSS, etc. are terms that refer to different areas of the executable file created by the linker. In general, code is separate from static immutable data, which is separate from statically allocated but mutable data. Stack and heap are separate from all of the above. Their size is computed by the compiler and the linker. Note that each binary file has its own sections, so any dynamically linked libraries will be mapped into the address space separately, each with its own sections mapped somewhere. Heap and stack, however, are not part of the binary image; there is generally just one stack per process and one heap.
The size of the stack (at least the initial stack) is generally fixed. Compilers and/or linkers generally have some flags you can use to set the size of the stack that you want at runtime. Stacks generally "grow backward" because that's how the processor stack instructions work. Having stacks grow in one direction and the rest grow in the other makes it easier to organize memory in situations where you want both to be unbounded but do not know how much each can grow.
Heap, in general, refers to anything that is not pre-allocated when the process starts. At the lowest level there are several logical operations that relate to heap management (not all are implemented as I describe here in all operating systems).
While the address space is fixed, some OSs keep track of which parts of it are currently claimed by the process. Even if this is not the case, the process itself needs to keep track of it. So the lowest-level operation is to actually decide that a certain region of the address space is going to be used.
The second low level operation is to instruct the OS to map that region to something. This in general can be
some memory that is not swappable
memory that is swappable and mapped to the system swap file
memory that is swappable and mapped to some other file
memory that is swappable and mapped to some other file in read only mode
the same mapping that another virtual address region is mapped to
the same mapping that another virtual address region is mapped to, but in read only mode
the same mapping that another virtual address region is mapped to, but in copy on write mode with the copied data mapped to the default swap file
There may be other combinations I forgot, but those are the main ones.
Of course the total space used really depends on how you define it. RAM currently used is different than address space currently mapped. But as I wrote above, operating system dependent tools should let you find out what is currently happening.
The sections are predetermined by the executable file.
Besides that one, there may be those of any dynamically linked libraries. While the code and constant data of a DLL is supposed to be shared across multiple processes using it and not be counted more than once, its process-specific non-constant data should be accounted for in every process.
Besides, there can be dynamically allocated memory in the process.
Further, if there are multiple threads in the process, each of them will have its own stack.
What's more, there are going to be per-thread, per-process and per-library data structures in the process itself and in the kernel on its behalf (thread-local storage, command line params, handles to various resources, structures for those resources as well and so on and so forth).
It's difficult to calculate the full process size exactly without knowing how everything is implemented. You might get a reasonable estimate, though.
W.r.t. "According to my understanding when some buffer overflow happens then the address gets corrupted": it's not necessarily true. First of all, the address of what? It depends on what happens to be in the memory near the buffer. If there's an address, it can get overwritten during a buffer overflow. But if there's another buffer nearby that contains a picture of you, the pixels of the picture can get overwritten instead.
You can get segmentation or page faults when trying to access memory for which you don't have necessary permissions (e.g. the kernel portion that's mapped or otherwise present in the process address space). Or it can be a read-only location. Or the location can have no mapping to the physical memory.
It's hard to tell how the location and layout of the stack and heap are going to affect performance without knowing the performance of what we're talking about. You can speculate, but the speculations can turn out to be wrong.
Btw, you should really consider asking separate questions on SO for separate issues.
"How is it possible for a process to access a memory that is not in its address space?"
Given memory protection it's impossible. But it might be attempted. Consider random pointers or access beyond buffers. If you increment any pointer long enough, it almost certainly wanders into an unmapped address range. Simple example:
char *p = "some string";
while (*p++ != 256) /* Always true. Keeps incrementing p until segfault. */
    ;
Simple errors like this are not unheard of, to make an understatement.
I can answer to questions #2 and #3.
Answer #2
When you use pointers in C you are really using a numerical value that is interpreted as an address in memory (a logical address on modern OSes; see the footnotes). You can modify this address at will. If the value points to an address that is not in your address space, you get your segmentation fault.
Consider for instance this scenario: your OS gives to your process the address range from 0x01000 to 0x09000. Then
int *ptr = (int *)0x01000;
printf("%d", ptr[0]); // prints 4 bytes (sizeof(int) bytes) of your address space

int *ptr2 = (int *)0x09100;
printf("%d", ptr2[0]); // you are accessing outside your space: segfault
Mostly the causes of segfaults, as you pointed out, are dereferencing NULL pointers (mostly address 0x00, but implementation-dependent) or using corrupted addresses.
Note that on Linux i386 the base and limit registers are not used as you may think. They are not per-process limits; they point to two kinds of segments: user space and kernel space.
Answer #3
The direction of stack growth is hardware-dependent, not OS-dependent. On i386, assembly instructions like push and pop make the stack grow downwards with regard to the stack-related registers. For instance, the stack pointer automatically decreases when you do a push, and increases when you do a pop. The OS cannot change that.
Footnotes
In a modern OS, a process uses so-called logical addresses. These addresses are mapped to physical addresses by the OS. To see this for yourself, compile this simple program:
#include <stdio.h>

int main()
{
    int a = 10;
    printf("%p\n", (void *)&a);
    return 0;
}
If you run this program multiple times (even simultaneously), you will see the same address printed out, even for different instances. Of course this is not the real memory address; it is a logical address that will be mapped to a physical address when needed.

What happens in OS when we dereference a NULL pointer in C?

Let's say there is a pointer and we initialize it with NULL.
int* ptr = NULL;
*ptr = 10;
Now the program will crash, since ptr isn't pointing to any address and we're assigning a value through it, which is an invalid access. So the question is: what happens internally in the OS? Does a page fault / segmentation fault occur? Will the kernel even search in the page table? Or does the crash occur before that?
I know I wouldn't do such a thing in any program but this is just to know what happens internally in the OS or Compiler in such a case. And it is NOT a duplicate question.
Short answer: it depends on a lot of factors, including the compiler, processor architecture, specific processor model, and the OS, among others.
Long answer (x86 and x86-64): Let's go down to the lowest level: the CPU. On x86 and x86-64, that code will typically compile into an instruction or instruction sequence like this:
movl $10, 0x00000000
Which says to "store the constant integer 10 at virtual memory address 0". The Intel® 64 and IA-32 Architectures Software Developer Manuals describe in detail what happens when this instruction gets executed, so I'm going to summarize it for you.
The CPU can operate in several different modes, several of which are for backwards compatibility with much older CPUs. Modern operating systems run user-level code in a mode called protected mode, which uses paging to convert virtual addresses into physical addresses.
For each process, the OS keeps a page table which dictates how the addresses are mapped. The page table is stored in memory in a specific format (and protected so that they can not be modified by the user code) that the CPU understands. For every memory access that happens, the CPU translates it according to the page table. If the translation succeeds, it performs the corresponding read/write to the physical memory location.
The interesting things happen when the address translation fails. Not all addresses are valid, and if any memory access generates an invalid address, the processor raises a page fault exception. This triggers a transition from user mode (aka current privilege level (CPL) 3 on x86/x86-64) into kernel mode (aka CPL 0) to a specific location in the kernel's code, as defined by the interrupt descriptor table (IDT).
The kernel regains control and, based on the information from the exception and the process's page table, figures out what happened. In this case, it realizes that the user-level process accessed an invalid memory location, and then it reacts accordingly. On Windows, it will invoke structured exception handling to allow the user code to handle the exception. On POSIX systems, the OS will deliver a SIGSEGV signal to the process.
In other cases, the OS will handle the page fault internally and restart the process from its current location as if nothing happened. For example, guard pages are placed at the bottom of the stack to allow the stack to grow on demand up to a limit, instead of preallocating a large amount of memory for the stack. Similar mechanisms are used for achieving copy-on-write memory.
In modern OSes, the page tables are usually set up to make the address 0 an invalid virtual address. But sometimes it's possible to change that, e.g. on Linux by writing 0 to the pseudofile /proc/sys/vm/mmap_min_addr, after which it's possible to use mmap(2) to map the virtual address 0. In that case, dereferencing a null pointer would not cause a page fault.
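A hedged sketch of that Linux-specific scenario (it requires privilege and mmap_min_addr set to 0; on a default system the mmap simply fails):

#include <stdio.h>
#include <sys/mman.h>

int main(void)
{
    /* Ask for the page at virtual address 0; MAP_FIXED forces that exact address. */
    void *page = mmap((void *)0, 4096, PROT_READ | PROT_WRITE,
                      MAP_PRIVATE | MAP_ANONYMOUS | MAP_FIXED, -1, 0);
    if (page == MAP_FAILED) {
        perror("mmap of page zero");  /* the common, protected case */
        return 1;
    }
    *(int *)page = 10;                /* a "null pointer" store that no longer faults */
    printf("wrote through address zero\n");
    return 0;
}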
The above discussion is all about what happens when the original code is running in user space. But this could also happen inside the kernel. The kernel can (and is certainly much more likely than user code to) map the virtual address 0, so such a memory access would be normal. But if it's not mapped, then what happens then is largely similar: the CPU raises a page fault error which traps into a predefined point at the kernel, the kernel examines what happened, and reacts accordingly. If the kernel can't recover from the exception, it will typically panic in some fashion (kernel panic, kernel oops, or a BSOD on Windows, e.g.) by printing out some debug information to the console or serial port and then halting.
See also Much ado about NULL: Exploiting a kernel NULL dereference for an example of how an attacker could exploit a null pointer dereference bug from inside the kernel in order to gain root privileges on a Linux machine.
As a side note, just to illustrate the differences between architectures: a certain OS, developed and maintained by a company known by a three-letter acronym and often referred to as a large primary color, has a most fascinating NULL determination.
They utilize a 128-bit linear address space for ALL data (memory AND disk) in one giant "thing". In accordance with their OS, a "valid" pointer must be placed on a 128-bit boundary within that address space. This, btw, causes fascinating side effects for structs, packed or not, that house pointers. Anyway, tucked away in a per-process dedicated page is a bitmap that assigns one bit for every location in a process address space where a valid pointer can lie. ALL opcodes on their hardware and OS that can generate and return a valid memory address and assign it to a pointer will set the bit that represents the memory address where that pointer (the target pointer) is located.
So why should anyone care? For this simple reason:
int a = 0;
int *p = &a;
int *q = p - 1;

if (p)
{
    // p is valid, p's bit is lit, this code will run.
}

if (q)
{
    // the address stored in q is not valid. q's bit is not lit. this will NOT run.
}
What is truly interesting is this.
if (p == NULL)
{
    // p is valid. this will NOT run.
}

if (q == NULL)
{
    // q is not valid, and therefore treated as NULL, this WILL run.
}

if (!p)
{
    // same as before. p is valid, therefore this won't run.
}

if (!q)
{
    // same as before, q is NOT valid, therefore this WILL run.
}
It's something you have to see to believe. I can't even imagine the housekeeping done to maintain that bitmap, especially when copying pointer values or freeing dynamic memory.
On CPUs which support virtual memory, a page fault exception will usually be raised if you try to read at memory address 0x0. The OS page fault handler will be invoked; the OS will then decide that the page is invalid and abort your program.
Note that on some CPUs you can also safely access memory address 0x0.
As the C standard says dereferencing a null pointer is undefined, if the compiler is able to detect at compile time (or even at runtime) that you are dereferencing a null pointer, it can do whatever it wants, like aborting the program with a verbose error message.
(C99, 6.5.3.2.p4) "If an invalid value has been assigned to the pointer, the behavior of the unary * operator is undefined.87)"
87): "Among the invalid values for dereferencing a pointer by the unary * operator are a null pointer, an address inappropriately aligned for the type of object pointed to, and the address of an object after the end of its lifetime."
In a typical case, int *ptr = NULL; will set ptr to point to address 0. The C standard (and the C++ standard) is very careful to not require that, but it's extremely common nonetheless.
When you do *ptr = 10;, the CPU would normally generate 0 on the address lines, and 10 on the data lines, while setting a R/W line to indicate a write (and, if the bus has such a thing, assert the memory vs. I/O line to indicate a write to memory, not I/O).
Assuming the CPU supports memory protection (and you're using an OS that enables it), the CPU will check that (attempted) access before it happens though. For example, a modern Intel/AMD CPU will use paging tables that map virtual addresses to physical addresses. In a typical case, address 0 won't be mapped to any physical address. In this case, the CPU will generate an access violation exception. For one fairly typical example, Microsoft Windows leaves the first 4 megabytes un-mapped, so any address in that range will normally result in an access violation.
On an older CPU (or an older operating system that doesn't enable the CPU's protection features) the attempted write will often succeed. For example, under MS-DOS, writing through a NULL pointer would simply write to address zero. In small or medium model (with 16-bit addresses for data), most compilers would write some known pattern to the first few bytes of the data segment, and when the program ended they'd check whether that pattern remained intact (and do something to indicate that you'd written via a NULL pointer if it hadn't). In compact or large model (20-bit data addresses) they'd generally just write to address zero without warning.
I imagine that this is platform and compiler dependent. The NULL pointer could be implemented by using a NULL page, in which case you'd have a page fault, or it could be below the segment limit for an expand-down segment, in which case you'd have a segmentation fault.
This is not a definitive answer, just my conjecture.

How are the different segments like heap, stack, text related to the physical memory?

When a C program is compiled and the object file (ELF) is created, the object file contains different sections such as bss, data, text and other segments. I understood that these sections of the ELF are part of the virtual memory address space. Am I right? Please correct me if I am wrong.
Also, there will be a virtual memory map and page table associated with the compiled program. The page table associates the virtual memory addresses present in the ELF with real physical memory addresses when loading the program. Is my understanding correct?
I read that in the created ELF file, the bss section just keeps a reference to the uninitialised global variables. Here, does "uninitialised global variable" mean variables that are not initialised at declaration?
Also, I read that local variables will be allocated space at run time (i.e., on the stack). Then how will they be referenced in the object file?
If in the program there is a particular section of code that allocates memory dynamically, how will those variables be referenced in the object file?
I am under the impression that these different segments of the object file (like text, rodata, data, bss, stack and heap) are part of the physical memory (RAM), where all programs are executed.
But I feel that my understanding is wrong. How are these different segments related to the physical memory when a process or a program is in execution?
1. Correct; the ELF file lays out the absolute or relative locations in the virtual address space of a process into which the operating system should copy the ELF file contents. (The bss is just a location and a size; since it's supposed to be all zeros, there is no need to actually store the zeros in the ELF file.) Note that locations can be absolute (like virtual address 0x100000) or relative (like 4096 bytes after the end of text).
2. The virtual memory definition (which is kept in page tables and maps virtual addresses to physical addresses) is not associated with a compiled program, but with a "process" (or "task" or whatever your OS calls it) that represents a running instance of that program. For example, a single ELF file can be loaded into two different processes, at different virtual addresses (if the ELF file is relocatable).
3. The programming language you're using defines which uninitialized state goes in the bss, and which gets explicitly initialized. Note that the bss does not contain "references" to these variables, it is the storage backing those variables.
4. Stack variables are referenced implicitly from the generated code. There is nothing explicit about them (or even the stack) in the ELF file.
5. Like stack references, heap references are implicit in the generated code in the ELF file. (They're all stored in memory created by changing the virtual address space via a call to sbrk or its equivalent.)
The ELF file explains to an OS how to setup a virtual address space for an instance of a program. The different sections describe different needs. For example ".rodata" says I'd like to store read-only data (as opposed to executable code). The ".text" section means executable code. The "bss" is a region used to store state that should be zeroed by the OS. The virtual address space means the program can (optionally) rely on things being where it expects when it starts up. (For example, if it asks for the .bss to be at address 0x4000, then either the OS will refuse to start it, or it will be there.)
Note that these virtual addresses are mapped to physical addresses by the page tables managed by the OS. The instance of the ELF file doesn't need to know any of the details involved in which physical pages are used.
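A hedged illustration of where C objects typically land (exact placement is up to the compiler and linker):

const char msg[] = "hello";  /* .rodata: read-only data */
int counter = 42;            /* .data: initialized, writable, stored in the file */
int scratch[1024];           /* .bss: zero-initialized, only a size in the file */

int main(void)               /* .text: executable code */
{
    return 0;
}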
I am not sure if 1, 2 and 3 are correct but I can explain 4 and 5.
4: They are referenced by their offset from the top of the stack. When a function executes, the top of the stack is moved to allocate space for local variables. The compiler determines the order of local variables on the stack, so the compiler knows each variable's offset from the top of the stack.
The stack in memory is positioned upside down: the beginning of the stack usually has the highest memory address available. As the program runs and allocates space for local variables, the address of the top of the stack decreases (and can potentially lead to a stack overflow: overlapping with segments at lower addresses :-) )
5: Using pointers: the address of a dynamically allocated variable is stored in a (local) variable. This corresponds to using pointers in C.
I have found nice explanation here: http://www.ualberta.ca/CNS/RESEARCH/LinuxClusters/mem.html
All the addresses of the different sections (.text, .bss, .data, etc.) you see when you inspect an ELF with the size command:
$ size -A -x my_elf_binary
are virtual addresses. The MMU with the operating system performs the translation from the virtual addresses to the RAM physical addresses.
If you want to know these things, learn about the OS, with source code (www.kernel.org) if possible.
You need to realize that the OS kernel is actually running the CPU and managing the memory resource. C code is just a lightweight script that drives the OS and runs only simple operations on registers.
Virtual versus physical memory is about the CPU's TLB (with page tables) letting a user-space process use memory that is virtually contiguous.
So the actual physical memory mapped to the contiguous virtual memory can be scattered anywhere in RAM.
A compiled program doesn't know about this TLB stuff or physical memory addresses; they are managed in the OS kernel space.
BSS is a section which the OS prepares as zero-filled memory, because the variables were not initialized in the C/C++ source code and were thus marked as bss by the compiler/linker.
The stack is prepared with only a small amount of memory at first by the OS; every time a function call is made, the stack pointer is pushed down so that there is more space to place the local variables, and it is popped back when you return from the function.
New physical memory is allocated to the virtual addresses when that first small amount of memory is full and the bottom is reached: a page fault exception occurs, the OS kernel prepares new physical memory, and the user process can continue working.
No magic. In object code, every operation done to the pointer returned from malloc is handled as an offset from the register value returned by the malloc function call.
Actually malloc is doing quite complex things. There are various implementations (jemalloc/ptmalloc/dlmalloc/googlemalloc/...) for improving dynamic allocation, but they all get new memory regions from the OS using sbrk or mmap of /dev/zero (so-called anonymous memory).
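To make that concrete, a deliberately naive sketch of an allocator obtaining raw memory from the OS via sbrk() (real allocators are far more elaborate: they reuse freed blocks, keep bookkeeping headers, and fall back to mmap()):

#include <stdint.h>
#include <stddef.h>
#include <unistd.h>

/* Naive bump allocator: grows the data segment and never frees. */
static void *naive_alloc(size_t size)
{
    size = (size + 15) & ~(size_t)15;  /* keep 16-byte alignment */
    void *p = sbrk((intptr_t)size);    /* ask the OS to extend the data segment */
    return (p == (void *)-1) ? NULL : p;
}

int main(void)
{
    char *buf = naive_alloc(100);  /* 100 bytes from the freshly grown data segment */
    return buf ? 0 : 1;
}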
Just do a man on the readelf command to find out how to see the starting addresses of the different segments of your program.
Regarding the first question you are absolutely right. Since most of today's systems use run-time binding it is only during execution that the actual physical addresses are known. Moreover, it's the compiler and the loader that divide the program into different segments after linking the different libraries during compile and load time. Hence, the virtual addresses.
Coming to the second question: it happens at run time, due to runtime binding. The third statement is true: all uninitialized global variables and static variables go into BSS. Also note the special case: they go into BSS even if they are initialized to 0.
4.
If you look at the assembler code generated by gcc, you can see that memory for local variables is allocated on the stack through the push instruction or by changing the value of the ESP register. The variables are then initialized with mov instructions or something like that.

Can a 32-bit processor really address 2^32 memory locations?

I feel this might be a weird/stupid question, but here goes...
In the question Is NULL in C required/defined to be zero?, it has been established that the NULL pointer points to an unaddressable memory location, and also that NULL is 0.
Now, supposedly a 32-bit processor can address 2^32 memory locations.
2^32 is only the number of distinct numbers that can be represented using 32 bits. Among those numbers is 0. But since 0, that is, NULL, is supposed to point to nothing, shouldn't we say that a 32-bit processor can only address 2^32 - 1 memory locations (because the 0 is not supposed to be a valid address)?
If a 32-bit processor can address 2^32 memory locations, that simply means that a C pointer on that architecture can refer to 2^32 - 1 locations plus NULL.
the NULL pointer points to an unaddressable memory location
This is not true. From the accepted answer in the question you linked:
Notice that, because of how the rules for null pointers are formulated, the value you use to assign/compare null pointers is guaranteed to be zero, but the bit pattern actually stored inside the pointer can be any other thing
Most platforms of which I am aware do in fact handle this by marking the first few pages of address space as invalid. That doesn't mean the processor can't address such things; it's just a convenient way of making low values a non valid pointer. For instance, several Windows APIs use this to distinguish between a resource ID and a pointer to actual data; everything below a certain value (65k if I recall correctly) is not a valid pointer, but is a valid resource ID.
Finally, just because C says something doesn't mean that the CPU needs to be restricted that way. Sure, C says accessing the null pattern is undefined -- but there's no reason someone writing in assembly need be subject to such limitations. Real machines typically can do much more than the C standard says they have to. Virtual memory, SIMD instructions, and hardware IO are some simple examples.
First, let's note the difference between the linear address (AKA the value of the pointer) and the physical address. While the linear address space is, indeed, 32 bits (AKA 2^32 different bytes), the physical address that goes to the memory chip is not the same. Parts ("pages") of the linear address space might be mapped to physical memory, or to a page file, or to an arbitrary file, or marked as inaccessible and not backed by anything. The zeroth page happens to be the latter. The mapping mechanism is implemented on the CPU level and maintained by the OS.
That said, the zero address being unaddressable memory is just a C convention that's enforced by every protected-mode OS since the first Unices. In MS-DOS-era real-mode operating systems, the null far pointer (0000:0000) was perfectly addressable; however, writing there would ruin system data structures and bring nothing but trouble. The null near pointer (DS:0000) was also perfectly accessible, but the run-time library would typically reserve some space around zero to protect against accidental null pointer dereferencing. Also, in real mode (like in DOS) the address space was not a flat 32-bit one; it was effectively 20-bit.
It depends upon the operating system. It is related to virtual memory and address spaces.
In practice (at least on 32-bit x86 Linux), addresses are byte "numbers", but most refer to 4-byte words and so are often multiples of 4.
More importantly, as seen from a Linux application, at most 3 GB out of the 4 GB is visible. A whole gigabyte of address space (including the first and last pages, near the null pointer) is unmapped. In practice the process sees much less than that. See its /proc/self/maps pseudo-file (e.g. run cat /proc/self/maps to see the address map of the cat command on Linux).
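The same view is available from inside the program; a small sketch that makes a process print its own address-space map:

#include <stdio.h>

int main(void)
{
    FILE *f = fopen("/proc/self/maps", "r");  /* Linux-specific pseudo-file */
    if (!f) {
        perror("fopen");
        return 1;
    }
    int c;
    while ((c = fgetc(f)) != EOF)  /* copy the map to stdout */
        putchar(c);
    fclose(f);
    return 0;
}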

Resources