An executable ELF file has program (segment) headers and section headers, both of which can be inspected with readelf -a.
Looking at the readelf output (the section header table and the program/segment header table), it can be seen that a segment is composed of several sections, and the segments are used for loading the program into memory.
Is it only necessary for the .text, .rodata, .data and .bss sections to be loaded into memory?
Are all of the other sections in the segment (e.g. .ctors, .dtors, .jcr in the 3rd segment) only used for alignment?
Sections and segments are two completely different concepts. Sections pertain to the semantics of the data stored there (i.e. what it will be used for) and are actually irrelevant once a program or shared library is linked, except for debugging purposes. You could even remove the section headers entirely (or overwrite them with random garbage) and the program would still work.
Segments (i.e. program header load directives) are what the kernel and/or dynamic linker actually look at when loading a program. For example, in your case you have two load directives.
The first one causes the first 4k (1 page) of the file to be mapped at address 0x08048000, and indicates that only the first 0x4b8 bytes of this mapping are actually to be used (the rest is alignment).
The second causes the first 8k (2 pages) of the file to be mapped at address 0x08049000. The vast majority of that is alignment. The first 0xf14 bytes are not part of the load directive (just alignment) and will be wasted. Beginning at 0x08049f14, 0x108 bytes mapped from the file are actually used, and another 0x10 bytes (to reach the MemSize of 0x118) are zero-filled by the loader (kernel or dynamic linker). This spans up to 0x0804a02c (in the second mapped page). The rest of the second mapped page is unused/wasted (but malloc might be able to recover it for use as part of the heap).
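To make the arithmetic concrete, here is a minimal C sketch; the numbers are the ones quoted above for the second load directive, so treat them as illustrative rather than universal. It computes which address range is backed by the file and which range the loader zero-fills:

#include <stdio.h>
#include <stdint.h>

int main(void) {
    /* Values as described above for the second PT_LOAD entry. */
    uint32_t p_vaddr  = 0x08049f14;  /* where the used data starts in memory */
    uint32_t p_filesz = 0x108;       /* bytes actually taken from the file   */
    uint32_t p_memsz  = 0x118;       /* total size of the segment in memory  */

    /* [p_vaddr, p_vaddr + p_filesz) comes from the file;
       [p_vaddr + p_filesz, p_vaddr + p_memsz) is zero-filled by the loader. */
    printf("file-backed: 0x%08x .. 0x%08x\n",
           (unsigned)p_vaddr, (unsigned)(p_vaddr + p_filesz));
    printf("zero-filled: 0x%08x .. 0x%08x\n",
           (unsigned)(p_vaddr + p_filesz), (unsigned)(p_vaddr + p_memsz));
    return 0;
}

The printed upper bound, 0x0804a02c, matches the end of the used region described above.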
Finally, while the section headers will not be used at all, the contents of many different sections may be used by your program while it's running. Note that the address ranges of .ctors and .dtors lie in the beginning of the second load mapping, so they are mapped and accessible by the program at runtime (the runtime startup/exit code will use them to run global constructors and destructors, if C++ or "GNU C" code with ctor/dtor attribute was used). Also note that .data starts at address 0x0804a00c, in the second mapped page. This allows the first page to be protected read-only after relocations are applied (the RELRO directive in the program header).
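As a small illustration of how such sections get used at runtime, here is a sketch with GCC/Clang constructor and destructor attributes; depending on the toolchain the generated function pointers land in .ctors/.dtors or in .init_array/.fini_array, but either way the runtime startup/exit code walks them automatically:

#include <stdio.h>

/* Pointers to these functions are emitted into .ctors/.dtors
   (or .init_array/.fini_array on newer toolchains). */
__attribute__((constructor))
static void before_main(void) {
    puts("runs before main()");
}

__attribute__((destructor))
static void after_main(void) {
    puts("runs after main() returns or exit() is called");
}

int main(void) {
    puts("main()");
    return 0;
}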
For this C code:
foobar.c:
static int array[256];

int main() {
    return 0;
}
the array is initialized to all 0's, by the C standard. However, when I compile
gcc -S foobar.c
this produces the assembly code foobar.s that I can inspect, and nowhere in foobar.s is there any initialization of the contents of the array.
Hence I reason that the contents are not initialized; only when an element of the array is inspected is it initialized, something like the "copy-on-write" mechanism used for fork.
Is my reasoning correct? If so, is this a documented feature, and if so where can I find that documentation?
There's kind of a lot of levels here. This answer addresses Linux in particular, but the same concepts are likely to apply on other systems, possibly with different names.
The compiler requires that the object be "zero initialized". In other words, when a memory read instruction is executed with an address in that range, the value that it reads must be zero. As you say, this is necessary to achieve the behavior dictated by the C standard.
The compiler accomplishes this by asking the assembler to fill the space with zeros, one way or another. It may use the .space or .zero directive, which requests this implicitly. It will also place the object in a section with the special name .bss (the reasons for this name are historical). If you look further up in the assembly output, you should see a directive like .bss or .section .bss. The assembler and linker promise that this entire section will be (somehow) initialized to zero. This is documented in the GNU assembler manual:
The bss section is used for local common variable storage. You may allocate address space in the bss section, but you may not dictate data to load into it before your program executes. When your program starts running, all the contents of the bss section are zeroed bytes.
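For example, compiling something like the following with gcc -S should show the zero-initialized array being placed in .bss (via a .comm/.bss directive plus a size, or a .zero directive), while the array with non-zero initializers is emitted into .data with explicit data directives; the exact directive names vary with the compiler version and target, so the comments are only indicative:

/* bss_vs_data.c -- compile with: gcc -S bss_vs_data.c */

static int zeroed[256];               /* zero-initialized: placed in .bss,
                                         e.g. via ".local zeroed" and
                                         ".comm zeroed,1024,32"            */

static int filled[4] = {1, 2, 3, 4};  /* non-zero initializers: placed in
                                         .data with explicit .long values  */

int main(void) {
    return zeroed[0] + filled[0];     /* reference both so the compiler
                                         keeps them around                 */
}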
Okay, so now what do the assembler and linker do to make it happen? Well, an ELF executable file has a program header table (sometimes called segment headers), which specifies how and where code and data from the file should be mapped into the program's memory. (Please note that the use of the word "segment" here has nothing to do with the x86 memory segmentation model or segment registers, and is only vaguely related to the term "segmentation fault".) The size of the segment in memory and the amount of data to be mapped from the file are specified separately. If the memory size is greater, then all remaining bytes are to be initialized to zero. This is also documented in the elf(5) man page:
PT_LOAD
The array element specifies a loadable segment, described by p_filesz and p_memsz. The bytes from the file are mapped to the beginning of the memory segment. If the segment's memory size p_memsz is larger than the file size p_filesz, the "extra" bytes are defined to hold the value 0 and to follow the segment's initialized area.
So the linker ensures that the ELF executable contains such a segment, and that all objects in the .bss section are in this segment, but not within the part that is mapped to the file.
Once all this is done, then the observable behavior is guaranteed: as above, when an instruction attempts to read from this object before it has been written, the value it reads will be zero.
Now as to how that behavior is ensured at runtime: that is the job of the kernel. It could do it by pre-allocating actual physical memory for that range of virtual addresses, and filling it with zeros. Or by an "allocate on demand" method, like what you describe, by leaving those pages unmapped in the CPU's page tables. Then any access to those pages by the application will cause a page fault, which will be handled by the kernel, which will allocate zero-filled physical memory at that time, and then restart the faulting instruction. This is completely transparent to the application. It just sees that the read instruction got the value zero. If there was a page fault, then it just seems to the application like the read instruction took a long time to execute.
The kernel normally uses the "on demand" method, because it is more efficient in case not all of the "zero initialized" memory is actually used. But this is not going to be documented as guaranteed behavior; it is an implementation detail. An application programmer need not care, and in fact must not care, how it works under the hood. If the Linux kernel maintainers decide tomorrow to switch everything to the pre-allocate method, every application will work exactly as it did before, just maybe a little faster or slower.
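The same demand-zero mechanism is visible with an anonymous mapping; the sketch below relies only on the documented guarantee that MAP_ANONYMOUS memory reads as zero, while whether the backing page is allocated up front or at the first access is exactly the implementation detail discussed above:

#include <stdio.h>
#include <stdlib.h>
#include <sys/mman.h>

int main(void) {
    size_t len = 1 << 20;  /* ask for 1 MiB of anonymous, zero-initialized memory */
    unsigned char *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                            MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (p == MAP_FAILED) {
        perror("mmap");
        return 1;
    }
    /* Reading any byte is guaranteed to yield 0; whether the kernel maps a
       real page now or only when this access faults is invisible here. */
    printf("p[12345] = %d\n", p[12345]);
    munmap(p, len);
    return 0;
}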
I was reading about how programs get loaded into memory. I wanted to understand how different sections (like .text, .data, .rodata, etc.) of a PE/ELF file are mapped to different places in the virtual address space of a process. In particular, I wanted to understand how the linker knows where a particular section (say, for example, .rodata) will be mapped in the address space, so that it can resolve the correct addresses for all the symbols before creating the executable.
Do operating systems (e.g. Windows) have a fixed range of virtual addresses for a process where they load/map a particular section? If not, then how does the linker resolve the correct addresses for the symbols in the different sections?
It doesn't. The linker can only propose that the executable image be loaded at a certain VA, usually 0x00400000 for a PE executable or 0x10000000 for a DLL. Virtual addresses of sections (.text, .rodata, .data, etc.) are aligned to the section alignment (usually 0x00001000), so their proposed VAs are 0x00401000, 0x00402000 and so on. The linker then fixes the addresses of symbols to those assumed VAs.
The default ImageBase address (taken from the linker script or linker arguments) is not mandated by the OS loader, but I don't see a reason to change it; it's a good habit to see nicely rounded addresses during debugging.
In rare cases the Windows loader may find out that part of the proposed address space is occupied, so it will load the image at a different VA and fix position-dependent (absolute) references to their new VA.
The special PE section .reloc contains the addresses of references to program symbols which need relocation at load time.
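To illustrate the fix-up arithmetic (the addresses below are made up for the example, in the spirit of the defaults mentioned above): the linker bakes in absolute addresses assuming the preferred ImageBase, and if the loader rebases the image it simply adds the same delta to every location listed in the relocation data.

#include <stdio.h>
#include <stdint.h>

int main(void) {
    /* Hypothetical numbers for illustration only. */
    uint32_t preferred_base = 0x00400000;  /* ImageBase assumed at link time       */
    uint32_t actual_base    = 0x00510000;  /* where the loader really mapped it    */
    uint32_t linked_address = 0x00402010;  /* absolute address baked into the code */

    uint32_t delta = actual_base - preferred_base;  /* same delta for every fixup  */
    uint32_t fixed = linked_address + delta;        /* value the loader patches in */

    printf("delta = 0x%08x, patched address = 0x%08x\n",
           (unsigned)delta, (unsigned)fixed);
    return 0;
}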
This question confused me a lot. As far as I know, the .bss section is for data that is initialized but not used yet. But I don't understand what 'content' means here and why there is no content.
Thanks for any help!
The quick response is: well, there's no content to fill the .bss with, so there's no sense in putting any data in the executable for that section. Only the positions of the variables are stored, but that belongs to another ELF section.
The .bss section is where your program has all the uninitialized variables (by default all initialized to zero). The linker only needs to know the actual size of this region and the actual variable positions, but not the values, because its contents are obvious, independently of the nature or the distribution of the variables put there.
When your program is loaded, the kernel normally assigns a read-only segment for the unmodifiable text of the program (the .text section) and also puts in that segment the contents of the initialized const variables (the .rodata section), so in case you attempt to modify something there, you get an exception. Then comes the data segment, with the initial values of all the initialized variables of your program (the .data section) and the uninitialized ones (the .bss section).
The data segment (note how I distinguish between a section and a load segment) is given more space, the sum of the sizes of the .data and .bss sections, so it can hold all the variables. But while the contents of the .data section have to be filled from the file, the contents of the .bss section don't, because they are all zeroed by the operating system before the user process is allowed to access the allocated segment. That's not true for small systems, where the operating system doesn't fill the data with zeros... but there, the compiler adds some startup code to zero the whole .bss region, so again, there's no need to copy any data from the executable file.
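On such small systems the startup code typically contains a loop along these lines; __bss_start and _end are common linker-provided symbols on GNU toolchains, but the exact names depend on the linker script, so treat this as a sketch:

/* Symbols provided by the linker script; their addresses delimit .bss. */
extern char __bss_start[];
extern char _end[];

/* Called very early by the startup code, before main() and before any code
   that relies on zero-initialized globals. */
void clear_bss(void) {
    for (char *p = __bss_start; p < _end; ++p)
        *p = 0;
}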
The historic (and main) reason for this behaviour (not storing the zeros in the file) is that the pages the kernel assigns to be loaded with your program are cleared to zero for security reasons (so you cannot get lucky and receive a page full of other users' passwords or other sensitive information), so there's no reason to fill them with zeros again, nothing has to be copied there, and there's no reason to put anything in the executable file. The pages the kernel manages are normally zeroed only when they are about to be given to a user process, but until then they retain (as they are designed to do) whatever information they held, until it is overwritten.
There's no content in the BSS (Block Started by Symbol) section because it would be wasted storage. The contents of the BSS are all zeros, and it is cleared by the startup code before main is called. Think of the BSS as a run-length compressed block of bytes: all you need to know to uncompress that block is the value (0) and the length, which is stored in the ELF entry for the BSS.
Your notion of "data that is initialized but not used yet" is a bit off. Consider that all sections in an ELF file are somehow "not used yet". The text segment may or may not become used (it may contain dead/unreachable code). The data segment may or may not be used at all (you can define objects never used by code).
All globally initialised values are stored in the .data segment (i.e. the initialised data segment), and uninitialised values are stored in .bss, where they are automatically initialised to zero. So why is the data segment split into .data and .bss?
Does it have any advantage or benefit?
The C programming language (it is a specification written in English) does not know about .bss or .data or data segments. Check that by reading n1570 (the latest draft of C11).
In some cases (e.g. embedded computing) you don't have any data segment. For example when you cross-compile for an Arduino, the resulting code gets uploaded to the flash memory of that microcontroller (and data is in RAM, which your program would perhaps explicitly clear).
(most of my answer below is focused on Linux, but you could adapt it to other OSes)
Regarding the data segment on Unix-like systems like Linux, read the ELF specification. It is convenient to avoid spending file space in the executable for something (the .bss) which is all zeros. The kernel code for execve(2) is able to set up an initial virtual address space with some cleared data (which is not mapped to any file segment). See also mmap(2) & elf(5) & ld-linux(8). Try the cat /proc/$$/maps command to understand the virtual address space of your shell. See proc(5).
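If you prefer doing it from code, a tiny C program can dump its own mappings (Linux-specific, since it reads /proc/self/maps):

#include <stdio.h>

int main(void) {
    /* Each line describes one mapping: address range, permissions, offset,
       and the backing file (or [heap], [stack], anonymous memory...). */
    FILE *f = fopen("/proc/self/maps", "r");
    if (!f) {
        perror("fopen");
        return 1;
    }
    char line[512];
    while (fgets(line, sizeof line, f))
        fputs(line, stdout);
    fclose(f);
    return 0;
}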
So the executable contains segments (some of which are all zeros and don't take any disk space), and it is constructed by the linker (which also handles relocations) from several object files produced by the compiler from source code files. On Linux, use objdump(1) and readelf(1) (and nm(1)...) to explore and inspect ELF executables and ELF object files.
BTW, a cleared data segment doesn't need to be fetched from disk by the virtual memory subsystem, and that might make things slightly faster. The kernel just clears a page in RAM when it is paged in.
So the .bss exists to avoid wasting disk space in executables (and to speed up the initialization of zeroed data). Obviously, some mutable data is explicitly initialized to non-zero content (and that needs to sit in .data and takes some disk space in the executable). Constant immutable read-only data goes into .rodata (into the text segment, generally shared by several processes running the same program).
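A quick way to see the benefit: build the file below once as-is and once with -DNONZERO, then compare the sizes of the two executables. The exact numbers depend on the toolchain, but the second build should be larger by roughly the size of the array.

/* size_demo.c -- gcc size_demo.c -o zeros ; gcc -DNONZERO size_demo.c -o nonzeros */
#ifdef NONZERO
int big[1 << 20] = {1};   /* one non-zero element: the whole array lands in
                             .data, so the executable grows by about 4 MiB  */
#else
int big[1 << 20];         /* all zeros: lands in .bss, costs almost no file
                             space, only memory (p_memsz) at run time       */
#endif

int main(void) { return big[0]; }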
You could configure your linker (e.g. with some linker script) to put all the data (even the zeroed data) in some explicit data segment, but doing so just wastes disk space. Historically, Unix was developed on machines with little but costly disk space (so wasting it was unthinkable at that time, hence the need for .bss; today you care less!).
Read Levine's book Linkers and Loaders for more, and Advanced Linux Programming and Operating Systems : Three Easy Pieces.
This is a repeat of a question, but I could not quickly find an answer to it; that's why I'm asking.
Some ELF files (executables or shared libs) contain program headers, which describe segments.
They contain a field called virtual address, a file offset, and some other fields.
There are also corresponding sections, whose headers give an "address in memory" and a file offset.
Now I am a little confused about how sections and segments are related (for statically compiled executables and for non-statically compiled executables).
How are file offsets different for statically compiled binaries? Is there any relation between the virtual address in the program headers and the memory address in the section headers?
Thanks
A section is the smallest contiguous region of the file. So ELF files are subdivided into sections. Sections cannot overlap; that is, no byte can be part of more than one section. But there can be bytes that don't belong to any section ("garbage").
Sections are generally used for linking purposes. They contain different parts of the file that can be rearranged, merged etc. by the linker.
But executable files can contain sections too – to describe the contents of the file, and where each piece of code or data begins. Shared objects use sections too. Those contain symbol tables for dynamic linking & stuff like that.
All sections contained in the ELF file are described in the Section Headers Table, each section having an entry in it.
But in order to make an executable, you need something else: segments. These tell the loader which parts of the file it should load into memory and to what addresses. So segments map into the executable process's memory space. They can contain code as well as data, so segments can be subdivided into sections to achieve that. And I guess that's the answer to your question.
Loadable segments are described in the Program Headers Table.
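For the curious, here is a rough sketch (Linux, <elf.h>, 64-bit ELF assumed, most error handling omitted) that walks the Program Headers Table of a file and prints its PT_LOAD entries, i.e. exactly the information the loader acts on:

#include <elf.h>
#include <stdio.h>

int main(int argc, char **argv) {
    if (argc < 2) {
        fprintf(stderr, "usage: %s elf-file\n", argv[0]);
        return 1;
    }
    FILE *f = fopen(argv[1], "rb");
    if (!f) {
        perror("fopen");
        return 1;
    }

    Elf64_Ehdr eh;
    fread(&eh, sizeof eh, 1, f);            /* the ELF file header */

    for (int i = 0; i < eh.e_phnum; i++) {  /* one entry per program header */
        Elf64_Phdr ph;
        fseek(f, (long)(eh.e_phoff + (Elf64_Off)i * eh.e_phentsize), SEEK_SET);
        fread(&ph, sizeof ph, 1, f);
        if (ph.p_type == PT_LOAD)
            printf("LOAD vaddr=0x%lx offset=0x%lx filesz=0x%lx memsz=0x%lx\n",
                   (unsigned long)ph.p_vaddr, (unsigned long)ph.p_offset,
                   (unsigned long)ph.p_filesz, (unsigned long)ph.p_memsz);
    }
    fclose(f);
    return 0;
}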
Long story short:
In executables, you have segments, that can be subdivided further into sections. Segments are loaded into the process's memory. Sections are optional, but can help subdivide the segments further or describe their contents.
In relocatable modules (compiler outputs, .o files) it's the other way around: sections are required, because they describe what's in the file and allow for linking.
As for the memory addresses & stuff:
On modern systems only the virtual addresses matter. The operating system deceives the process into believing that it runs alone in memory, with the entire address space available to it (though not all of the address space can be available at the same time, due to physical memory limitations). The system maps virtual addresses to physical addresses on the fly, transparently to the process.
Physical addresses are not used, so they can be left as zeros, but could be set to the same addresses just in case.