Running code from BSS section - c

In a buffer overflow attack, it's possible to run code from the BSS section (assuming the user disabled some security protections). How is code running there different from code running in the text section? Does it make sense to push things onto the stack while running code from the BSS section? If not, how can functions be called from there?
I'm using Linux x86.

As far as I am aware, your premise of the BSS segment containing executable instructions is flawed.
The BSS segment holds only static variables that haven't been assigned values, for example:
static char *test_var;
It is the text segment, not the BSS segment, that contains the executable instructions.
For more clarity refer to:
http://en.wikipedia.org/wiki/.bss
http://en.wikipedia.org/wiki/Code_segment
Also, you might want to look at Virtual Memory layout. The link http://duartes.org/gustavo/blog/post/anatomy-of-a-program-in-memory/ illustrates this very well with diagrams etc.
However, if you want to see which segments of an executable are marked executable, run the readelf tool on it as shown below:
readelf -l ./test

Yes, you are correct. Provided that the memory segment or selector that holds the BSS is not marked non-executable, you can easily execute code from it if:
1. You know where it is in memory.
2. You have a way to control the EIP to redirect execution there.
3. You have some input (a file, actual user input, the network, or the environment) that will end up in a statically allocated variable.
Simply inject your code into #3 and you're off to the races.
By the way, I would not expect the BSS to be marked executable, but don't despair. That by no means rules out some other selector pointing at exactly the same memory while being marked executable. It means you could approach the memory through the BSS to inject the code, since that mapping will be read/write, and then through some other selector to execute it.
For example, I find a fair number of cases where CS points to precisely the same memory as DS, but CS is read-only and executable while DS is read/write and non-executable. Make sense?
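To make this concrete, here is a minimal sketch (Linux x86/x86-64 and gcc assumed) that copies a few bytes of machine code into a .bss buffer and runs them. On a modern system .bss is mapped non-executable, so the sketch uses mprotect to flip the page permissions; in a real attack that step corresponds to finding a mapping of the same memory that is already executable, or to NX being disabled, as described above. The payload bytes simply do mov eax, 42 followed by ret.

#include <stdio.h>
#include <string.h>
#include <stdint.h>
#include <unistd.h>
#include <sys/mman.h>

/* Uninitialized static array: the compiler places it in .bss. */
static unsigned char bss_buf[64];

int main(void)
{
    /* x86/x86-64 machine code for: mov eax, 42 ; ret */
    static const unsigned char payload[] = { 0xb8, 0x2a, 0x00, 0x00, 0x00, 0xc3 };

    /* 1. Inject the code through the writable .bss mapping. */
    memcpy(bss_buf, payload, sizeof payload);

    /* 2. .bss is normally non-executable, so change the page permissions here.
          (A real exploit would instead rely on an executable mapping of the
          same memory, or on NX protections being disabled.) */
    long pagesz = sysconf(_SC_PAGESIZE);
    uintptr_t page = (uintptr_t)bss_buf & ~((uintptr_t)pagesz - 1);
    if (mprotect((void *)page, (size_t)pagesz * 2, PROT_READ | PROT_WRITE | PROT_EXEC) != 0) {
        perror("mprotect");
        return 1;
    }

    /* 3. Redirect execution there, here via a function pointer rather than
          a smashed return address. */
    int (*fn)(void) = (int (*)(void))(void *)bss_buf;
    printf("code running from .bss returned %d\n", fn());
    return 0;
}

If the program prints 42, the bytes placed in the .bss buffer really did execute.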

Related

How are code segments and data segments of a source code program really handled and separated from each other during process execution?

Consider the following picture showing RAM in which a very simple program is stored, divided into an instruction block and a data block. The example is very similar to the ones found in the book "Code" by Charles Petzold:
As you can see, there is an instruction block and a data block. In the book this RAM is put inside a rudimentary computer into which you have to enter both data and instructions manually using switches (just like on the old Altair 8800). For the machine to start executing instructions, you had to set the initial address of the instruction block, and the machine then executed one instruction after another sequentially. Basically all this program does is load the value 1 into the accumulator, add 5 to it, store the result at address 000Ch (h stands for hex), and finally stop executing with a Halt instruction.
Now, when I try to connect the knowledge I got from this book to the way C source code is compiled, I get a bit confused, specifically about the phase in which the code segment and the data segment are separated. Consider this simple source code:
#include <stdlib.h>
#include <stdio.h>
int test = 10;
int main()
{
    test++;
    return 0;
}
Now my idea is that the compiler should tell the computer to execute machine instructions like this:
int test=10;   ->  STORE [addressX], 10
int main()
{
    test++;    ->  LOAD A, [addressX]
               ->  INR A
               ->  STORE [addressX], A
    return 0;
}
According to the definition of Wikipedia the data segment "contains initialized static variables, i.e. global variables and local static variables which have a defined value and can be modified".
In my simple example the variable test is a global variable.
However, my idea is that before the variable can be put into the data segment in RAM, some machine instruction like STORE must have been executed. Otherwise, how can the global variable get stored in RAM at all?
Can someone explain in detail what is really happening and how the simple source code I showed here is actually divided into a text segment and a data segment? What exactly is the text segment for this example? And what about the data segment?
I hope my doubt is clear and that you can answer as clearly as possible. I would also appreciate pointers to some good, in-depth resources (with examples, not just abstract descriptions) for understanding what is really going on with the code, data, stack and heap segments.
I can explain how things work roughly on Windows.
First of all, the information given in the book does not apply that closely to modern OSes. Most OSes (such as Windows and Linux) have an executable file format that describes how the code and data are stored within the file, how they are mapped into RAM, where execution starts, and so on. On Windows, the format is called Portable Executable. A PE file consists of zero or more sections that store the code and other data. Sections contain important information such as how the OS will find the section's data in the file, how to map this data into memory, and what kind of protection will be applied to this data in memory. Sections also have names such as .text, .data, .bss, .idata and .rdata, which give a clue about what kind of data each section contains.
When you compile and link your code with MSVC on Windows, you get a portable executable file for your program. This PE file will have one or more sections. For your example, it may have a .text section for code, a .data section for initialized data, and an .idata section for your imports from other modules. The .text section holds the compiled machine code, and the .data section holds the value 10 for the variable test. When you execute the file, the OS loader loads it, parses it, and maps it into the memory of the newly created process.
So you don't need a STORE instruction to store and initialize the data in RAM. All the data in your program is located in the corresponding section and is mapped into memory by the loader.
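To tie this back to the original example, here is a hedged sketch of where a typical compiler (GCC or MSVC) puts each piece of a small C program. The exact section names vary by toolchain, but the point is that the initial value 10 travels inside the file's data section and is simply mapped in; no STORE runs for it at program start, only for the increment inside main.

#include <stdio.h>

int test = 10;                  /* initialized global: .data, the value 10 is stored in the file        */
int counter;                    /* uninitialized global: .bss, no bytes in the file, zero-filled at load */
static double ratio = 2.5;      /* initialized static: also .data                                        */
const char *greeting = "hello"; /* the pointer sits in a data section, the string literal in .rodata     */

int main(void)                  /* machine code of main: .text                                           */
{
    test++;                     /* the LOAD/INC/STORE happen here, at run time                           */
    printf("%d %d %f %s\n", test, counter, ratio, greeting);
    return 0;
}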

Where memory segments are defined?

I just learned about the different memory segments: Text, Data, Stack and Heap. My questions are:
1. Where are the boundaries between these sections defined? Is it in the compiler or the OS?
2. How do the compiler or the OS know which addresses belong to each section? Should we define this anywhere?
This answer is from the point of view of a more special-purpose embedded system rather than a more general-purpose computing platform running an OS such as Linux.
Where are the boundaries between these sections defined? Is it in the compiler or the OS?
Neither the compiler nor the OS does this. It's the linker that determines where the memory sections are located. The compiler generates object files from the source code. The linker uses the linker script file to place the contents of the object files in memory. The linker script (or linker directive) file is part of the project and identifies the type, size and address of the various memory regions such as ROM and RAM. The linker uses the information from the linker script to know where each memory region starts, and then places each part of an object file into the appropriate memory section. For example, code goes in the .text section, which is usually located in ROM. Variables go in the .data or .bss section, which are located in RAM. The stack and heap also go in RAM. As the linker fills one section it learns that section's size and therefore knows where the next section can start. For example, the .bss section may start where the .data section ended.
The size of the stack and heap may be specified in the linker script file or as project options in the IDE.
IDEs for embedded systems typically provide a generic linker script file automatically when you create a project. The generic linker file is suitable for many projects so you may never have to customize it. But as you customize your target hardware and application further you may find that you also need to customize the linker script file. For example, if you add an external ROM or RAM to the board then you'll need to add information about that memory to the linker script so that the linker knows how to locate stuff there.
The linker can generate a map file which describes how each section was located in memory. The map file may not be generated by default and you may need to turn on a build option if you want to review it.
How do the compiler or the OS know which addresses belong to each section?
Well I don't believe the compiler or OS actually know this information, at least not in the sense that you could query them for the information. The compiler has finished its job before the memory sections are located by the linker so the compiler doesn't know the information. The OS, well how do I explain this? An embedded application may not even use an OS. The OS is just some code that provides services for an application. The OS doesn't know and doesn't care where the boundaries of memory sections are. All that information is already baked into the executable code by the time the OS is running.
Should we define it anywhere?
Look at the linker script (or linker directive) file and read the linker manual. The linker script is input to the linker and provides the rough outlines of memory. The linker locates everything in memory and determines the extent of each section.
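One small illustration of the fact that the linker, not the compiler, fixes these boundaries: on Linux with glibc, the linker defines the symbols etext, edata and end at the end of the text, initialized-data and BSS segments (see man 3 end). A program can take their addresses even though no C file defines them. A sketch, assuming Linux/glibc:

#include <stdio.h>

/* Not defined in any C file; the linker provides them (Linux/glibc). */
extern char etext, edata, end;

int main(void)
{
    printf("end of text segment (etext)    : %p\n", (void *)&etext);
    printf("end of initialized data (edata): %p\n", (void *)&edata);
    printf("end of BSS (end)               : %p\n", (void *)&end);
    return 0;
}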
For your query:
Where are the boundaries between these sections defined? Is it in the compiler or the OS?
The answer is the OS.
There is no universally common addressing scheme for the layout of the .text segment (executable code), .data segment (variables) and other program segments. However, the layout of the program itself is well-formed according to the system (OS) that will execute the program.
How do the compiler or the OS know which addresses belong to each section? Should we define this anywhere?
I have divided this question into three parts:
1. What about the text (code) and data sections and their limits?
The text and data sections are prepared by the compiler. The compiler's job is to make sure they are accessible and to pack them into the lower portion of the address space. The accessible address space is limited by the hardware; e.g. if the instruction pointer register is 32-bit, the text address space is limited to 4 GiB.
2. What about the heap section and its limit? Is it the total available RAM?
The area above the text and data is the heap. With virtual memory, the heap can in practice grow close to the maximum address space.
3. Do the stack and the heap have a static size limit?
The final segment in the process address space is the stack. It takes the end of the address space, starting at the top and growing down.
Because the heap grows up and the stack grows down, they basically limit each other. Also, because both types of segment are writable, it wasn't always a violation for one of them to cross the boundary, so you could have a buffer or stack overflow. There are now mechanisms to stop that from happening.
Each process starts with a set limit for the heap (and for the stack). The heap limit can be changed at runtime (using brk()/sbrk()). Basically, when the process needs more heap space and has run out of allocated space, the standard library issues the call to the OS. The OS allocates a page, which is usually managed by the user-space library for the program to use. That is, if the program wants 1 KiB, the OS will give an additional 4 KiB, and the library will give 1 KiB to the program and keep 3 KiB for use when the program asks for more next time.
Most of the time the layout will be Text, Data, Heap (grows up), unallocated space, and finally the Stack (grows down). They all share the same address space.
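A small sketch of the brk()/sbrk() behaviour described above, assuming Linux/glibc: sbrk(0) reports the current program break, and a small malloc typically (though not always, since the allocator may satisfy it from an existing pool or fall back to mmap for large requests) moves it up.

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(void)
{
    /* sbrk(0) returns the current program break, i.e. the top of the classic heap. */
    void *before = sbrk(0);

    /* A small request is typically satisfied by extending the break. */
    void *p = malloc(1024);

    void *after = sbrk(0);
    printf("break before malloc: %p\n", before);
    printf("break after  malloc: %p\n", after);

    free(p);
    return 0;
}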
The sections are defined by a file format which is loosely tied to the OS. For example, on Linux you have ELF and on macOS you have Mach-O.
You do not define the sections explicitly as a programmer in 99.9% of cases. The compiler knows what to put where.
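For the rare remaining 0.1%, GCC and Clang let you override the default placement with a section attribute. A sketch (MSVC uses a different syntax, and the custom section name .mydata here is made up for the example):

#include <stdio.h>

int normal_global = 1;                                       /* goes to .data by default     */
__attribute__((section(".mydata"))) int special_global = 2;  /* forced into a custom section */

int main(void)
{
    printf("%d %d\n", normal_global, special_global);
    return 0;
}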

When are the code segment and data segment created when compiling a C program?

I am trying to understand the compilation process of a C program. The preprocessed program is given to the compiler (to create an object file), and the compiler checks for compilation errors. But somewhere I read that the code segment and data segment are created by the compiler, which places the corresponding entries into those segments. Is this correct?
How can the compiler create the segments in memory when we haven't even started running the program? Can anyone please let me know exactly what the compiler does?
As you mentioned, the text and data segments (and technically the BSS) are generated by the compiler. The text segment contains program code, and the data segment contains global and static data. Both are part of your binary image on disk.
The stack and the heap are not created by the compiler, but rather allocated at runtime -- they only exist in memory while the process is still alive.
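A quick way to see the difference at run time is to print a few addresses: the text, data and BSS locations are fixed when the binary is built (modulo ASLR shifting the whole image), while the stack and heap addresses only come into existence while the process runs. A sketch, assuming a typical Linux/gcc setup:

#include <stdio.h>
#include <stdlib.h>

int global_var = 5;   /* .data: present in the binary image on disk        */
int uninit_var;       /* .bss: reserved in the binary, zero-filled at load */

int main(void)
{
    int local_var = 1;                          /* stack: exists only while main runs */
    int *heap_var = malloc(sizeof *heap_var);   /* heap: obtained at run time         */

    printf("text  (main)      : %p\n", (void *)main);
    printf(".data (global_var): %p\n", (void *)&global_var);
    printf(".bss  (uninit_var): %p\n", (void *)&uninit_var);
    printf("heap  (heap_var)  : %p\n", (void *)heap_var);
    printf("stack (local_var) : %p\n", (void *)&local_var);

    free(heap_var);
    return 0;
}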
This is quite simple.
The code segment is for instructions and the data segment is for global and static variables.
It's obvious, then, that in the end the compiler knows the size of both the code segment and the data segment, and that this is exactly the amount of memory required to load your program/library.
It's not actual memory allocation; that happens at runtime.
But the point is that the processor's instruction pointer should not leave the code segment, and this makes the length of the code block quite important.
The compiler does not load the program. It only creates the executable file.
The text section and data section are created by the compiler and placed at the right locations, but only in the executable file. The executable is really just a set of descriptions and instructions that tell the runtime loader where to place the code and data at run time.
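Those "descriptions and instructions to the loader" are literally present in the file as program headers. A sketch of reading them directly (64-bit ELF on Linux assumed; readelf -l shows the same information):

#include <elf.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    if (argc != 2) {
        fprintf(stderr, "usage: %s <elf-file>\n", argv[0]);
        return 1;
    }

    FILE *f = fopen(argv[1], "rb");
    if (!f) { perror("fopen"); return 1; }

    Elf64_Ehdr eh;
    if (fread(&eh, sizeof eh, 1, f) != 1) { perror("fread"); return 1; }

    /* Walk the program header table: each PT_LOAD entry tells the loader
       what to map, where, and with which permissions. */
    for (int i = 0; i < eh.e_phnum; i++) {
        Elf64_Phdr ph;
        fseek(f, (long)(eh.e_phoff + (unsigned long)i * eh.e_phentsize), SEEK_SET);
        if (fread(&ph, sizeof ph, 1, f) != 1) break;
        if (ph.p_type != PT_LOAD)
            continue;
        /* memsz > filesz is where the zero-filled BSS comes from. */
        printf("LOAD vaddr 0x%lx filesz 0x%lx memsz 0x%lx flags %c%c%c\n",
               (unsigned long)ph.p_vaddr,
               (unsigned long)ph.p_filesz,
               (unsigned long)ph.p_memsz,
               (ph.p_flags & PF_R) ? 'R' : '-',
               (ph.p_flags & PF_W) ? 'W' : '-',
               (ph.p_flags & PF_X) ? 'X' : '-');
    }
    fclose(f);
    return 0;
}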

Location of variables in C

I'm trying to understand how C allocates memory to global variables.
I'm working on a simple Kernel. So far it can't do much more than print to screen and enable interrupts. I'm now working on a basic physical memory manager.
My memory manager is a bitmap in which each bit is set to 1 or 0 depending on whether a block of memory is allocated or available. I need to mark the memory that my kernel is using as 'allocated' in the bitmap, so nothing overwrites it.
I can easily find out the start of the kernel, as it's statically loaded at 0x100000. Figuring out the length shouldn't be too difficult either. The part I'm not sure about is where global variables are put in memory.
Let's say my kernel is 12K; I can then mark those three 4K blocks of memory as allocated to protect it. Do I need to allocate more to cover the variables it uses? Or are the variables part of that 12K?
Thank you for your help, I hope I am making enough sense.
Have a look at
http://www.geeksforgeeks.org/archives/14268
Your globals are mostly in the BSS.
As the previous answer says, most variables are stored in the .bss section, but they can also be stored in the .data or .rodata section, depending on whether you defined the global variables with an initial value or as const. After compiling you can use readelf -S kernel.bin to see exactly how much space each section will use. The memory for the .bss section is only occupied when the binary is loaded into memory; it does not take any space on disk. This means that your compiled kernel binary will be smaller than the size it actually occupies once brought into memory (usually by GRUB).
A simple way to figure out exactly how much memory your kernel will use, besides using readelf, is to place the .bss section inside the .data section within your linker script. The kernel binary will then be the same size on disk as in memory (or actually a bit smaller in memory, since not all sections are copied by GRUB), but at least you know the minimum amount of memory you need to allocate.
I'd recommend using a custom linker script (assuming you use gcc): it makes the layout of kernel sections explicit and customizable (to read more about linker scripts, read info ld). You can see an example of my OS's linker script here.
To see the default linker script, use the -v/--verbose option of ld.
Global variables mostly end up in the .data.* and .rodata.* sections; variables initialized with 0 go in .bss.
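Tying this back to the original question: once the linker script exports symbols at the start and end of the kernel image (including .bss), the kernel can reserve its own pages in the bitmap at run time. A sketch; the symbol names kernel_start and kernel_end and the bitmap layout here are made up for illustration and must match whatever your linker script actually defines:

#include <stdint.h>

/* Provided by the linker script, not by any C file (hypothetical names). */
extern char kernel_start[];   /* e.g. placed at 0x100000, before .text      */
extern char kernel_end[];     /* placed after .bss, so globals are included */

#define BLOCK_SIZE 4096u
#define MAX_BLOCKS ((128u * 1024u * 1024u) / BLOCK_SIZE)   /* bitmap covering 128 MiB of RAM */

static uint8_t bitmap[MAX_BLOCKS / 8];                      /* itself lives in .bss */

static void mark_block_used(uint32_t block)
{
    bitmap[block / 8] |= (uint8_t)(1u << (block % 8));
}

/* Mark every 4 KiB block occupied by the kernel image, .bss included, as allocated. */
void reserve_kernel_memory(void)
{
    uintptr_t start = (uintptr_t)kernel_start & ~(uintptr_t)(BLOCK_SIZE - 1);
    uintptr_t end   = (uintptr_t)kernel_end;

    for (uintptr_t addr = start; addr < end; addr += BLOCK_SIZE)
        mark_block_used((uint32_t)(addr / BLOCK_SIZE));
}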

load-time ELF relocation

I am writing a simple user-space ELF loader under Linux (why? for 'fun'). My loader at the moment is quite simple and is designed to load only statically-linked ELF files containing position-independent code.
Normally, when a program is loaded by the kernel's ELF loader, it is loaded into its own address space. As such, the data segment and code segment can be loaded at the correct virtual address as specified in the ELF segments.
In my case, however, I am requesting addresses from the kernel via mmap, and may or may not get the addresses requested in the ELF segments. This is not a problem for the code segment since it is position independent. However, if the data segment is not loaded at the expected address, code will not be able to properly reference anything stored in the data segment.
Indeed, my loader appears to work fine with a simple assembly executable that does not contain any data. But as soon as I add a data segment and reference it, the executable fails to run correctly or SEGFAULTs.
How, if possible, can I fixup any references to the data segment to point to the correct place? Is there a relocation section stored in the (static) ELF file for this purpose?
If you modify the absolute addresses stored in the .got (global offset table) section, your program should work. Make sure to adjust the absolute address calculation to account for the new distance between .text and .data; I'm afraid you will need to figure out where this information comes from for your architecture.
See this: Global Offset Table (Processor-Specific)
Good luck.
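Related to this, and depending on how the binary was linked: a static PIE ships plain relocation records (found via the DT_RELA/DT_RELASZ entries of PT_DYNAMIC, typically R_X86_64_RELATIVE entries on x86-64) that a loader applies after choosing a load address. A sketch of applying just those entries, assuming the caller has already located the relocation table and mapped the segments at load_base:

#include <elf.h>
#include <stddef.h>
#include <stdint.h>

/* Patch every R_X86_64_RELATIVE entry: store load_base + addend at load_base + offset.
   Other relocation types would need symbol lookup and are skipped here. */
static void apply_relative_relocs(uintptr_t load_base,
                                  const Elf64_Rela *rela, size_t count)
{
    for (size_t i = 0; i < count; i++) {
        if (ELF64_R_TYPE(rela[i].r_info) != R_X86_64_RELATIVE)
            continue;
        uint64_t *where = (uint64_t *)(load_base + rela[i].r_offset);
        *where = load_base + (uint64_t)rela[i].r_addend;
    }
}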
I don't see any way you can do that, unless you emulate the kernel-provided virtual address space completely and run the code inside that virtual space. When you mmap the data section from the file, you are intrinsically relocating it to an unknown address in the virtual address space of your ELF interpreter, and your code will not be able to reference it in any way.
Glad to be proven wrong. There's something very cool to learn here.

Resources