C: Memory layout of C program execution

I want to know how the kernel provides memory for a simple C program.
For example:
#include <stdio.h>
#include <stdlib.h>   /* declares malloc() and free() */

int my_global = 10;

int main(void)
{
    char *str;
    static int val;

    str = (char *) malloc(100);
    scanf("%s", str);
    printf("val: %s\n", str);
    free(str);
    return 0;
}
In this program I have used a static variable, a global variable, and malloc for dynamic allocation.
So what will the memory layout look like?
Can anyone give me a URL with detailed information about this process?

Very basically, C programs built to target ELF (Executable and Linkable Format), such as those built on Linux, are given a standard memory layout. Similar layouts probably exist for other executable formats, but I don't know enough to tell you more about them.
The Layout:
There are some global data sections initialized at low memory addresses (such as sections for the currently executing code, global data, and any string literals written with "..." in your C code).
Above that there is a heap of open memory that can be used. The size of this heap increases automatically as calls to malloc and free move what is called the "program break" to higher addresses in memory.
Starting at a high address in memory, the stack grows towards lower addresses. The stack contains memory for any locally allocated variables, such as those at the top of functions or within a scope ({ ... }).
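A small program can make this layout visible by printing the addresses of objects in each region. This is only an illustrative sketch: the exact addresses, their ordering, and the cast of a function pointer to void * are all platform-dependent (and address-space layout randomization will move things around between runs).
#include <stdio.h>
#include <stdlib.h>

int initialized_global = 42;      /* data section */
int uninitialized_global;         /* BSS section  */

int main(void)
{
    int local = 0;                /* stack        */
    char *dynamic = malloc(16);   /* heap         */

    printf("code  (main)       : %p\n", (void *)main);
    printf("data  (initialized): %p\n", (void *)&initialized_global);
    printf("bss   (zeroed)     : %p\n", (void *)&uninitialized_global);
    printf("heap  (malloc)     : %p\n", (void *)dynamic);
    printf("stack (local)      : %p\n", (void *)&local);

    free(dynamic);
    return 0;
}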
More Info:
There is a good description of a running ELF program here and more details on the format itself on the Wikipedia article. If you want an example of how a compiler goes about translating C code into assembly you might look at GCC, their Internals Manual has some interesting stuff in it; the most relevant sections are probably those in chapter 17, especially 17.10, 17.19 and 17.21. Finally, Intel has a lot of information about memory layout in its IA-32 Architectures Software Developer’s Manual. It describes how Intel processors handle memory segmentation and the creation of stacks and the like. There's no detail about ELF, but it's possible to see where the two match up. The most useful bits are probably section 3.3 of Volume 1: Basic Architecture, and chapter 3 of Volume 3A: System Programming Guide, Part 1.
I hope this helps anyone diving into the internals of running C programs, good luck.

There's a brief discussion at wikipedia.
A slightly longer introduction is here.
More details available here, but I'm not sure it's presented very well.

All static and global variables are stored in the data segment, all automatic and temporary variables are stored on the stack, and all dynamically allocated objects are stored on the heap.
All function parameters are stored on the stack, and each function call gets its own stack frame; this is how recursion works (see the sketch after the link below).
For more on this, see this site.
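To see that each call really does get its own stack frame, you can (on a typical implementation) print the address of a local variable at each level of a recursive call. This is only a sketch; the direction in which the addresses change is platform-dependent.
#include <stdio.h>

/* Each recursive call gets its own copy of 'depth' and 'local',
   living in that call's stack frame. */
static void recurse(int depth)
{
    int local = depth;
    printf("depth %d: &local = %p\n", depth, (void *)&local);
    if (depth < 3)
        recurse(depth + 1);
}

int main(void)
{
    recurse(0);
    return 0;
}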

In practical terms, when you run any C program, its executable image is loaded into the computer's RAM in an organized manner called the process address space, or the memory layout of a C program.
http://www.firmcodes.com/memory-layout-c-program-2/

All uninitialized static and global variables go into the BSS segment (Block Started by Symbol).
Initialized global and static variables are further divided into:
read-only, e.g. const int x = 10;
and read/write, e.g. char str[] = "StackOverflow";
The stack segment is the area where local variables are stored. "Local variable" here means every variable declared inside a function, including main(), in your C program.
The text segment contains the executable instructions of your C program; it is also called the code segment. This is the machine-language representation of the program steps to be carried out, including all functions making up the program, both user-defined and system. The text segment is sharable, so that only a single copy needs to be in memory for different executing programs, such as text editors, shells, and so on. Usually, the text segment is read-only, to prevent a program from accidentally modifying its instructions.
One more region in the memory layout is the unmapped or reserved segment, which contains command-line arguments and other program-related data such as the lower and upper addresses of the executable image.
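Putting this together, here is a hedged sketch of where a typical toolchain (GCC, for instance) usually places each kind of object; the comments describe the usual placement, a given compiler may differ, and you can check the actual placement with tools such as size or nm.
#include <stdlib.h>

const int  max_items = 10;              /* usually .rodata (read-only data)          */
int        counter   = 1;               /* .data: initialized global                 */
int        total;                       /* .bss: uninitialized global, zeroed        */
static int calls;                       /* .bss as well, but with internal linkage   */
char       banner[]  = "StackOverflow"; /* .data: writable initialized array         */

void work(void)                         /* the function body itself goes in .text    */
{
    int   temp = 0;                     /* stack: automatic variable                 */
    char *buf  = malloc(64);            /* the pointed-to block lives on the heap    */
    (void)temp;
    free(buf);
}

int main(void)
{
    work();
    return 0;
}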

Related

How memory allocation of variables or data in a program is done by the compiler and the OS

I want to get an overview of how exactly the memory for a variable is allocated.
In C programming,
Taking the context of "auto" variables, which are allocated on the stack section, I have the following question:
Does the compiler generate a logical address for the variables? If yes, then how? Won't the compiler need OS permission to generate or assign such addresses? If no, then is there some sort of indication or instruction that the compiler puts in the code segment asking the OS to allocate memory when running the executable?
Now taking the context of heap allocated variables,
Is the heap the same size for all programs? If not, then does the executable contain a header or something that tells the OS how much heap space it needs for dynamic allocation?
I'd be grateful if someone provides the answer or shares any related content/links that explains this.
Memory for the stack (most implementations use a stack for automatic-storage-duration objects) and for static-storage-duration objects is allocated during program load and startup.
Does the compiler generate a logical address for the variables? If yes, then how?
I do not know what a "logical address" is, but compilers do "calculate" the references to automatic-storage-duration objects. How? The compiler simply knows how far from the stack pointer the automatic-storage-duration object is located (its offset).
Generally the same applies to static-duration objects and to the code: the compiler only calculates offsets from the start of their sections.
Is the heap of the same size for all programs?
It is implementation defined.
A method typically used in operating systems is that, when a program is starting, there is a piece (or collection) of software used that loads the program. The program loader reads the executable file and sets up memory for the program.
Part of the executable file says what size stack should be allocated for it. Most often, this is set by default when linking the program. (It is 8 MiB for macOS, 2 MiB for Linux, and 1 MiB for Windows.) However, it can be changed by asking the linker to set a different size.
The program loader calls operating system routines to request virtual memory be mapped. It does this for the stack and for other parts of the program, such as the code sections (the parts of memory that contain, mostly, the executable instructions of the program), and the initialized and uninitialized data. When it starts the program, it tells the program where the stack starts by putting that address into a designated register (or similar means).
One of the processor registers is used as a stack pointer; it points to address within the memory allocated for the stack that is the current top of stack. When the compiler arranges to use stack space for objects, it generates instructions that adjust the stack pointer. The addresses for the objects are calculated relative to the stack pointer. If a function needs 128 bytes of data, the compiler generates an instruction that subtracts 128 from the stack pointer. (This may occur in multiple steps, such as “call” and “push” instructions that make some changes to the stack pointer plus an additional “subtract” instruction that finishes the changes.) Then the addresses of all the objects in this stack frame are calculated as offsets from the value of the stack pointer. For example, by taking the stack pointer and adding 40, we calculate the address of the object that has been assigned to be 40 bytes higher than the top of the stack.
(There is some confusion about the wording of directions here because stacks commonly grow from high addresses to low addresses. The program loader may allocate some chunk of memory from, say, address 0x123000000 to 0x124000000. The stack pointer will start at 0x124000000. Subtracting 128 will make it 0x123FFFF80. Then 0x123FFFFA8 is an address that is 40 bytes "higher" than 0x123FFFF80 in the address space, but the "top of stack" is below that. That is because the term "top of stack" refers to the model of physically stacking things on top of each other, with the latest thing on top.)
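As a rough illustration of those offsets in C terms, assume (as above) that the compiler has reserved a 128-byte block in the current frame; the object "at offset 40" is simply 40 bytes above the lowest address of that block. This is only a sketch of the idea, not the compiler's actual frame layout.
#include <stdio.h>

void frame_demo(void)
{
    char frame[128];   /* stands in for the 128 bytes the compiler reserved */

    /* frame[0] is at the lowest address of the block; frame[40] is 40 bytes
       "higher" in the address space, even though the stack itself typically
       grows toward lower addresses. */
    printf("frame[0]  at %p\n", (void *)&frame[0]);
    printf("frame[40] at %p\n", (void *)&frame[40]);
}

int main(void)
{
    frame_demo();
    return 0;
}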
The so-called "heap" is not the same size for all programs. In typical implementations, the memory management routines call system routines to request more virtual memory when they need it.
Note that "heap" is properly a word for a general data structure. Heaps may be used to organize things other than available memory, and the memory management routines may keep track of available memory using data structures other than heaps. When referring to memory allocated via the memory management routines, you can call it "dynamically allocated memory." It may also be shortened to "allocated memory," but that can be confusing in some situations, since all memory that has been reserved for some use is allocated memory.
Some background first
In C programming, Taking the context of "auto" variables, which are allocated on the stack section ...
To understand my answer, you first need to know how the stack works.
Imagine you write the following function:
int myFunction()
{
    return function1() + function2() + function3();
}
Unfortunately, suppose you are not using C as your programming language but a language that supports neither local variables nor return values. (This is how most CPUs work internally.)
You may return a value from a function in a global variable:
function1()
{
    result = 1234; // instead of: return 1234;
}
Using a global variable instead of local ones, your program might now look like this:
int a;

myFunction()
{
    function1();
    a = result;
    function2();
    a += result;
    function3();
    result += a;
}
Unfortunately, one of the three functions (e.g. function3()) may call myFunction() (so the function is called recursively) and the variable a is overwritten when calling function3().
To solve this problem, you may define an array for local variables (myvars[]) and a variable named mypos. In the example, the elements 0...mypos in myvars[] are used; the elements (mypos+1)...(MAX_LOCALS-1) are free:
int myvars[MAX_LOCALS];
int mypos;
...
myFunction()
{
    function1();
    mypos++;
    myvars[mypos] = result;
    function2();
    myvars[mypos] += result;
    function3();
    result += myvars[mypos];
    mypos--;
}
By changing the value of mypos from 10 to 11 (as an example), your program indicates that the element myvars[11] is now in use and that the functions being called shall store their data in elements myvars[x] with x >= 12.
This is exactly how the stack works.
Typically, the "variable" mypos is not a variable but a CPU register named "stack pointer". (However, there are a few historic CPUs where an ordinary variable was used for this!)
The actual answers
Does the compiler generate a logical address for the variables?
In the example above, the compiler will perform a mypos+=3 if there are 3 local variables. Let's say they are named a, b and c.
The compiler simply replaces a by myvars[mypos-2], b by myvars[mypos-1] and c by myvars[mypos].
On most CPUs, the stack pointer (named mypos in the example) is not an index into an array but already a pointer (comparable to int * mypos;), so the compiler would replace a by *(mypos-2) instead of myvars[mypos-2] in the example.
For global variables, the compiler simply counts the number of bytes needed for all global variables. In the simplest case, it chooses a range of memory of the same size (e.g. 0x10000...0x10123) and places the variables there.
Won't the compiler need OS permission to generate or assign such addresses?
No.
The "stack" (in the example this is the array myvars[]) is already provided by the OS and the stack pointer (mypos in the example) is also set to a correct value by the OS before the program is started.
Your program knows that the elements myvars[x] with x>mypos can be used.
For global variables, the information about the range used by global variables (e.g. 0x10000...0x10123) is stored in the executable file. The OS must ensure that this memory range can be used by the program. (For example by configuring the MMU accordingly.)
If this is not possible, the OS will simply refuse to start the program with an error message.
... asking the OS to allocate memory when running the executable?
For variables on the stack:
There may be operating systems where this is done.
However, in most cases, the program will simply crash with a "stack overflow" if too much stack is needed. In the example, this would mean: the program crashes if an element myvars[x] with x >= MAX_LOCALS is accessed.
Now taking the context of heap allocated variables ...
Please first note that global variables are not stored on the heap.
The heap is used for data allocated using malloc() and similar functions (new in C++ for example).
Under many operating systems, malloc() calls an operating system function - so it is actually the operating system that allocates memory on the heap.
... and if there is not enough space, the OS (and malloc()) will return NULL.
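That is why, in C, the return value of malloc() should always be checked before use. A minimal sketch:
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    size_t n = 1000;
    int *values = malloc(n * sizeof *values);   /* may fail and return NULL */

    if (values == NULL) {
        fprintf(stderr, "out of memory\n");
        return EXIT_FAILURE;
    }

    values[0] = 42;       /* safe to use only after the NULL check */
    free(values);         /* return the block to the allocator     */
    return EXIT_SUCCESS;
}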
Does the compiler generate a logical address for the variables? If yes, then how?
Yes, but they are relative to the stack pointer at function entry time (which is normally saved as a constant base pointer, stored in a CPU register). This is because the function can be recursive: you can have two calls to the function with different instances of that variable, each relative to a different copy of the base pointer. The compiler assigns the variable an offset from the base pointer, but the base pointer itself can differ, depending on the stack contents at function entry time.
Won't the compiler need OS permission to generate or assign such addresses?
Nope, the compiler just generates an executable in the format and form needed for the operating system to manage the process's memory. When the program starts, it is normally given three (or more) segments of memory:
Text segment. A normally read-only (or execute-only) segment that gives no write access to the program. This is because the text segment is shared between all processes that are using the same executable at the same time. A program can demand exclusive read-write access to the text (to allow programs that modify their own executable code), but this happens only rarely. This is normally specified to the compiler, and the compiler writes a special flag in the text segment to inform the kernel of this requirement.
Data segment. A read-write segment that can be grown by means of a system call (sbrk(2)). This is used for global variables and the heap (although in modern systems the heap is allocated in a new segment acquired by calling the mmap(2) system call). Sometimes this segment is divided in two: a read-only data segment for constants (so the program receives a signal in case you try to change the value of a constant) and a read-write segment, freely usable by the program. This is where global variables are stored.
Stack segment. A read-write segment that is allocated for the process to use as the stack. It has the capability of growing in one direction as the process starts using it. When the process accesses data one memory page below the current bottom of the segment, it generates a page-fault trap that results in a new page being appended to the segment, so its workings are transparent to the process. This is the memory we are talking about.
The process can ask the kernel explicitly for a new segment if it wants to (say, to map some file in memory, or to load a shared executable/library), and on some systems the read-only variables (declared as const) are explicitly stored in the text segment or in a specific section called .rodata that demands a special read-only data segment from the system. The compiler doesn't normally code this kind of resource itself; it is normally encoded in the program being compiled.
The complete memory is limited by system-imposed limits, so if you try to exceed them (around 8 MB of stack space by default, depending on the operating system) you will get signalled by the system and your program aborted.
As you see, the process memory is owned by the process, and it can make whatever use of it it is permitted to. The stack memory is read/write and allocated on demand; you can use up to roughly 8 MB, but there's no provision to check on the use you make of it.
If no, then is there some sort of indication or instruction that the compiler puts in the code segment asking the OS to allocate memory when running the executable?
The system will know the size of the text segment of the process from its size in the executable. The data segment is divided into two parts, normally based on which global variables are initialized and which default to zero (the memory allocated by the kernel to a process is initialized to zeros for security reasons), so the sum of the initialized-data and uninitialized-data sections tells it how much memory to assign to the data segment. The stack segment is initially assigned just one page of memory, but as the process starts running and filling the stack, it grows as the process generates page faults on the so-called next page of the stack segment. As you see, there's no need for the compiler to embed in the code any instruction to ask for more memory. Everything is read from the executable file.
The compiler runs as a normal program... it only generates all this information and writes it to a file (the executable file) for the kernel to know the resources needed to run the program. The compiler's communication with the kernel is just to ask it to open files, write to them, read from source code, and scratch its head to achieve its task. :)
In most POSIX systems, the kernel loads a program into memory by means of the exec*(2) system calls. The kernel reads the executable file pointed to by a parameter of the call and creates the segments mentioned above, based on the parameters stored in the file. It checks whether another instance of the same program is already running in the system, to avoid loading the instructions from the file again and to instead reference in this process the text segment already mapped by the other. The data segment contents are initialized to zeros, and the initialization data are read into the segment (so the first part holds the .data section of initialized global variables, while the .bss section, which has only a size, is used to calculate the total size of the data segment). Then the stack is normally allocated one or more pages, depending on the initial contents that the exec() call puts on the initial stack. The initial stack is filled with:
a data structure containing references to the program parameter list, which on legacy systems was used to tell the kernel which command-line parameters to show in the ps(1) command output (this is still generated for legacy purposes, but not used by the kernel for obvious security reasons; today a special system call is used to tell the kernel the command line to display in ps(1) output).
a snippet of machine code used on return from a system call to allow execution (in user mode) of any signal handler that needs to run (this is the reason all signal handlers are called when the kernel returns from kernel mode and switches back to user mode, and not otherwise).
the environment of the process.
the array of pointers to environment strings.
the command line parameters.
the array of char pointers that point to the command line parameters.
the envp array reference passed to main().
the argv array reference to the command line parameters.
the argc counter of the number of command line parameters.
Once all this data is pushed onto the stack, the program jumps to the start address (fixed by the linker, or by the user via a linker option) and is allowed to start running.
Before the program jumps to main(), the executed code is part of the C runtime, which loads a special shared executable (called /lib/ld.so or similar) that is responsible for locating and loading all the shared libraries linked to the program. Not all programs have this feature (but almost all of them today are dynamically linked); IMHO this is out of the scope of this question, as the program has already started and is running.
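You can observe part of what exec() placed on that initial stack from C itself. The three-argument form of main() shown below is a common extension (POSIX environments pass envp this way), but only the two-argument form is guaranteed by the C standard, so treat this as a sketch for POSIX-like systems.
#include <stdio.h>

int main(int argc, char *argv[], char *envp[])
{
    /* argc, argv and envp were placed on the initial stack by the exec() call */
    printf("argc = %d\n", argc);
    for (int i = 0; i < argc; i++)
        printf("argv[%d] = %s\n", i, argv[i]);

    for (int i = 0; i < 5 && envp[i] != NULL; i++)   /* print only a few entries */
        printf("envp[%d] = %s\n", i, envp[i]);

    return 0;
}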

How do function calls work in terms of stack and code memory?

I have been trying to understand how memory works and what happens, step by step, when an application executes, especially in an embedded system. The context is mostly C/C++.
Out of the stack, heap, static, and code memory of an application, which parts are stored in RAM (volatile memory) and which parts are stored in non-volatile memory? Or, when an application is executed, is the whole application copied to RAM (volatile memory)?
When a function is called, do all the assembly instructions of that function get copied to the stack, or is only memory allocated for the function?
If only memory is allocated for the function at run time, that means the addresses of those variables have to be added to the assembly code of the function; how does that happen?
Who does all this stack memory allocation, etc., in an embedded system when we write code for it in C? There is no OS in the MCU to do memory management for us, so who manages this memory allocation during function calls in an MCU?
Out of the stack, heap, static, and code memory of an application, which parts are stored in RAM (volatile memory) and which parts are stored in non-volatile memory? Or, when an application is executed, is the whole application copied to RAM (volatile memory)?
When a function is called, do all the assembly instructions of that function get copied to the stack, or is only memory allocated for the function?
If only memory is allocated for the function at run time, that means the addresses of those variables have to be added to the assembly code of the function; how does that happen?
For a comprehensive and correct understanding of code execution in bare-metal (no OS) and OS environments (embedded systems), the different memory structures, and all their internals, I would recommend you cover this book - Extreme C (Author: Kamran Amini), especially these sections:
Chapter 2: From Source to Binary
Chapter 4: Process Memory Structure
Chapter 5: Stack and Heap
In my experience, hearsay and random comments will only hurt your understanding, and you will barely be able to make sense of it. Consult authentic published content.
Who does all this stack memory allocation, etc., in an embedded system when we write code for it in C? There is no OS in the MCU to do memory management for us, so who manages this memory allocation during function calls in an MCU?
The programmer's interface in C is the main.c file:
int main()
{
    // User code here
    return 0;
}
But technically, your IDE (Keil uVision or Code Composer Studio, etc.), along with its RTE (run-time environment) component, places boilerplate initialization and de-initialization code (commonly known as MCU startup code) around this main.c file during the build (compilation) phase of your source code for that particular target hardware, such as the Tiva C TM4C123GH6PM or an STM32F4, etc.
The code inside these startup/initialization files initializes your stack and heap memory, the clock-gating controls of your MCU, sets up different peripherals (GPIO, serial, I2C, etc.), the interrupt vector table, the PC (program counter), the SP (stack pointer), and a lot of other things.
You can open a debug session for a simple program on your target MCU, configure it to stop at the first assembly instruction of the startup code, and then step through all the code until you reach the main.c file.
The findings are fantastic.
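To get a feel for what such a startup file does before main(), here is a heavily simplified, hypothetical sketch in C. The linker-script symbol names (_sidata, _sdata, etc.) and the handler name are illustrative assumptions, not taken from any particular vendor package; real startup files do considerably more (setting up the vector table, clocks, and so on).
#include <stdint.h>

/* Symbols typically defined by the linker script (assumed names). */
extern uint32_t _sidata;   /* start of .data initial values in flash */
extern uint32_t _sdata;    /* start of .data in RAM                  */
extern uint32_t _edata;    /* end of .data in RAM                    */
extern uint32_t _sbss;     /* start of .bss in RAM                   */
extern uint32_t _ebss;     /* end of .bss in RAM                     */

extern int main(void);

void Reset_Handler(void)
{
    uint32_t *src = &_sidata;
    uint32_t *dst;

    /* Copy initialized globals (.data) from flash to RAM. */
    for (dst = &_sdata; dst < &_edata; )
        *dst++ = *src++;

    /* Zero uninitialized globals (.bss). */
    for (dst = &_sbss; dst < &_ebss; )
        *dst++ = 0;

    main();                /* only now does user code start           */
    for (;;) { }           /* trap here if main() ever returns        */
}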
in embedded system
I am answering assuming a system without an MMU, i.e. some small microcontroller. Overall, note that the answers highly depend on the specific configuration of the specific system. In the end, you can store everything in volatile memory if you want, and you can configure microcontrollers that way.
Out of Stack, heap, static and Code memory of a application, which is Stored in RAM or volatile memory and which part is stored in non-volatile memory?
There are sections; see e.g. https://www.embeddedtutor.com/2019/07/memory-layout-executable-in-embedded.html . To be honest it's really simple: if something changes at run time, it lives in volatile memory; if it does not change and must persist across power loss, it lives in non-volatile memory. Let's say:
stack - volatile
heap - volatile
static - the initialization values of static variables are stored in non-volatile memory; the static variables themselves live in volatile memory (i.e. the data segment)
code - we live on Harvard architectures and the assembly instructions do not change, so they go into non-volatile memory
Or when a application is executed, the whole application is copied to RAM or volatile memory?
No, it is not copied.
But, anyway, you can configure a microcontroller to execute instructions from volatile memory, like STM32 code_in_ram. There is still no copy by default (there can be; you can write code for that); the instructions are just there. It all depends on the specific configuration.
If only memory is allocated to the function in real time, that means the address of those variable has to be added to the assembly code of the function, how does that happen?
There is a stack pointer that contains the location of the top of the stack. It is incremented or decremented depending on whether data is being added to or removed from the stack (note that, for example, on x86 the stack grows towards decreasing memory addresses, i.e. upside down). The functions themselves manage the stack; those functions are generated by the C compiler, and the C compiler creates the code that manages the stack.
There are standards that specify how this works on specific architectures; these standards specify the architecture's ABI, for example the x86-64 ABI or the ARM ABI. They specify which registers are used, how the stack is used, how arguments are passed, etc. Compilers then "implement that ABI": they create assembly code that adheres to the rules given in the ABI standards.
Who does all this in an embedded system stack memory allocation etc in a embedded system when we write code for it in C?
The compiled C code itself manages the memory; the compiler glues it together.
so who manages this memory allocation during function calls in MCU
The code itself, I guess? (Nowadays) C standard libraries are written in C, and some are open source. Newlib is a popular free C standard library implementation for embedded systems; you can inspect its malloc implementation.

Stack and heap confusion for embedded 8051

I am trying to understand a few basic concepts regarding the memory layout for an 8051 MCU architecture. I would be grateful if anyone could give me some clarifications.
So, for an 8051 MCU we have several types of memories:
IRAM (idata) - used for general purpose registers and SFRs
PMEG (code) - used to store code (FLASH)
XDATA:
  on-chip (data) - cache memory for data (RAM)
  off-chip (xdata) - external memory (RAM)
Questions:
So where is the stack actually located?
I would assume in IRAM (idata) but it's quite small (30-7Fh)- 79 bytes
What does the stack do?
Now, on one hand I read that it stores the return addresses when we call a function (e.g. when I call a function the return address is stored on the stack and the stack pointer is incremented).
http://www.alciro.org/alciro/microcontroladores-8051_24/subrutina-subprograma_357_en.htm
On the other hand I read that the stack stores our local variables from a function, variables which are "deleted" once we return from that function.
http://gribblelab.org/CBootcamp/7_Memory_Stack_vs_Heap.html
If I use dynamic memory allocation (heap), will that memory always be reserved in off-chip RAM (xdata), or does it depend on the compiler/optimization?
The 8051 has its origin in the late 1970s/early 1980s. As such, it has very limited resources. The original version did (for instance) not even have XRAM; that was "patched" on later and requires special (and slow) accesses.
The IRAM is the "main memory". It really includes the stack (yes, there are only a few bytes). The rest is used for global variables (the "data" and "bss" sections: initialized and uninitialized globals and statics). The XRAM might be used by a compiler for the same purpose.
Note that with these small MCUs you do not use many local variables (and if you do, only 8-bit types). A clever compiler/linker (I actually used some of these) can allocate local variables statically, overlapping them (unless recursion is used, which is very unlikely).
Most notably, programs for such systems mostly do not use a heap (i.e. dynamic memory allocation), but only statically allocated memory. At most, they might use a memory pool, which provides blocks of fixed size and does not merge blocks.
Note that the IRAM includes some special registers which can be bit-addressed by the hardware. Normally, you would use a specialized compiler which can exploit these features. Very likely some features require special assembler functions (these might be provided in a header as C functions that just generate the corresponding machine code instruction), called intrinsics.
The different memory areas might also require compiler-extensions to be used.
You might have a look at sdcc for a suitable compiler.
Note also that the 8051 has an extended Harvard architecture (code and data are separated, with XRAM as a third area).
Regarding your 2nd link: this is a very generalized article; it does not cover MCUs like the 8051 (or AVR, PIC and the like), but more general CPUs like x86, ARM, PowerPC, MIPS, MSP430 (which is also a smaller MCU), etc., using an external von Neumann architecture (internally most, if not all, 32+ bitters use a Harvard architecture).
I don't have direct experience with your chips, but I have worked with very constrained systems in the past. So here is what I can answer:
Question 1 and 2: The stack is more than likely set within a very early startup routine. This will set a register to tell it where the stack should start. Typically, you want this in memory that is very fast to access because compiled code loves pushing and popping memory from the stack all the time. This includes return addresses in calls, local variable declarations, and the occasional call to directly allocate stack memory (alloca).
For your 3rd question, the heap is set to wherever your startup routine sets it.
There is no particular area where a heap needs to live. If you want it to live in external memory, it can be set there. If you want it in your really small/fast area, you can do that too, though that is probably a very bad idea. Again, your chip's/compiler's manual or included code should show you an overloaded call to malloc(). From there, you should be able to walk backwards to see what addresses are being passed into its memory routines.
Your IRAM is so dang small that it feels more like Instruction RAM - RAM where you would put a subroutine or two to make running code from them more efficient. 80 bytes of stack space will evaporate very quickly in a typical C function call framework. Actually, for sizes like this, you might have to hand assemble stuff to get the most out of things, but that may be beyond your scope.
If you have other questions, let me know. This is the kind of stuff I like doing :)
Update
This page has a bunch of good information on stack management for your particular chip. It appears that the stack for this chip is indeed in IRAM and is very very constrained. It also appears that assembly level coding on this chip would be the norm as this amount of RAM is quite small indeed.
Heck, this is the first system I've seen in many years that has bank switching as a way to access more RAM. I haven't done that since the Color Gameboy's Z80 chip.
Concerning the heap:
There is also a malloc/free pair.
You have to call init_mempool(), which is indicated in the compiler documentation but is somewhat uncommon.
The pseudo-code below illustrates this.
However, I used it only this way and did not try heavy use of malloc/free like you may find in dynamic linked-list management, so I have no idea of the performance you get out of this.
//A "large" place in xdata to be used as heap
static char xdata heap_mem_pool [1000];
//A pointer located in data and pointing to something in xdata
//The size of the pointer is then 2 bytes instead of 3 ( the 3rd byte
//store the area specification data, idata, xdata )
//specifier not mandatory but welcome
char xdata * data shared_memory;
//...
u16 mem_size_needed;
init_mempool (heap_mem_pool, sizeof(heap_mem_pool));
//..
mem_size_needed = calcute_needed_memory();
shared_memory = malloc(mem_size_needed);
if ( 0 == shared_memory ) return -1;
//...use shared_memory pointer
//free if not needed anymore
free(shared_memory);
Some additional consequences of the fact that, in general, no function is reentrant (or only with some effort) on this stackless microcontroller:
I will call "my system" the system I am working on at the present time: a C8051F040 (Silabs) with the Keil C51 compiler (I have no specific interest in these 2 companies).
The (function return address) stack is located low in the iram (idata on my system).
If it starts at 30 (dec), it means you have either global or local variables in your code that you requested to be in data RAM (either because you chose a "small" memory model or because you used the keyword data in the variable declaration).
Whenever you call a function, the 2-byte return address of the caller will be saved on this stack (16-bit code space) and that's all: no register saving, no arguments pushed onto the (non-existent) (data) stack. Your compiler may also limit the function call depth.
Necessary arguments and local variables (and certainly saved registers) are placed somewhere in RAM (data RAM or XRAM).
So now imagine that you want to use the same innocent function (like memcpy()) both in your interrupt and in your normal infinite loop; it will cause sporadic bugs. Why?
Due to the lack of a stack, the compiler must share RAM locations for arguments, local variables, etc. between several functions THAT DO NOT BELONG to the same call-tree branch.
The pitfall is that an interrupt is its own call tree.
So if an interrupt occurs while you were executing, e.g., memcpy() in your "normal task", you may corrupt the execution of memcpy(), because when execution returns from the interrupt, the pointers dedicated to the copy performed in the normal task will hold the (end) values of the copy performed in the interrupt.
On my system I get an L15 linker error when the compiler detects that a function is called from more than one independent "branch".
You may make a function reentrant by adding the reentrant keyword, which requires the creation of an emulated stack at the top of XRAM, for example. I did not test this on my system because I am already short of XRAM, which is only 4 kB.
See link
C51: USING NON-REENTRANT FUNCTION IN MAIN AND INTERRUPTS
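As a small illustration of that compiler extension (hedged: this is Keil C51 syntax as described in its documentation, not standard C, so check your compiler manual for the exact spelling and cost), marking a function reentrant makes the compiler keep its arguments and locals on a simulated stack instead of in statically overlaid memory:
/* Keil C51 style declaration (compiler extension, not standard C):
   such a function may safely be called both from main-loop code and
   from an interrupt, at the cost of a slower simulated stack. */
int add_checked(int a, int b) reentrant
{
    return a + b;
}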
In the standard 8051 uC, the stack occupies the same address space as register bank 1 (08H to 0FH) by default at startup. That means the stack pointer (SP register) has a value of 07H at startup (incremented to 08H when the stack is PUSHed). This limits the stack memory to 8 bytes if register bank 2 (starting at 10H) is occupied. If register banks 2 and 3 are not used, they can also be taken up by the stack (08H to 1FH).
If in a given program we need more than 24 bytes (08H to 1FH = 24 bytes) of stack, we can change the SP to point to RAM locations 30H - 7FH. This is done with the instruction "MOV SP, #xx". This should clarify doubts surrounding 8051 stack usage.

Beginner's confusion about x86 stack

First of all, I'd like to know if this model is an accurate representation of the stack "framing" process.
I've been told that conceptually, the stack is like a Coke bottle. The sugar is at the bottom and you fill it up to the top. With this in mind, how does the Call tell the EIP register to "target" the called function if the EIP is in another bottle (it's in the code segment, not the stack segment)? I watched a video on YouTube saying that the "Code Segment of RAM" (the place where functions are kept) is the place where the EIP register is.
Typically, a computer program uses four kinds of memory areas (also called sections or segments):
The text section: This contains the program code. It is reserved when the program is loaded by the operating system. This area is fixed and does not change while the program is running. This would better be called "code" section, but the name has historical reasons.
The data section: This contains variables of the program. It is reserved when the program is loaded and initialized to values defined by the programmer. These values can be altered by the program while it executes.
The stack: This is a dynamic area of memory. It is used to store data for function calls. It basically works by "pushing" values onto the stack and popping them from the stack. This is also called "LIFO": last in, first out. This is where local variables of a function reside. If a function completes, its data is removed from the stack and is lost (basically).
The heap: This is also a dynamic memory region. There are special functions in the programming language which "allocate" (reserve) a piece of this area on request of the program. Another function is available to return this area to the heap when it is not required anymore. As the data is released explicitly, it can be used to store data which lives longer than just a function call (different from the stack).
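A quick sketch of that last point: a block obtained from the heap stays valid after the function that allocated it returns, unlike a stack variable (this is just an illustration, with made-up function names).
#include <stdio.h>
#include <stdlib.h>

/* Returns heap memory that outlives this function call. */
char *make_greeting(const char *name)
{
    char *msg = malloc(64);
    if (msg != NULL)
        snprintf(msg, 64, "hello, %s", name);
    return msg;          /* the block survives the return */
}

int main(void)
{
    char *greeting = make_greeting("world");
    if (greeting != NULL) {
        puts(greeting);
        free(greeting);  /* explicitly give the block back to the heap */
    }
    return 0;
}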
The data for the text and data sections is stored in the program file (on Linux, for example, it can be inspected using objdump; add a "." to the section names). The stack and heap are not stored anywhere in the file, as they are allocated dynamically (on demand) by the program itself.
Normally, after the program has been loaded, the remaining memory area is treated as a single large block where both the stack and the heap are located. They start from opposite ends of that area and grow towards each other. For most architectures the heap grows from low to high memory addresses (ascending) and the stack downwards (descending). If they ever intersect, the program has run out of memory. As this may happen undetected, the stack might corrupt (change foreign data in) the heap or vice versa. This may result in any kind of error, depending on how/what data has changed. If the stack gets corrupted, this may result in the program going wild (this is actually one way a trojan might work). Modern operating systems, however, should take measures to detect this situation before it becomes critical.
This is not only true for x86, but also for most other CPU families and operating systems, notably: ARM, x86, MIPS, MSP430 (microcontroller), AVR (microcontroller), Linux, Windows, OS X, iOS, Android (which uses Linux), DOS. For microcontrollers, there is often no heap (all memory is allocated at run time) and the stack may be organized a bit differently; this is also true for the ARM-based Cortex-M microcontrollers. But anyway, this is quite a special subject.
Disclaimer: This is very simplified, so please no comments like "what about bss, const, myspecialarea" ;-). There is also no requirement from the C standard for these areas, specifically to use a heap or a stack. Indeed there are implementations which don't use either. Those are mostly embedded systems with small (8- or 16-bit) MCUs or DSPs. Also, modern architectures use CPU registers instead of the stack to pass parameters and keep local variables. Those details are defined in the Application Binary Interface of the target platform.
For the stack, you might read the Wikipedia article. Note the difference in implementation between the data structure "stack" and the "hardware stack" as implemented in a typical (micro)processor.

Questions about the Memory Layout of a C program

I have some questions about memory layout of C programs.
Text Segment
Here is my first question:
When I searched for the text segment (or code segment), I read that "the text segment contains executable instructions". But what are executable instructions for any function? Could you give some different examples?
I also read that "the text segment is sharable so that only a single copy needs to be in memory for frequently executed programs, such as text editors, the C compiler, etc.", but I couldn't make a connection between C programs and "text editors".
What should I understand from this statement?
Initialized Data Segment
It is said that the "Initialized Data Segment" contains the global variables and static variables, but I also read that const char* string = "hello world" makes the string literal "hello world" be stored in an initialized read-only area and the character pointer variable string in an initialized read-write area. So is string itself stored in a read-only area or a read-write area? Since both are written here, I'm a bit confused.
Stack
From what I understand, the stack contains the local variables. Is this right?
The text segment contains the actual code of your program, i.e. the machine code emitted by your compiler. The idea of the last statement is that your C program and, say, a text editor are exactly the same kind of thing: just machine-code instructions executing from memory.
For example, we'll take the following code, and a hypothetical architecture I've just thought up now because I can't remember x86 assembly.
while (i != 10)
{
    x -= 5;
    i++;
}
This would translate to the following instructions
LOOP_START:
CMP eax, 10 # EAX contains i. Is it 10?
JZ LOOP_END # If it's 10, exit the loop
SUB ebx, 5 # Otherwise, subtract 5 from EBX (x)
ADD eax, 1 # And add 1 to i
JMP LOOP_START # And then go to the top of the loop.
LOOP_END:
# Do something else
These are low-level operations that your processor can understand. These would then be translated into binary machine code, which is then stored in memory. The actual data stored might be 5, 2, 7, 6, 4, 9, for example, given a mapping between operation and opcode that I just thought up. For more information on how this actually happens, look up the relationship between assembler and machine code.
-- Ninja-edit - if you take RBK's comment above, you can view the actual instructions which make up your application using objdump or a similar disassembler. There's one in Visual Studio somewhere, or you could use OllyDbg or IDA on Windows.
Because the actual instructions of your program should be read-only, the text segment doesn't need to be replicated for multiple runs of your program since it should always be the same.
As for your question on the data segment, char* string will actually be stored in the .bss segment, since it doesn't have an initializer. This is an area of memory that is cleared before your program runs (by the crt0 or equivalent) unless you give GCC a flag that I can't remember off-hand. The .bss segment is read-write.
Yes, the stack segment contains your local variables. In reality, it stores what are called "stack frames". One of these is created for each function you call, and they stack on top of each other. A frame contains things like the local variables, as you said, plus other useful bits such as the address the function was called from, so that when the function exits the previous state can be reinstated. For what is actually contained in a stack frame, you need to delve into your architecture's ABI (Application Binary Interface).
The text segment is often also called "code" ("text" tends to be the Unix/Linux name; other OSes don't necessarily use that name).
And it is shareable in the sense that if you run TWO processes that both execute the C compiler, or you open the text editor in two different windows, both of those share the same "text" section, because it doesn't change while the code is running (self-modifying code is not allowed in the text segment).
The initialized string value is stored in either "rodata" or "text", depending on the compiler. And yes, it's not writable.
If string is a global variable, it will end up in "initialized data", which will hold the address of the "hello world" message as the value of string. The const part refers to the fact that the contents the pointer points at are constant, so we can still change the pointer itself with string = "foo bar"; later in the code.
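A small sketch of that distinction: the literal itself is read-only, while the pointer variable is writable unless you also declare the pointer itself const.
#include <stdio.h>

const char *string = "hello world";   /* pointer in writable data, literal in read-only storage */

int main(void)
{
    /* string[0] = 'H';       would be rejected by the compiler (and may crash at run time) */
    string = "foo bar";       /* fine: only the pointer changes, not the literal             */
    puts(string);
    return 0;
}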
The stack is, indeed, used for local variables and, typically, the call stack (where the code returns to after it finishes the current function).
However, the actual layout of a program's in-memory image is left entirely up to the operating system, and often the program itself as well. Yet, conceptually we can think of two segments of memory for a running program[1].
Text or Code Segment - Contains compiled program code.
Data Segment - Contains data (global, static, and local) both initialized and uninitialized. Data segment can further be sub-categorized as follows:
2.1 Initialized Data Segments
2.2 Uninitialized Data Segments
2.3 Stack Segment
2.4 Heap Segment
Initialized data segment stores all global, static, constant, and external variables (declared with extern keyword) that are initialized beforehand.
Uninitialized data segment or .bss segment stores all uninitialized global, static, and external variables (declared with extern keyword).
Stack segment is used to store all local variables and is used for passing arguments to the functions along with the return address of the instruction which is to be executed after the function call is over.
Heap segment is also part of RAM where dynamically allocated variables are stored.
Coming to your first question: if you are aware of function pointers, then you know that a function name evaluates to the address of the function (which is the entry point for that function). The executable instructions are the compiled machine/assembly code stored at that address, and the instruction set may vary from architecture to architecture.
Text or code section is shareable - If more than one running process belong to the same program then the common compiled code need not be loaded into memory separately. For example if you have opened two .doc documents then there will be two processes for them but definitely there will be some common code being used by both processes.
The stack segment is the area where local variables are stored. "Local variable" means all those variables which are declared in any function, including main(), in your C program.
When we call any function, a stack frame is created, and when the function returns, the stack frame is destroyed, including all local variables of that particular function.
A stack frame contains data such as the return address, the arguments passed to the function, its local variables, and any other information needed by the invoked function.
A "stack pointer" (SP) keeps track of the stack: each push and pop operation adjusts the stack pointer to the next or previous address.
You can refer to this link for practical info: http://www.firmcodes.com/memory-layout-c-program-2/

Resources