How do pointers work "under the hood" in C?

Take a simple program like this:
int main(void)
{
    char p;
    char *q;
    q = &p;
    return 0;
}
How is &p determined? Does the compiler calculate all such references beforehand, or is it done at runtime? If at runtime, is there some table of variables or something where it looks these things up? Does the OS keep track of them, and the program just asks the OS?
My question may not even make sense in the context of the correct explanation, so feel free to set me straight.

How is &p determined? Does the compiler calculate all such references beforehand or is it done at runtime?
This is an implementation detail of the compiler. Different compilers can choose different techniques depending on the kind of operating system they are generating code for and the whims of the compiler writer.
Let me describe for you how this is typically done on a modern operating system like Windows.
When the process starts up, the operating system gives the process a virtual address space of, let's say, 2GB. Of that 2GB, a 1MB section is set aside as "the stack" for the main thread. The stack is a region of memory where everything "below" the current stack pointer is "in use", and everything in that 1MB section "above" it is "free". How the operating system chooses which 1MB chunk of virtual address space is the stack is an implementation detail of Windows.
(Aside: whether the free space is at the "top" or "bottom" of the stack, whether the "valid" space grows "up" or "down" is also an implementation detail. Different operating systems on different chips do it differently. Let's suppose the stack grows from high addresses to low addresses.)
The operating system ensures that when main is invoked, the register ESP contains the address of the dividing line between the valid and free portions of the stack.
(Aside: again, whether the ESP is the address of the first valid point or the first free point is an implementation detail.)
The compiler generates code for main that moves the stack pointer by, let's say, five bytes - by subtracting from it, if the stack is growing "down". It moves by five because it needs one byte for p and four for q. So the stack pointer changes; there are now five more "valid" bytes and five fewer "free" bytes.
Let's say that q is the memory that is now in ESP through ESP+3 and p is the memory now in ESP+4. To assign the address of p to q, the compiler generates code that copies the four byte value ESP+4 into the locations ESP through ESP+3.
(Aside: Note that it is highly likely that the compiler lays out the stack so that everything that has its address taken is on an ESP+offset value that is divisible by four. Some chips have requirements that addresses be divisible by pointer size. Again, this is an implementation detail.)
If you do not understand the difference between an address used as a value and an address used as a storage location, figure that out. Without understanding that key difference you will not be successful in C.
That's one way it could work but like I said, different compilers can choose to do it differently as they see fit.
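To make the value-versus-storage-location distinction above concrete, here is a minimal sketch built from the question's own program; the printf is only there to show that the value stored in q is exactly the address &p.
#include <stdio.h>

int main(void)
{
    char p;
    char *q;

    q = &p;        /* &p used as a value: the address of p is copied into q's storage */
    *q = 'x';      /* q used to reach a storage location: writes the byte that p names */
    printf("p = %c, &p = %p, q = %p\n", p, (void *)&p, (void *)q);
    return 0;
}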

The compiler cannot know the full address of p at compile-time because a function can be called multiple times by different callers, and p can end up at a different address each time.
Of course, the compiler has to know how to calculate the address of p at run-time, not only for the address-of operator, but simply in order to generate code that works with the p variable. On a regular architecture, local variables like p are allocated on the stack, i.e. in a position with fixed offset relative to the address of the current stack frame.
Thus, the line q = &p simply stores into q (another local variable allocated on the stack) the address p has in the current stack frame.
Note that in general, what the compiler does or doesn't know is implementation-dependent. For example, an optimizing compiler might very well optimize away your entire main after analyzing that its actions have no observable effect. The above is written under the assumption of a mainstream architecture and compiler, and a non-static function (other than main) that may be invoked by multiple callers.
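A small sketch of why the address can only be pinned down at run time: the same local gets a fresh address at every (recursive) call, which nothing at compile time could have predicted.
#include <stdio.h>

void f(int depth)
{
    char p;                              /* a fresh instance of p for every call */
    printf("depth %d: &p = %p\n", depth, (void *)&p);
    if (depth < 3)
        f(depth + 1);                    /* each nested call gets its own frame  */
}

int main(void)
{
    f(0);
    return 0;
}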

This is actually an extraordinarily difficult question to answer in full generality because it's massively complicated by virtual memory, address space layout randomization and relocation.
The short answer is that the compiler basically deals in terms of offsets from some “base”, which is decided by the runtime loader when you execute your program. Your variables, p and q, will appear very close to the “bottom” of the stack (although the stack base is usually very high in VM and it grows “down”).

The address of a local variable cannot be completely calculated at compile time. Local variables are typically allocated in the stack. When called, each function allocates a stack frame - a single contiguous block of memory in which it stores all its local variables. The physical location of the stack frame in memory cannot be predicted at compile time. It will only become known at run-time. The beginning of each stack frame is typically stored at run-time in a dedicated processor register, like ebp on the x86 platform.
Meanwhile, the internal memory layout of a stack frame is pre-determined by the compiler at compile-time, i.e. it is the compiler who decides how local variables will be laid out inside the stack frame. This means that the compiler knows the local offset of each local variable inside the stack frame.
Putting this all together, the exact absolute address of a local variable is the sum of the address of the stack frame itself (the run-time component) and the offset of this variable inside that frame (the compile-time component).
This is basically exactly what the compiled code for
q = &p;
will do. It will take the current value of the stack frame register, add some compile-time constant to it (offset of p) and store the result in q.
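As a rough, non-portable illustration of "frame base plus compile-time offset", the sketch below uses the GCC/Clang extension __builtin_frame_address(0) to read the current frame's base address; the gap between it and &p is a constant the compiler chose, not something looked up at run time.
#include <stdio.h>

void demo(void)
{
    char p;
    char *q = &p;
    /* __builtin_frame_address is a GCC/Clang extension, not standard C. */
    printf("frame base %p, &p (stored in q) %p\n",
           __builtin_frame_address(0), (void *)q);
}

int main(void)
{
    demo();
    return 0;
}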

In any function, the function arguments and the local variables are allocated on the stack, just past the caller's return address (the saved program counter) that was pushed when the caller invoked the current function. How these variables get allocated on the stack, and then deallocated when returning from the function, is worked out by the compiler at compile time.
For example, in this case p (1 byte) could be allocated first on the stack, followed by q (4 bytes on a 32-bit architecture). The code assigns the address of p to q; the address of p then is naturally 5 added to or subtracted from the last value of the stack pointer. Well, something like that - it depends on how the value of the stack pointer is updated and whether the stack grows upwards or downwards.
How the return value is passed back to the calling function is something I'm not certain of, but I'm guessing it is passed through a register and not the stack. So, when return is executed, the underlying assembly code should deallocate p and q, place zero into the return-value register, and then jump back to the caller's saved position. Of course, in this case it is the main function, so it is more complicated in that returning causes the OS to terminate the process; but in other cases, it just goes back to the calling function.
In C89, local variables have to be declared at the top of a block, and conceptually they are allocated on the stack when the function is entered and deallocated when it returns. In C99 and C++ this gets a little more involved, because variables can also be declared in the middle of a block (such as an if-else or while body); such a variable conceptually comes into existence when its block is entered and goes away when the block is left, although in practice the compiler usually still reserves space for all of a function's locals at once in the prologue.
In all cases, the address of a local variable is always a fixed number added or subtracted from the stack pointer (as calculated by the compiler, relative to the containing block) and the size of the variable is determined from the variable type.
However, static local variables and global variables are different in C. These are allocated at fixed locations in memory (or at fixed offsets from the program's load address), which are calculated by the linker.
Yet a third variety is memory allocated on the heap using malloc/new and free/delete. I think this discussion would be too lengthy if we include that as well.
That said, my description is only for a typical hardware architecture and OS. All of these are also dependent on a wide variety of things, as mentioned by Emmet.
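A minimal sketch contrasting the storage classes discussed above; the printed addresses are of course system-specific, but typically the global and static objects sit at fixed spots decided by the linker and loader, while the automatic and heap addresses are only settled at run time.
#include <stdio.h>
#include <stdlib.h>

int global_var;                            /* fixed spot, laid out by the linker */

void show(void)
{
    static int static_local;               /* also a fixed spot                  */
    int automatic;                         /* stack: decided at run time         */
    int *heap = malloc(sizeof *heap);      /* heap: decided by the allocator     */

    printf("global %p  static %p  auto %p  heap %p\n",
           (void *)&global_var, (void *)&static_local,
           (void *)&automatic, (void *)heap);
    free(heap);
}

int main(void)
{
    show();
    return 0;
}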

p is a variable with automatic storage. It lives only as long as the function it is declared in is executing. Every time that function is called, memory for p is taken from the stack; therefore its address can change and is not known until run time.

Related

Automatic memory allocation occurs at compile time or at run time in C?

If we talk about static memory allocation, it is said to happen at compile time, but really the compiler only plans this allocation; the memory is actually reserved when the program starts. For example, the compiler may create a large data section in the compiled binary, and when the program is loaded into memory, addresses within the program's data segment are used as the locations of the allocated objects.
If I talk about automatic memory allocation, it happens when control enters a new scope. My doubt is whether the compiler also plays a role here, emitting some virtual addresses into the compiled binary that later become the addresses of the actual memory at run time, or whether this memory is allocated purely at run time without any involvement of the compiler, just the way dynamic memory is allocated.
What if I have some local variable like:
int a = 10;
Will it have compile-time allocation or run-time allocation?
Automatic allocation happens in run-time, though the nature of it is very system-specific. Automatic storage duration variables may end up in registers, on the stack or optimized away entirely.
If they do end up on the stack, the compiler assigns each one an offset that is local to the function's stack frame. That is, the variable might be referred to as SP + 8 or something similar, where SP is the stack pointer. SP, in turn, could hold any value when the function is entered - the compiler and the machine code neither know nor care, which is why stack overflows exist.
You might find this useful: What gets allocated on the stack and the heap?.
Local variables are placed on the stack when they are stored in memory.
Typically, the total stack size requirement for a function is calculated at compile time.
Then, when entering a function, the stack pointer is adjusted down for the entire stack size of the function - the stack usually grows from a higher address towards a lower address.
Each local variable is assigned an address within the current stack frame, and they are typically accessed with memory access instructions that read or write memory with a given offset to the current stack pointer.
However, in optimized builds, local variables are often also kept in CPU registers (whenever there are enough registers available) and not necessarily stored in memory at all. The purpose of this is to avoid memory accesses in order to speed up the program. Register allocation (the compiler choosing which variables to store in registers and which register to use for which variable) depends on a large amount of black magic the compiler does while analysing each variable's lifetime and how much it is used.
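A tiny sketch of that last point: whether a local ever occupies memory at all can depend on whether its address is taken.
/* In an optimized build, 'total' will most likely spend its whole life in a
   register and never have a memory address at all; writing &total anywhere
   would force the compiler to give it one. */
int sum(const int *v, int n)
{
    int total = 0;
    for (int i = 0; i < n; i++)
        total += v[i];
    return total;
}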

Where is the address to the first element of an array stored?

I was playing with C, and I just discovered that a and &a yield the same result, namely the address of the first element of the array. By browsing related topics here, I discovered they are only formatted in a different way. So my question is: where is this address stored?
This is an interesting question! The answer will depend on the specifics of the hardware you're working with and what C compiler you have.
From the perspective of the C language, each object has an address, but there's no specific prescribed mechanism that accounts for how that address would actually be stored or accessed. That's left up to the compiler to decide.
Let's imagine that you've declared your array as a local variable, and then write something like array[137], which accesses the 137th element of the array. How does the generated program know how to find your array? On most systems, the CPU has a dedicated register called the stack pointer that keeps track of the position of the memory used for all the local variables of the current function. As the compiler translates your C code into an actual executable file, it maintains an internal table mapping each local variable to some offset away from where the stack pointer points. For example, it might say something like "because 64 bytes are already used up for other local variables in this function, I'm going to place array 64 bytes past where the stack pointer points." Then, whenever you reference array, the compiler generates machine instructions of the form "look 64 bytes past the stack pointer to find the array."
Now, imagine you write code like this:
printf("%p\n", array); // Print address of array
How does the compiler generate code for this? Well, internally, it knows that array is 64 bytes past the stack pointer, so it might generate code of the form "add 64 to the stack pointer, then pass that as an argument to printf."
So in that sense, the answer to your question could be something like "the hardware stores a single pointer called the stack pointer, and the generated code is written in a way that takes that stack pointer and then adds some value to it to get to the point in memory where the array lives."
Of course, there are a bunch of caveats here. For example, some systems have both a stack pointer and a frame pointer. Interpreters use a totally different strategy and maintain internal data structures tracking where everything is. And if the array is stored at global scope, there's a different mechanism used altogether.
Hope this helps!
It isn't stored anywhere - it's computed as necessary.
Unless it is the operand of the sizeof, _Alignof, or unary & operators, or is a string literal used to initialize a character array in a declaration, an expression of type "N-element array of T" is converted ("decays") to an expression of type "pointer to T", and the value of the expression is the address of the first element of the array.
When you declare an array like
T a[N]; // for any non-function type T
what you get in memory is
+---+
| | a[0]
+---+
| | a[1]
+---+
...
+---+
| | a[N-1]
+---+
That's it. No storage is materialized for any pointer. Instead, whenever you use a in any expression, the compiler will compute the address of a[0] and use that instead.
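A short sketch of the decay rule and its exceptions described above:
#include <stdio.h>

int main(void)
{
    char a[10];

    printf("%zu\n", sizeof a);         /* 10: no decay under sizeof                */
    printf("%zu\n", sizeof (&a[0]));   /* size of a char *, e.g. 8 on many systems */
    printf("%p\n", (void *)a);         /* here a decays to &a[0]                   */
    printf("%p\n", (void *)&a);        /* same address, but type char (*)[10]      */
    return 0;
}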
Consider this C code:
int x;
void foo(void)
{
    int y;
    ...
}
When implementing this program, a C compiler will need to generate instructions that access the int objects named x and y. How does it tell those instructions where the objects are?
Each processor architecture has some way of referring to data in memory. This includes:
The machine instruction includes some bits that identify a processor register. The address in memory is in that processor register.
The machine instruction includes some bits that specify an address.
The machine instruction includes some bits that specify a processor register and some bits that specify an offset or displacement.
So, the compiler has a way of giving an address to the processor. It still needs to know that address. How does it do that?
One way is the compiler could decide exactly where everything in memory is going to go. It could decide it is going to put all the program’s instructions at addresses 0 to 10,000, and it is going to put data at 10,000 and on, and that x will go at address 12300. Then it could write an instruction to fetch x from address 12300. This is called absolute addressing, and it is rarely used anymore because it is inflexible.
Another option is that the compiler can let the program loader decide where to put the data. When the software that loads the program into memory is running, it will read the executable, see how much space is needed for instructions, how much is needed for data that is initialized to zero, how much space is needed for data with initial values listed in the executable file, how much space is needed for data that does not need to be initialized, how much space is requested for the stack, and so on. Then the loader will decide where to put all of these things. As it does so, it will set some processor registers, or some tables in memory, to contain the addresses where things go.
In this case, the compiler may know that x goes at displacement 2300 from the start of the “zero-initialized data” section, and that the loader sets register r12 to contain the base address of that section. Then, when the compiler wants to access x, it will generate an instruction that says “Use register r12 plus the displacement 2300.” This is largely the method used today, although there are many embellishments involving linking multiple object modules together, leaving a placeholder in the object module for the name x that the linker or loader fills in with the actual displacement as they do their work, and other features.
In the case of y, we have another problem. There can be two or more instances of y existing at once. The function foo might call itself, which causes there to be a y for the first call and a different y for the second call. Or foo might call another function that calls foo. To deal with this, most C implementations use a stack. One register in the processor is chosen to be a stack pointer. The loader allocates a large amount of space and sets the stack pointer register to point to the “top” of the space (usually the high-address end, but this is arbitrary). When a function is called, the stack pointer is adjusted according to how much space the new function needs for its local data. When the function executes, it puts all of its local data in memory locations determined by the value of the stack pointer when the function started executing.
In this model, the compiler knows that the y for the current function call is at a particular offset relative to the current stack pointer, so it can access y using instructions with addresses such as “the contents of the stack pointer plus 84 bytes.” (This can be done with a stack pointer alone, but often we also have a frame pointer, which is a copy of the stack pointer at the moment the function was called. This provides a firmer base address for working with local data, one that might not change as much as the stack pointer does.)
In either of these models, the compiler deals with the address of an array the same way it deals with the address of a single int: It knows where the object is stored, relative to some base address for its data segment or stack frame, and it generates the same sorts of instruction addressing forms.
Beyond that, when you access an array, such as a[i], or possibly a multidimensional array, a[i][j][k], the compiler has to do more calculations. To do this, the compiler takes the starting address of the array and does the arithmetic necessary to add the offsets for each of the subscripts. Many processors have instructions that help with these calculations - a processor may have an addressing form that says "Take a base address from one register, add a fixed offset, and add the contents of another register multiplied by a fixed size." This will help access arrays of one dimension. For multiple dimensions, the compiler has to write extra instructions to do some of the calculations.
If, instead of using an array element, like a[i], you take its address, as with &a[i], the compiler handles it similarly. It will get a base address from some register (the base address for the data segment or the current stack pointer or frame pointer), add the offset to where a is in that segment, and then add the offset required for i elements. All of the knowledge of where a[i] is is built into the instructions the compiler writes, plus the registers that help manage the program’s memory layout.
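Here is that subscript arithmetic done by hand for a hypothetical int a[4][5][6] (a minimal sketch, not what any particular compiler emits); the asserts check that "base plus scaled offsets" lands on the same element the compiler computes for a[i][j][k].
#include <assert.h>

int a[4][5][6];

/* By hand, the address calculation for &a[i][j][k]: start at the base of
   the array and move ((i*5 + j)*6 + k) ints forward. */
static int *element(int i, int j, int k)
{
    return (int *)((char *)a + ((i*5 + j)*6 + k) * sizeof(int));
}

int main(void)
{
    assert(element(1, 2, 3) == &a[1][2][3]);
    assert(element(3, 4, 5) == &a[3][4][5]);
    return 0;
}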
Yet one more point of view, a TL;DR answer if you will: When the compiler produces the binary, it stores the address everywhere where it is needed in the generated machine code.
The address may be just plain number in the machine code, or it may be a calculation of some sort, such as "stack frame base address register + a fixed offset number", but in either case it is duplicated everywhere in the machine code where it is needed.
In other words, it is not stored in any one location. Talking more technically, &some_array is not an lvalue, and trying to take the address of it, as in &(&some_array), will produce a compiler error.
This actually applies to all variables, array is not special in any way here. The address of a variable can be used in the machine code directly (and if compiler actually generates code which does store the address somewhere, you have no way to know that from C code, you have to look at the assembly code).
The one thing special about arrays, which seems to be the source of your confusion, is that some_array is basically a more convenient syntax for &(some_array[0]), while &some_array means something else entirely.
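A minimal sketch of that difference, using a hypothetical some_array:
#include <stdio.h>

int main(void)
{
    int some_array[5];

    int *p = some_array;            /* decays: the same as &some_array[0], type int *   */
    int (*q)[5] = &some_array;      /* address of the whole array, type int (*)[5]      */

    /* Same address, different "step": p + 1 skips one int,
       while q + 1 skips the entire five-int array. */
    printf("%p %p\n", (void *)(p + 1), (void *)(q + 1));
    return 0;
}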
Another way to look at it:
The address of the first element doesn't have to be stored anywhere.
An array is a chunk of memory. It has an address simply because it exists somewhere in memory. That address may or may not have to be stored somewhere depending on a lot of things that others have already mentioned.
Asking where the address of the array has to be stored is like asking where reality stores the location of your car. The location doesn't have to be stored - your car is located where your car happens to be - it's a property of existing. Sure, you can make a note that you parked your car in row 97, spot 114 of some huge lot, but you don't have to. And your car will be wherever it is regardless of your note-taking.

Where, and why, is the x64 frame pointer supposed to point? (Windows x64 ABI)

I've been reading a long catalog of very good articles on the Windows x64 ABI. A very minor aspect of these articles is the description of the frame pointer. The general gist is that, because the Windows x64 call stack rules are so rigid, a dedicated frame pointer is typically not needed, although it is optional.
The one exception I have seen consistently noted is when alloca() is used to dynamically allocate memory on the stack. Functions doing so apparently require a frame pointer. For example, to quote from Microsoft's documentation on "Stack Allocation" (italics and bold added by me):
If space is dynamically allocated (alloca) in a function, then a nonvolatile register must be used as a frame pointer to mark the base of the fixed part of the stack and that register must be saved and initialized in the prolog. Note that when alloca is used, calls to the same callee from the same caller may have different home addresses for their register parameters.
To this, Microsoft's x64 ABI alloca() documentation cryptically adds:
_alloca is required to be 16-byte aligned and additionally required to use a frame pointer.
First of all, why must it be used? I assume for call stack unwinding on exception but I haven't yet found a satisfactory explanation.
Next question: where must it point? In the first of the two above quotations, it says it "must" be used to mark the base of the "fixed part of the stack". What's the "fixed part of the stack"? I get the impression this term denotes, in a given frame, the range of addresses that comprises (higher addresses to lower ones):
the caller return address (if you consider it part of the current function's frame);
the addresses to which non-volatile registers were saved by the function prologue; and
the addresses where local variables are being stored.
Again, I haven't found a satisfactory definition for this "fixed part". The "Stack Allocation" page I linked to above contains a diagram (not reproduced here) along with the words "if used, the frame pointer will generally point here".
This very nifty blog post is equally vague, including a diagram stating the frame pointer "points somewhere in here", where "here" is the addresses for the saved non-volatile registers and the locals.
One last bit of crypticness, from Microsoft's MSDN article entitled "Dynamic Parameter Stack Area Construction", which contains only this:
If a frame pointer is used, the option exists to dynamically create the parameter stack area. This is not currently done in the x64 compiler.
What does "generally" mean? Where is "somewhere in here"? What's the option that exists? Is there a rule? Who cares?
Or, tl;dr: What the title asks. Any answer containing annotated assembly gratefully accepted.
The diagram makes it quite clear that the frame pointer points to the bottom of the fixed portion of the local stack frame. The "fixed portion" is the part whose size does not change and whose location is fixed relative to the initial stack pointer. In the diagram it is labelled "Local variables and saved nonvolatile registers."[1]
The precise location of the frame pointer doesn't matter to the operating system because from an information-theoretical point of view, local variables are indistinguishable from memory allocated by alloca immediately upon entry to a function.
void function1()
{
    int a;
    int *b = (int*)alloca(sizeof(int));
    ...
}

void function2()
{
    int& a = *(int*)alloca(sizeof(int));
    int *b = (int*)alloca(sizeof(int));
    ...
}
The operating system has no way of distinguishing between these two functions. They both store a on the stack directly below the nonvolatile registers.
This equivalence is why the diagram says "generally". In practice, compilers point it where indicated, but in theory they could point it anywhere inside the local frame, as long as the distance from the frame pointer to the return address is a constant.
The function needs to inform the operating system where the frame pointer is so that the stack can be unwound during exception handling. Without this information, it would not be possible to walk the stack because the frame is variable-sized.
[1] You can infer this from the fact that the text says that the frame pointer points to "the base of the fixed part of the stack" and the diagram says "The frame pointer will generally point here", and it's pointing at the base of the local variables and saved nonvolatile registers. Assuming the text and diagram are in agreement, this implies that the fixed part of the stack is the same as the local variables and saved nonvolatile registers. This is the same sort of inference you make every day without even realizing it. For example, if a story says
Sally called out to her brother. "Billy, where are you?"
You can infer that Billy is Sally's brother.
alloca is intended to be used with a size available only at runtime. As such, it will change the stack pointer by an amount that's not known at compilation time. You can normally address your local variables and arguments on the stack relative to the stack pointer due to the fixed layout, but alloca messes that up hence the need for another register that is stable. This frame pointer may point anywhere you want as long as you know the relation to the fixed area.
The frame pointer is also handy when the time comes to free the alloca memory because you can simply restore the stack pointer to a known location without having to worry about how much the stack pointer changed.
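A rough sketch of the situation being described, assuming a Unix-like system where alloca is declared in <alloca.h> (MSVC spells it _alloca and declares it in <malloc.h>); because the allocation size is only known at run time, compilers typically establish a frame pointer in any function that uses it.
#include <stdio.h>
#include <string.h>
#include <alloca.h>   /* assumption: a Unix-like system */

void demo(size_t n)
{
    char *buf = alloca(n);   /* moves the stack pointer by an amount known only at run time */
    /* From here on, "stack pointer + constant" no longer reliably reaches the
       fixed locals, which is why the compiler keeps a stable frame pointer. */
    memset(buf, '*', n - 1);
    buf[n - 1] = '\0';
    puts(buf);
}

int main(void)
{
    demo(8);
    return 0;
}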
I don't think the ABI requires the frame pointer as such, or that it must be rbp, or that it must point at any particular place (disclaimer: I don't use Windows).

How is the destination that an uninitialized pointer in c points to determined?

I know that if a pointer is declared in C (and not initialized), it will be pointing to a "random" memory address which could contain anything.
How is where it actually points to determined though? Presumably it's not truly random, since this would be inefficient and illogical.
If this pointer is defined outside of all functions (or is static), it will be initialized to NULL before main() gets control.
If this pointer is created in the heap via malloc(sizeof(sometype*)), it will contain whatever happens to be at its location. It can be the data from the previously allocated and freed buffer of memory. Or it can be some old control information that malloc() and free() use to manage the lists of the free and allocated blocks. Or it can be garbage if the OS (if any) does not clear program's memory or if its system calls for memory allocation return uninitialized memory and so those can contain code/data from previously run programs or just some garbage that your RAM chips had when the system was powered on.
If this pointer is local to a function (and is not static), it will contain whatever has been at its place on the stack. Or if a CPU register is allocated to this pointer instead of a memory cell, it will contain whatever value the preceding instructions have left in this register.
So, it won't be totally random, but you rarely have full control here.
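A compact sketch of the static and automatic cases above (the automatic pointer is deliberately never read, since a correct program cannot rely on its indeterminate value), plus the usual remedy of initializing to NULL:
#include <stdio.h>

int *file_scope_ptr;             /* static storage duration: starts out as NULL          */

int main(void)
{
    int *local_ptr;              /* automatic: indeterminate value, must not be read     */
    int *fixed = NULL;           /* the usual remedy: give it a known value immediately  */

    printf("file-scope pointer: %p\n", (void *)file_scope_ptr);
    printf("initialized local : %p\n", (void *)fixed);
    (void)&local_ptr;            /* taking its address is fine; reading its value is not */
    return 0;
}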
Uninitialized means indeterminate. Generally speaking, when the pointer is allocated, its memory is not cleared, so whatever the memory happened to contain is now treated as the pointer's value. It is effectively random, but it is also efficient in the sense that the memory location is not touched at all during the allocation.
http://en.wikipedia.org/wiki/Uninitialized_variable
Languages such as C use stack space for variables, and the collection of variables allocated for a subroutine is known as a stack frame. While the computer will set aside the appropriate amount of space for the stack frame, it usually does so simply by adjusting the value of the stack pointer, and does not set the memory itself to any new state (typically out of efficiency concerns). Therefore, whatever contents of that memory at the time will appear as initial values of the variables which occupy those addresses.
Although I would imagine this is implementation specific.
Furthermore, you should probably always initialize your pointers; see the answer provided at How do you check for an invalid pointer? and the link given in the first answer:
http://www.lysator.liu.se/c/c-faq/c-1.html
As far as the C standard is concerned, an uninitialized pointer doesn't point anywhere. It is illegal to dereference it. Thus it is impossible in principle to observe its target, and thus the target simply doesn't exist for all intents and purposes.
If you want a trite analogy, asking for the value of an uninitialized pointer is like asking for the value of the smallest positive real number, or the value of the last digit of π.
(The corollary is that only Chuck Norris can dereference an uninitialized pointer.)
It is implementation specific / undefined behavior. It's possible the pointer was automatically initialized to NULL... or it is just whatever value is in memory at the time.
A pointer is a variable that holds the address of another location in memory. The fact that it's not initialized does not mean that the pointer itself doesn't have an address; it only means that the address it holds (the thing it points to) is not known. So, either it was initialized to NULL by default, or it holds whatever happened to be in the pointer's own storage at the time.
In large part, it is like an unerased whiteboard when you enter class. If you draw a box on part of the board but do not erase what is in the box, then what is in the box?
It is whatever was left there previously.
Similarly, if you allocate space for a pointer but do not erase the space, then what is in the space?
Data may be left over from earlier parts of your program or from special code that runs before any normal part of your program (such as the main function) starts running. (The special code may load and link certain libraries, set up a stack, and otherwise prepare the environment needed by a C program.) On embedded systems, rather than typical multi-user systems, data might be left over from previous processes. Quite likely, the previous use of the space was for something other than a pointer, so the value in it does not make sense when interpreted as a pointer. Its value as a pointer might point somewhere but not be meaningful in any way.
However, you cannot rely on this. The C standard does not define the behavior when you use an uninitialized object with automatic storage duration. In many C implementations, an uninitialized pointer will simply contain left over data. But, in some C implementations, the system may detect that you are using an uninitialized object and cause your program to crash. (Other behaviors are possible too.)

need explanation of how memory addresses work in this C program

I have a very simple C program where I am (out of my own curiosity) investigating which memory addresses are used to allocate local variables. My program is:
#include <stdio.h>

int main()
{
    char buffer_1[8], buffer_2[8], buffer_3[8];
    printf("address of buffer_1 %p\n", buffer_1);
    printf("address of buffer_2 %p\n", buffer_2);
    printf("address of buffer_3 %p\n", buffer_3);
    return 0;
}
output is as follows:
address of buffer_1 0x7fff5fbfec30
address of buffer_2 0x7fff5fbfec20
address of buffer_3 0x7fff5fbfec10
my question is: why do the addresses seem to be getting smaller? Is there some logic to this? thank you.
The compiler is allowed to do whatever it wants with your automatic variables. In this case it just looks like it's putting them consecutively on the stack. On most popular systems in use today, stacks grow downwards.
Most compilers allocate stack memory for local variables in one step, at the very beginning of the function. The memory is allocated as a single contiguous block. Under these circumstances, the compiler, obviously, is free to use absolutely any memory layout for local variables inside that block. It can put them so that the addresses increase in the order of declaration. Or decrease. Or are arranged randomly. It is an implementation detail. And there's not much logic behind it.
It is quite possible that in your case the compiler tried to "pretend" that the memory for the arrays was allocated in the stack sequentially and independently (even though that was not the case). If on your platform stack grows downwards (as it does on many platforms), then it is expected that object declared later will have smaller addresses.
But again, functions don't allocate local objects individually. And on top of that the language makes no guarantees about any relationships between local object addresses. So, there's no real reason to prefer one ordering over the other.
The output of your C program is platform-dependent, compiler-dependent.
There cannot be just one perfect answer because the address arrangements vary based on:
Whether the system is little or big endian.
What kind of OS you are compiling on.
What kind of memory architecture you are compiling for.
What kind of compiler you are using (and compilers might have bugs too)
Whether you are on 64-bit or 32-bit platform.
And so much more.
But most important of all, is the type of processor architecture. :)
Here is a list of stack growth strategies per processor:
x86, PDP-11: downwards
System z: in a linked-list fashion, downwards, mostly
ARM: selectable; can grow either upwards or downwards
Mostek 6502: downwards (but only 256 bytes)
SPARC: in a circular fashion with a sliding window; a limited-depth stack
RCA 1802A: subject to the SCRT (Standard Call and Return Technique) implementation
But, in general, your compiler decides the layout of these variables at compile time and bakes that layout into the generated binary; at run time the stack then occupies (or appears to occupy) a contiguous range of memory addresses. In your case, the addresses printed by your C program show that the stack is growing downward.
Basically, the compiler is responsible for allocating memory for all the variables. The arrays get addresses on the stack, but that by itself has nothing to do with the output you are getting. The thing is, the compiler found a contiguous chunk of memory free at that point and allocated it to your program.
