int x = 3. Will x be in cache?

I was recently asked this question by a friend.
In a C program if I declare an integer
int x = 3;
then will it be fetched into the cache?
My opinion:
Yes. The processor will allocate sizeof(int) bytes of space in memory. Then, to write 3 to that memory location, it will load x into a register and store 3 there. So since x passes through the CPU's registers (this is how I think it works), it will also be fetched into the cache.
Whereas if we only declare the integer and do not initialize it, e.g.
int x;
Then the CPU just allocates the memory and does not write anything to that memory, so in this case x won't be in the cache.
This can be generalized to the question: when is a variable fetched into the cache?
Let me know if my thinking is correct.
Thanks

There is no definitive answer to this, but yes, more than likely. There is even a very good chance it will be put into a register as well. If it can, the compiler will avoid memory altogether and just keep it in a register!

If there's optimization at all, it's fairly likely that 3 will never even make it into a register. Rather, the compiler will recognize that x has a value of 3 and will substitute 3 for the next use of x, possibly with, e.g., an add-immediate instruction that doesn't first place the value in a register.
Or the compiler may optimize x into a register and so the value of x will never be stored in memory and hence never go through cache.
And some processors have what's known as a "store-through" cache, meaning that if x is assigned a storage location, the value may be placed into that location without first (or simultaneously) being placed in the data cache.
So we can most definitely say that the value 3 might possibly appear somewhere in cache. Sometimes.

First, the CPU does not allocate memory for your integer variable, at least not by itself. Memory allocation is the combined task of the compiler and OS. The compiler generates code either for the CPU or for the OS to allocate memory either on the stack or in the heap. Then that code executes and reserves memory.
Depending on your code, compiler optimizations and operation of the various caches there are multiple possible fates for the variable:
it doesn't get into the cache at all because there's no cache or it's disabled
it doesn't get into the cache because no code uses this variable or any data immediately adjacent to it, so there's no chance of sucking this variable into the cache
it can get into the instruction cache instead of the data cache if the compiler finds that this variable is a constant and can be directly encoded in an instruction (example: int x=3; y+=x; Here the compiler may simply generate code for y+=3 and that 3 can be an immediate operand of a mov or add instruction)
similarly to the above the compiler may find out how to generate optimized code without ever needing the value of the variable (3) anywhere (example: int x=3; while(x--) printf("*"); Here the compiler may just generate 3 calls to printf("*") or even a single call to printf("***"))
it gets into the cache temporarily and after use is squeezed out of the cache by other data

In addition to being implementation-dependent, it also depends on where that declaration is written. If it's a global variable, the 3 will probably just be stored in the executable's data segment so that it's mapped into the process's address space when the program starts running. The assignment doesn't "happen" at runtime in that case.

If x is allocated on the stack instead of being in a register or optimized entirely away, then it will certainly be in cache. The stack is almost always in cache because the stack is always being used.

Related

When does load, store, and alloca get used in LLVM

I am looking at LLVM to see how it uses load, store, and alloca. In the first slide below, there is no use of them. In the second, alloca is used.
I am not familiar with C, so I will have to bring myself up to speed in order to run an example and figure this out myself, but I wanted to ask if anyone knew already. I am not sure what kind of example C code to write in order to produce LLVM output that uses load, store, and alloca.
The question is: when does LLVM use load, store, and alloca?
I am also wondering whether load/store are necessary at all, or whether LLVM can do without them.
Figure 1 ↓
Figure 2 ↓
Without optimizations, clang will produce LLVM IR with one alloca for each local variable, one load for each use of that variable as an r-value, and one store for each assignment to that variable (including its initialization).
With optimizations, clang will try to minimize the number of loads and stores, and will often eliminate the alloca completely if possible (using only registers).
One way to ensure that the variable is stored in memory, even with optimizations, would be to take its address (since registers don't have an address).
Wondering whether load/store are necessary as well, or whether LLVM can do without them.
You need store / load whenever you write to memory locations. So the question becomes whether you can do without memory, storing everything in registers. Since LLVM (unlike real machines) supports an infinite amount of registers, that's a valid question.
However, as I mentioned, registers don't have addresses. So any code that takes the address of a variable, needs to use memory. So does any code that performs arithmetic on addresses, such as code that indexes arrays.
alloca allocates memory in the function's local frame. It is necessary to create a variable whose address is taken, like in this example:
#include <stdio.h>

void foo(int* ptr) {
    *ptr = 4;
}

int main() {
    int value = 0;
    foo(&value);
    printf("%i\n", value); // prints 4
}
If it doesn't inline foo, then LLVM will need an alloca instruction in main to create the memory that backs the value variable. foo needs to use store to put 4 at the address that ptr points to, and main then needs to use load to load the contents of value after it's been modified by foo.
Compilers for C-family languages typically prefer to start off using alloca for every variable in the function's frame, and then let LLVM optimize the allocas into SSA values. In many cases, the compiler is able to promote allocated variables to SSA values, as the ssa2 function shows. The SSA form is capable of representing variables that meet the following two conditions:
their addresses aren't taken
their size is fixed
"Taking the address" of a variable is an operation that doesn't exist in Javascript/Ruby, so you may need to get up to speed on C to understand what it means. It is extremely common in C and C++.
"Fixed size" means that the compiler knows ahead of time how much memory it needs for a specific data structure. It always knows for simple integers, for instance, but arrays often have a variable size. Arrays whose size isn't known until runtime can be allocated with alloca or malloc, and then you need to access their contents with load and store.
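A small sketch of the "fixed size" condition (names invented for illustration): an array whose length is only known at run time cannot be promoted to SSA registers, so it lives in memory allocated with malloc (or a VLA / alloca) and is accessed with loads and stores in the generated IR.

```c
#include <stdlib.h>

/* n is a runtime value, so the array must live in memory;
   a[i] = ... compiles to stores, sum += a[i] to loads. */
int sum_squares(int n) {
    int *a = malloc(n * sizeof *a);   /* runtime-sized allocation */
    if (!a) return -1;
    for (int i = 0; i < n; i++)
        a[i] = i * i;                 /* store */
    int sum = 0;
    for (int i = 0; i < n; i++)
        sum += a[i];                  /* load */
    free(a);
    return sum;
}
```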
Finally, note that your second example is broken: it reads from an uninitialized value, and if you compile it at higher optimization levels, you'll just get ret i32 undef.

How to undeclare (delete) variable in C?

Like we do with macros:
#undef SOMEMACRO
Can we also undeclare or delete variables in C, so that we can save a lot of memory?
I know about malloc() and free(), but I want to delete the variables completely, so that if I then use printf("%d", a); I get an error like
test.c:4:14: error: ‘a’ undeclared (first use in this function)
No, but you can create small minimal scopes to achieve this, since all scope-local variables are destroyed when the scope is exited. Something like this:
void foo() {
// some code
// ...
{ // create an extra minimum scope where a is needed
int a;
}
// a doesn't exist here
}
It's not a direct answer to the question, but it might bring some order and understanding of why this question has no proper answer and why "deleting" variables is impossible in C.
Point #1 What are variables?
Variables are a way for a programmer to assign a name to a memory space. This is important, because this means that a variable doesn't have to occupy any actual space! As long as the compiler has a way to keep track of the memory in question, a defined variable could be translated in many ways to occupy no space at all.
Consider: const int i = 10; A compiler could easily choose to substitute all instances of i into an immediate value. i would occupy 0 data memory in this case (depending on architecture it could increase code size). Alternatively, the compiler could store the value in a register and again, no stack nor heap space will be used. There's no point in "undefining" a label that exists mostly in the code and not necessarily in runtime.
Point #2 Where are variables stored?
After point #1 you already understand that this is not an easy question to answer as the compiler could do anything it wants without breaking your logic, but generally speaking, variables are stored on the stack. How the stack works is quite important for your question.
When a function is called, the machine takes the current location of the CPU's instruction pointer and the current stack pointer and pushes them onto the stack, advancing the stack pointer to the next location on the stack. It then jumps into the code of the function being called.
That function knows how many variables it has and how much space they need, so it moves the frame pointer to claim a frame that can hold all the function's variables, and then just uses the stack. To simplify things, the function claims enough space for all its variables right at the start, and each variable has a well-defined offset from the beginning of the function's stack frame*. The variables are also stored one after the other.
While you could manipulate the frame pointer after this, it would be too costly and mostly pointless. The running code only uses the last stack frame and could occupy all the remaining stack if needed (the stack is allocated at thread start), so "releasing" variables gives little benefit. Releasing a variable from the middle of the stack frame would require a defragmentation operation, which would be very costly in CPU time, and pointless just to recover a few bytes of memory.
Point #3: Let the compiler do its job
The last issue here is the simple fact that a compiler can do a much better job of optimizing your program than you probably could. Given the need, the compiler can detect variable scopes and overlap memory that can't be accessed simultaneously, in order to reduce the program's memory consumption (e.g. with the -O3 compile flag).
There's no need for you to "release" variables since the compiler could do that without your knowledge anyway.
This is to complement all said before me about the variables being too small to matter and the fact that there's no mechanism to achieve what you asked.
* Languages that support dynamic-sized arrays could alter the stack frame to allocate space for that array only after the size of the array was calculated.
There is no way to do that in C nor in the vast majority of programming languages, certainly in all programming languages that I know.
And you would not save "a lot of memory". The amount of memory you would save if you did such a thing would be minuscule. Tiny. Not worth talking about.
The mechanism that would facilitate the purging of variables in such a way would probably occupy more memory than the variables you would purge.
The invocation of the code that would reclaim the memory of individual variables would also occupy more space than the variables themselves.
So if there were a magic method purge() that purges variables, not only would the implementation of purge() be larger than any amount of memory you could ever hope to reclaim by purging variables in your program, but also, in int a; purge(a);, the call to purge() would occupy more space than a itself.
That's because the variables that you are talking about are very small. The printf("%d", a); example that you provided shows that you are thinking of somehow reclaiming the memory occupied by individual int variables. Even if there were a way to do that, you would be saving something of the order of 4 bytes. The total amount of memory occupied by such variables is extremely small, because it is a direct function of how many variables you, as a programmer, declare by hand-typing their declarations. It would take years of typing on a keyboard, doing nothing but mindlessly declaring variables, before you had declared a number of int variables occupying an amount of memory worth speaking of.
Well, you can use blocks ({ }) and defining a variable as late as possible to limit the scope where it exists.
But unless the variable's address is taken, doing so has no influence on the generated code at all, since the compiler's determination of how long it has to keep the variable's value alive is not significantly impacted.
If the variable's address is taken, failure of escape analysis, mostly due to inlining barriers like separate compilation or allowing semantic interposition, can make the compiler assume it has to keep the variable alive until later in the block than strictly necessary. That's rarely significant (don't worry about a handful of ints, and most often keeping a variable alive for a few extra lines of code is insignificant), but it's best to keep in mind for the rare case where it might matter.
If you are that concerned about the tiny amount of memory that is on the stack, then you're probably going to be interested in understanding the specifics of your compiler as well. You'll need to find out what it does when it compiles. The actual shape of the stack-frame is not specified by the C language. It is left to the compiler to figure out. To take an example from the currently accepted answer:
void foo() {
// some code
// ...
{ // create an extra minimum scope where a is needed
int a;
}
// a doesn't exist here
}
This may or may not affect the memory usage of the function. If you were to do this in a mainstream compiler like gcc or Visual Studio, you would find that they optimize for speed rather than stack size, so they pre-allocate all of the stack space they need at the start of the function. They will do analysis to figure out the minimum pre-allocation needed, using scoping and variable-usage analysis, but those algorithms simply won't be affected by extra scoping. They're already smarter than that.
Other compilers, especially those for embedded platforms, may allocate the stack frame differently. On these platforms, such scoping may be the trick you needed. How do you tell the difference? The only options are:
Read the documentation
Try it, and see what works
Also, make sure you understand the exact nature of your problem. I worked on a particular embedded project which eschewed the stack for everything except return values and a few ints. When I pressed the senior developers about this silliness, they explained that on this particular application, stack space was at more of a premium than space for globally allocated variables. They had a process they had to go through to prove that the system would operate as intended, and this process was much easier for them if they allocated everything up front and avoided recursion. I guarantee you would never arrive at such a convoluted solution unless you first knew the exact nature of what you were solving.
As another solution you could look at, you could always build your own stack frames. Make a union of structs, where each struct contains the variables for one stack frame. Then keep track of them yourself. You could also look at functions like alloca, which can allow for growing the stack frame during the function call, if your compiler supports it.
Would a union of structs work? Try it. The answer is compiler dependent. If all variables are stored in memory on your particular device, then this approach will likely minimize stack usage. However, it could also substantially confuse register coloring algorithms, and result in an increase in stack usage! Try and see how it goes for you!
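A hedged sketch of the union-of-structs idea described above, with all names invented for illustration: each struct holds the locals of one phase of the function, and the union overlaps them so the two phases share the same bytes of the frame. Whether this actually shrinks the stack frame is compiler dependent, as noted.

```c
/* Phase A and phase B never run at the same time, so their
   locals can legally share storage via the union. */
struct phase_a { int i; int acc; };
struct phase_b { int j; int prod; };

union frame {
    struct phase_a a;   /* live during phase A only */
    struct phase_b b;   /* live during phase B only */
};

int two_phases(int n) {
    union frame f;              /* one slot serves both phases */
    f.a.acc = 0;
    for (f.a.i = 1; f.a.i <= n; f.a.i++)
        f.a.acc += f.a.i;       /* sum of 1..n */
    int sum = f.a.acc;          /* copy out before reuse */
    f.b.prod = 1;               /* reuses the same storage */
    for (f.b.j = 1; f.b.j <= n; f.b.j++)
        f.b.prod *= f.b.j;      /* n factorial */
    return sum + f.b.prod;
}
```

Note that a good optimizer already performs this kind of lifetime-based overlap on ordinary locals, which is part of why the text above says to try it and measure.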

When is memory allocated and used in a C program?

If I type int x, is it using sizeof(int) bytes of memory now? Or not until x has a value?
What if x = b + 6 ... is x given a spot in memory before b is?
Yes, as soon as you declare a variable like:
int x;
memory is, generally, allocated on the stack. That being said, the compiler is very smart. If it notices you never use that variable, it may optimize it away.
If I type int x is it using sizeof(int) bytes of memory now? Is it not until x has a value?
Once you declare a variable like int x; it will be taking up space in memory (typically 4 bytes in the case of an int). Giving it a value like x = 5 just modifies the memory that is already taken up.
What if x = b + 6...is x given a spot in memory before b is?
For this statement to be valid, both x and b must have been declared before this statement. As for which one was allocated in memory first, that depends on what you did before this statement.
Example:
int x = 5;
int b = 6;
x = b + 6; //your code
In this case, x was allocated in memory before b.
Actually, when you make a function call, the space required for all locally declared variables is allocated right away.
When you compile your C function to assembly, the compiler adds a procedure prologue, which relocates the stack pointer to make space for function parameters, the return value, local variables, and a few more values used to manage the function call.
The order of allocation of local variables is compiler dependent and doesn't necessarily have to be in the order of declaration or usage.
When you use a variable before assigning any value, the CPU just uses whatever was already in the allocated memory. It may be 0, it may be some garbage value, or it may even be a value that a previous function call left there. It totally depends on your program's execution, operating system and compiler.
So it is one of the best practices to always initialize what you have declared, as soon as possible, because you may use the variable by mistake before assigning a value. If it happens to contain the value you intended to assign (say 0, which is fairly probable), it will work for a while. But later, all of a sudden, your program may change behavior in a way you didn't expect, even though it was working perfectly before. And debugging can be a pain because of your assumptions.
Depending on where the variable is defined it may be assigned space
for global and static variables at compile time;
for local variables at run time when the function is entered (not when the definition is encountered, as @Faruxx pointed out);
for objects (not variables) which are allocated dynamically via malloc obviously at run time when malloc is executed. Malloc will typically request a lot more memory from the system (a page? 4k?) for the program's address space and slice it up into byte sized bits (cough) during subsequent calls.
A declaration like extern int size; will not allocate or occupy any memory but refers to a memory location which will be resolved only by the linker (if the actual definition is in another translation unit) to some memory reserved at compile time there.
Performance:
Global variables will in principle impact startup performance because global memory is zero initialized. This is obviously a one time penalty and negligible except for extreme cases. A larger executable will also need longer to start.
Variables on the stack are essentially free in performance terms. They are not initialized (exactly for these performance reasons), and the assembler code for incrementing the stack pointer couldn't care less about the size of the increment. The maximum stack size is fixed at startup to a few kB or MB (and the program will typically crash with, you guessed it, a stack overflow when it tries to grow the stack beyond its limit); so there is no potentially expensive interaction with the operating system when the stack grows.
Allocations on the heap carry a comparatively large performance penalty because they always involve a function call (malloc) which then actually needs to do some work, plus a potential operating system call (to put a memory segment into the program's address space), plus the need to pair each malloc with a free in long-running programs which must not leak. (When I say "comparatively large" I mean "compared to local variables"; on your average PC you don't need to think about it except in the innermost loop.)
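The contrast above can be sketched in two tiny functions (names invented for illustration): the stack variable is just part of the frame the prologue already reserved, while the heap object costs a malloc call, a possible OS call behind it, and a matching free.

```c
#include <stdlib.h>

/* Stack: no call needed; x is simply part of the frame. */
int on_stack(void) {
    int x = 7;
    return x;
}

/* Heap: a function call to malloc, work inside the allocator,
   and a free that must be paired with the malloc by hand. */
int on_heap(void) {
    int *p = malloc(sizeof *p);
    if (!p) return -1;
    *p = 7;
    int x = *p;
    free(p);
    return x;
}
```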

C variable allocation time and space

If I have a test.c file with the following:
#include ...

int global = 0;

int main() {
    int local1 = 0;
    while (1) {
        int local2 = 0;
        // Do some operation with one of them
    }
    return 0;
}
So if I had to use one of these variables in the while loop, which one would be preferred?
Maybe I'm being a little vague here, but I want to know if the difference in time/space allocation is actually relevant.
If you are wondering whether declaring a variable inside a loop causes it to be created/destroyed at every iteration, there is nothing really to worry about. These variables are not dynamically allocated at runtime; nothing is being malloc'd here. Some memory is simply set aside for use inside the loop. So having the variable inside the loop is just the same as having it outside, in terms of performance.
The real difference here is scope not performance. Whether you use a global or local variable only affects where you want this variable to be visible.
In case you're wondering about performance differences: most likely there aren't any. If there are theoretical performance differences, you'll find it hard to actually devise a test to measure them.
A decision like this should not be based on performance but semantics. Unless the semantic behavior of a global variable is required, you should always use automatic (local non-static) variables.
As others have said and surely will say, there are unlikely to be any differences in performance. If there are, the automatic variable will be faster.
The C compiler will have an easier time making optimizations on the variables declared local to the function. The global variable would require an optimizer to perform "Inter-Procedural Data Flow Analysis", which isn't that commonly done.
As an example of the difference, consider that all your declarations initialize the variable to zero. However, in the case of the global variable, the compiler cannot use that information unless it verifies that no flow of control in your program can change the global prior to using it in your example function. In the case of the locally declared ("automatic") variables, there is no way the initial value can be changed by another function (in particular, the compiler verifies that their address is never passed to a sub-function) and the compiler can perform "killed definitions" and "value liveness" analysis to determine whether the zero value can be assumed in some code paths.
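The point above can be made concrete with a minimal sketch (names invented for illustration): the initializer of a local is a "killed definition" the optimizer can track, while a global's value may have been changed by any code that ran earlier, so the compiler usually has to read it from memory.

```c
/* The compiler cannot assume g is still 0 at any given call site,
   so use_global() typically compiles to a memory load plus an add. */
int g = 0;

int use_global(void) {
    return g + 1;
}

/* No other code can see x, so the compiler can fold this whole
   function to "return 1" with no memory access at all. */
int use_local(void) {
    int x = 0;
    return x + 1;
}
```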
Of the two local variables, as a guideline, the optimizer will always have an easier time optimizing access to the variable with the smaller (more limited) scope.
Having stated the above, I would suggest that the other answers' bias toward semantics over micro-optimization is correct. Use the variable which makes the code read best, and you will be rewarded with more time returned to you than you would ever gain by assisting the def-use optimization calculation.
In general, avoid using a global variable, or any variable which can be accessed more broadly than absolutely necessary. Limited scoping of variables helps prevent bugs from being introduced during later program maintenance.
There are three broad classes of variables: static (global), stack (auto), and register.
Register variables are stored in CPU registers. Registers are very fast word-sized memories, which are integrated in the CPU pipeline. They are free to access, but there are a very limited number of them (typically between 8 and 32 depending on your processor and what operations you're doing).
Stack variables are stored in an area of RAM called the stack. The stack is almost always going to be in the cache, so stack variables typically take 1-4 cycles to access.
Generally, local variables can be either in registers or on the stack. It doesn't matter whether they are allocated at the top of a function or in a loop; they will only be allocated once per function call, and allocation is basically free. The compiler will put variables in registers if at all possible, but if you have more active variables than registers, they won't all fit. Also, if you take the address of a variable, it must be stored on the stack since registers don't have addresses.
Global and static variables are a different beast. Since they are not usually accessed frequently, they may not be in cache, so it could take hundreds of cycles to access them. Also, since the compiler may not know the address of a global variable ahead of time, it may need to be looked up, which is also expensive.
As others have said, don't worry too much about this stuff. It's definitely good to know, but it shouldn't affect the way you write your programs. Write code that makes sense, and let the compiler worry about optimization. If you get into compiler development, then you can start worrying about it. :)
Edit: more details on allocation:
Register variables are allocated by the compiler, so there is no runtime cost. The code will just put a value in a register as soon as the value is produced.
Stack variables are allocated by your program at runtime. Typically, when a function is called, the first thing it will do is reserve enough stack space for all of its local variables. So there is no per-variable cost.

Why compilers creates one variable "twice"?

I know this is a more "heavy" question, but I think it's interesting too. It was part of my previous questions about compiler functions, but back then I explained it very badly, and many answered just my first question, so here it is:
So, if my knowledge is correct, modern Windows systems use paging as a way to switch tasks and ensure that each task has an appropriate place in memory. So, every process gets its own address space starting from 0.
When multitasking takes effect, the kernel has to save all important registers to the task's stack, I believe, then save the current stack pointer, change the page entry to switch to another process's physical address space, load the new process's stack pointer, pop the saved registers and continue with a jump to the popped instruction pointer address.
Because of this nice feature (paging), every process thinks it has nice flat memory within reach. So, there are no far jumps, far pointers, memory segments or data segments. All is nice and linear.
But, when there is no more segmentation for the process, why do compilers still create variables on the stack (or, when global, directly in another memory area), rather than directly in the program code?
Let me give an example. I have the C code: int a=10;
which gets translated into (Intel syntax): mov [position of a], 10
But then you actually occupy more bytes in RAM than needed. Because the first few bytes are taken by the actual instruction, and after that instruction is done, there is a new byte containing the value 10.
Why, instead of this, when there is no need to switch any segment (which would slow the process down), isn't the value 10 just coded directly into the program like this:
xor eax,eax //just some instruction
10 //the value inserted into the program
call end //just some instruction
Because the compiler knows the exact position of every instruction, when operating with that variable it would just use its address.
I know that const variables do this, but they are not really variables, since you cannot change them.
I hope I explained my question well, but I am still learning English, so forgive my syntactical and even semantic errors.
EDIT:
I have read your answers, and it seems that based on those I can modify my question:
So, someone here said that a global variable is actually a piece of data attached directly to the program. I mean, when a variable is global, is it attached to the end of the program, or is it created like a local one at the time of execution, but directly on the heap instead of on the stack?
If it is the first case (attached to the program itself), why do local variables even exist? I know, you will tell me because of recursion, but that is not the whole story. When you call a function, you can push any memory space onto the stack, so there is no program code there.
I hope you understand me: there is always inefficient use of memory when some value (even 0) is created on the stack by some instruction, because you need space in the program for that instruction and then for the actual variable. Like so: push 5 //instruction that says to create a local variable with the integer 5
And then this instruction just makes the number 5 appear on the stack. Please help me, I really want to know why it is this way. Thanks.
Consider:
local variables may have more than one simultaneous existence if a routine is called recursively (even indirectly in, say, a recursive descent parser) or from more than one thread, and these cases occur in the same memory context
marking the program memory non-writable and the stack+heap non-executable is a small but useful defense against certain classes of attacks (stack smashing...) and is used by some OSes (I don't know if Windows does this, however)
Your proposal doesn't allow for either of these cases.
So, there is no far jumps, far pointers, memory segment or data segment. All is nice and linear.
Yes and no. Different program segments have different purposes - despite the fact that they reside within flat virtual memory. E.g. data segment is readable and writable, but you can't execute data. Code segment is readable and executable, but you can't write into it.
why does still compilers create variables on the stack, [...] than directly in program code?
Simple.
The code segment isn't writable. For safety reasons, first. Second, most CPUs do not like having the code segment written into, as it breaks many existing optimizations used to accelerate execution.
The state of the function has to be private to the function, due to things like recursion and multi-threading.
isn´t just a value of 10 coded directly into program like this
Modern CPUs prefetch instructions to allow things like parallel execution and out-of-order execution. Putting garbage into the code segment (and to the CPU, data in the instruction stream is garbage) would simply diminish (or flat out cancel) the effect of these techniques. And they are responsible for the lion's share of the performance gains CPUs have shown in the past decade.
when there is no need to switch any segment
So if there is no overhead in switching segments, why then put that into the code segment? There is no problem keeping it in the data segment.
Especially in case of read-only data segment, it makes sense to put all read-only data of the program into one place - since it can be shared by all instances of the running application, saving physical RAM.
Becouse compiler know the exact position of every instruction, when operating with that variable, it would just use it´s adress.
No, not really. Most code is relocatable or position-independent. The code is patched with real memory addresses when the OS loads it into memory. Actually, special techniques are used to avoid patching the code, so that the code segment too can be shared by all running application instances.
The ABI is responsible for defining how and what the compiler and linker are supposed to do for a program to be executable by the complying OS. I haven't seen the Windows ABI, but the ABIs used by Linux are easy to find: search for "AMD64 ABI". Even reading the Linux ABI might answer some of your questions.
What you are talking about is optimization, and that is the compiler's business. If nothing ever changes that value, and the compiler can figure that out, then the compiler is perfectly free to do just what you say (unless a is declared volatile).
Now if you are saying that you are seeing that the compiler isn't doing that, and you think it should, you'd have to talk to your compiler writer. If you are using VisualStudio, their address is One Microsoft Way, Redmond WA. Good luck knocking on doors there. :-)
Why isn't the value 10 just coded directly into the program like this:
xor eax,eax //just some instruction
10 //the value inserted into the program
call end //just some instruction
That is how global variables are stored. However, instead of being stuck in the middle of executable code (which is messy, and not even possible nowadays), they are stored just after the program code in memory (in Windows and Linux, at least), in what's called the .data section.
When it can, the compiler will move variables to the .data section to optimize performance. However, there are several reasons it might not:
Some variables cannot be made global, including instance variables for a class, parameters passed into a function (obviously), and variables used in recursive functions.
The variable still exists in memory somewhere, and still must have code to access it. Thus, memory usage will not change. In fact, on the x86 ("Intel"), according to this page the instruction to reference a local variable:
mov eax, [esp+8]
and the instruction to reference a global variable:
mov eax, [0xb3a7135]
both take 1 (one!) clock cycle.
The only advantage, then, is that if every local variable is global, you wouldn't have to make room on the stack for local variables.
Adding a variable to the .data segment may actually increase the size of the executable, since the variable is actually contained in the file itself.
As caf mentions in the comments, stack-based variables only exist while the function is running - global variables take up memory during the entire execution of the program.
Not quite sure what your confusion is.
int a = 10; means: make a spot in memory, and put the value 10 at that memory address.
If you want a to be 10:
#define a 10
though more typically:
#define TEN 10
Variables have storage space and can be modified. It makes no sense to stick them in the code segment, where they cannot be modified.
If you have code with int a=10 or even const int a=10, the compiler cannot always convert code which references a to use the constant 10 directly, because it has no way of knowing whether a may be changed behind its back (even const variables can be changed, e.g. through a cast, though that is undefined behavior). For example, one way a can be changed without the compiler knowing is if you have a pointer which points to a. Pointer targets are not fixed until runtime, so the compiler cannot always determine at compile time whether there will be a pointer which will point to and modify a.
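A minimal sketch of that aliasing problem (names invented for illustration): once the address of a escapes into a pointer that some other function writes through, the optimizer can no longer substitute the constant 10 for a.

```c
/* Writing through the pointer changes 'a' behind the optimizer's
   back, so the compiler must re-read 'a' rather than fold in 10. */
static void overwrite(int *p) {
    *p = 20;
}

int aliased(void) {
    int a = 10;         /* looks constant...            */
    int *p = &a;        /* ...but its address escapes   */
    overwrite(p);       /* a is now 20                  */
    return a;           /* must reflect the store: 20   */
}
```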
