I read about function pointers in C.
And everyone said that will make my program run slow.
Is it true?
I made a program to check it.
And I got the same results on both cases. (measure the time.)
So, is it bad to use function pointer?
Thanks in advance.
To respond to some of the comments: when I said 'run slow' I meant the time I measured by comparing the calls in a loop, like this:
void (*fp)(void);
int end = 1000;
int i = 0;
while (i < end) {
    fp = func;
    fp();
    i++;
}
When I execute this, I get the same time as when I execute this:
i = 0;
while (i < end) {
    func();
    i++;
}
So I think calling through a function pointer makes no difference in time, and it doesn't make a program run slow as many people said.
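To be clearer about how I measured it, here is a more complete version of my test (func() below is just a placeholder that does trivial work, and the iteration count is arbitrary):

#include <stdio.h>
#include <time.h>

static int counter;

/* Placeholder for whatever 'func' really does. */
static void func(void) { counter++; }

int main(void)
{
    void (*fp)(void) = func;
    const int end = 100000000;   /* arbitrary iteration count */
    clock_t start;

    start = clock();
    for (int i = 0; i < end; i++)
        func();                  /* direct call */
    printf("direct:   %ld ticks\n", (long)(clock() - start));

    start = clock();
    for (int i = 0; i < end; i++)
        fp();                    /* call through the pointer */
    printf("indirect: %ld ticks\n", (long)(clock() - start));

    return counter != 2 * end;   /* keep the calls from being optimized away */
}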
You see, in situations that actually matter from the performance point of view, like calling the function repeatedly many times in a cycle, the performance might not be different at all.
This might sound strange to people, who are used to thinking about C code as something executed by an abstract C machine whose "machine language" closely mirrors the C language itself. In such context, "by default" an indirect call to a function is indeed slower than a direct one, because it formally involves an extra memory access in order to determine the target of the call.
However, in real life the code is executed by a real machine and compiled by an optimizing compiler that has a pretty good knowledge of the underlying machine architecture, which helps it to generate the most optimal code for that specific machine. And on many platforms it might turn out that the most efficient way to perform a function call from a cycle actually results in identical code for both direct and indirect call, leading to the identical performance of the two.
Consider, for example, the x86 platform. If we "literally" translate a direct and indirect call into machine code, we might end up with something like this
// Direct call
do-it-many-times
call 0x12345678
// Indirect call
do-it-many-times
call dword ptr [0x67890ABC]
The former uses an immediate operand in the machine instruction and is indeed normally faster than the latter, which has to read the data from some independent memory location.
At this point let's remember that x86 architecture actually has one more way to supply an operand to the call instruction. It is supplying the target address in a register. And a very important thing about this format is that it is normally faster than both of the above. What does this mean for us? This means that a good optimizing compiler must and will take advantage of that fact. In order to implement the above cycle, the compiler will try to use a call through a register in both cases. If it succeeds, the final code might look as follows
// Direct call
mov eax, 0x12345678
do-it-many-times
call eax
// Indirect call
mov eax, dword ptr [0x67890ABC]
do-it-many-times
call eax
Note that now the part that matters - the actual call in the cycle body - is exactly the same in both cases. Needless to say, the performance is going to be virtually identical.
One might even say, however strange it might sound, that on this platform a direct call (a call with an immediate operand in call) is slower than an indirect call as long as the operand of the indirect call is supplied in a register (as opposed to being stored in memory).
Of course, the whole thing is not as easy in the general case. The compiler has to deal with limited availability of registers, aliasing issues etc. But in simplistic cases such as the one in your example (and even in much more complicated ones) the above optimization will be carried out by a good compiler and will completely eliminate any difference in performance between a cyclic direct call and a cyclic indirect call. This optimization works especially well in C++ when calling a virtual function, since in a typical implementation the pointers involved are fully controlled by the compiler, giving it full knowledge of the aliasing picture and other relevant details.
Of course, there's always a question of whether your compiler is smart enough to optimize things like that...
I think when people say this they're referring to the fact that using function pointers may prevent compiler optimizations (inlining) and processor optimizations (branch prediction). However, if function pointers are an effective way to accomplish something that you're trying to do, chances are that any other method of doing it would have the same drawbacks.
And unless your function pointers are being used in tight loops in a performance critical application or on a very slow embedded system, chances are the difference is negligible anyway.
And everyone said that will make my
program run slow. Is it true?
Most likely this claim is false. For one, if the alternative to using function pointers is something like
if (condition1) {
    func1();
} else if (condition2) {
    func2();
} else if (condition3) {
    func3();
} else {
    func4();
}
this is most likely noticeably slower than just using a single function pointer. While calling a function through a pointer does have some (typically negligible) overhead, it is normally not the direct-function-call versus through-pointer-call difference that is the relevant comparison anyway.
And secondly, never optimize for performance without measurements. Knowing where the bottlenecks are is very difficult (read: impossible) without measuring, and the results are often quite non-intuitive (for instance, the Linux kernel developers have started removing the inline keyword from functions because it actually hurt performance).
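To illustrate the first point, the function-pointer alternative to that if/else chain could look something like this; the handler names and the integer selector are made up for the sketch:

/* Hypothetical handlers matching the if/else chain above. */
void func1(void);
void func2(void);
void func3(void);
void func4(void);

typedef void (*handler_fn)(void);

void dispatch(int which)   /* 'which' stands in for the conditions above */
{
    static const handler_fn handlers[] = { func1, func2, func3, func4 };

    if (which >= 0 && which < 4)
        handlers[which]();  /* one indexed load and one indirect call */
}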
A lot of people have put in some good answers, but I still think there's a point being missed. Function pointers do add an extra dereference, which makes them several cycles slower, and that number can increase with poor branch prediction (which, incidentally, has almost nothing to do with the function pointer itself). Additionally, functions called via a pointer cannot be inlined. But what people are missing is that most people use function pointers as an optimization.
The most common place you will find function pointers in c/c++ APIs is as callback functions. The reason so many APIs do this is because writing a system that invokes a function pointer whenever events occur is much more efficient than other methods like message passing. Personally I've also used function pointers as part of a more-complex input processing system, where each key on the keyboard has a function pointer mapped to it via a jump table. This allowed me to remove any branching or logic from the input system and merely handle the key press coming in.
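A rough sketch of that kind of key-press jump table (the key codes and handler names here are invented):

#define NUM_KEYS 256

/* One handler per key code; unused_key is a do-nothing default. */
static void unused_key(void) { }
static void move_left(void)  { /* ... */ }
static void move_right(void) { /* ... */ }

static void (*key_handlers[NUM_KEYS])(void);

static void init_key_handlers(void)
{
    for (int i = 0; i < NUM_KEYS; i++)
        key_handlers[i] = unused_key;
    key_handlers['a'] = move_left;
    key_handlers['d'] = move_right;
}

/* Called from the input loop: no branching on which key it is. */
static void handle_key(unsigned char key)
{
    key_handlers[key]();
}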
Calling a function via a function pointer is somewhat slower than a static function call, since the former call includes an extra pointer dereferencing. But AFAIK this difference is negligible on most modern machines (except maybe some special platforms with very limited resources).
Function pointers are used because they can make the program much simpler, cleaner and easier to maintain (when used properly, of course). This more than makes up for the possible very minor speed difference.
A lot of good points in earlier replies.
However, take a look at the C qsort comparison function. Because the comparison function cannot be inlined and needs to follow standard stack-based calling conventions, the total running time for the sort can be an order of magnitude (more exactly 3-10x) slower for integer keys than otherwise identical code with a direct, inlineable call.
A typical inlined comparison would be a sequence of simple CMP and possibly CMOV/SET instructions. A function call also incurs the overhead of a CALL, setting up a stack frame, doing the comparison, tearing down the stack frame and returning the result. Note that the stack operations can cause pipeline stalls due to CPU pipeline length and virtual registers: for example, if the value of, say, eax is needed before the instruction that last modified eax has finished executing (which typically takes about 12 clock cycles on the newest processors), a pipeline stall will occur unless the CPU can execute other instructions out of order while it waits.
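For concreteness, this is the kind of call the above is describing: qsort() receives the comparison through a function pointer, so every single comparison is a full indirect call rather than an inlined CMP:

#include <stdlib.h>

/* Called through a function pointer for every comparison qsort makes. */
static int cmp_int(const void *a, const void *b)
{
    int x = *(const int *)a;
    int y = *(const int *)b;
    return (x > y) - (x < y);
}

void sort_ints(int *v, size_t n)
{
    qsort(v, n, sizeof v[0], cmp_int);
}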
Using a function pointer is slower than just calling a function, as it is another layer of indirection (the pointer needs to be dereferenced to get the address of the function). While it is slower, compared to everything else your program may do (reading a file, writing to the console) it is negligible.
If you need function pointers, use them: anything that tries to do the same thing while avoiding them will be slower and less maintainable than using function pointers.
Possibly.
The answer depends on what the function pointer is being used for and hence what the alternatives are. Comparing function pointer calls to direct function calls is misleading if a function pointer is being used to implement a choice that's part of our program logic and which can't simply be removed. I'll go ahead and nonetheless show that comparison and come back to this thought afterwards.
Function pointer calls have the most opportunity to degrade performance compared to direct function calls when they inhibit inlining. Because inlining is a gateway optimization, we can craft wildly pathological cases where function pointers are made arbitrarily slower than the equivalent direct function call:
void foo(int* x) {
    *x = 0;
}
void (*foo_ptr)(int*) = foo;

int call_foo(int *p, int size) {
    int r = 0;
    for (int i = 0; i != size; ++i)
        r += p[i];
    foo(&r);
    return r;
}

int call_foo_ptr(int *p, int size) {
    int r = 0;
    for (int i = 0; i != size; ++i)
        r += p[i];
    foo_ptr(&r);
    return r;
}
Code generated for call_foo():
call_foo(int*, int):
        xor eax, eax
        ret
Nice. foo() has not only been inlined, but doing so has allowed the compiler to eliminate the entire preceding loop! The generated code simply zeroes out the return register by XORing the register with itself and then returns. On the other hand, compilers will have to generate code for the loop in call_foo_ptr() (100+ lines with gcc 7.3) and most of that code effectively does nothing (so long as foo_ptr still points to foo()). (In more typical scenarios, you can expect that inlining a small function into a hot inner loop might reduce execution time by up to about an order of magnitude.)
So in a worst case scenario, a function pointer call is arbitrarily slower than a direct function call, but this is misleading. It turns out that if foo_ptr had been const, then call_foo() and call_foo_ptr() would have generated the same code. However, this would require us to give up the opportunity for indirection provided by foo_ptr. Is it "fair" for foo_ptr to be const? If we're interested in the indirection provided by foo_ptr, then no, but if that's the case, then a direct function call is not a valid option either.
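(For reference, the const declaration that allows this is just the following, with everything else unchanged from the example above.)

void (* const foo_ptr)(int*) = foo;   /* the compiler may now treat calls through it as direct */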
If a function pointer is being used to provide useful indirection, then we can move the indirection around or in some cases swap out function pointers for conditionals or even macros, but we can't simply remove it. If we've decided that function pointers are a good approach but performance is a concern, then we typically want to pull indirection up the call stack so that we pay the cost of indirection in an outer loop. For example, in the common case where a function takes a callback and calls it in a loop, we might try moving the innermost loop into the callback (and changing the responsibility of each callback invocation accordingly).
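A sketch of that transformation, with invented names: instead of the caller invoking the callback once per element, the callback receives the whole range and loops itself, so the indirect-call cost is paid once per range.

/* Fine-grained: the callback is invoked once per element
   (one indirect call each time around the loop). */
void for_each(int *v, int n, void (*f)(int *elem))
{
    for (int i = 0; i < n; i++)
        f(&v[i]);
}

/* Coarse-grained: the callback receives the whole range and loops itself,
   so the indirect call is paid once instead of once per element. */
void for_range(int *v, int n, void (*f)(int *begin, int count))
{
    f(v, n);
}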
Related
I know that defining a function pointer in this way
struct handler_index {
    const char *name;
    int (*handler)();
};
allows the handler pointer to be used with any function that takes an unspecified (but not variadic) number and types of parameters,
but I'm wondering if this definition could affect code optimization, memory use or execution time compared to this:
struct handler_index {
    const char *name;
    int (*handler)(int a, int b);
};
If you're wondering how adding more parameters to a function pointer affects specifically the function pointer -- it doesn't. Function pointers are all the same size regardless of the number of parameters.
If you're wondering about efficiency of calling such a function pointer: adding more parameters will result in code being generated to pass arguments. So yea, it will affect code size of the call slightly, and maybe execution time depending on how much ILP your CPU can pull off while passing those args.
Modern calling conventions often pass some number of arguments in registers, so you may or may not have an increase in stack usage.
To see exactly what the difference is between code generated for each call, read up on calling conventions (there are too many to list here!) and check the asm generated from your code. But really, adding more parameters (within reason) is probably going to have such a vanishingly small effect that it simply doesn't matter.
As Cory says, it's not really relevant whether it is a function pointer or just a regular function, except in cases where a regular function gets inlined, which function pointers generally can't be (although, if the situation is specific enough, I have seen the compiler actually figure out "Ah, we're always calling function X here, so let's inline X", typically when the function pointer is an argument to a function rather than, say, stored in a structure).
What WILL make a difference is adding arguments to function calls in general. The processor will have to place those arguments somewhere, and even if they go in registers, it may require extra instructions to get the value into the RIGHT register.
However, your first example is very bad because there's no check that your code is doing the right thing.
Further, you need some pretty pathological cases for the overhead of passing arguments to be a significant part of the cost of calling through a function pointer - hopefully your function does more than add one number to another.
Having said that, passing LOTS of arguments, especially "hard to get" arguments can be really bad. I was working on a graphics chip simulator, and part of the pixelshader processing unit had a debug print in the middle of it, which rarely got printed, but it took something like 7 or 8 arguments (aside from the debug level of 1000 or whatever it was). Fishing out those arguments from the respective structures and sticking them on the stack took quite some time, and putting an "if (debuglevel >= 1000) ..." so that the call was only made when it was actually needed, made the code some 40% faster in that function.
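The guard amounts to something like this; the structure fields and debug_print() are invented stand-ins for the real simulator code:

/* Hypothetical stand-in for the simulator's debug output routine. */
struct pixel_state { int stage, unit, x, y, r, g, b, a; };

void debug_print(int level, int stage, int unit, int x, int y,
                 int r, int g, int b, int a);

void process_pixel(const struct pixel_state *px, int debuglevel)
{
    /* ... real pixel-shader work ... */

    /* Gather the eight arguments only when the message will actually print. */
    if (debuglevel >= 1000)
        debug_print(1000, px->stage, px->unit, px->x, px->y,
                    px->r, px->g, px->b, px->a);
}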
I saw some code that was something like this
int *func2(int *var) {
    // Do some actual work
    return var;
}

int *func1(int *var) {
    return func2(var);
}

int main() {
    int var;
    int *p = func1(&var);
    return 0;
}
This seems like an incredible waste to me, but I figured the intermediate function might previously have had two functions it could call, or there are plans for expansion in the future. I was just wondering if compilers like gcc can detect this sort of thing and eliminate the useless function in the actual program, or if this sort of thing actually wastes CPU cycles at runtime?
Don't do premature optimization. Focus on writing readable code. Even without optimization, the extra function call probably has a minimal effect on performance. The compiler may choose to inline it.
If you have performance issues later, you can test and profile to find the bottlenecks.
In most cases, if you turn up the compiler optimizations high enough, such trivial functions will be inlined. Thus there is no overhead.
So the answer to your question is: Yes, the compiler is usually smart enough to eliminate the call.
So don't worry about it unless you need to.
You can also use the inline keyword to make it more explicit: (although the compiler is still free to ignore it)
inline int *func1(int *var) {
    return func2(var);
}
The super quick answer: Yes, maybe.
The quick answer: Yes, but usually not enough that you should care, and sometimes not at all.
The full answer: If the functions are all in the same translation unit and the compiler doesn't suck, the extra layer of function call will just get optimized out, and there will be zero impact on performance. Otherwise, if you're making external function calls, expect a small but nonzero performance cost. Most of the time it doesn't matter, but on functions that are super-short where every cycle counts, it could make your program twice as slow, or worse. Some worst-case examples:
A function like getc that just pulls the next byte from a buffer, advances the position, and returns (in the common case where the buffer is non-empty).
A function that advances a state machine via a trivial operation and returns, for example processing a single byte of a UTF-8 character.
Locking/synchronization primitives. This is a bit of a special case because the actual atomic memory access should dominate the execution time, making overhead seem insignificant. But if your intended usage case is just to hold locks for a single trivial operation (e.g. lock(); a++; unlock();) then even a small amount of added time with the lock held could have drastic effects on contention performance if the lock is highly contended.
Finally, the what-you-should-do answer: Write your code in the most natural way possible until testing/measurement shows you there's a performance problem. Only then should you think about uglifying your code for the sake of performance.
It would depend on the compiler and runtime environment. If the call uses the stack, there will be slight extra overhead from adjusting the stack pointer. But in this case everything is by pointer, so this might as well be a tail call and will probably be inlined. Inlining or a tail call makes the function behave like a jump instead of pushing a new stack frame, meaning it would behave like a loop or a goto when running.
I have a function and I'm accessing a struct's members a lot of times in it.
What I was wondering about is what is the good practice to go about this?
For example:
struct s
{
    int x;
    int y;
};
and I have allocated memory for 10 objects of that struct using malloc.
So, whenever I need to use only one of the objects in a function, I usually create a pointer (or have one passed as an argument) and point it to the required object. (My superior told me to avoid array indexing because it adds a calculation when accessing any member of the struct.)
But is this the right way? I understand that dereferencing is not as expensive as creating a copy, but what if I'm dereferencing a number of times (like 20 to 30) in the function?
Would it be better if I created temporary variables for the struct members (only the ones I need, I certainly don't use all of them), copied over the values, and then set the actual struct's values before returning?
Also, is this unnecessary micro optimization? Please note that this is for embedded devices.
This is for an embedded system. So, I can't make any assumptions about what the compiler will do. I can't make any assumptions about word size, or the number of registers, or the cost of accessing off the stack, because you didn't tell me what the architecture is. I used to do embedded code on 8080s when they were new...
OK, so what to do?
Pick a real section of code and code it up. Code it up each of the different ways you have listed above. Compile it. Find the compiler option that forces it to print out the assembly code that is produced. Compile each piece of code with every different set of optimization options. Grab the reference manual for the processor and count the cycles used by each case.
Now you will have real data on which to base a decision. Real data is much better than the opinions of a million highly experienced expert programmers. Sit down with your lead programmer and show him the code and the data. He may well show you better ways to code it. If so, recode it his way, compile it, and count the cycles used by his code. Show him how his way worked out.
At the very worst you will have spent a weekend learning something very important about the way your compiler works. You will have examined N ways to code things times M different sets of optimization options. You will have learned a lot about the instruction set of the machine. You will have learned how good, or bad, the compiler is. You will have had a chance to get to know your lead programmer better. And, you will have real data.
Real data is the kind of data that you must have to answer this question. Without that data, nothing anyone tells you is anything but an ego-based guess. Data answers the question.
Bob Pendleton
First of all, indexing an array is not very expensive (only like one operation more expensive than a pointer dereference, or sometimes none, depending on the situation).
Secondly, most compilers will perform what is called RVO or return value optimisation when returning structs by value. This is where the caller allocates space for the return value of the function it calls, and secretly passes the address of that memory to the function for it to use, and the effect is that no copies are made. It does this automatically, so
struct mystruct blah = func();
constructs only one object; func uses that storage transparently to the programmer, and no copying needs to be done.
What I do not know is if you assign an array index the return value of the function, like this:
someArray[0] = func();
will the compiler pass the address of someArray[0] and do RVO that way, or will it just not do that optimisation? You'll have to get a more experienced programmer to answer that. I would guess that the compiler is smart enough to do it though, but it's just a guess.
And yes, I would call it micro optimisation. But we're C programmers. And that's how we roll.
Generally, the case in which you want to make a copy of a passed struct in C is when you want to manipulate the data without the changes being reflected in the struct itself, but rather only in the return value. As for which is more expensive, it depends on a lot of things, many of which change from implementation to implementation, so I would need more specific information to be more helpful. Though I would expect that in an embedded environment your memory is at a greater premium than your processing power. Really, this reads like needless micro-optimization; your compiler should handle it.
In this case, creating temporary variables on the stack will be faster. But if your structure is much bigger, then you might be better off with dereferencing.
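A sketch of the two patterns being compared, using struct s as defined in the question; which one actually wins depends on the compiler and the target:

/* Pattern 1: work through the pointer every time. */
int sum_via_pointer(struct s *obj, int n)
{
    int total = 0;
    for (int i = 0; i < n; i++)
        total += obj->x * obj->y;     /* dereference on every access */
    return total;
}

/* Pattern 2: copy the members into locals once, write back if needed. */
int sum_via_locals(struct s *obj, int n)
{
    int x = obj->x, y = obj->y;       /* likely kept in registers */
    int total = 0;
    for (int i = 0; i < n; i++)
        total += x * y;
    return total;
}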
My function will be called thousands of times. If I want to make it faster, will changing the local function variables to static be of any use? My logic behind this is that, because static variables are persistent between function calls, they are allocated only the first time, and thus every subsequent call will not allocate memory for them and will become faster, because the memory allocation step is not done.
Also, if the above is true, then would using global variables instead of parameters be faster to pass information to the function every time it is called? I think space for parameters is also allocated on every function call, to allow for recursion (that's why recursion uses up more memory), but since my function is not recursive, and if my reasoning is correct, then taking away the parameters will in theory make it faster.
I know these things I want to do are horrible programming habits, but please, tell me if it is wise. I am going to try it anyway but please give me your opinion.
The overhead of local variables is zero. Each time you call a function, you are already setting up the stack for the parameters, return values, etc. Adding local variables means that you're adding a slightly bigger number to the stack pointer (a number which is computed at compile time).
Also, local variables are probably faster due to cache locality.
If you are only calling your function "thousands" of times (not millions or billions), then you should be looking at your algorithm for optimization opportunities after you have run a profiler.
Re: cache locality (read more here):
Frequently accessed global variables probably have temporal locality. They also may be copied to a register during function execution, but will be written back into memory (cache) after a function returns (otherwise they wouldn't be accessible to anything else; registers don't have addresses).
Local variables will generally have both temporal and spatial locality (they get that by virtue of being created on the stack). Additionally, they may be "allocated" directly to registers and never be written to memory.
The best way to find out is to actually run a profiler. This can be as simple as executing several timed tests using both methods and then averaging out the results and comparing, or you may consider a full-blown profiling tool which attaches itself to a process and graphs out memory use over time and execution speed.
Do not perform random micro code-tuning because you have a gut feeling it will be faster. Compilers all have slightly different implementations of things and what is true on one compiler on one environment may be false on another configuration.
To tackle that comment about fewer parameters: the process of "inlining" functions essentially removes the overhead related to calling a function. Chances are a small function will be automatically in-lined by the compiler, but you can suggest a function be inlined as well.
In a different language, C++, the new standard coming out supports perfect forwarding, and perfect move semantics with rvalue references which removes the need for temporaries in certain cases which can reduce the cost of calling a function.
I suspect you're prematurely optimizing, though; you should not be this concerned with performance until you've discovered your real bottlenecks.
Absolutely not! The only "performance" difference is when variables are initialised:
int anint = 42;
vs
static int anint = 42;
In the first case the integer will be set to 42 every time the function is called; in the second case it will be set to 42 when the program is loaded.
However the difference is so trivial as to be barely noticeable. It's a common misconception that storage has to be allocated for "automatic" variables on every call. This is not so; C uses the already allocated space on the stack for these variables.
Static variables may actually slow you down, as some aggressive optimisations are not possible on static variables. Also, as locals are in a contiguous area of the stack, they are easier to cache efficiently.
There is no one answer to this. It will vary with the CPU, the compiler, the compiler flags, the number of local variables you have, what the CPU's been doing before you call the function, and quite possibly the phase of the moon.
Consider two extremes: if you have only one or a few local variables, they might easily be stored in registers rather than being allocated memory locations at all. If register "pressure" is sufficiently low, this may happen without executing any instructions at all.
At the opposite extreme there are a few machines (e.g., IBM mainframes) that don't have stacks at all. In this case, what we'd normally think of as stack frames are actually allocated as a linked list on the heap. As you'd probably guess, this can be quite slow.
When it comes to accessing the variables, the situation's somewhat similar -- access to a machine register is pretty well guaranteed to be faster than anything allocated in memory can possibly hope for. OTOH, it's possible for access to variables on the stack to be pretty slow -- it normally requires something like an indexed indirect access, which (especially with older CPUs) tends to be fairly slow. OTOH, access to a global (which a static is, even though its name isn't globally visible) typically requires forming an absolute address, which some CPUs penalize to some degree as well.
Bottom line: even the advice to profile your code may be misplaced -- the difference may easily be so tiny that even a profiler won't detect it dependably, and the only way to be sure is to examine the assembly language that's produced (and spend a few years learning assembly language well enough to know what you're looking at). The other side of this is that when you're dealing with a difference you can't even measure dependably, the chances that it'll have a material effect on the speed of real code are so remote that it's probably not worth the trouble.
It looks like static vs non-static has been completely covered, but on the topic of global variables: often these will slow down a program's execution rather than speed it up.
The reason is that tightly scoped variables make it easy for the compiler to optimise heavily; if the compiler has to look all over your application for places where a global might be used, then its optimisation won't be as good.
This is compounded when you introduce pointers. Say you have the following code:
int myFunction()
{
    SomeStruct *A, *B;   /* assume these point at valid SomeStruct objects */
    FillOutSomeStruct(B);
    memcpy(A, B, sizeof(*A));
    return A->result;
}
The compiler knows that the pointers A and B can never overlap, and so it can optimise the copy. If A and B are global, they could possibly point to overlapping or identical memory, which means the compiler must 'play it safe', which is slower. The problem is generally called 'pointer aliasing' and can occur in lots of situations, not just memory copies.
http://en.wikipedia.org/wiki/Pointer_alias
Using static variables may make a function a tiny bit faster. However, this will cause problems if you ever want to make your program multi-threaded. Since static variables are shared between function invocations, invoking the function simultaneously in different threads will result in undefined behaviour. Multi-threading is the type of thing you may want to do in the future to really speed up your code.
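A minimal illustration of that problem: a function that keeps its result in a static buffer cannot safely be called from two threads at once, because both calls share the same storage.

#include <stdio.h>

/* Not thread-safe: every caller shares the one static buffer. */
const char *format_id(int id)
{
    static char buf[32];
    snprintf(buf, sizeof buf, "ID-%04d", id);
    return buf;    /* a second, concurrent call overwrites this result */
}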
Most of the things you mentioned are referred to as micro-optimizations. Generally, worrying about these kind of things is a bad idea. It makes your code harder to read, and harder to maintain. It's also highly likely to introduce bugs. You'll likely get more bang for your buck doing optimizations at a higher level.
As M2tM suggests, running a profiler is also a good idea. Check out gprof for one which is quite easy to use.
You can always time your application to truly determine what is fastest. Here is what I understand: (all of this depends on the architecture of your processor, btw)
C functions create a stack frame, which is where passed parameters, local variables, and the return address (back to the caller) are put. There is no memory-management allocation here; it's usually a simple pointer adjustment and that's it. Accessing data off the stack is also pretty quick. Penalties usually come into play when you're dealing with pointers.
As for global or static variables, they're the same...from the standpoint that they're going to be allocated in the same region of memory. Accessing these may use a different method of access than local variables, depends on the compiler.
The major difference between your scenarios is memory footprint, not so much speed.
Using static variables can actually make your code significantly slower. Static variables must exist in a 'data' region of memory. In order to use that variable, the function must execute a load instruction to read from main memory, or a store instruction to write to it. If that region is not in the cache, you lose many cycles. A local variable that lives on the stack will most surely have an address that is in the cache, and might even be in a cpu register, never appearing in memory at all.
I agree with the other comments about profiling to find out stuff like that, but generally speaking, function static variables should be slower. If you want them, what you are really after is a global. Function statics insert code/data to check whether the thing has already been initialized, and that check runs every time your function is called.
Profiling may not show the difference; disassembling and knowing what to look for might.
I suspect you are only going to get a variation of, at most, a few clock cycles per loop (on average, depending on the compiler, etc.). Sometimes the change will be a dramatic improvement or dramatically slower, and that won't necessarily be because the variable's home has moved to/from the stack. Let's say you save four clock cycles per function call for 10000 calls on a 2GHz processor. Very rough calculation: 20 microseconds saved. Is 20 microseconds a lot or a little compared to your current execution time?
You will likely get more of a performance improvement by making all of your char and short variables into ints, among other things. Micro-optimization is a good thing to know, but it takes lots of time experimenting, disassembling, and timing the execution of your code, and understanding that fewer instructions does not necessarily mean faster, for example.
Take your specific program, disassemble both the function in question and the code that calls it. With and without the static. If you gain only one or two instructions and this is the only optimization you are going to do, it is probably not worth it. You may not be able to see the difference while profiling. Changes in where the cache lines hit could show up in profiling before changes in the code for example.
Very quick question for you. When I store some automatic variable in C, the asm output is like this: MOV ESP+4, #25h, and I just want to know why the compiler can't calculate that ESP+4 address itself.
I thought this through, and I really can't find a reason for this. I mean, isn't the compiler aware of the esp value? It should be. And when using another object file, this should not be a problem either, since variables could just be represented by addresses and linked later, when all automatic variables are known, and therefore proper addresses could be assigned. Thanks.
No, it cannot know the value of esp in advance.
Take, for example, a recursive function, i.e. a function that calls itself. Assume such a function has several parameters that are passed in via the stack. This means that each argument takes some space on the stack, thereby changing the value of the esp register.
Now, when the function is entered, the exact value of esp will depend on how many times the function has called itself previously, and there is no way the compiler could know this at compile time. If you doubt this, take a function such as this:
#include <stdlib.h>

void foobar(int n)
{
    if (rand() % n != 17)
        foobar(n + 1);
}
There's no way the compiler would be smart enough in advance to figure out if the function will call itself once more.
If the compiler wanted to determine esp in advance, it would effectively have to create a version of the function for each possible value for esp.
The above explanation only takes into account one function. In a real-world scenario, a program has many functions which interdepend on one another, which results in fairly complex "call graphs". This, together with (among other things) unpredictable program logic, means the compiler would have to create a huge array of versions of each function, just to optimise on esp -- which clearly doesn't make sense.
P.S.: Now something else. You don't actually need to optimise [esp+N] at all, because it should not take any more CPU time than the simpler [esp]... at least not on Intel Pentium CPUs. You could say that they already contain optimizations for exactly this and even more complicated scenarios. If you're interested in the Intel CPUs, I suggest you look up the documentation for something called the MOD R/M and the SIB byte of a machine instruction, e.g. here for the SIB byte or here or, of course, in Intel's official CPU developer documentation.
No, the compiler is not aware of the value of ESP at runtime - it's the stack pointer. It is potentially different every time the function is called. Perhaps the simplest example to think about is a recursive function - every time it calls itself, the stack gets a little bit deeper to accommodate the local variables for the new call. Every stack frame has its own local variable, every stack frame is at a different position on the stack, and therefore has its own address (in ESP, normally).
The Stack Pointer cannot be calculated at compile time. For a simple example why this is not possible, just think of a recursive function: The same variable has a different address for each call, but it's always the same code that is run.
No, the compiler doesn't know the value ahead of time. In a few extremely basic programs (where there's only one possible "route" from main to any other particular function being called) it could, but I don't know of a compiler that attempts to compute this. If you have any recursion, or a function is called from more than one place, then the stack pointer will have different values depending on where it was called from.
There's not much point to doing so in any case -- since the stack pointer is so heavily used, most CPUs are designed to make indirect addressing from the stack pointer extremely efficient. In fact, it's often more efficient than supplying an absolute address would be.
This is really rather fundamental to the way the stack works. To reason it out for yourself, imagine how you'd implement a recursive function.