Many years ago, C compilers were not particularly smart. As a workaround, K&R invented the register keyword to hint to the compiler that it might be a good idea to keep a particular variable in an internal register. They also made the ternary operator to help generate better code.
As time passed, the compilers matured. They became so smart that their flow analysis allows them to make better decisions about which values to hold in registers than you could possibly make. The register keyword became unimportant.
FORTRAN can be faster than C for some sorts of operations, due to aliasing issues. In theory, with careful coding, one can get around this restriction to enable the optimizer to generate faster code.
What coding practices are available that may enable the compiler/optimizer to generate faster code?
Identifying the platform and compiler you use would be appreciated.
Why does the technique seem to work?
Sample code is encouraged.
Here is a related question
[Edit] This question is not about the overall process of profiling and optimizing. Assume that the program has been written correctly, compiled with full optimization, tested and put into production. There may be constructs in your code that prohibit the optimizer from doing the best job that it can. What refactoring can you do that will remove these prohibitions and allow the optimizer to generate even faster code?
[Edit] Offset related link
Here's a coding practice to help the compiler create fast code—any language, any platform, any compiler, any problem:
Do not use any clever tricks which force, or even encourage, the compiler to lay variables out in memory (including cache and registers) as you think best. First write a program which is correct and maintainable.
Next, profile your code.
Then, and only then, you might want to start investigating the effects of telling the compiler how to use memory. Make 1 change at a time and measure its impact.
Expect to be disappointed and to have to work very hard indeed for small performance improvements. Modern compilers for mature languages such as Fortran and C are very, very good. If you read an account of a 'trick' to get better performance out of code, bear in mind that the compiler writers have also read about it and, if it is worth doing, probably implemented it. They probably wrote what you read in the first place.
Write to local variables and not output arguments! This can be a huge help for getting around aliasing slowdowns. For example, if your code looks like
void DoSomething(const Foo& foo1, const Foo* foo2, int numFoo, Foo& barOut)
{
for (int i = 0; i < numFoo; i++)
{
barOut.munge(foo1, foo2[i]);
}
}
the compiler doesn't know that foo1 != barOut, and thus has to reload foo1 each time through the loop. It also can't read foo2[i] until the write to barOut is finished. You could start messing around with restricted pointers, but it's just as effective (and much clearer) to do this:
void DoSomethingFaster(const Foo& foo1, const Foo* foo2, int numFoo, Foo& barOut)
{
Foo barTemp = barOut;
for (int i = 0; i < numFoo; i++)
{
barTemp.munge(foo1, foo2[i]);
}
barOut = barTemp;
}
It sounds silly, but the compiler can be much smarter dealing with the local variable, since it can't possibly overlap in memory with any of the arguments. This can help you avoid the dreaded load-hit-store (mentioned by Francis Boivin in this thread).
The order in which you traverse memory can have profound impacts on performance, and compilers aren't really good at figuring that out and fixing it. You have to be mindful of cache locality concerns when you write code if you care about performance. For example, two-dimensional arrays in C are allocated in row-major format. Traversing arrays in column-major format will tend to cause more cache misses and make your program more memory bound than processor bound:
#define N 1000000
int matrix[N][N] = { ... };
//awesomely fast
long sum = 0;
for(int i = 0; i < N; i++){
for(int j = 0; j < N; j++){
sum += matrix[i][j];
}
}
//painfully slow
long sum = 0;
for(int i = 0; i < N; i++){
for(int j = 0; j < N; j++){
sum += matrix[j][i];
}
}
Generic Optimizations
Here are some of my favorite optimizations. I have actually decreased execution times and reduced program sizes by using these.
Declare small functions as inline or macros
Each call to a function (or method) incurs overhead, such as pushing variables onto the stack. Some functions may incur overhead on return as well. A function or method whose body contains fewer statements than that combined overhead is a good candidate for inlining, whether as a #define macro or an inline function. (Yes, I know inline is only a suggestion, but in this case I consider it a reminder to the compiler.)
Remove dead and redundant code
If the code isn't used or does not contribute to the program's result, get rid of it.
Simplify design of algorithms
I once removed a lot of assembly code and execution time from a program by writing down the algebraic equation it was calculating and then simplified the algebraic expression. The implementation of the simplified algebraic expression took up less room and time than the original function.
Loop Unrolling
Each loop has an overhead of incrementing and termination checking. To get an estimate of the performance factor, count the number of instructions in the overhead (minimum 3: increment, check, goto start of loop) and divide by the number of statements inside the loop. The lower the number the better.
Edit: provide an example of loop unrolling
Before:
unsigned int sum = 0;
for (size_t i = 0; i < BYTES_TO_CHECKSUM; ++i)
{
sum += *buffer++;
}
After unrolling:
unsigned int sum = 0;
size_t i = 0;
const size_t STATEMENTS_PER_LOOP = 8;
for (i = 0; i + STATEMENTS_PER_LOOP <= BYTES_TO_CHECKSUM; i += STATEMENTS_PER_LOOP)
{
sum += *buffer++; // 1
sum += *buffer++; // 2
sum += *buffer++; // 3
sum += *buffer++; // 4
sum += *buffer++; // 5
sum += *buffer++; // 6
sum += *buffer++; // 7
sum += *buffer++; // 8
}
// Handle the remainder:
for (; i < BYTES_TO_CHECKSUM; ++i)
{
sum += *buffer++;
}
A secondary benefit is gained as well: more statements are executed before the processor has to reload the instruction cache.
I've had amazing results when I unrolled a loop to 32 statements. This was one of the bottlenecks since the program had to calculate a checksum on a 2GB file. This optimization combined with block reading improved performance from 1 hour to 5 minutes. Loop unrolling provided excellent performance in assembly language too, my memcpy was a lot faster than the compiler's memcpy. -- T.M.
Reduction of if statements
Processors hate branches, or jumps, since they force the processor to reload its queue of instructions.
Boolean Arithmetic (Edited: applied code format to code fragment, added example)
Convert if statements into boolean assignments. Some processors can conditionally execute instructions without branching:
bool status = true;
status = status && /* first test */;
status = status && /* second test */;
The short circuiting of the Logical AND operator (&&) prevents execution of the tests if the status is false.
Example:
struct Reader_Interface
{
virtual bool write(unsigned int value) = 0;
};
struct Rectangle
{
unsigned int origin_x;
unsigned int origin_y;
unsigned int height;
unsigned int width;
bool write(Reader_Interface * p_reader)
{
bool status = false;
if (p_reader)
{
status = p_reader->write(origin_x);
status = status && p_reader->write(origin_y);
status = status && p_reader->write(height);
status = status && p_reader->write(width);
}
return status;
}
};
Factor Variable Allocation outside of loops
If a variable is created on the fly inside a loop, move the creation / allocation to before the loop. In most instances, the variable doesn't need to be allocated during each iteration.
Factor constant expressions outside of loops
If a calculation or variable value does not depend on the loop index, move it outside (before) the loop.
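For example, a sketch with made-up variable names (not from the original answer):

// before: strlen(name) and scale * offset are recomputed on every iteration
for (size_t i = 0; i < strlen(name); i++)
    out[i] = in[i] * scale * offset;

// after: the loop-invariant work is hoisted out and done once
const size_t len = strlen(name);
const double factor = scale * offset;
for (size_t i = 0; i < len; i++)
    out[i] = in[i] * factor;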
I/O in blocks
Read and write data in large chunks (blocks). The bigger the better. For example, reading one octet at a time is less efficient than reading 1024 octets with one read.
Example:
static const char Menu_Text[] = "\n"
"1) Print\n"
"2) Insert new customer\n"
"3) Destroy\n"
"4) Launch Nasal Demons\n"
"Enter selection: ";
static const size_t Menu_Text_Length = sizeof(Menu_Text) - sizeof('\0');
//...
std::cout.write(Menu_Text, Menu_Text_Length);
The efficiency of this technique can be visually demonstrated. :-)
Don't use printf family for constant data
Constant data can be output using a block write. Formatted write will waste time scanning the text for formatting characters or processing formatting commands. See above code example.
Format to memory, then write
Format to a char array using multiple sprintf calls, then use fwrite. This also allows the data layout to be broken up into "constant sections" and variable sections. Think of mail-merge.
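A rough sketch of the idea (the record fields are made up, and snprintf is used here for safety):

#include <stdio.h>

void write_record(FILE * out, unsigned int id, const char * name, long balance)
{
    char line[256];
    /* format once into memory... */
    int length = snprintf(line, sizeof(line),
                          "Customer %u: %s, balance %ld\n", id, name, balance);
    /* ...then emit it with a single block write */
    if (length > 0)
    {
        fwrite(line, 1, (size_t)length, out);
    }
}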
Declare constant text (string literals) as static const
When variables are declared without static, some compilers may allocate space on the stack and copy the data from ROM. Those are two unnecessary operations. This can be fixed by using the static prefix.
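For instance (illustrative only):

#include <stdio.h>

void prompt_slow(void)
{
    char text[] = "Enter selection: ";   /* array built on the stack: the literal may be copied on every call */
    fputs(text, stdout);
}

void prompt_fast(void)
{
    static const char text[] = "Enter selection: ";   /* one read-only copy, no per-call work */
    fputs(text, stdout);
}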
Lastly, Code like the compiler would
Sometimes, the compiler can optimize several small statements better than one complicated version. Also, writing code to help the compiler optimize helps too. If I want the compiler to use special block transfer instructions, I will write code that looks like it should use the special instructions.
The optimizer isn't really in control of the performance of your program, you are. Use appropriate algorithms and structures and profile, profile, profile.
That said, you shouldn't call a small function defined in another file from an inner loop, as that stops it from being inlined.
Avoid taking the address of a variable if possible. Asking for a pointer isn't "free" as it means the variable needs to be kept in memory. Even an array can be kept in registers if you avoid pointers — this is essential for vectorizing.
Which leads to the next point: read the ^#$# manual! GCC can vectorize plain C code if you sprinkle a __restrict__ here and an __attribute__((aligned(...))) there. If you want something very specific from the optimizer, you might have to be specific.
On most modern processors, the biggest bottleneck is memory.
Aliasing: Load-Hit-Store can be devastating in a tight loop. If you're reading one memory location and writing to another and know that they are disjoint, carefully putting a restrict (or compiler-specific __restrict) qualifier on the function parameters can really help the compiler generate faster code. However, if the memory regions do overlap and you promised no aliasing, you're in for a good debugging session of undefined behaviors!
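A minimal sketch using C99 restrict (spellings such as __restrict vary by compiler), so the compiler may assume the two buffers never overlap:

void scale_buffer(float * restrict dst, const float * restrict src, int n, float k)
{
    /* with the no-overlap promise, loads from src and stores to dst can be
       kept in registers, reordered and vectorized without reload checks */
    for (int i = 0; i < n; i++)
    {
        dst[i] = src[i] * k;
    }
}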
Cache-miss: Not really sure how you can help the compiler since it's mostly algorithmic, but there are intrinsics to prefetch memory.
Also don't try to convert floating point values to int and vice versa too much since they use different registers and converting from one type to another means calling the actual conversion instruction, writing the value to memory and reading it back in the proper register set.
The vast majority of code that people write will be I/O bound (I believe all the code I have written for money in the last 30 years has been so bound), so the activities of the optimiser for most folks will be academic.
However, I would remind people that for the code to be optimised you have to tell the compiler to optimise it - lots of people (including me when I forget) post C++ benchmarks here that are meaningless without the optimiser being enabled.
use const correctness as much as possible in your code. It allows the compiler to optimize much better.
In this document are loads of other optimization tips: CPP optimizations (a bit old document though)
highlights:
use constructor initialization lists
use prefix operators
use explicit constructors
inline functions
avoid temporary objects
be aware of the cost of virtual functions
return objects via reference parameters
consider per class allocation
consider stl container allocators
the 'empty member' optimization
etc
Attempt to program using static single assignment (SSA) as much as possible. SSA is exactly what you end up with in most functional programming languages, and it's what most compilers convert your code to in order to do their optimizations, because it's easier to work with. Writing this way brings to light the places where the compiler might get confused. It also makes all but the worst register allocators work as well as the best ones, and it lets you debug more easily because you almost never have to wonder where a variable got its value from, since there was only one place it was assigned.
Avoid global variables.
When working with data by reference or pointer pull that into local variables, do your work, and then copy it back. (unless you have a good reason not to)
Make use of the almost free comparison against 0 that most processors give you when doing math or logic operations. You almost always get a flag for ==0 and <0, from which you can easily get 3 conditions:
x= f();
if(!x){
a();
} else if (x<0){
b();
} else {
c();
}
is almost always cheaper than testing for other constants.
Another trick is to use subtraction to eliminate one compare in range testing.
#define FOO_MIN 8
#define FOO_MAX 199
int good_foo(int foo) {
unsigned int bar = foo-FOO_MIN;
int rc = (bar <= (unsigned int)(FOO_MAX - FOO_MIN)) ? 1 : 0; /* 1 when foo is within [FOO_MIN, FOO_MAX] */
return rc;
}
This can very often avoid a jump in languages that do short-circuiting on boolean expressions, and it avoids the compiler having to figure out how to keep the result of the first comparison around while doing the second and then combining them.
This may look like it has the potential to use up an extra register, but it almost never does. Often you don't need foo anymore anyway, and if you do rc isn't used yet so it can go there.
When using the string functions in C (strcpy, memcpy, ...), remember what they return -- the destination! You can often get better code by 'forgetting' your own copy of the pointer to the destination and just grabbing it back from the return value of these functions.
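For example, a toy sketch (it assumes the buffer is big enough for the name):

#include <stdio.h>
#include <string.h>

void greet(const char * name)
{
    char buffer[64];
    /* strcpy and strcat return their destination, so the result can feed the
       next call instead of reloading a saved copy of the pointer */
    fputs(strcat(strcpy(buffer, "Hello, "), name), stdout);
}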
Never overlook the opportunity to return exactly the same thing the last function you called returned. Compilers are not so great at picking up that:
foo_t * make_foo(int a, int b, int c) {
foo_t * x = malloc(sizeof(*x));
if (!x) {
// return NULL;
return x; // x is NULL, already in the register used for returns, so duh
}
x->a= a;
x->b = b;
x->c = c;
return x;
}
Of course, you could reverse the logic on that if and only have one return point.
(tricks I recalled later)
Declaring functions as static when you can is always a good idea. If the compiler can prove to itself that it has accounted for every caller of a particular function, then it can break the calling conventions for that function in the name of optimization. Compilers can often avoid moving parameters into the registers or stack positions that called functions usually expect their parameters to be in (it has to deviate in both the called function and in all callers to do this). The compiler can also often take advantage of knowing what memory and registers the called function will need, and avoid generating code to preserve variable values that are in registers or memory locations that the called function doesn't disturb. This works particularly well when there are few calls to a function. This gets much of the benefit of inlining code, but without actually inlining.
I wrote an optimizing C compiler and here are some very useful things to consider:
Make most functions static. This allows interprocedural constant propagation and alias analysis to do their job; otherwise the compiler needs to presume that the function can be called from outside the translation unit with completely unknown values for the parameters. If you look at the well-known open-source libraries, they all mark functions static except the ones that really need to be extern.
If global variables are used, mark them static and constant if possible. If they are initialized once (read-only), it's better to use an initializer list like static const int VAL[] = {1,2,3,4}, otherwise the compiler might not discover that the variables are actually initialized constants and will fail to replace loads from the variable with the constants.
NEVER use a goto into the inside of a loop; the loop will no longer be recognized by most compilers and none of the most important optimizations will be applied.
Use pointer parameters only if necessary, and mark them restrict if possible. This helps alias analysis a lot because the programmer guarantees there is no alias (the interprocedural alias analysis is usually very primitive). Very small struct objects should be passed by value, not by reference.
Use arrays instead of pointers whenever possible, especially inside loops (a[i]). An array usually offers more information for alias analysis and after some optimizations the same code will be generated anyway (search for loop strength reduction if curious). This also increases the chance for loop-invariant code motion to be applied.
Try to hoist calls to large functions or external functions that have no side-effects (and don't depend on the current loop iteration) outside the loop. Small functions are in many cases inlined or converted to intrinsics that are easy to hoist, but large functions might appear to the compiler to have side-effects when they actually don't. Side-effects of external functions are completely unknown, with the exception of some functions from the standard library which are sometimes modeled by some compilers, making loop-invariant code motion possible.
When writing tests with multiple conditions place the most likely one first. if(a || b || c) should be if(b || a || c) if b is more likely to be true than the others. Compilers usually don't know anything about the possible values of the conditions and which branches are taken more (they could be known by using profile information, but few programmers use it).
Using a switch is faster than doing a test like if(a || b || ... || z). Check first whether your compiler does this automatically; some do, and it's more readable to have the if, though.
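For instance, a chain of equality tests on the same variable can become a switch, which the compiler is free to turn into a jump table or bit test (handle_vowel here is a made-up function):

/* readable, but a series of compares and branches */
if (c == 'a' || c == 'e' || c == 'i' || c == 'o' || c == 'u')
    handle_vowel(c);

/* the switch form gives the compiler more freedom */
switch (c)
{
case 'a': case 'e': case 'i': case 'o': case 'u':
    handle_vowel(c);
    break;
default:
    break;
}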
In the case of embedded systems and code written in C/C++, I try and avoid dynamic memory allocation as much as possible. The main reason I do this is not necessarily performance but this rule of thumb does have performance implications.
Algorithms used to manage the heap are notoriously slow on some platforms (e.g., VxWorks). Even worse, the time it takes to return from a call to malloc is highly dependent on the current state of the heap. Therefore, any function that calls malloc is going to take a performance hit that cannot be easily accounted for. That performance hit may be minimal if the heap is still clean, but after the device runs for a while the heap can become fragmented. The calls are going to take longer and you cannot easily calculate how performance will degrade over time. You cannot really produce a worst-case estimate. The optimizer cannot provide you with any help in this case either. To make matters even worse, if the heap becomes too heavily fragmented, the calls will start failing altogether. The solution is to use memory pools (e.g., glib slices) instead of the heap. The allocation calls are going to be much faster and deterministic if you do it right.
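A toy fixed-size pool, just to show the shape of the idea (real pool allocators such as glib slices are more sophisticated; all names here are made up):

#include <stddef.h>

#define POOL_BLOCKS 128
#define BLOCK_SIZE   64          /* bytes per block, chosen arbitrarily for this sketch */

static unsigned char pool_storage[POOL_BLOCKS][BLOCK_SIZE];
static void * pool_free_list[POOL_BLOCKS];
static int pool_free_count;

void pool_init(void)
{
    for (pool_free_count = 0; pool_free_count < POOL_BLOCKS; pool_free_count++)
    {
        pool_free_list[pool_free_count] = pool_storage[pool_free_count];
    }
}

void * pool_alloc(void)          /* O(1), deterministic, never fragments */
{
    return (pool_free_count > 0) ? pool_free_list[--pool_free_count] : NULL;
}

void pool_free(void * block)     /* caller must only pass blocks obtained from pool_alloc */
{
    pool_free_list[pool_free_count++] = block;
}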
A dumb little tip, but one that will save you some microscopic amounts of speed and code.
Always pass function arguments in the same order.
If you have f_1(x, y, z) which calls f_2, declare f_2 as f_2(x, y, z). Do not declare it as f_2(x, z, y).
The reason for this is that C/C++ platform ABI (AKA calling convention) promises to pass arguments in particular registers and stack locations. When the arguments are already in the correct registers then it does not have to move them around.
While reading disassembled code I've seen some ridiculous register shuffling because people didn't follow this rule.
Two coding techniques I didn't see in the above list:
Bypass the linker by writing the code as a single source
While separate compilation is really nice for compile time, it is very bad when you speak of optimization. Basically the compiler can't optimize beyond the compilation unit; that is the linker's reserved domain.
But if you design your program well, you can also compile it through a single common source. That is, instead of compiling unit1.c and unit2.c and then linking both objects, compile all.c, which merely #includes unit1.c and unit2.c. Thus you will benefit from all the compiler optimizations.
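For example:

/* all.c -- the only file handed to the compiler */
#include "unit1.c"
#include "unit2.c"
/* build with something like: cc -O2 all.c -o program */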
It's very much like writing header-only programs in C++ (and even easier to do in C).
This technique is easy enough if you write your program to enable it from the beginning, but you must also be aware that it changes part of the C semantics and you can run into problems like static variable or macro collisions. For most programs it's easy enough to overcome the small problems that occur. Also be aware that compiling as a single source is way slower and may take a huge amount of memory (usually not a problem with modern systems).
Using this simple technique I happened to make some programs I wrote ten times faster!
Like the register keyword, this trick could also become obsolete soon. Optimizing through the linker is beginning to be supported by compilers; see GCC's link-time optimization.
Separate atomic tasks in loops
This one is trickier. It's about the interaction between algorithm design and the way the optimizer manages cache and register allocation. Quite often programs have to loop over some data structure and perform some actions for each item. Quite often the actions performed can be split into two logically independent tasks. If that is the case, you can write exactly the same program with two loops over the same boundary, each performing exactly one task. In some cases writing it this way can be faster than the single loop (the details are more complex, but an explanation can be that in the simple-task case all variables can be kept in processor registers, while in the more complex one that's not possible and some registers must be written to memory and read back later, at a cost higher than the additional flow control).
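Schematically (the two tasks and their helpers are made up for illustration):

/* one loop doing two independent jobs: more live values compete for registers */
for (int i = 0; i < n; i++)
{
    update_statistics(&stats, data[i]);
    draw_item(&display, data[i]);
}

/* two loops, one job each: each loop's working set may now fit in registers */
for (int i = 0; i < n; i++)
    update_statistics(&stats, data[i]);
for (int i = 0; i < n; i++)
    draw_item(&display, data[i]);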
Be careful with this one (profile performance with and without the trick); as with register, it may just as well degrade performance as improve it.
I've actually seen this done in SQLite and they claim it results in performance boosts ~5%: Put all your code in one file or use the preprocessor to do the equivalent to this. This way the optimizer will have access to the entire program and can do more interprocedural optimizations.
Most modern compilers should do a good job speeding up tail recursion, because the function calls can be optimized out.
Example:
int fac2(int x, int cur) {
if (x == 1) return cur;
return fac2(x - 1, cur * x);
}
int fac(int x) {
return fac2(x, 1);
}
Of course this example doesn't have any bounds checking.
Late Edit
While I have no direct knowledge of the code, it seems clear that the requirements for using CTEs on SQL Server were specifically designed so that they can be optimized via tail-end recursion.
Don't do the same work over and over again!
A common antipattern that I see goes along these lines:
void Function()
{
MySingleton::GetInstance()->GetAggregatedObject()->DoSomething();
MySingleton::GetInstance()->GetAggregatedObject()->DoSomethingElse();
MySingleton::GetInstance()->GetAggregatedObject()->DoSomethingCool();
MySingleton::GetInstance()->GetAggregatedObject()->DoSomethingReallyNeat();
MySingleton::GetInstance()->GetAggregatedObject()->DoSomethingYetAgain();
}
The compiler actually has to generate calls to all of those functions every time. Assuming you, the programmer, know that the aggregated object isn't changing over the course of these calls, for the love of all that is holy...
void Function()
{
MySingleton* s = MySingleton::GetInstance();
AggregatedObject* ao = s->GetAggregatedObject();
ao->DoSomething();
ao->DoSomethingElse();
ao->DoSomethingCool();
ao->DoSomethingReallyNeat();
ao->DoSomethingYetAgain();
}
In the case of the singleton getter the calls may not be too costly, but there is certainly a cost (typically, "check to see if the object has been created; if it hasn't, create it; then return it"). The more complicated this chain of getters becomes, the more wasted time we'll have.
Use the most local scope possible for all variable declarations.
Use const whenever possible
Don't use register unless you plan to profile both with and without it
The first two of these, especially #1, help the optimizer analyze the code. In particular, they help it make good choices about which variables to keep in registers.
Blindly using the register keyword is as likely to hurt as to help your optimization. It's just too hard to know what will matter until you look at the assembly output or profile.
There are other things that matter to getting good performance out of code; designing your data structures to maximize cache coherency for instance. But the question was about the optimizer.
Align your data to native/natural boundaries.
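For example, ordering struct members from largest to smallest keeps each one on its natural boundary and removes padding (the sizes shown assume a typical 64-bit ABI):

struct Wasteful           /* 24 bytes: 7 bytes of padding after c, 6 after s */
{
    char   c;
    double d;
    short  s;
};

struct Compact            /* 16 bytes: every member sits on its natural boundary */
{
    double d;
    short  s;
    char   c;
};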
I was reminded of something that I encountered once, where the symptom was simply that we were running out of memory, but the result was substantially increased performance (as well as huge reductions in memory footprint).
The problem in this case was that the software we were using made tons of little allocations. Like, allocating four bytes here, six bytes there, etc. A lot of little objects, too, running in the 8-12 byte range. The problem wasn't so much that the program needed lots of little things, it's that it allocated lots of little things individually, which bloated each allocation out to (on this particular platform) 32 bytes.
Part of the solution was to put together an Alexandrescu-style small object pool, but extend it so I could allocate arrays of small objects as well as individual items. This helped immensely in performance as well since more items fit in the cache at any one time.
The other part of the solution was to replace the rampant use of manually-managed char* members with an SSO (small-string optimization) string. The minimum allocation being 32 bytes, I built a string class that had an embedded 28-character buffer behind a char*, so 95% of our strings didn't need to do an additional allocation (and then I manually replaced almost every appearance of char* in this library with this new class, that was fun or not). This helped a ton with memory fragmentation as well, which then increased the locality of reference for other pointed-to objects, and similarly there were performance gains.
A neat technique I learned from @MSalters' comment on this answer allows compilers to do copy elision even when returning different objects according to some condition:
// before
BigObject a, b;
if(condition)
return a;
else
return b;
// after
BigObject a, b;
if(condition)
swap(a,b);
return a;
If you've got small functions you call repeatedly, I have in the past gotten large gains by putting them in headers as "static inline". Function calls on the ix86 are surprisingly expensive.
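e.g. in a shared header (a sketch; the header and function are hypothetical):

/* util.h */
static inline int clamp(int value, int low, int high)
{
    /* small enough that the call overhead would dominate if it weren't inlined */
    return (value < low) ? low : (value > high) ? high : value;
}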
Reimplementing recursive functions in a non-recursive way using an explicit stack can also gain a lot, but then you really are in the realm of development time vs gain.
Here's my second piece of optimisation advice. As with my first piece of advice this is general purpose, not language or processor specific.
Read the compiler manual thoroughly and understand what it is telling you. Use the compiler to its utmost.
I agree with one or two of the other respondents who have identified selecting the right algorithm as critical to squeezing performance out of a program. Beyond that the rate of return (measured in code execution improvement) on the time you invest in using the compiler is far higher than the rate of return in tweaking the code.
Yes, compiler writers are not from a race of coding giants and compilers contain mistakes and what should, according to the manual and according to compiler theory, make things faster sometimes makes things slower. That's why you have to take one step at a time and measure before- and after-tweak performance.
And yes, ultimately, you might be faced with a combinatorial explosion of compiler flags so you need to have a script or two to run make with various compiler flags, queue the jobs on the large cluster and gather the run time statistics. If it's just you and Visual Studio on a PC you will run out of interest long before you have tried enough combinations of enough compiler flags.
Regards
Mark
When I first pick up a piece of code I can usually get a factor of 1.4 -- 2.0 times more performance (ie the new version of the code runs in 1/1.4 or 1/2 of the time of the old version) within a day or two by fiddling with compiler flags. Granted, that may be a comment on the lack of compiler savvy among the scientists who originate much of the code I work on, rather than a symptom of my excellence. Having set the compiler flags to max (and it's rarely just -O3) it can take months of hard work to get another factor of 1.05 or 1.1
When DEC came out with its alpha processors, there was a recommendation to keep the number of arguments to a function under 7, as the compiler would always try to put up to 6 arguments in registers automatically.
For performance, focus first on writing maintainable code - componentized, loosely coupled, etc. - so that when you have to isolate a part either to rewrite, optimize or simply profile it, you can do it without much effort.
The optimizer will only improve your program's performance marginally.
You're getting good answers here, but they assume your program is pretty close to optimal to begin with, and you say
Assume that the program has been written correctly, compiled with full optimization, tested and put into production.
In my experience, a program may be written correctly, but that does not mean it is near optimal. It takes extra work to get to that point.
If I can give an example, this answer shows how a perfectly reasonable-looking program was made over 40 times faster by macro-optimization. Big speedups can't be done in every program as first written, but in many (except for very small programs), it can, in my experience.
After that is done, micro-optimization (of the hot-spots) can give you a good payoff.
I use the Intel compiler, on both Windows and Linux.
When more or less done, I profile the code, then work on the hotspots and try to change the code to allow the compiler to do a better job.
If the code is computational and contains a lot of loops, the vectorization report in the Intel compiler is very helpful - look for 'vec-report' in the help.
So the main idea: polish the performance-critical code. As for the rest, the priority is to be correct and maintainable - short functions, clear code that can still be understood a year later.
One optimization I have used in C++ is creating a constructor that does nothing. One must manually call init() in order to put the object into a working state.
This has benefit in the case where I need a large vector of these classes.
I call reserve() to allocate the space for the vector, but the constructor does not actually touch the page of memory the object is on. So I have spent some address space, but not actually consumed much physical memory. I avoid the page faults and the associated construction costs.
As I generate objects to fill the vector, I set them up using init(). This limits my total page faults and avoids the need to resize() the vector while filling it.
One thing I've done is try to keep expensive actions to places where the user might expect the program to delay a bit. Overall performance is related to responsiveness, but isn't quite the same, and for many things responsiveness is the more important part of performance.
The last time I really had to do improvements in overall performance, I kept an eye out for suboptimal algorithms, and looked for places that were likely to have cache problems. I profiled and measured performance first, and again after each change. Then the company collapsed, but it was interesting and instructive work anyway.
I have long suspected, but never proved, that declaring arrays so that they hold a power of 2 as the number of elements enables the optimizer to do a strength reduction by replacing a multiply with a shift by a number of bits when looking up individual elements.
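The idea being that the row-offset multiply in the index calculation becomes a shift; whether it actually pays off is, as said above, unproven, but the arithmetic would look like this (illustrative names):

int table_a[100][100];
int table_b[128][128];

int get_a(int row, int col) { return table_a[row][col]; }  /* offset = (row * 100 + col) * sizeof(int) */
int get_b(int row, int col) { return table_b[row][col]; }  /* offset = ((row << 7) + col) * sizeof(int), a shift */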
Put small and/or frequently called functions at the top of the source file. That makes it easier for the compiler to find opportunities for inlining.
Related
int optimizeMe(const char* string0, const char* string1)
{
    int i0;
    int i1 = strlen(string1) - 1;
    int count = 0;
    for (i0 = 0; i0 < strlen(string0); i0++)
    {
        if (toupper(string0[i0]) == toupper(string1[i1]))
            count++;
        if ((i0 % 32) == 0)
            i1--;
    }
    return (count / 8);
}
I know I can optimize this code by using register, gcc -O2, strength reduction on i0%32, the common expression count/8 = count >> 3, etc.
However, how can I optimize it using code motion? Specifically, for the if statement and i1--.
Any hints are appreciated !
As Lundin pointed out in the comments, these are premature optimisations. They're premature because you're clearly just guessing, and not using a profiler to test your theories on real-world programs with real-world bottlenecks.
They're also micro-optimisations, and we haven't really needed to micro-optimise in C since the 80s, thanks to significant breakthroughs in technology that allow us to do amazing things like mocking up three-dimensional realms in real-time, for example.
gcc already has support for various feature such as dead code elimination, code hoisting (even into compile-time in some cases) and profile-guided optimisation which you might be interested in. I think we take for granted the fact that we have a compiler which can statically deduce when code is unreachable, all by its lonesome; that's a quite complex optimisation for a machine to perform.
By profiling as you test, and then recompiling, feeding the profiler data back into the compiler, the compiler obtains information about how to rearrange branches to be better predicted. That's profile-guided optimisation, and it's semi-automated. I wonder what the authors of Wolfenstein 3D would've done for this kind of technology...
Speaking of profilers, if I may suggest that you test these in some realistic usecases (i.e. actual programs that have active development and a large community):
using register
reduction in strength i%32=0x10000
count/8 = count >> 3
That last optimisation isn't even correct for you (see exercise 5), by the way... This might be a good time to mention the other debugging tools we have in our suite. I can highly recommend checking out ASan (and the other UBSans) for example, will likely save you hours of debugging one day.
It might be best to use size_t, since that's what strlen returns. For a start, size_t is more portable for use with strlen, and it's quite possibly faster too (due to the fact that size_t has no sign bit and so no potential sign-handling overhead when you write things like for (size_t x = 0; x < y; x++))...
... or, if you want to provide architecture-specific hints to your compiler (which presumably has no profile-guided optimisation, or else you wouldn't need to do that manually), you could also use uint_fast32_t or something else that isn't really suitable for the task, but is still vastly more suitable than int.
I gather you must be getting input from somewhere, or else your program is "pure", in the (functional) sense that it has no side-effects that change the way the user interacts with the system at all (in that case, your compiler might even hoist all of your logic into compile-time evaluation)... Have you considered adjusting the buffer sizes of whichever files and/or streams (i.e. stdin and stdout) you're using? You ought to be able to use setvbuf to do that... If you have many streams open at once, you might want to try choosing a smaller stream buffer size in order to keep all of your stuff in cache. I like to start off with a buffer size of 1, and work my way up from there, that way you'll see precisely where the bottleneck for your system is... To be clear, if you were to unroll loops (which gcc will happily do automatically if it's beneficial)...
If you're using a really primitive compiler (which you're not, though profilers are honestly likely to guide you straight to the heftiest optimisations in any case, optimising your time as a developer), you might be able to suggest to the compiler to emit non-branching code for these lines:
// consider `count += (toupper((unsigned char) string[i0]) == toupper((unsigned char) string1[i1]));`?
if (toupper(string0[i0]) == toupper(string1[i1]))
count++;
The casts are necessary to prevent crashes in certain circumstances, by the way... you absolutely need to make sure the only values you pass to a <ctype.h> function are unsigned char values, or EOF.
// consider using `i1 -= !(i0 % 32);`?
if ((i0%32)==0)
i1--;
I have some legacy code where a timewasting loop has been included to allow time for an eeprom read to complete (bad practice):
for(i = 0; i < 50; i++);
However, peculiar things happen when compiler optimizations are switched on for speed. It is not necessarily connected with that statement, but I would like to know if the compiler might just optimize the time delay away
It depends on the type of i. If it is just a plain integer type that isn't used apart from inside the loop, there are no side effects and the compiler is free to optimize away the whole thing.
If you declare i as volatile, however, the compiler is forced to generate code that increments the variable and reads it back on every lap of the loop.
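For example:

volatile int i;               /* volatile: every increment and read of i must really happen */
for (i = 0; i < 50; i++)
    ;                         /* the compiler can no longer delete the loop */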
This is one of many reasons why you should not use "burn-away" loops like these in embedded systems. You also occupy 100% CPU and consume 100% current. And you create a tight coupling between your system clock and the loop, which isn't necessarily linear.
The professional solution is always to use an on-chip hardware timer instead of "burn-away" loops.
Lundin's answer explains properly why it happens, so there is no need to paraphrase it.
That said, if you really need to keep the old behaviour in your loop but optimize the rest, the easiest way would be to put this active delay loop in one function in one file:
#include "active_delay.h" // the corresponding header file
void active_delay(int d)
{
// do not build with optimize flags on!
int i;
for(i = 0; i < d; i++);
}
and build this file without any optimizing flags on.
Build the rest of your code with optimizing flags on to benefit from optimizer on "normal" code.
Note that because of the function call overhead and the very short execution time of the loop, the delay increases slightly when moving from an inline loop to a function in a separate object file.
You may want to reduce the value of d to match the previous timing (if necessary).
I am working on an embedded architecture where ASM is predominant. I would like to refactor most of our legacy ASM code into C in order to increase readability and modularity.
So I am still puzzling over minor details which cause my hopes to vanish. The real problem is far more complex than the following example, but I would like to share it as an entry point to the discussion.
My goal is to find an optimal workaround.
Here is the original example (do not worry about what the code does. I wrote this randomly just to show the issue I would like to talk about).
int foo;
int bar;
int tmp;
int sum;
void do_something() {
tmp = bar;
bar = foo + bar;
foo = foo + tmp;
}
void compute_sum() {
for(tmp = 1; tmp < 3; tmp++)
sum += foo * sum + bar * sum;
}
void a_function() {
compute_sum();
do_something();
}
With this dummy code, anyone would immediately remove all the global variables and replace them with local ones:
void do_something(int *a, int *b) {
    int tmp = *b;
    *b = *a + *b;
    *a = *a + tmp;
}
void compute_sum(int *sum, int foo, int bar) {
    int tmp;
    for(tmp = 1; tmp < 3; tmp++)
        *sum += foo * *sum + bar * *sum;
}
void a_function(int *sum, int *foo, int *bar) {
    compute_sum(sum, *foo, *bar);
    do_something(foo, bar);
}
Unfortunately this rework is worse than the original code, because all the parameters are pushed onto the stack, which leads to latencies and larger code size.
The everything-global solution is both the best and the ugliest solution. Especially when the source code is about 300k lines long with almost 3000 global variables.
Here we are not facing a compiler problem, but a structural issue. Writing beautiful, portable, readable, modular and robust code will never pass the ultimate performance test, because compilers are dumb, even in 2015.
An alternative solution is to prefer inline functions. Unfortunately these functions have to be located in a header file, which is also ugly.
A compiler cannot see beyond the file it is working on. When a function is marked as extern it will irrevocably lead to performance issues. The reason is that the compiler cannot make any assumptions regarding the external declarations.
Going the other way, the linker could do the job and ask the compiler to rebuild object files by giving additional information to the compiler. Unfortunately not many compilers offer such features, and when they do, they considerably slow down the build process.
I eventually came across this dilemma:
Keep the code ugly to preserve performance
Everything's global
Functions without parameters (same as procedures)
Keeping everything in the same file
Follow standards and write clean code
Think of modules
Write small but numerous functions with well defined parameters
Write small but numerous source files
What do you do when the target architecture has limited resources? Going back to assembly is my last option.
Additional Information
I am working on a SHARC architecture, which is a quite powerful Harvard CISC architecture. Unfortunately one code instruction takes 48 bits while a long only takes 32 bits. With this fact it is better to keep two versions of a variable rather than evaluating the second value on the fly:
The optimized example:
int foo;
int bar;
int half_foo;
void example_a() {
write(foo);
write(half_foo + bar);
}
The bad one:
void example_a(int foo, int bar) {
write(foo);
write(bar + (foo >> 1));
}
Ugly C code is still a lot more readable than assembler. In addition, it's likely that you'll net some unexpected free optimizations.
A compiler cannot see further the file it is working on. When a function is marked as extern it will irrevocably lead to performance issues. The reason is the compiler cannot make any assumptions regarding the external declarations.
False and false. Have you tried "Whole Program Optimization" yet? The benefits of inline functions, without having to organize into headers. Not that putting things in headers is necessarily ugly, if you organize the headers.
In your VisualDSP++ compiler, this is enabled by the -ipa switch.
The ccts compiler has a capability called interprocedural analysis (IPA), a mechanism that allows the compiler to optimize across translation units instead of within just one translation unit. This capability effectively allows the compiler to see all of the source files that are used in a final link at compilation time and make use of that information when optimizing. All of the -ipa optimizations are invoked after the initial link, whereupon a special program called the prelinker reinvokes the compiler to perform the new optimizations.
I'm used to working in performance-critical core/kernel-type areas with very tight needs, where it is often beneficial to take the optimizer and standard library performance with a grain of salt (e.g., not getting too excited about the speed of malloc or auto-generated vectorization).
However, I've never had such tight needs so as to make the number of instructions or the speed of pushing more arguments to the stack be a considerable concern. If it is, indeed, a major concern for the target system and performance tests are failing, one thing to note is that performance tests modeled at a micro level of granularity often do have you obsessed with smallest of micro-efficiencies.
Micro-Efficiency Performance Tests
We made the mistake of writing all kinds of superficial micro-level tests in a former workplace I was at where we made tests to simply time something as basic as reading one 32-bit float from a file. Meanwhile, we made optimizations that significantly sped up the broad, real-world test cases associated with reading and parsing the contents of entire files while, at the same time, some of those uber-micro tests actually got slower for some unbeknownst reason (they weren't even directly modified, but changes to the code around them may have had some indirect impact relating to dynamic factors like caches, paging, etc., or merely how the optimizer treated such code).
So the micro-level world can get a bit more chaotic when you work with a higher-level language than assembly. The performance of the teeny things can shift under your feet a bit, but you have to ask yourself what's more important: a slight decrease in the performance of reading one 32-bit float from a file, or having real-world operations that read from entire files go significantly faster. Modeling your performance tests and profiling sessions at a higher level will give you room to selectively and productively optimize the parts that really matter. There you have many ways to skin a cat.
Run a profiler on an ultra-granular operation being executed a million times repeatedly and you would have already backed yourself into an assembly-type micro-corner for everything performing such micro-level tests just by the nature of how you are profiling the code. So you really want to zoom out a bit there, test things at a coarser level so that you can act like a disciplined sniper and hone in on the micro-efficiency of very select parts, dispatching the leaders behind inefficiencies rather than trying to be a hero taking out every little insignificant foot soldier that might be a performance obstacle.
Optimizing Linker
One of your misconceptions is that only the compiler can act as an optimizer. Linkers can perform a variety of optimizations when linking object files together, including inlining code. So there should rarely, if ever, be a need to jam everything into a single object file as an optimization. I'd try looking more into the settings of your linker if you find otherwise.
Interface Design
With these things aside, the key to a maintainable, large-scale codebase lies more in interface (i.e., header files) than implementation (source files). If you have a car with an engine that goes a thousand miles per hour, you might peer under the hood and find that there are little fire-breathing demons dancing around to allow that to happen. Perhaps there was a pact involved with demons to get such speed. But you don't have to expose that fact to the people driving the car. You can still give them a nice set of intuitive, safe controls to drive that beast.
So you might have a system that makes uninlined function calls 'expensive', but expensive relative to what? If you are calling a function that sorts a million elements, the relative cost of pushing a few small arguments to the stack like pointers and integers should be absolutely trivial no matter what kind of hardware you're dealing with. Inside the function, you might do all sorts of profiler-assisted things to boost performance like macros to forcefully inline code no matter what, perhaps even some inlined assembly, but the key to keeping that code from cascading its complexity throughout your system is to keep all that demon code hidden away from the people who are using your sort function and to make sure it's well-tested so that people don't have to constantly pop the hood trying to figure out the source of a malfunction.
Ignoring that 'relative to what?' question and only focusing on absolutes is also what leads to the micro-profiling which can be more counter-productive than helpful.
So I'd suggest looking at this more from a public interface design level, because behind an interface, if you look behind the curtains/under the hood, you might find all kinds of evil things going on to get that needed edge in performance in hotspot areas shown in a profiler. But you shouldn't need to pop the hood very often if your interfaces are well-designed and well-tested.
Globals become a bigger problem the wider their scope. If you have globals defined statically with internal linkage inside a source file that no one else can access, then those are actually rather 'local' globals. If thread-safety isn't a concern (if it is, then you should avoid mutable globals as much as possible), then you might have a number of performance-critical areas in your codebase where, if you peer under the hood, you find a lot of file-scope static variables used to mitigate the overhead of function calls. That's still a whole lot easier to maintain than assembly, especially when the visibility of such globals is reduced with smaller and smaller source files dedicated to performing more singular, clear responsibilities.
I have designed/written/tested/documented many many real time embedded systems.
Both 'soft' real time and 'hard' real time.
I can tell you from hard earned experience that the algorithm used to implement the application is the place to make the biggest gains in speed.
Little stuff like a function call compared to inlined code is trivial unless it is performed thousands (or even hundreds of thousands) of times.
Due to the advances in x86 C compilers (namely GCC and Clang), many coding practices that were believed to improve efficiency are no longer used since the compilers can do a better job optimizing the code than humans (e.g. bit shift vs. multiplication).
Which specific practices are these?
Of the optimizations which are commonly recommended, a couple which are basically never fruitful given modern compilers include:
Mathematical transformations
Modern compilers understand mathematics, and will perform transformations on mathematical expressions where appropriate.
Optimizations such as conversion of multiplication to addition, or constant multiplication or division to bit shifting, are already performed by modern compilers, even at low optimization levels. Examples of these optimizations include:
x * 2 -> x + x
x * 2 -> x << 1
Note that some specific cases may differ. For instance, x >> 1 is not the same as x / 2; it is not appropriate to substitute one for the other!
Additionally, many of these suggested optimizations aren't actually any faster than the code they replace.
Stupid code tricks
I'm not even sure what to call this, but tricks like XOR swapping (a ^= b; b ^= a; a ^= b;) are not optimizations at all. They're just party tricks — they are slower, and more fragile, than the obvious approach. Don't use them.
The register keyword
This keyword is ignored by many modern compilers, as its intended meaning (a hint that a variable should be stored in a register) is not meaningful given current register allocation algorithms.
Code transformations
Compilers will automatically perform a wide variety of code transformations where appropriate. Several such transformations which are frequently recommended for manual application, but which are rarely useful when applied thus, include:
Loop unrolling. (This is often actually harmful when applied indiscriminately, as it bloats code size.)
Function inlining. (Tag a function as static, and it'll usually be inlined where appropriate when optimization is enabled.)
One such practice is to avoid multiplications by using arrays of array pointers instead of real 2D arrays.
Old practice:
int width = 1234, height = 5678;
int* buffer = malloc(width*height*sizeof(*buffer));
int** image = malloc(height*sizeof(*image));
for(int i = height; i--; ) image[i] = &buffer[i*width];
//Now do some heavy computations with image[y][x].
This used to be faster, because multiplications used to be very expensive (on the order of 30 CPU cycles), while memory accesses were virtually free (it was only in the 1990s that caches were added because memory couldn't keep up with full CPU speed).
But multiplications became fast, some CPUs being able to do them in one CPU cycle, while memory accesses did not keep pace at all. So, now this code is likely to be more performant:
int width = 1234, height = 5678;
int (*image)[width] = malloc(height*sizeof(*image));
//Now do some heavy computations with image[y][x],
//which will invoke pointer arithmetic to calculate the offset as (y*width + x)*sizeof(int).
Currently, there are still some CPUs around, where the second code is not faster, but the big multiplication penalty is not with us anymore.
Due to the plurality of platforms, you would at best optimize for a given platform (or CPU architecture/model) and compiler! If your code runs on many platforms, it's a waste of time. (I'm talking about micro-optimizations; it's always worth considering better algorithms.)
That said, optimizing for a given platform (a DSP, say) makes sense if the need for it arises. Then the best first helper is IMHO the judicious use of the restrict keyword, if the compiler/optimizer supports it well. Avoid algorithms involving conditions and jumpy code (breaks, goto, if, while, ...); this favors streaming and avoids too many bad branch predictions. I would agree these hints are common sense now.
Generally speaking I would say: any manipulation that modifies the code based on assumptions about how the compiler optimizes should IMHO be avoided altogether.
Rather, switch to assembly then (a common practice for some really important algorithms in DSPs, where the compilers, while really great, still miss the last few % of CPU/memory-cycle performance...).
One optimization that really shouldn't be used much more is #define (expanding on duskwuff's answer a bit).
The C preprocessor is a wonderful thing, and it can do some amazing code transformations, and it can make certain really complex code much simpler — but using #define just to cause a small operation to be inlined isn't usually appropriate anymore. Most modern compilers have a real inline keyword (or equivalent, like __inline__), and they're smart enough to inline most static functions anyway, which means that code like this:
#define sum(x, y) ((x) + (y))
is really better written as the equivalent function:
static int sum(int x, int y)
{
return x + y;
}
You avoid dangerous multiple-evaluation problems and side-effects, you get compiler type-checking and you end up with cleaner code, too. If it's worth inlining, the compiler will do it.
In general, save the preprocessor for the circumstances where it's needed: Emitting a lot of complex, variant code or partial code quickly. Using the preprocessor for inlining small functions and defining constants is mostly an antipattern now.
This is probably going to be language dependent, but in general, what is the performance difference between reading from and writing to an array?
For example, suppose I am trying to write a prime sieve and am representing the primes as a boolean array.
Upon finding a prime, I can say
for(int i = 2; n * i < end; i++)
{
prime[n * i] = false;
}
or
for(int i = 2; n * i < end; i++)
{
if(prime[n * i])
{
prime[n * i] = false;
}
}
The intent in the latter case is to check the value before writing it to avoid having to rewrite many values that have already been checked. Is there any realistic gain in performance here, or are access and write mostly equivalent in speed?
Impossible to answer such a generic question without the specifics of the machine/OS this is running on, but in general the latter is going to be slower because:
In the second example you have to get the value from RAM into the L2/L1 cache and read it into a register, make a change to the value, and write it back. In the first case you might very well get away with simply writing a value to the L1/L2 caches. It can be written to RAM from the caches later, while your program is doing something else.
The second form has much more code to execute per iteration. For a large enough number of iterations, the difference gets big real fast.
In general this depends much more on the machine than the programming language. The writes will often take a few more clock cycles because, depending on the machine, more cache values need to be updated in memory.
However, your second segment of code will be WAY slower, and it's not just because there's "more code". The big reason is that anytime you use an if-statement on most machines the CPU uses a branch predictor. The CPU literally predicts which way the if-statement will run ahead of time, and if it's wrong it has to backtrack. See http://en.wikipedia.org/wiki/Pipeline_%28computing%29 and http://en.wikipedia.org/wiki/Branch_predictor to understand why.
If you want to do some optimization, I would recommend the following:
Profile! See what's really taking up time.
Multiplication is much harder than addition. Try rewriting the loop so that the array index advances by n each iteration (i.e. index += n) instead of being recomputed as n * i (see the sketch after this list).
The loop condition "should" be totally reevaluated at every iteration unless the compiler optimizes it away. So try avoiding multiplication in there.
Use -O2 or -O3 as a compiler option
You might find that some values of n are faster than others because of cache locality. You might think of some clever ways to rewrite your code to take advantage of this.
Disassemble the code and look at what it's actually doing on your processor
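A sketch of the strength-reduced inner loop suggested above, replacing the per-iteration multiply n * i with an index that advances by n:

/* before: a multiply on every iteration */
for (int i = 2; n * i < end; i++)
    prime[n * i] = false;

/* after: same elements marked, no multiply inside the loop */
for (int j = 2 * n; j < end; j += n)
    prime[j] = false;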
It's a hard question and it heavily depends on your hardware, OS and compiler. But for the sake of theory, you should consider two things: branching and memory access. As branching is generally evil, you want to avoid it. I wouldn't even be surprised if some compiler optimization took place and your second snippet were reduced to the first one (compilers love avoiding branches; they probably consider it a hobby, but they have a reason). So in these terms the first example is much cleaner and easier to deal with.
There are also CPU caches and other memory-related issues. I believe that in both examples you have to actually load the memory into the CPU cache before you can either read or update it. While reading is not a problem, writes have to propagate the changes up. I wouldn't be worried if you use the function in a single thread (as @gby pointed out, the OS can push the changes out a little bit later).
There is only one scenario I can come up with that would make me consider the solution from your second example: if I shared the table between threads working on it in parallel (without locking) and had separate caches for different CPUs. Then, every time you amend the cache line from one thread, the other thread has to update its copy before reading or writing to the same memory block. It's known as cache coherence and it actually may hurt your performance badly; in such a case I could consider conditional writes. But wait, that's probably far away from your question...