I wrote the well-known swap function in C and looked at the assembly output using gcc -S, then did the same again with -O2 optimization.
The difference was pretty big: the optimized output was only about 5 lines compared to 20.
My question is: if optimization really helps, why isn't it used all the time? Why do we compile code without optimization at all?
An extra question for those working in the industry: when you release the final version of your program after testing it, do you compile with optimizations on?
I am responding to all your comments, please read them.
There are a few reasons.
1. Compilation takes longer
For small and even medium-sized projects, this is rarely an issue today. Modern computers are VERY fast. If compilation takes five or ten seconds, it usually does not matter. But for larger projects it does, especially if the build process is not set up properly. I remember when I was trying to add a feature to the game The Battle for Wesnoth. Compilation took around ten minutes. It's easy to see how much you would want to reduce that to five minutes or less if you could.
2. Optimized code is harder to debug
The reason it makes code harder to debug is that the debugger does not really run the program line by line; that's just an illusion. Here is an example where it might be a problem:
#include <stdio.h>
#include <string.h>
#include <ctype.h>

int main(void) {
    char str[] = "Hello, World!";
    int number_of_capital_letters = 0;
    for(int i = 0; i < strlen(str); i++) {
        if(isupper(str[i]))
            number_of_capital_letters++;
    }
    printf("%s\n", str);
    // Commented out for debugging reasons
    // printf("%d\n", number_of_capital_letters);
}
You fire up your debugger and wonder why it does not keep track of number_of_capital_letters. Then you find out that since you have commented out the last printf statement, the variable is not used for any observable behavior, so the optimizer changes your code to:
int main(void) {
    puts("Hello, World!");
}
One could argue that you could then just turn off the optimizer for a debug build. And that's true, in a world where a cow is a sphere. But there is a third reason:
3. Sometimes bugs only show up at higher optimization levels.
Imagine that you have a big code base. When you upgrade the compiler, a bug suddenly emerges, and it seems to vanish when you remove optimization. What's the problem here? Well, it could be a bug in the optimizer. But it could also be a bug in your code that manifested itself with the new version of the optimizer. Very often, code with undefined behavior behaves differently when compiled with optimization.
So what do you do? You could try to figure out whether the bug is in the optimizer or in your code. That can be a VERY time-consuming task. Let's assume it's a bug in the optimizer. What to do? You could downgrade your compiler, which is not optimal for several reasons, especially if it's an open source project. Imagine downloading the source, running the build script, scratching your head for hours trying to figure out what's wrong, and then seeing in some documentation (provided that the author documented it) that you need a specific version of a specific compiler.
Let's instead assume it's a bug in your code. The ideal thing is of course to fix it. But maybe you don't have the resources to do so. In that case, too, you could require anyone who compiles it to use a certain version of a specific compiler.
But if you can just edit a Makefile and replace -O3 with -O2, you can clearly see that this is sometimes a viable option in our non-ideal world, where time is not an endless resource. With a bit of bad luck, such a bug can take a week or more to track down. That's time you could spend somewhere else.
Here is an example of such a bug:
#include <stdio.h>

int main(void) {
    char str[] = "Hello";
    str[5] = '!';  // writes one past the end of str, overwriting the null terminator
    puts(str);
}
When I compiled this with gcc 10.2 I got different results depending on optimization level.
Without optimization:
Hello!
With optimization:
Hello!`#
Try it out yourself:
https://godbolt.org/z/5dcKKrEW1
https://godbolt.org/z/48bz5ae1d
And here I found a forum thread where the debug build works but not release: https://developer.apple.com/forums/thread/15112
4. Sometimes bugs only show up at LOWER optimization levels.
Yep, that may also happen. In this case, you could just increase the optimization level if you don't care that much about correctness. But if you do care, this can be a way to find bugs. If your code runs correctly both with and without optimization, it's less likely to contain bugs that will haunt you in the future than if you have only compiled with optimization.
I did not find an example that worked, but this one might in theory:
#include <stdio.h>

int main(void) {
    if(1/0) // Division by zero
        puts("An error has occurred");
    else
        puts("Everything is fine");
}
If this is compiled without optimization, there is a high probability that it will crash. But the optimizer may assume that undefined behavior (like division by zero) never occurs, so it optimizes the code to just:
int main(void) {
    puts("Everything is fine");
}
Assume that 1/0 stands for some kind of error check that is very unlikely to evaluate to true. Without optimization the program would crash and reveal the problem, but with optimization it just prints "Everything is fine". Here, the optimizer hides a bug.
5. The optimizer might produce a binary that is bigger, uses more memory, or has some other undesirable property.
This sometimes matters, especially in embedded systems. -O0 usually produces quite big code, but you might want to use -Os (optimize for size instead of speed) rather than -O3 to get a small binary. And sometimes also to get faster code. See below.
6. The optimizer might produce slower code
Yep, really. It's not common, but it may happen. A related (though not equivalent) example is illustrated in this question, where the compiler generates faster code when optimizing for executable size than when optimizing for speed.
If you never use a source-level debugger, you probably could use optimization all the time. But if you never use a source-level debugger, you probably should start.
Unoptimized code has a direct one-to-one correspondence to the statements, expressions and variables in the source code, so when stepping through the code it all makes sense: all the lines are executed in the order you would expect, and all variables have a valid state when you would expect them to.
Optimised code, on the other hand, can eliminate code and variables and reorder execution, and generally renders source-level debugging nonsensical. Sometimes you get a bug that only appears in an optimised build, so you may have to deal with it, but generally such things are a result of undefined behaviour, and it is generally better to avoid that in the first instance.
One thing to consider is that during development you have performed all your testing and debugging on unoptimized code. If, on the day you release it, you crank up the optimiser and ship it, you are essentially shipping a whole lot of untested code. Testing is hard, and you really should test what you release, so between building and releasing you may have a lot of work to do to eliminate the risk. Releasing with the same build spec that you have been testing every day throughout development may be lower risk.
For code running on a desktop, responding to and waiting for user input, or which is disk or network I/O bound, making the code faster or smaller often serves little purpose. There may be specific parts of a large application that will benefit, such as sorting or searching algorithms on large data sets, or image or audio processing, and for those you might use targeted rather than whole-application optimisation.
In embedded systems, where you are often using processors much slower than desktop systems with much smaller memory resources, optimisation for both speed and size may be critical, but even there the code normally has to both fit and meet its real-time deadlines in its debug build in order to support test and debugging. If it only works optimised, it will be much harder to debug.
Apart from optimising your code, it should perhaps be noted that in order to do that job the optimiser has to perform a much deeper analysis of the code, through techniques such as abstract execution, and in doing so can find bugs and issue warnings that normal compilation will not detect. For example, the optimiser is rather good at detecting variables that may be used before they are initialised. To that end, I would recommend switching on and maxing out the optimiser as a kind of "poor man's" static analysis, even if you use a lower optimisation level for release, for the reasons given earlier.
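As a small illustration (a minimal sketch; whether the warning actually fires depends on your compiler and its version), GCC's -Wmaybe-uninitialized diagnostic typically relies on the data-flow analysis that is only performed when optimising:

#include <stdio.h>

/* with gcc -Wall -O2 this may warn that "value" is used uninitialised
   when cond is false; with -O0 the analysis needed to spot it is
   usually not performed */
int get_value(int cond) {
    int value;            /* only assigned on one path */
    if (cond)
        value = 42;
    return value;         /* possibly uninitialised */
}

int main(void) {
    printf("%d\n", get_value(0));
    return 0;
}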
The optimiser is also the most complex part of any compiler; if the compiler is going to have a bug, it is likely to be in the optimiser. That said, I have only ever encountered one such confirmed bug, in Microsoft C v6.0, back in 1989! More often, what at first appears to be a compiler bug turns out to be undefined behaviour or latent bugs in the source being compiled that manifest themselves with different code generation options.
Personally I usually have optimisation turned on.
My reasons are:
The shipped code is built with optimisation because we need the -- especially numerical -- performance. Since you can't ship what you haven't tested, the test version must also be optimised. It would, I suppose, be possible to build without optimisation during development, but I begrudge the extra time to then build with optimisation and test again prior to release to testing. Moreover, performance is sometimes part of the spec, so some development testing has to be done with optimised code.
I don't find using a debugger so very tough with optimised code. Mind you, given the kind of programs I mostly write -- fancy filters without user interfaces and numerical libraries -- printf and valgrind (which works fine with optimised code) are my preferred tools.
In recent versions of gcc, at least, more and better diagnostics are produced with optimisation on rather than off.
This, like so much else in programming, will of course vary with circumstances.
One reason is probably just: tradition. The first C compiler was written for the DEC PDP-11, which had a 64k address space. (That's right, a tenth of that famous but mythical old IBM PC quote about "640k should be enough for anybody".) The first C compiler ran as quite a number of separate programs or passes: there was the preprocessor cpp, the parser c0, the code generator c1, the assembler as, and the linker ld. If you asked for optimization, it ran as a separate pass c2 which was a "peephole optimizer" operating on c1's output, before passing it to as.
Compilation was much slower in those days than it is today (because of course the processors were much slower). People didn't routinely request optimization for everyday work, because it really did cost you something significant in your edit/compile/debug cycle.
And although a whole lot has changed since then, the fact that optimization is something extra, something special, that you have to request explicitly, lives on.
Related
I need a better way of profiling numerical code. Assume that I'm using GCC in Cygwin on 64 bit x86 and that I'm not going to purchase a commercial tool.
The situation is this. I have a single function running in one thread. There are no code dependencies or I/O beyond memory accesses, with the possible exception of some math libraries linked in. But for the most part, it's all table look-ups, index calculations, and numerical processing. I've cache aligned all arrays on the heap and stack. Due to the complexity of the algorithm(s), loop unrolling, and long macros, the assembly listing can become quite lengthy -- thousands of instructions.
I have been resorting to using either the tic/toc timer in Matlab, the time utility in the bash shell, or the time stamp counter (rdtsc) read directly around the function. The problem is this: the variance of the timing (which might be as much as 20% of the runtime) is larger than the size of the improvements I'm making, so I have no way of knowing if the code is better or worse after a change. You might think then that it's time to give up. But I would disagree. If you are persistent, many incremental improvements can lead to a two or three times performance increase.
One problem I have had multiple times that is particularly maddening is that I make a change and the performance seems to improve consistently by say 20%. The next day, the gain is lost. Now it's possible I made what I thought was an innocuous change to the code and then completely forgot about it. But I'm wondering if it's possible something else is going on. Like maybe GCC doesn't yield a 100% deterministic output as I believe it does. Or maybe it's something simpler, like the OS moved my process to a busier core.
I have considered the following, but I don't know if any of these ideas are feasible or make any sense. If yes, I would like explicit instructions on how to implement a solution. The goal is to minimize the variance of the runtime so I can meaningfully compare different versions of optimized code.
Dedicate a core of my processor to run only my routine.
Direct control over the cache(s) (load it up or clear it out).
Ensuring my dll or executable always loads to the same place in memory. My thinking here is that maybe the set-associativity of the cache interacts with the code/data location in RAM to alter performance on each run.
Some kind of cycle accurate emulator tool (not commercial).
Is it possible to have a degree of control over context switches? Or does it even matter? My thinking is the timing of the context switches is causing variability, maybe by causing the pipeline to be flushed at an inopportune time.
In the past I have had success on RISC architectures by counting instructions in the assembly listing. This only works, of course, if the number of instructions is small. Some compilers (like TI's Code Composer for the C67x) will give you a detailed analysis of how it's keeping the ALU busy.
I haven't found the assembly listings produced by GCC/GAS to be particularly informative. With full optimization on, code is moved all over the place. There can be multiple location directives for a single block of code dispersed about the assembly listing. Further, even if I could understand how the assembly maps back into my original code, I'm not sure there's much correlation between instruction count and performance on a modern x86 machine anyway.
I made a weak attempt at using gcov for line-by-line profiling, but due to an incompatibility between the version of GCC I built and the MinGW compiler, it wouldn't work.
One last thing you can do is average over many, many trial runs, but that takes forever.
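For what it's worth, here is a minimal sketch of the rdtsc-around-the-function approach I mentioned, combined with many repetitions, reporting the minimum rather than the mean (the minimum is usually the least noisy statistic for this kind of micro-benchmark). kernel_under_test() is a stand-in, not my actual code; this assumes GCC/Clang on x86 or x86-64:

#include <stdio.h>
#include <stdint.h>
#include <x86intrin.h>   /* __rdtsc with GCC/Clang on x86 */

static volatile double sink;

static void kernel_under_test(void)   /* stand-in for the real numerical kernel */
{
    double acc = 0.0;
    for (int i = 0; i < 10000; i++)
        acc += i * 0.5;
    sink = acc;
}

int main(void)
{
    uint64_t best = UINT64_MAX;
    for (int run = 0; run < 1000; run++) {
        uint64_t t0 = __rdtsc();
        kernel_under_test();
        uint64_t t1 = __rdtsc();
        if (t1 - t0 < best)
            best = t1 - t0;
    }
    /* the minimum over many runs filters out most of the OS/cache noise */
    printf("best: %llu cycles\n", (unsigned long long)best);
    return 0;
}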
EDIT (RE: Call Stack Sampling)
The first question I have is: practically, how do I do this? In one of your PowerPoint slides, you showed using Visual Studio to pause the program. What I have is a DLL compiled by GCC with full optimizations in Cygwin. This is then called by a mex DLL compiled by Matlab using the VS2013 compiler.
The reason I use Matlab is because I can easily experiment with different parameters and visualize the results without having to write or compile any low level code. Further, I can compare my optimized DLL to the high level Matlab code to ensure my optimizations have not broken anything.
The reason I use GCC is that I have a lot more experience with it than with Microsoft's compiler. I'm familiar with many flags and extensions. Further, Microsoft has been reluctant, at least in the past, to maintain and update the native C compiler (C99). Finally, I've seen GCC kick the pants off commercial compilers, and I've looked at the assembly listing to see how it's actually done. So I have some intuition of how the compiler actually thinks.
Now, with regards to making guesses about what to fix. This isn't really the issue; it's more like making guesses about how to fix it. In this example, as is often the case in numerical algorithms, there is really no I/O (excluding memory). There are no function calls. There's virtually no abstraction at all. It's like I'm sitting on top of a piece of saran wrap. I can see the computer architecture below, and there's really nothing in-between. If I re-rolled up all the loops, I could probably fit the code on about one page or so, and I could almost count the resultant assembly instructions. Then I could do a rough comparison to the theoretical number of operations a single core is capable of doing to see how close to optimal I am. The trouble then is I lose the auto-vectorization and instruction level parallelization I got from unrolling. Unrolled, the assembly listing is too long to analyze in this way.
The point is that there really isn't much to this code. However, due to the incredible complexity of the compiler and modern computer architecture, there is quite a bit of optimization to be had even at this level. But I don't know how small changes are going to affect the output of the compiled code. Let me give a couple of examples.
This first one is somewhat vague, but I'm sure I've seen it happen a few times. You make a small change and get a 10% improvement. You make another small change and get another 10% improvement. You undo the first change and get another 10% improvement. Huh? Compiler optimizations are neither linear nor monotonic. It's possible the second change required an additional register, which broke the first change by forcing the compiler to alter its register allocation algorithm. Maybe the second optimization somehow occluded the compiler's ability to do optimizations, which was fixed by undoing the first optimization. Who knows? Unless the compiler is introspective enough to dump its full analysis at every level of abstraction, you'll never really know how you ended up with the final assembly.
Here is a more specific example which happened to me recently. I was hand coding AVX intrinsics to speed up a filter operation. I thought I could unroll the outer loop to increase instruction level parallelism. So I did, and the result was that the code was twice as slow. What happened was there were not enough 256 bit registers to go around. So the compiler was temporarily saving results on the stack, which killed performance.
As I was alluding to in this post, which you commented on, it's best to tell the compiler what you want, but unfortunately, you often have no choice and are forced to hand tweak optimizations, usually via guess and check.
So I guess my question would be, in these scenarios (the code is effectively small until unrolled, each incremental performance change is small, and you're working at a very low level of abstraction), would it be better to have "precision of timing" or is call stack sampling better at telling me which code is superior?
I faced a similar problem some time ago, but that was on Linux, which made it easier to tweak. Basically the noise introduced by the OS (so-called "OS jitter") was as big as 5-10% in SPEC2000 tests (I can imagine it's much higher on Windows due to the much bigger amount of bloatware).
I was able to bring the deviation to below 1% by a combination of the following:
disable dynamic frequency scaling (better do this both in BIOS and in Linux kernel as not all kernel versions do this reliably)
disable memory prefetching and other fancy settings like "Turbo boost", etc. (BIOS, again)
disable hyperthreading
enable high-performance process scheduler in kernel
bind the process to a core to prevent thread migration (use core 0 - for some reason it was more reliable on my kernel, go figure); see the sketch after this list
boot to single-user mode (in which no services are running) - this isn't as easy in modern systemd-based distros
disable ASLR
disable network
drop OS pagecache
There may be more to it but 1% noise was good enough for me.
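For the core-binding item above, here is a minimal sketch of pinning the current process to core 0 from inside the program itself (Linux-specific; taskset from the shell achieves the same thing):

#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>

int main(void)
{
    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(0, &set);               /* core 0, as suggested above */
    if (sched_setaffinity(0, sizeof(set), &set) != 0) {
        perror("sched_setaffinity");
        return 1;
    }
    /* ... run the benchmark here, now bound to a single core ... */
    puts("pinned to core 0");
    return 0;
}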
I might put detailed instructions to github later today if you need them.
-- EDIT --
I've published my benchmarking script and instructions here.
Am I right that what you're doing is making an educated guess of what to fix, fixing it, and then trying to measure to see if it made any difference?
I do it a different way, which works especially well as the code gets large.
Rather than guess (which I certainly can) I let the program tell me how the time is spent, by using this method.
If the method tells me that roughly 30% is spent doing such-and-so, I can concentrate on finding a better way to do that.
Then I can run it and just time it.
I don't need a lot of precision.
If it's better, that's great.
If it's worse, I can undo the change.
If it's about the same, I can say "Oh well, maybe it didn't save much, but let's do it all again to find another problem."
I need not worry.
If there's a way to speed up the program, this will pinpoint it.
And often the problem is not just a simple statement like "line or routine X spends Y% of the time", but "the reason it's doing that is Z in certain cases" and the actual fix may be elsewhere.
After fixing it, the process can be done again, because a different problem, which was small before, is now larger (as a percent, because the total has been reduced by fixing the first problem).
Repetition is the key, because each speedup factor multiplies all the previous, like compound interest.
When the program no longer points out things I can fix, I can be sure it is nearly optimal, or at least nobody else is likely to beat it.
And at no point in this process did I need to measure the time with much precision.
Afterwards, if I want to brag about it in a powerpoint, maybe I'll do multiple timings to get smaller standard error, but even then, what people really care about is the overall speedup factor, not the precision.
I have code that works fine with -O1 optimization, but that crashes if I don't optimize the code. The last lines that execute are the following:
OSCCTRL_CRITICAL_SECTION_ENTER();
((Oscctrl *)hw)->DFLLCTRL.reg = data;
If I put a breakpoint on this last line and then step to the next instruction, the debugger loses track of the execution pointer.
This code is called as part of the chip initialization, which is the following succession of functions:
void _init_chip(void)
{
    hri_nvmctrl_set_CTRLB_RWS_bf(NVMCTRL, CONF_NVM_WAIT_STATE);
    _set_performance_level(2);
    OSC32KCTRL->RTCCTRL.bit.RTCSEL = 0x4;
    _osc32kctrl_init_sources();
    _oscctrl_init_sources();
    _mclk_init();
    _gclk_init_generators();
    _oscctrl_init_referenced_generators();
}
The buggy line is reached from the _oscctrl_init_referenced_generators() call.
I would like to know the differences between optimized and non-optimized code, and whether you know of any issues specific to non-optimized embedded code.
I am developing on a SAML21J18B MCU, which embeds a Cortex-M0+ CPU.
I'm going in a different direction than the other answer and the comments. Looking at your code, it looks like you're playing with the oscillator control, so I'm thinking that you are not using the correct process for configuring or adjusting the oscillator.
Depending on what you are trying to do, you may need to switch to a different clock before adjusting oscillator parameters, and by breaking and stepping, you may be losing your clock. When you don't optimize, there are probably some extra instructions that are causing the same result.
Consult the part's reference manual and make sure you're doing everything correctly. For this line of thinking, though, your question needs more of the code in that area and the model of microcontroller (not just the core type).
The most obvious effect of optimizations will be on the debugger's ability to display the execution state. The act of debugging can interfere with program execution, and specifically for this chip certain oscillator configurations can cause problems.
The debugger is probably not your problem, however. If you step into _oscctrl_init_referenced_generators() you will likely find that one of your configured oscillators is not starting and that the code is waiting for the DFLL or FDPLL to obtain a stable frequency lock. There can be a variety of reasons for this. Check that the upstream oscillators are configured and running.
In short, the difference is that optimization, depending on its type, may simplify some code constructs as well as change the location of data in memory. In most cases such behavior signals that the code design is not sound. The most typical causes are uninitialized variables, dangling pointers, out-of-bounds accesses or similar issues. You should therefore avoid code constructs that depend on assumptions which optimization might invalidate. Depending on the compiler and optimization level, the use of volatile might also help in some cases.
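As a small illustration of the volatile point (a sketch, not taken from the question's code): a flag written by an interrupt handler and polled in a loop is a classic construct that "works" at -O0 but breaks once the optimizer hoists the read out of the loop, unless the flag is declared volatile:

#include <stdbool.h>

/* Without volatile, the optimizer may read data_ready once and spin
   forever, because nothing in the loop body changes it. With volatile,
   every iteration re-reads the flag from memory. */
volatile bool data_ready = false;

void my_interrupt_handler(void)     /* hypothetical ISR */
{
    data_ready = true;
}

void wait_for_data(void)
{
    while (!data_ready)
        ;                           /* busy-wait until the ISR fires */
}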
Also, if you perform at least a thorough code review plus static code analysis, and ensure there are no compiler warnings, the behavior should remain the same regardless of optimization.
I am working on a (quite large) existing single-threaded C application. In this context I modified the application to perform a very small amount of additional work, consisting of incrementing a counter each time we call a particular function (this function is called ~80,000 times). The application is compiled on Ubuntu 12.04 running a 64-bit Linux kernel 3.2.0-31-generic, with the -O3 option.
Surprisingly, the instrumented version of the code runs faster, and I am investigating why. I measure execution time with clock_gettime(CLOCK_PROCESS_CPUTIME_ID), and to get representative results I am reporting an average execution time over 100 runs. Moreover, to avoid interference from the outside world, I tried as much as possible to launch the application on a system without any other applications running (on a side note, because CLOCK_PROCESS_CPUTIME_ID returns process time and not wall clock time, other applications "should" in theory only affect the cache and not the process execution time directly).
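For reference, here is a minimal sketch of the kind of instrumentation and measurement I describe (the function name and counter are placeholders, not the actual application code):

#define _POSIX_C_SOURCE 199309L
#include <stdio.h>
#include <time.h>

static volatile unsigned long call_count = 0;  /* volatile only so this skeleton
                                                  is not optimized away entirely;
                                                  the real function does real work */

static void special_function(void)
{
    call_count++;                    /* the extra work in question */
    /* ... the function's real work goes here ... */
}

int main(void)
{
    struct timespec start, end;
    clock_gettime(CLOCK_PROCESS_CPUTIME_ID, &start);
    for (int i = 0; i < 80000; i++)  /* ~80,000 calls, as described */
        special_function();
    clock_gettime(CLOCK_PROCESS_CPUTIME_ID, &end);
    double secs = (end.tv_sec - start.tv_sec)
                + (end.tv_nsec - start.tv_nsec) / 1e9;
    printf("calls: %lu, cpu time: %f s\n", call_count, secs);
    return 0;
}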
I was suspecting "instruction cache effects": maybe the instrumented code, which is a little bit larger (a few bytes), fits differently and better in the cache. Is this hypothesis conceivable? I tried to do some cache investigation with valgrind --tool=cachegrind, but unfortunately the instrumented version has (as seems logical) more cache misses than the initial version.
Any hints on this subject and ideas that may help to find out why the instrumented code runs faster are welcome (some GCC optimizations enabled in one case and not in the other? And why?).
Since there are not many details in the question, I can only recommend some factors to consider while investigating the problem.
Even a very small amount of additional work (such as incrementing a counter) might alter the compiler's decision on whether to apply some optimizations or not. The compiler does not always have enough information to make the perfect choice. It may try to optimize for speed where the bottleneck is code size. It may try to auto-vectorize computations when there is not much data to process. The compiler may not know what kind of data is to be processed, or the exact model of CPU that will execute the code.
Incrementing a counter may increase the size of some loop and prevent loop unrolling. This may decrease code size (and improve code locality, which is good for instruction or microcode caches or for the loop buffer and allows the CPU to fetch/decode instructions quickly).
Incrementing a counter may increase the size of some function and prevent inlining. This also may decrease code size.
Incrementing a counter may prevent auto-vectorization, which again may decrease code size.
Even if this change does not affect compiler optimization, it may alter the way the code is executed by the CPU.
If you insert counter-incrementing code in a place full of branch targets, this may make branch targets less dense and improve branch prediction.
If you insert counter-incrementing code in front of some particular branch target, this may make the branch target's address better aligned and make code fetch faster.
If you place counter-incrementing code after some data is written but before the same data is loaded again (and store-to-load forwarding did not work for some reason), the load operation may be completed earlier.
Insertion of counter-incrementing code may prevent two conflicting load attempts to the same bank in L1 data cache.
Insertion of counter-incrementing code may alter some CPU scheduler decision and make some execution port available just in time for some performance-critical instruction.
To investigate the effects of compiler optimization, you can compare the generated assembler code before and after the addition of the counter-incrementing code.
To investigate CPU effects, use a profiler that allows you to inspect processor performance counters.
Just guessing from my experience with embedded compilers: optimization tools in compilers look for recursive tasks. Perhaps the additional code forced the compiler to see something more recursive and it structured the machine code differently. Compilers do some weird things for optimization. In some languages (Perl, I think?) a "not not" conditional is faster to execute than a "true" conditional. Does your debugging tool allow you to single-step through a code/assembly comparison? That could add some insight into what the compiler decided to do with the extra tasks.
In C:
Let's say function myfunction() has 50 lines of code in which other smaller functions also get called. Which of the following would be more efficient?
void myfunction(long *a, long *b);

long a, b;
int i;
for(i = 0; i < 8; i++)
    myfunction(&a, &b);
or
myfunction(&a, &b);
myfunction(&a, &b);
myfunction(&a, &b);
myfunction(&a, &b);
myfunction(&a, &b);
myfunction(&a, &b);
myfunction(&a, &b);
myfunction(&a, &b);
any help would be appreciated.
That's premature optimization, you just shouldn't care...
Now, from a code maintenance point of view the first form (with the loop) is definitely better.
From a run-time point of view, if the function is inline and defined in the same compilation unit, and with a compiler that does not unroll the loop itself, and if the code is already in the instruction cache (I don't know about moon phases, but I still believe it shouldn't have any noticeable effect), the second one may be marginally faster.
As you can see, there are many conditions for it to be faster, so you shouldn't do that. There are probably many other parameters to optimize in your program that would have a much greater effect on code speed than this one. Any change that affects the algorithmic complexity of the program will have a much greater effect. More generally speaking, any code change that does not affect algorithmic complexity is probably premature optimization.
If you really want to be sure, measure. On x86 you can use the kind of trick I used in this question to get a fairly accurate measurement. The trick is to read a processor register that counts the number of cycles spent. The question also illustrates how code optimization questions can become tricky, even for very simple problems.
I'd assume the compiler will translate the first variant into the second.
The first. Any half-decent compiler will optimize that for you. It's easier to read/understand and easier to write.
Secondly, write first, optimize second. Even if your compiler were completely brain-dead, unrolling by hand would at best save you a few nanoseconds or milliseconds on a modern CPU. Chances are there are bigger bottlenecks in your application that could/should be optimized first.
It depends on so many things your best bet is to do it both ways and measure.
It would take less (of your) time to write out the for loop. I'd also say it's clearer to read with the loop. It would probably save a few instructions to write them out, but with modern processors and compilers it may amount to exactly the same result...
The first. It is easier to read.
First, are you sure you have a code execution performance problem? If you don't, then you're talking about making your code less readable and writable for no reason at all.
Second, have you profiled your program to see if this is in a place where it will take a significant amount of time? Humans are very bad at guessing the hot spots in programs, and without profiling you're likely to spend time and effort fiddling with things that don't make a difference.
Third, are you going to check the assembler code produced to see if there's a difference? If you're using an optimizing compiler with optimizations on, it's likely to produce what it sees fit for either. If you aren't, and you have a performance problem, get a better computer or turn on more optimizations.
Fourth, if there is a difference, are you going to test both ways to see which is better? On at least a representative sample of the systems your users will be running on?
And, to give you my best answer to which is more efficient: it depends. If they're in fact compiled to different code, the unrolled version might be faster because it doesn't have the loop overhead (which includes a conditional branch), and the rolled-up version might be faster because it's shorter code and will work better in the instruction cache. The usual wisdom was to unroll, but I once sped up a long-running section by rolling the execution up as tightly as I could.
On modern processors the size of the compiled code becomes very important. If this loop could run entirely from the processor's cache it would be the fastest solution. As n8wrl said, test it yourself.
I created a short test for this, with surprising results. At least for me, anyway, I would've thought it was the other way round.
So, I wrote two versions of a program iterating over a function nothing() that did nothing interesting (an increment of a variable).
The first used proper loops (a million iterations of 1000 iterations, two nested fors), the second one did a million iterations of 1000 consecutive calls to nothing().
I used the time command to measure. The version with the proper loop took about 3.5 seconds on average, and the consecutive calling version took about 2.5 seconds on average.
I then tried to compile with optimization flags, but gcc detected that the program did essentially nothing and execution was instantaneous on both versions =P. Didn't bother fixing that.
Edit: if you were actually thinking of writing 8 consecutive calls in your code, please don't. Remember the famous quote: "Programs must be written for people to read, and only incidentally for machines to execute.".
Also note that my tests did nothing except nothing() (=P) and are no proper benchmarks to consider in any actual program.
Loop unrolling can make execution faster (otherwise Duff's Device wouldn't have been invented), but that's a function of so many variables (processor, cache size, compiler settings, what myfunction is actually doing, etc.) that you can't rely on it always being true, or on whatever improvement there is being worth the cost in readability and maintainability. The only way to know for sure if it makes a difference for your particular platform is to code up both versions and profile them.
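For the curious, Duff's Device looks roughly like this (a sketch of the classic form, copying memory to memory rather than into a device register; count must be greater than zero):

/* unrolls the copy loop 8x by jumping into the middle of the loop body */
void duff_copy(short *to, const short *from, int count)
{
    int n = (count + 7) / 8;
    switch (count % 8) {
    case 0: do { *to++ = *from++;
    case 7:      *to++ = *from++;
    case 6:      *to++ = *from++;
    case 5:      *to++ = *from++;
    case 4:      *to++ = *from++;
    case 3:      *to++ = *from++;
    case 2:      *to++ = *from++;
    case 1:      *to++ = *from++;
            } while (--n > 0);
    }
}

With a modern optimizing compiler, a plain loop or a call to memcpy is usually at least as fast and far more readable.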
Depending on what myfunction actually does, the difference could be so far down in the noise as to be undetectable.
This kind of micro-optimization should only be done if all of the following are true:
You're failing to meet a hard performance requirement;
You've already picked the proper algorithm and data structure for the problem at hand (e.g., in the average case a poorly optimized Quicksort will beat the pants off of a highly optimized bubble sort, and in the worst case they'll be equally bad);
You're compiling with the highest level of optimization that the compiler offers;
How much does myFunction(long *a, long *b) do?
If it does much more than *a = *b + 1; the cost of calling the function can be so small compared to what goes on inside the function that you are really focussing in the wrong place.
On the other hand, in the overall picture of your application program, what percent of time is spent in these 8 calls? If it's not very much, then it won't make much difference no matter how tightly you optimize it.
As others say, profile, but that's not necessarily as simple as it sounds. Here's the method I and some others use.
I will break this question down into sub-questions. I am not sure whether I should ask them separately or in one question, so I will just stick to one SO question.
What are generally the steps to analyze and improve performance of C applications?
Do these steps change if I am developing for an embedded system?
What tools are out there which can help me?
Recently I have been given a task to improve the performance of our product on ARM11 platform. I am relatively new to this field of embedded systems and need gurus here on SO to help me out.
Simply changing compilers can improve your C performance for the same source code by many times over. GCC has not necessarily gotten better for performance over the years; for some programs gcc 3.x produces much tighter code than 4.x. Back when I had access to the tools, ARM's compiler produced significantly better code than gcc, as much as 3 or 4 times faster. LLVM has caught up to GCC 4.x and I suspect will pass gcc in terms of performance and overall use for cross-compiling embedded code. Try different versions of gcc, 3.x and 4.x, if you are using gcc. Metaware's compiler and ARM's ADT ran circles around gcc 3.x; gcc 3.x will give gcc 4.x a run for its money with ARM code, while for Thumb code gcc 4.x is better, and for Thumb2 (which doesn't apply to you) gcc 4.x is also better. Remember, I have not said a word about changing a single line of code (yet).
LLVM is capable of whole-program optimization, in addition to having far more tuning knobs than gcc. Despite that, the code it generated (version 2.7) was only just catching up to the current gcc 4.x in terms of performance for the few programs I tried. And I didn't try the n-factorial number of optimization combinations (optimize at the compile step, different options for each file, or combine two or three or all files and optimize those bundles; my theory is to do no optimization on the C-to-bc steps, link all the bc together, then do a single optimization pass on the whole program, and then allow the default optimization when llc takes it to the target).
By the same token, simply knowing your compiler and its optimizations can greatly improve the performance of the code without having to change any of it. You have an ARM11; are you compiling for ARM11 or for generic ARM? You can gain a few to a dozen percent by telling the compiler specifically which architecture/family to target (armv6, for example) instead of the generic armv4 (ARM7) that is often chosen as the default. Knowing to use -O2, or -O3 if you are brave, helps as well.
It is often not the case, but switching to Thumb mode can improve performance on specific platforms. It doesn't apply to you, but the Game Boy Advance is a perfect example, loaded with non-zero-wait-state 16-bit busses. Thumb has a handful of percent overhead because it takes more instructions to do the same thing, but by improving the fetch times and taking advantage of some of the sequential read features of the GBA, Thumb code can run significantly faster than ARM code compiled from the same source.
Having an ARM11, you probably have an L1 and maybe an L2 cache. Are they on? Are they configured? Do you have an MMU, and is your heavily used memory cached? Or are you running zero-wait-state memory, where you don't need a cache and should turn it off? In addition to not realizing that you can take the same source code and make it run many times faster by changing compilers or options, folks often don't realize that when you use a cache, simply adding a single nop (or up to a few nops) in your startup code (as a trick to adjust where code lands in memory by one, two, or a few words) can change your code's execution speed by as much as 10 to 20 percent. Where those cache line reads hit in heavily used functions/loops makes a big difference. Even saving one cache line read by adjusting where the code lands is noticeable (cutting it from 3 to 2, or from 2 to 1, for example).
Knowing your architecture, both the processor and your memory environment, is where any tuning would start. Most C libraries, if you are high level enough to use one (I often don't use a C library, as I run without an operating system and with very limited resources), are tuned both in their C code and sometimes with added assembler to make bottleneck routines like memcpy much faster. If your programs operate on aligned 32-bit or, even better, 64-bit addresses, and you adjust every structure/array/memcpy to be an integral multiple of 32 or 64 bits, even if it means using a handful of bytes more memory, you will see noticeable improvements (if your code uses structs or copies data in other ways). In addition to getting your structures (if you use them; I certainly don't with embedded code) size-aligned, even if you waste memory, and getting elements aligned, consider using 32-bit integers for every element instead of bytes or halfwords. Depending on your memory system this can help (it can hurt too, by the way). As with the GBA example above, for specific functions that, either by profiling or by intuition, you know are not being implemented in a manner that takes advantage of your processor, platform or libraries, you may want to turn to assembler, either from scratch or by compiling from C initially, then disassembling and hand-tuning. Memcpy is a good example: you may know your system's memory performance and may choose to create your own memcpy specifically for aligned data, copying 64 or 128 or more bits per instruction.
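As a sketch of the aligned-copy idea (assuming both pointers are 4-byte aligned and the length is a multiple of four; a real version would handle the leftover bytes):

#include <stdint.h>
#include <stddef.h>

/* copies len bytes, 32 bits at a time; the caller guarantees that src
   and dst are 4-byte aligned and that len is a multiple of 4 */
void copy32(uint32_t *dst, const uint32_t *src, size_t len)
{
    size_t words = len / 4;
    while (words--)
        *dst++ = *src++;
}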
Likewise, mixing global and local variables can make a noticeable performance difference. Traditionally folks are told never to use globals, but in embedded work this isn't necessarily true; it depends on how deeply embedded you are and how much you care about tuning, speed and other factors. This is a touchy subject and I may get flamed for it, so I will leave it at that.
The compiler has to burn and evict registers in order to make function calls, and if you use local variables a stack frame may be required, so function calls are expensive. But at the same time, depending on the code within a function that has grown in size by avoiding function calls, you may create the very problem you were trying to avoid: evicting registers in order to re-use them. Even a single line of C code can make the difference between all the variables in a function fitting in registers and having to start evicting a bunch of them. For functions or segments of code where you know you need some performance gain, compile and disassemble (and look at register usage and how often the code fetches from or writes to memory). You can and will find places where you need to take a well-used loop and make it its own function, even though the function call has a penalty, because by doing that the compiler can better optimize the loop, not evict/reuse registers, and you get an overall net gain. Even a single extra instruction in a loop that goes around hundreds of times is a measurable performance hit.
Hopefully you already know to absolutely not compile for debug; turn all of the compile-for-debug options off. You may already know that code compiled for debug that runs without bugs doesn't mean it is debugged; compiling for debug and using debuggers can hide bugs, leaving them as time bombs in your code for your final compile for release. Learn to always compile for release and to test with the release version, both for performance and for finding bugs in your code.
Most instruction sets do not have a divide instruction. Avoid using divides or modulo in your code as much as humanly possible; they are performance killers. Naturally this is not the case for powers of two: to save the compiler, and to mentally avoid divides and modulos, try to use shifts and ands. Multiplies are easier and more often found in instruction sets, but are still costly. This is a good case for writing assembler to do your multiplies instead of letting the C compiler do it. The ARM multiply is 32 bit * 32 bit = 32 bit, so to do accurate math without overflowing there has to be extra C code wrapped around the multiply; if you already know you won't overflow, burn the registers for a function call and do the multiply in assembler (for the ARM).
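For the powers-of-two case the replacements look like this (a small sketch; note it is only a drop-in replacement for unsigned values, since signed division of negative numbers rounds differently from a plain shift):

unsigned int div8(unsigned int x)
{
    return x >> 3;      /* same as x / 8 for unsigned x */
}

unsigned int mod8(unsigned int x)
{
    return x & 7;       /* same as x % 8 for unsigned x */
}

A decent compiler already does this for constant powers of two, so the bigger win is choosing power-of-two sizes in the first place so that the divide never appears.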
Likewise, most instruction sets do not have a floating point unit; with yours you might have one, but even so, avoid float if at all possible. If you have to use float, that is a whole other Pandora's box of performance issues. Most folks don't see the performance problems with code as simple as this:
float a,b;
...
a = b * 7.0;  /* 7.0 is a double constant, so b is promoted and the multiply is done in double precision */
The rest of the problem is not understanding floating point accuracy, and how good or bad the C libraries are at simply trying to get your constants into floating point form. Again, float is a whole other long discussion on performance problems.
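As a minimal illustration of the constant issue (assuming single precision is what you actually want here): writing the constant as 7.0f keeps the arithmetic in single precision, which matters a lot on targets without double-precision hardware.

float scale7(float b)
{
    /* 7.0 is a double constant: b would be promoted to double, multiplied
       in double precision, then converted back to float. 7.0f keeps the
       whole operation in single precision. */
    return b * 7.0f;
}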
I am a product of Michael Abrash (I actually have a print copy of Zen of Assembly Language) and the bottom line is: time your code. Come up with an accurate way to time the code. You may think you know where the bottlenecks are, and you may think you know your architecture, but by trying different things, even ones you think are wrong, and timing them, you may find (and eventually have to figure out) the error in your thinking. Adding nops to start.S as a final tuning step is a good example of this: all the other work you have done for performance can be instantly erased by not having a good alignment with the cache. This also means re-arranging functions within your source code so that they land in different places in the binary image. I have seen 10 to 20 percent swings of speed increase and decrease as a result of cache line alignments.
Code Review:
What are good code review techniques?
Static and dynamic analysis of the code.
Tools for static analysis: Sparrow, Prevent, Klocwork
Tools for dynamic analysis: Valgrind, Purify
Gprof allows you to learn where your program spent its time and which functions called which other functions while it was executing.
The steps are the same.
Apart from what is listed in point 1, there are tools like Memcheck, etc.
There is a big list here, based on platform.
Phew!! Quite a big question!
What are generally the steps to analyze and improve performance of C applications?
As well as the other static code analysers mentioned here, there is a fairly cheap one called PC-Lint which has been around for ages. It sometimes throws up lots of errors and warnings for one mistake, but by the end of it you'll be happy and know waaaaay more about C/C++ because of it.
With all code analysers, some of the issues may be more structural to the code, so it's best to start analysing from day 1 of coding; running analysis on old software may swamp you with issues which may take a while to untangle, so it's best to keep it clean from the beginning.
But code analysers will not catch all logical errors, i.e. cases where the code doesn't do what you want it to do! These are best caught by code reviews first, then testing. Performance is often improved by trying to keep the algorithms as simple as possible, keeping instructions in loops tight, possibly unrolling loops (your compiler optimisations may do this), and using fast caches when accessing data which is slow to get.
Code reviews can raise a lot of issues from lots of other people's eyes looking at the code. Don't get too many people; try to get 3 other people if possible. Sometimes junior developers ask the most insightful questions, like "why are we doing this?".
Testing can be roughly split into two sections, automated and manual. Automated testing requires effort producing test handlers for functions/units, but once written they can be run again and again very quickly. Manual testing requires planning, self-discipline to perform all the tests as required, imagination to think up scenarios that may impair performance, and you have to be observant (you may have passed the test, but the 'scope trace has a bit of an anomaly before/after the test).
"Do these steps change if I am
developing for an embedded system?"
Performance analysis can be different on embedded systems compared to application systems; with the very broad brush that "embedded" now covers, it depends how hardware-centric you are. It can be done using profilers; if you want a cheaper and more cheerful method, then use test output pins to measure sections of code, or measure them with breakpoints on simulators that come with the development environment.
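A typical shape of the test-pin technique, as a sketch only: TEST_PIN_HIGH/TEST_PIN_LOW are hypothetical macros that you would map onto your target's GPIO set/clear registers, and the pulse width is then measured with an oscilloscope or logic analyser:

/* hypothetical macros: map these onto your target's GPIO set/clear registers */
#define TEST_PIN_HIGH()  do { /* e.g. GPIO_SET = TEST_PIN_MASK; */ } while (0)
#define TEST_PIN_LOW()   do { /* e.g. GPIO_CLR = TEST_PIN_MASK; */ } while (0)

static volatile int work;

static void process_samples(void)       /* placeholder for the code under test */
{
    for (int i = 0; i < 1000; i++)
        work += i;
}

void timed_section(void)
{
    TEST_PIN_HIGH();                    /* scope sees the pulse start */
    process_samples();
    TEST_PIN_LOW();                     /* pulse width = execution time of the section */
}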
Make sure that not just a typical task length is measured but also the maximum, as that is where one task may start impeding other tasks so that your scheduled tasks are not completed in time.
What tools are out there which can help me?
Simulators on the IDEs, static analysis tools, dynamic analysis tools, but most of all you and other humans getting the requirements right, decent reviewing (of code and testing) and thorough testing (automated and manual).
Good luck!
My experiences.
Function calls are slow; eliminate them with macros or inlined functions. Look at the disassembly listing to see.
If using GCC, mark optimized sections with #pragma GCC optimize("O3") or compile them separately.
Play with different combinations of applying the inline attribute (basically find a balance between size and speed).
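A small sketch of those points together (GCC-specific; the pragma and attribute are real GCC extensions, the function itself is just a placeholder):

/* compile the functions below at -O3, regardless of the project-wide setting */
#pragma GCC optimize("O3")

/* force inlining of a small hot helper instead of paying the call overhead */
__attribute__((always_inline))
static inline int scale(int x)
{
    return (x << 2) + x;          /* x * 5 without a multiply */
}

int sum_scaled(const int *data, int n)
{
    int total = 0;
    for (int i = 0; i < n; i++)
        total += scale(data[i]);
    return total;
}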
It is a difficult question to answer briefly, since various techniques have been proposed, such as flowcharts and state diagrams, so you can take a look at some titles:
ARM System-on-Chip Architecture, 2nd Edition -- Steve Furber
ARM System Developer's Guide: Designing and Optimizing System Software -- Andrew N. Sloss, Dominic Symes, Chris Wright & John Rayfield
The Definitive Guide to the ARM Cortex-M3 -- Joseph Yiu
C Programming for Embedded Systems -- Kirk Zurell
Embedded C -- Michael J. Pont
Programming Embedded Systems in C and C++ -- Michael Barr
An Embedded Software Primer -- David E. Simon
Embedded Microprocessor Systems, 3rd Edition -- Stuart Ball
Global Specification and Validation of Embedded Systems: Integrating Heterogeneous Components -- G. Nicolescu & A. A. Jerraya
Embedded Systems: Modeling, Technology and Applications -- Gunter Hommel & Sheng Huanye
Embedded Systems and Computer Architecture -- Graham Wilson
Designing Embedded Hardware -- John Catsoulis
You have to use a profiler. It will help you identify your application's bottleneck(s). Then focus on improving the functions you spend the most time in and the ones you call the most. Repeat this procedure until you're satisfied with your application performance.
No, they don't.
Depending on the platform you're developing on:
Windows: AMD Code Analyst, VTune, Sleepy
Linux: valgrind / callgrind / cachegrind
Mac: the Xcode profiler is quite good.
Try to find a profiler for the architecture you actually work on.