I have the following loop that I am running on an ARM processor.
// pin here is pointer to some part of an array
for (i = 0; i < v->numelements; i++)
{
    pe = pptr[i];
    peParent = pe->parent;
    SPHERE *ps = (SPHERE *)(pe->data);
    pin[0] = FLOAT2FIX(ps->rad2);
    pin[1] = *peParent->procs->pe_intersect == &SphPeIntersect;
    fixifyVector( &pin[2], ps->center ); // Is an inline function
    pin = pin + 5;
}
Judging by the loop's slow performance, the compiler was unable to unroll it: when I unroll it manually, it becomes quite fast. I think the compiler is getting confused by the pin pointer. Can we use the restrict keyword to help the compiler here, or is restrict reserved for function parameters? In general, how can we tell the compiler to unroll the loop and not worry about the pin pointer?
To tell gcc to unroll all loops you can use the optimization flag -funroll-loops.
To unroll only a specific loop you can use:
__attribute__((optimize("unroll-loops")))
see this answer for more details.
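A minimal sketch (GCC-specific; fill_elements is a hypothetical stand-in for the function containing your loop): the attribute is applied at function scope, so every loop inside it becomes a candidate for unrolling.
/* GCC's optimize attribute enables -funroll-loops for this one function only. */
__attribute__((optimize("unroll-loops")))
void fill_elements(int *pin, int n)
{
    for (int i = 0; i < n; i++)
        pin[i] = 0; /* loop body eligible for unrolling */
}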
Edit
If the compiler cannot determine the number of iterations of the loop upon entry you will need to use -funroll-all-loops. Note that from the documentation: "Unroll all loops, even if their number of iterations is uncertain when the loop is entered. This usually makes programs run more slowly."
If you extend pptr's size by one, you can use the pld instruction.
__asm__ __volatile__("pld\t[%0]" :: "r" (pptr[i+1]));
Alternatively, you may need to pre-load the next peParent and SPHERE *ps. Loop overhead on an ARM is very small, so it is unlikely that unrolling by itself will be a significant benefit; there are no loop-invariant constants to hoist. More likely, unrolling lets the compiler's scheduler fetch data well before it is used.
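For context, a hedged sketch of where that preload might sit in the original loop (this reuses the question's variables and assumes, as noted above, that pptr has one extra valid slot at the end):
for (i = 0; i < v->numelements; i++)
{
    /* Prefetch the next element so its cache line is (hopefully)
       resident by the next iteration. */
    __asm__ __volatile__("pld\t[%0]" :: "r" (pptr[i + 1]));
    pe = pptr[i];
    /* ... rest of the original loop body ... */
}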
You have not presented all of the code, so the data dependencies are not visible. There may be other variables that would benefit from being pre-loaded. Giving a complete example would probably help everyone answer your question.
Related
Is one of these loops quicker than the other?
I've always used #2, thinking it was quicker to compare against zero than against a value, since the CMP instruction would be simpler to execute; but checking some ARM manuals, I don't see anything to confirm this. Does it depend on the instruction set and processor you're using? Is it ever true?
//#1
while (1)
{
    static uint8_t counter = 0;
    counter++;
    if (counter == 4)
    {
        counter = 0;
        //do something
    }
}

//#2
while (1)
{
    static uint8_t counter = 4;
    counter--;
    if (counter == 0)
    {
        counter = 4;
        //do something
    }
}
It's hard to tell. Focusing on a release-mode build, it largely depends on the context, and you aren't showing all of it; in particular, the missing loop break condition makes it impossible to judge.
Usually, if the number of iterations is an immediate value, the compiler will convert the loop into a fast count-down-to-zero form, as long as nothing inside the loop depends on the counter's value.
Anyway, on modern, superscalar architectures such as the Cortex-A series, a simple ALU instruction such as cmp will be well "hidden" and thus, won't cost an extra cycle most of the time.
What actually hurts performance more is the static declaration of counter, which automatically translates to memory reads and writes. Avoid this if possible.
Further, if you simply want "do something" to run every fourth iteration, if ((counter & 3) == 0) could be the better solution, since it removes the need to reset the counter (a sketch follows below). And again, it all depends on the context (the length of "do something"), which you didn't provide.
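A minimal sketch of that masking variant (poll_loop is a hypothetical wrapper; the counter is an ordinary local, per the advice above about static):
void poll_loop(void)
{
    unsigned counter = 0;         /* ordinary local, not static */
    while (1)
    {
        counter++;
        if ((counter & 3) == 0)   /* true on every fourth pass */
        {
            //do something
        }
    }
}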
As a side note, local variables are better declared as 32-bit ones unless you have a good reason to do otherwise, since anything narrower may translate to additional modulo-related instructions such as uxtb or and.
Counting the loop counter down to zero is a no-brainer, but there are many more things to consider if you want maximum performance.
Example:
for (int i = 0; i < a[index]; i++) {
    // do stuff
}
Would a[index] be read every time? If not, what happens if someone changes the value at a[index] inside the loop? I've never seen it myself, but does the compiler make such an assumption?
If the condition was instead i < val-2, would it be evaluated every time?
The compiler will normally perform this optimization only when it can see that the condition is not affected by other parts of the program. So if you change the condition's operands inside the for loop, the compiler will not apply it.
As mentioned, in your snippet the compiler has to read the array and check the condition before each iteration. You can rewrite your code as follows, so the array is read only once for the loop-condition check:
int cond = a[index];
for (int i = 0; i < cond; i++) {
    // do stuff
}
well, maybe.
A standards-compliant compiler will produce code that behaves as-if it is read every time.
If index and/or array are declared volatile, they will be re-evaluated every time.
If they are not, and the loop's content doesn't use them in a way that can be expected to modify their value, the optimiser may decide to use a cached result instead.
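A minimal sketch of the difference (the array contents are made up for illustration):
/* With volatile, the compiler must re-load a[index] on every iteration;
   without it, the bound may be hoisted into a register before the loop. */
volatile int a[4] = { 3, 3, 3, 3 };
int index = 0;

for (int i = 0; i < a[index]; i++) {
    // do stuff; something outside this code could change a[index]
}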
C does not store results of expressions in temporary variables; all expressions are evaluated in place. Note that any for loop can be changed into a while loop:
for ( def_or_expr1 ; expr2 ; expr3 ) {
    ...
}
becomes:
def_or_expr1;
while ( expr2 ) {
    ...
cont:
    expr3;
}
Update: a continue in the for loop would be the same as goto cont; in the while loop above. I.e., expr3 is evaluated on every iteration.
The compiler can basically apply any optimization it can prove does not change the program's observable behavior. Full details are beyond the scope of this answer, but in general it can (and will) optimize:
a[index] is not changed in the loop: read once before loop and keep in a temp (e.g. register).
a[index] is changed in the loop: update the temp (register) with the new value, avoiding memory access (and the index calculations).
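For example, a hand-written equivalent of the second case might look like this (t stands in for the compiler's temporary, and the decrement is a made-up placeholder for whatever the loop does to a[index]):
int t = a[index];   /* load once into a register-like temporary */
int i;
for (i = 0; i < t; i++) {
    t--;            /* placeholder for "the loop changes a[index]" */
}
a[index] = t;       /* single store back after the loop */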
For this, the compiler must assume the array is not changed outside the visible control flow. This is typically the file being compiled (with all included files). For modern systems using link time optimization (LTO), this can be the whole final program - minus dynamic libraries.
Note this is a very brief description. Actually, the C standard defines pretty clearly how a program has to be executed, and therefore what and how the compiler may optimize.
If the array is changed, for example by an interrupt handler or another thread, things become complicated. Depending on your target, you need anything from volatile to atomic operations (stdatomic.h, since C11) up to thread locks/mutexes/semaphores/etc. to control access to the shared resource.
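For the multi-threaded case, a minimal C11 sketch (bound and work are hypothetical names):
#include <stdatomic.h>

atomic_int bound;   /* another thread may store to this at any time */

void work(void)
{
    for (int i = 0; i < atomic_load(&bound); i++) {
        /* each iteration's bound check sees the most recently
           stored value, with well-defined behavior */
    }
}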
My experience with C is relatively modest, and I lack good understanding of its compiled output on modern CPUs. The context: I'm working on image processing for an Android app. I have read that branch-free machine code is preferred for inner loops, so I'd like to know whether there could be a significant performance difference between something like this:
if (p) { double for loop, computing f() }
else if (q) { double for loop, computing g() }
else { double for loop, computing h() }
Versus the less verbose version which does the condition checking within the loop:
for (int i = 0; i < xRes; i++)
{
    for (int j = 0; j < yRes; j++)
    {
        image[i][j] = p ? f() : (q ? g() : h());
    }
}
In this code, p and q are expressions like mode == 3, where mode is passed into the function and never changed within it. I have three simple questions:
(1) Would the first, more verbose version compile to more efficient code than the second version?
(2) For the second version, would performance improve if I evaluate and store the results of p and q above the loop, so I can replace the boolean expressions in the loop with variables?
(3) Should I even be worried about this, or will branch prediction (or some other optimization) ensure the boolean expressions in the loop(s) are almost never evaluated anyway?
Finally, I'd be delighted if someone can say whether the answers to these 3 questions depend on the architecture. I'm interested in the main Android NDK platforms: ARM, MIPS, x86 etc. My thanks in advance!
It looks like the question was already well-answered here: the compiler probably performs loop unswitching, removing the conditional from the loop and automatically generating 3 copies of the loop, just like stark suggested. Moreover, from comments given there and above, it seems branch prediction works very well for loops like these.
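For reference, a hand-written sketch of roughly what loop unswitching produces here (reusing p, q, f, g, h, xRes, yRes, and image from the question):
/* Each condition is tested once, outside the loops; the hot inner
   loops then contain no conditional at all. */
if (p) {
    for (int i = 0; i < xRes; i++)
        for (int j = 0; j < yRes; j++)
            image[i][j] = f();
} else if (q) {
    for (int i = 0; i < xRes; i++)
        for (int j = 0; j < yRes; j++)
            image[i][j] = g();
} else {
    for (int i = 0; i < xRes; i++)
        for (int j = 0; j < yRes; j++)
            image[i][j] = h();
}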
Is memset() more efficient than a for loop?
Considering this code:
char x[500];
memset(x,0,sizeof(x));
And this one:
char x[500];
for (int i = 0; i < 500; i++) x[i] = 0;
Which one is more efficient, and why? Is there any special hardware instruction for block-level initialization?
Most certainly, memset will be much faster than that loop. Note how you treat one character at a time, while those library functions are so optimized that they set several bytes at a time, even using MMX and SSE instructions when available.
I think the paradigmatic example of these optimizations, which usually go unnoticed, is the GNU C library's strlen function. One would think it has at least O(n) performance, but it actually has O(n/4) or O(n/8) performance depending on the architecture (yes, I know, in big-O terms those are the same, but you actually get an eighth of the time). How? Tricky, but nicely: strlen.
Well, why don't we take a look at the generated assembly code, with full optimization under VS 2010.
char x[500];
char y[500];
int i;
memset(x, 0, sizeof(x) );
003A1014 push 1F4h
003A1019 lea eax,[ebp-1F8h]
003A101F push 0
003A1021 push eax
003A1022 call memset (3A1844h)
And your loop...
char x[500];
char y[500];
int i;
for( i = 0; i < 500; ++i )
{
x[i] = 0;
00E81014 push 1F4h
00E81019 lea eax,[ebp-1F8h]
00E8101F push 0
00E81021 push eax
00E81022 call memset (0E81844h)
/* note that this is *replacing* the loop,
not being called once for each iteration. */
}
So, under this compiler, the generated code is exactly the same. memset is fast, and the compiler is smart enough to know that you are doing the same thing as calling memset once anyway, so it does it for you.
If the compiler actually left the loop as-is then it would likely be slower, as you can set more than one byte-sized block at a time (i.e., you could unroll the loop a bit, at a minimum). You can assume that memset will be at least as fast as a naive implementation such as the loop. Try it under a debug build and you will notice that the loop is not replaced.
That said, it depends on what the compiler does for you. Looking at the disassembly is always a good way to know exactly what is going on.
It really depends on the compiler and library. For older compilers or simple compilers, memset may be implemented in a library and would not perform better than a custom loop.
For nearly all compilers that are worth using, memset is an intrinsic function and the compiler will generate optimized, inline code for it.
Others have suggested profiling and comparing, but I wouldn't bother. Just use memset: the code is simple and easy to understand. Don't worry about it until your benchmarks tell you this part of the code is a performance hotspot.
The answer is 'it depends'. memset MAY be more efficient, or it may internally use a for loop. I can't think of a case where memset will be less efficient. In this case, it may turn into a more efficient loop: your loop iterates 500 times, setting one byte of the array to 0 each time. On a 64-bit machine you could loop through setting 8 bytes (a long long) at a time, which would be almost 8 times quicker, and just deal with the remaining 4 bytes (500 % 8) at the end.
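A minimal sketch of that idea (zero_by_words is a hypothetical name; this assumes the buffer is suitably aligned for 8-byte stores, and a real memset additionally handles alignment and may use wider vector stores):
#include <stddef.h>
#include <stdint.h>

void zero_by_words(char *buf, size_t len)
{
    size_t i = 0;
    for (; i + 8 <= len; i += 8)
        *(uint64_t *)(buf + i) = 0;   /* one 8-byte store */
    for (; i < len; i++)
        buf[i] = 0;                   /* remaining tail bytes */
}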
EDIT:
in fact, this is what memset does in glibc:
http://repo.or.cz/w/glibc.git/blob/HEAD:/string/memset.c
As Michael pointed out, in certain cases (where the array length is known at compile time), the C compiler can inline memset, getting rid of the overhead of the function call. Glibc also has assembly optimized versions of memset for most major platforms, like amd64:
http://repo.or.cz/w/glibc.git/blob/HEAD:/sysdeps/x86_64/memset.S
Good compilers will recognize the for loop and replace it with either an optimal inline sequence or a call to memset. They will also replace memset with an optimal inline sequence when the buffer size is small.
In practice, with an optimizing compiler the generated code (and therefore performance) will be identical.
Agree with the above. It depends. But for sure memset is faster than, or equal to, the for loop. If you are uncertain about your environment or too lazy to test, take the safe route and go with memset.
Other techniques, like loop unrolling, which reduce the number of loop iterations, can also be used. The code of memset() can mimic the famous Duff's device:
void *duff_memset(char *to, int c, size_t count)
{
    size_t n;
    char *p = to;

    if (count == 0)   /* guard: the do/while below would otherwise
                         write at least one byte */
        return to;

    n = (count + 7) / 8;
    switch (count % 8) {
    case 0: do { *p++ = c;
    case 7:      *p++ = c;
    case 6:      *p++ = c;
    case 5:      *p++ = c;
    case 4:      *p++ = c;
    case 3:      *p++ = c;
    case 2:      *p++ = c;
    case 1:      *p++ = c;
            } while (--n > 0);
    }
    return to;
}
Tricks like these used to enhance execution speed in the past, but on modern architectures they tend to increase code size and cache misses.
So it is quite impossible to say which implementation is faster, as it depends on the quality of the compiler's optimizations, the ability of the C library to take advantage of special hardware instructions, the amount of data you are operating on, and the features of the underlying operating system (page-fault management, TLB misses, copy-on-write).
For example, in glibc, the implementations of memset() as well as of various other "copy/set" functions like bzero() or strcpy() are architecture-dependent, to take advantage of various optimized hardware instructions like SSE or AVX.
I'm writing a loop in C, and I am just wondering on how to optimize it a bit. It's not crucial here as I'm just practicing, but for further knowledge, I'd like to know:
In a loop, for example the following snippet:
int i = 0;
while (i <= 10) {
    printf("%d\n", i);
    i++;
}
Does the processor check both (i < 10) and (i == 10) for every iteration? Or does it just check (i < 10) and, if it's true, continue?
If it checks both, wouldn't:
int i = 0;
while (i != 10) {
    printf("%d\n", i);
    i++;
}
be more efficient?
Thanks!
Both will be translated into a single assembly instruction. Most CPUs have comparison instructions for LESS THAN, LESS THAN OR EQUAL, EQUAL, and NOT EQUAL.
One of the interesting things about these optimization questions is that they often show why you should code for clarity/correctness before worrying about the performance impact of these operations (which oh-so often don't have any difference).
Your 2 example loops do not have the same behavior:
int i = 0;
/* this will print 11 lines (0..10) */
while (i <= 10) {
    printf("%d\n", i);
    i++;
}
And,
int i = 0;
/* This will print 10 lines (0..9) */
while (i != 10) {
    printf("%d\n", i);
    i++;
}
To answer your question though, it's nearly certain that the performance of the two constructs would be identical (assuming that you fixed the problem so the loop counts were the same). For example, if your processor could only check for equality and whether one value were less than another in two separate steps (which would be a very unusual processor), then the compiler would likely transform the (i <= 10) to an (i < 11) test - or maybe an (i != 11) test.
This is a clear example of premature optimization.... IMHO, that is something programmers new to their craft are way too prone to worry about. If you must worry about it, learn to benchmark and profile your code so that your worries are based on evidence rather than supposition.
Speaking to your specific questions. First, a <= is not implemented as two operations testing for < and == separately in any C compiler I've met in my career. And that includes some monumentally stupid compilers. Notice that for integers, a <= 5 is the same condition as a < 6 and if the target architecture required that only < be used, that is what the code generator would do.
Your second concern, that while (i != 10) might be more efficient, raises an interesting issue of defensive programming. First, no, it isn't any more efficient on any reasonable target architecture. However, it raises the potential for a small bug to cause a larger failure. Consider this: if some line of code within the body of the loop modified i, say by making it greater than 10, what might happen? How long would it take for the loop to end, and would there be any other consequences of the error?
Finally, when wondering about this kind of thing, it is often worthwhile to find out what code the compiler you are using actually generates. Most compilers provide a mechanism to do this. For GCC, learn about the -S option, which causes it to produce the assembly code directly instead of an object file (e.g. gcc -S -O2 file.c writes the assembly to file.s).
The operators <= and < are a single instruction in assembly, there should be no performance difference.
Note that a test against 0 can be a bit faster on some processors than a test against any other constant, so it can be reasonable to make the loop run backward:
int i = 10;
while (i != 0)
{
    printf("%d\n", i);
    i--;
}
Note that micro-optimizations like these usually gain you only very little performance; your time is better spent on efficient algorithms.
Does the processor check both (i < 10) and (i == 10) for every iteration? Or does it just check (i < 10) and, if it's true, continue?
Neither; it will most likely check (i < 11). The <= 10 is there to give better meaning to your code, since 11 would be a magic number that actually means (10+1).
Depends on the architecture and compiler. On most architectures, there is a single instruction for <= or the opposite, which can be negated, so if it is translated into a loop, the comparison will most likely be only one instruction. (On x86 or x86_64 it is one instruction)
The compiler might unroll the loop into a sequence of ten times i++, when only constant expressions are involved it will even optimize the ++ away and leave only constants.
And Ira is right: the cost of the comparison vanishes when there is a printf involved, whose execution time might be millions of clock cycles.
I'm writing a loop in C, and I am just wondering on how to optimize it a bit.
If you compile with optimizations turned on, the biggest optimization will be from unrolling that loop.
It's going to be hard to profile that code with -O2, because for trivial functions the compiler will unroll the loop and you won't be able to benchmark actual differences in compares. You should be careful when profiling test cases that use constants that might make the code trivial when optimized by the compiler.
Disassemble. Depending on the processor, the optimization level, and a number of other things, this simple example code may actually unroll or do things that do not reflect your real question. Compiling both example loops you provided with gcc -O1 resulted in the same assembler (for ARM).
A less-than in your C code often turns into a branch-if-greater-than-or-equal to the far side of the loop. If your processor doesn't have a greater-than-or-equal, it may use a branch-if-greater-than plus a branch-if-equal, two instructions.
Typically, though, there will be a register holding i, an instruction to increment i, and then an instruction to compare i with 10. Equal-to, greater-than-or-equal, and less-than comparisons are generally done in a single instruction, so you should not normally see a difference.
// Case I
int i = 0;
while (i < 10) {
    printf("%d\n", i);
    i++;
    printf("%d\n", i);
    i++;
}

// Case II
int i = 0;
while (i < 10) {
    printf("%d\n", i);
    i++;
}
The Case I code takes more space but runs faster, while the Case II code takes less space but runs slower than Case I. In programming, space and time tend to trade off against each other, so you must compromise on one or the other: you can optimize for time or for space, but not both at once. Apart from that, both versions do the same thing.