If and else, where should I put the more likely part? - c

I was wondering if there is a big performance difference in languages, whether you should put the more likely to be executed code in the if or in the else clause. Here is an example:
// x is a random number, or some key code from the user
if(!somespecific_keycode)
do the general stuff
else
do specific stuff
and the other solution
if(somespecific_keycode)
do the specific stuff
else
do general stuff

Prefer to put them in the order that makes the code clearer, which usually means having the more likely case first.

As others said: in terms of performance you should best rely on your compiler and your hardware (branch prediction, speculative execution) to do the right thing.
In case you are really concerned that these two don't help you enough, GCC provides a builtin (__builtin_expect) with which you can explicitly indicate the expected outcome of a branch.
In terms of code readability, I personally like the more likely case to be on top.

Unless you experience a performance problem, don't worry about it.
If you do experience a performance problem, try switching them around and measure which variant is faster, if any of them.

The common rule is to put the more likely case first; it's considered more readable.

Branch prediction will make one of these more likely, and that can cause a performance difference inside a loop. But you can mostly ignore this unless you are thinking at the assembler level.

This isn't necessarily a performance concern, but I usually go from specific to general to prevent cases like this:
int i = 15;
if(i % 3 == 0)
System.out.println("fizz");
else if(i % 5 == 0)
System.out.println("buzz");
else if(i % 3 == 0 && i % 5 == 0)
System.out.println("fizzbuzz");
Here the above code will never say 'fizzbuzz', because 15 matches both the i % 3 == 0 and i % 5 == 0 conditions. If you re-order into something more specific:
int i = 15;
if(i % 3 == 0 && i % 5 == 0)
System.out.println("fizzbuzz");
else if(i % 3 == 0)
System.out.println("fizz");
else if(i % 5 == 0)
System.out.println("buzz");
Now the above code will reach "fizzbuzz" before being stopped by the more general conditions.

All answers have valid points. Here is an additional one:
Avoid double negations: if not this, then that, else something tends to be confusing for the reader. Hence for the example given, I would favor:
if (somespecific_keycode) {
do_the_specific_stuff();
} else {
do_general_stuff();
}

It mostly doesn't make a difference, but sometimes it is easier to read and debug if your ifs check whether something is true or equal and the else handles the case when it isn't.

As the others have said, it's not going to make a huge difference unless you use this many, many times (in a loop, for example). In that case, put the most likely condition first, as it will have the earliest opportunity to break out of the condition checking.
It becomes more apparent when you start having many 'else if's.

Any difference that may arise is more related to the context than to the if-else construction itself. So the best you can do here is develop your own tests to detect any difference.
Unless you are optimizing an already finished system or piece of software, I'd recommend avoiding premature optimization. You've probably already heard that it is evil.

AFAIK with modern optimizing C compilers there is no direct relation between how you organize your if or loop and actual branching instructions in generated code. Moreover different CPUs have different branch prediction algorithms.
Therefore:
Don't optimize until you see bad performance related to this code
If you do optimize, measure and compare different versions
Use realistic data of varied characteristics for performance measurement
Look at assembly code generated by your compiler in both cases.

Related

Is there a difference in performance when swapping if/else condition?

Is there a difference in performance between
if(array[i] == -1){
doThis();
}
else {
doThat();
}
and
if(array[i] != -1){
doThat();
}
else {
doThis();
}
when I already know that there is only one element (or in general few elements) with the value -1?
That will depend entirely on how your compiler chooses to optimise it. You have no guarantee as to which is faster. If you really need to give hints to the compiler, look at the likely/unlikely macros in the Linux kernel, which are defined thus:
#define likely(x) __builtin_expect(!!(x), 1)
#define unlikely(x) __builtin_expect(!!(x), 0)
Which means you can use
if (likely(something)) { ... }
or
if (unlikely(something)) { ... }
Details here: http://gcc.gnu.org/onlinedocs/gcc/Other-Builtins.html
Moral: write your code for readability, and not how you think the compiler will optimise it, as you are likely to be wrong.
Performance is always implementation dependent. If it is sufficiently important to you, then you need to benchmark it in your environment.
Having said that: there is probably no difference, because modern compilers are likely to turn both versions into equally efficient machine code.
One thing that might cause a difference is if the different code order changes the compiler's branch prediction heuristics. This can occasionally make a noticeable difference.
The compiler wouldn't know about your actual data, so it will produce roughly the same low-level code.
However, given that if-statements generate assembly branches and jumps, your code may run a little faster in your second version because if your value is not -1 then your code will run the very next instruction. Whereas in your first version the code would need to jump to a new instruction address, which may be costly, especially when you deal with a large number of values (say millions).
That would depend on which condition is encountered first. As such there is not such a big difference.
If you have to test a lot of conditions, a switch statement will often be faster than nested if-else.

avr if statement optimization for speed or size

Hi there everyone!
I'm making good progress on my AVR for my DIY sprinkler and fish tank automation, but I've come across a question that bugs me.
Which if statement runs faster on the AVR (in fewer clock cycles)?
By how much?
if(temp_sensor[0] < -20)
{
OCR1A--;
}
else if(tempout > tempset)
{
OCR1A--;
}
Or
if((temp_sensor[0] < -20) || (tempout > tempset))
{
OCR1A--;
}
On second thought, my second question is:
Which one uses less space?
My conclusion:
First of all, thanks everyone for your answers and comments!
The primary objective should be to write clean code that is easy to understand.
You could try for a (seemingly) jumpless approach:
const int8_t delta = temp_sensor[0] < -20 || tempout > tempset;
OCR1A -= delta;
That can sometimes give shorter code. Of course it's very CPU-dependent, not sure how well the AVR likes code like this. It very well might generate a jump for the short-circuiting of the || operator, too. It's also totally possible for the compiler to optimize itself out of the jumps all on its own.
Write code for readability, not for speed. Especially in those very trivial cases where the compiler can easily figure out what is happening and optimize it.
You should avoid the 1st way because you have some duplicated code in it, which isn't ideal.
Also I must point out that unless optimized, || and && are compiled to branch instructions the same way an if statement is, so they improve readability but don't really bring any performance advantage.
This way,
if((temp_sensor[0] < -20) || (tempout > tempset))
{
OCR1A--;
}
is the better way to write this kind of condition; it is more understandable. As to whether it takes fewer ticks, the difference isn't really significant unless you do it hundreds of thousands of times, and in that case just benchmark both and see which is better.
Adding to unwind's answer, you can also write it this way:
OCR1A -= (uint8_t)((temp_sensor[0] < -20) | (tempout > tempset));
| is bitwise OR.
This will remove the jump required for the logical OR (||).

what is the fastest algorithm for permutations of three conditions?

Can anybody help me with the quickest method for evaluating three conditions in the minimum number of steps?
I have three conditions, and if any two of them come out to be true, the whole expression becomes true, else false.
I have tried two methods:
if ((condition1 && condition2) ||
(condition1 && condition3) ||
(condition2 && condition3))
Another way is to by introducing variable i and
i = 0;
if (condition1) i++;
if (condition2) i++;
if (condition3) i++;
if (i >= 2)
//do something
I want any other effective method better than the above two.
I am working in a memory constrained environment (ATmega8 with 8 KB of flash memory) and need a solution that works in C.
This can be reduced to:
if((condition1 && (condition2 || condition3)) || (condition2 && condition3))
//do something
Depending on the likelihood of each condition, you may be able to optimize the ordering to get faster short-circuits (although this would probably be premature optimization...)
It is always hard to give a just "better" solution (better in what regard -- lines of code, readability, execution speed, number of bytes of machine code instructions, ...?) but since you are asking about execution speed in this case, we can focus on that.
You can introduce that variable you suggest, and use it to reduce the conditions to a simple less-than condition once the answer is known. Less-than conditions trivially translate to two machine code instructions on most architectures (for example, CMP (compare) followed by JL (jump if less than) or JNL (jump if not less than) on Intel IA-32). With a little luck, the compiler will notice (or you can do it yourself, but I prefer the clarity that comes with having the same pattern everywhere) that trues < 2 will always be true in the first two if() statements, and optimize it out.
int trues = 0;
if (trues < 2 && condition1) trues++;
if (trues < 2 && condition2) trues++;
if (trues < 2 && condition3) trues++;
// ...
if (trues >= 2)
{
// do something
}
This, once an answer is known, reduces the possibly complex evaluation of conditionN to a simple less-than comparison, because of the boolean short-circuiting behavior of most languages.
Another possible variant, if your language allows you to cast a boolean condition to an integer, is to take advantage of that to reduce the number of source code lines. You will still be evaluating each condition, however.
if( (int)(condition1)
+ (int)(condition2)
+ (int)(condition3)
>= 2)
{
// do something
}
This works based on the assumption that casting a boolean FALSE value to an integer results in 0, and casting TRUE results in 1. You can also use the conditional operator for the same effect, although be aware that it may introduce additional branching.
if( ((condition1) ? 1 : 0)
+ ((condition2) ? 1 : 0)
+ ((condition3) ? 1 : 0)
>= 2)
{
// do something
}
Depending on how smart the compiler's optimizer is, it may be able to determine that once any two conditions have evaluated to true the entire condition will always evaluate to true, and optimize based on that.
Note that unless you have actually profiled your code and determined this to be the culprit, this is likely a case of premature optimization. Always strive for code to be readable by human programmers first, and fast to execute by the computer second, unless you can show definitive proof that the particular piece of code you are looking at is an actual performance bottleneck. Learn how that profiler works and put it to good use. Keep in mind that in most cases, programmer time is an awful lot more expensive than CPU time, and clever techniques take longer for the maintenance programmer to parse.
Also, compilers are really clever pieces of software; sometimes they will actually detect the intent of the code written and be able to use specific constructs meant to make those operations faster, but that relies on it being able to determine what you are trying to do. A perfect example of this is swapping two variables using an intermediary variable, which on IA-32 can be done using XCHG eliminating the intermediary variable, but the compiler has to be able to determine that you are actually doing that and not something clever which may give another result in some cases.
Since the vast majority of the non-explicitly-throwaway software written spends the vast majority of its lifetime in maintenance mode (and lots of throwaway software written is alive and well long past its intended best before date), it makes sense to optimize for maintainability unless that comes at an unacceptable cost in other respects. Of course, if you are evaluating those conditions a trillion times inside a tight loop, targeted optimization very well might make sense. But the profiler will tell you exactly which portions of your code need to be scrutinized more closely from a performance point of view, meaning that you avoid complicating the code unnecessarily.
And the above caveats said, I have been working on code recently making changes that at first glance would almost certainly be considered premature detail optimization. If you have a requirement for high performance and use the profiler to determine which parts of the code are the bottlenecks, then the optimizations aren't premature. (They may still be ill-advised, however, depending on the exact circumstances.)
Depends on your language, I might resort to something like:
$cond = array(true, true, false);
if (count(array_filter($cond)) >= 2)
or
if (array_reduce($cond, function ($i, $k) { return $i + (int)$k; }) >= 2)
There is no absolute answer to this; it depends very much on the underlying architecture. E.g. if you program some hardware circuit in VHDL or Verilog, then for sure the first would give you the fastest result. I assume you target some kind of CPU, but even here much will depend on the target CPU, the instructions it supports, and how long they take. Also you don't specify your target language (e.g. your first approach can be short-circuited, which can heavily impact speed).
If knowing nothing else I would recommend the second solution - just for the reason that your intentions (at least 2 conditions should be true) are better reflected in the code.
The speed difference of the two solutions would be not very high - if this is just some logic and not the part of some innermost loop that is executed many many many times, I would even guess for premature optimization and try to optimize somewhere else.
You may consider simply adding them. If you use the macros from the standard stdbool.h, then true is 1 and (condition1 + condition2 + condition3) >= 2 is what you want.
But it is still a mere micro-optimization; you usually won't gain much from this kind of trick.
Since we're not on a deeply pipelined architecture there's probably no value in branch avoidance, which would normally steer the optimisations offered by desktop developers. Short-cuts are golden, here.
If you go for:
if ((condition1 && (condition2 || condition3)) || (condition2 && condition3))
then you probably have the best chance, without depending on any further information, of getting the best machine code out of the compiler. It's possible, in assembly, to do things like have the second evaluation of condition2 branch back to the first evaluation of condition3 to reduce code size, but there's no reliable way to express this in C.
If you know that you will usually fail the test, and you know which two conditions usually cause that, then you might prefer to write:
if ((rare1 || rare2) && (common3 || (rare1 && rare2)))
but there's still a fairly good chance the compiler will completely rearrange that and use its own shortcut arrangement.
You might like to annotate things with __builtin_expect() or _Rarely() or whatever your compiler provides to indicate the likely outcome of a condition.
However, what's far more likely to meaningfully improve performance is recognising any common factors between the conditions or any way in which the conditions can be tested in a way that simplifies the overall test.
For example, if the tests are simple then in assembly you could almost certainly do some basic trickery with carry to accumulate the conditions quickly. Porting that back to C is sometimes viable.
You seem willing to evaluate all the conditions, as you proposed such a solution yourself in your question. If the conditions are very complex formulas that take many CPU cycles to compute (on the order of hundreds of milliseconds), then you may consider evaluating all three conditions simultaneously with threads to get a speed-up. Something like:
pthread_create(&t1, detached, eval_condition1, &status);
pthread_create(&t2, detached, eval_condition2, &status);
pthread_create(&t3, detached, eval_condition3, &status);
pthread_mutex_lock(&status.lock);
while (status.trues < 2 && status.falses < 2) {
pthread_cond_wait(&status.cond, &status.lock);
}
pthread_mutex_unlock(&status.lock);
if (status.trues > 1) {
/* do something */
}
Whether this gives you a speed up depends on how expensive it is to compute the conditions. The compute time has to dominate the thread creation and synchronization overheads.
Try this one:
unsigned char i;
i = condition1;
i += condition2;
i += condition3;
if (i & (unsigned char)0x02)
{
/*
At least 2 conditions are True
0b00 - 0 conditions are true
0b01 - 1 conditions are true
0b11 - 3 conditions are true
0b10 - 2 conditions are true
So, Checking 2nd LS bit is good enough.
*/
}

C syntax or binary optimized syntax?

Let's take a simple example of two lines supposedly doing the same thing:
if (value >= 128 || value < 0)
...
or
if (value & ~ 127)
...
Say 'If's are costly in a loop of thousands of iterations, is it better to keep with the traditional C syntax or better to find a binary optimized one if possible?
I would use the first statement, with traditional syntax, as it is more readable.
The second statement is hard on the eyes.
Care about the programmers who will use the code after you.
In 99 cases out of 100, do the one which is more readable and expresses your intent better.
In theory, compilers will do this sort of optimization for you. In practice, they might not. This particular example is a little bit subtle, because the two are not equivalent, unless you make some assumptions about value and on whether or not signed arithmetic is 2's complement on your target platform.
Use whichever you find more readable. If and when you have evidence that the performance of this particular test is critical, use whatever gives you the best performance. Personally, I would probably write:
if ((unsigned int)value >= 128U)
because that's more intuitive to me, and more likely to get handled well by most compilers I've worked with.
It depends on how/where check is. If the check is being done once during program start up to check the command line parameter, then the performance issue is completely moot and you should use whatever is more natural.
On the other hand, if the check was inside some inner loop that is happening millions of times a second, then it may matter. But don't assume one will be better; you should create both versions and time them to see if there is any measurable difference between the two.

What techniques to avoid conditional branching do you know?

Sometimes a loop where the CPU spends most of the time has a branch that is mispredicted very often (near .5 probability). I've seen a few techniques on very isolated threads but never a list. The ones I know already fix situations where the condition can be turned into a bool and that 0/1 is used in some way to change the result. Are there other conditional branches that can be avoided?
e.g. (pseudocode)
loop () {
if (in[i] < C )
out[o++] = in[i++]
...
}
Can be rewritten, arguably losing some readability, with something like this:
loop() {
out[o] = in[i] // copy anyway, just don't increment
inc = in[i] < C // increment counters? (0 or 1)
o += inc
i += inc
}
Also I've seen techniques in the wild changing && to & in the conditional in certain contexts escaping my mind right now. I'm a rookie at this level of optimization but it sure feels like there's got to be more.
Using Matt Joiner's example:
if (b > a) b = a;
You could also do the following, without having to dig into assembly code:
bool if_else = b > a;
b = a * if_else + b * !if_else;
I believe the most common way to avoid branching is to leverage bit parallelism in reducing the total jumps present in your code. The longer the basic blocks, the less often the pipeline is flushed.
As someone else has mentioned, if you want to do more than unrolling loops, and providing branch hints, you're going to want to drop into assembly. Of course this should be done with utmost caution: your typical compiler can write better assembly in most cases than a human. Your best hope is to shave off rough edges, and make assumptions that the compiler cannot deduce.
Here's an example of the following C code:
if (b > a) b = a;
In assembly without any jumps, by using bit-manipulation (and extreme commenting):
sub eax, ebx ; = a - b
sbb edx, edx ; = (b > a) ? 0xFFFFFFFF : 0
and edx, eax ; = (b > a) ? a - b : 0
add ebx, edx ; b = (b > a) ? b + (a - b) : b + 0
Note that while conditional moves are immediately jumped on by assembly enthusiasts, that's only because they're easily understood and provide a higher level language concept in a convenient single instruction. They are not necessarily faster, not available on older processors, and by mapping your C code into corresponding conditional move instructions you're just doing the work of the compiler.
The generalization of the example you give is "replace conditional evaluation with math"; conditional-branch avoidance largely boils down to that.
What's going on with replacing && with & is that, since && is short-circuit, it constitutes conditional evaluation in and of itself. & gets you the same logical results if both sides are either 0 or 1, and isn't short-circuit. Same applies to || and | except you don't need to make sure the sides are constrained to 0 or 1 (again, for logic purposes only, i.e. you're using the result only Booleanly).
At this level things are very hardware-dependent and compiler-dependent. Is the compiler you're using smart enough to compile < without control flow? gcc on x86 is smart enough; lcc is not. On older or embedded instruction sets it may not be possible to compute < without control flow.
Beyond this Cassandra-like warning, it's hard to make any helpful general statements. So here are some general statements that may be unhelpful:
Modern branch-prediction hardware is terrifyingly good. If you could find a real program where bad branch prediction costs more than 1%-2% slowdown, I'd be very surprised.
Performance counters or other tools that tell you where to find branch mispredictions are indispensable.
If you actually need to improve such code, I'd look into trace scheduling and loop unrolling:
Loop unrolling replicates loop bodies and gives your optimizer more control flow to work with.
Trace scheduling identifies which paths are most likely to be taken, and among other tricks, it can tweak the branch directions so that the branch-prediction hardware works better on the most common paths. With unrolled loops, there are more and longer paths, so the trace scheduler has more to work with.
I'd be leery of trying to code this myself in assembly. When the next chip comes out with new branch-prediction hardware, chances are excellent that all your hard work goes down the drain. Instead I'd look for a feedback-directed optimizing compiler.
An extension of the technique demonstrated in the original question applies when you have to do several nested tests to get an answer. You can build a small bitmask from the results of all the tests, and the "look up" the answer in a table.
if (a) {
if (b) {
result = q;
} else {
result = r;
}
} else {
if (b) {
result = s;
} else {
result = t;
}
}
If a and b are nearly random (e.g., from arbitrary data), and this is in a tight loop, then branch prediction failures can really slow this down. Can be written as:
// assuming a and b are bools and thus exactly 0 or 1 ...
static const int table[] = { t, s, r, q };
unsigned index = (a << 1) | b;
result = table[index];
You can generalize this to several conditionals. I've seen it done for 4. If the nesting gets that deep, though, you want to make sure that testing all of them is really faster than doing just the minimal tests suggested by short-circuit evaluation.
GCC is already smart enough to replace conditionals with simpler instructions. For example newer Intel processors provide cmov (conditional move). If you can use it, SSE2 provides some instructions to compare 4 integers (or 8 shorts, or 16 chars) at a time.
Additionally, to compute the minimum you can use (see these magic tricks):
min(x, y) = x+(((y-x)>>(WORDBITS-1))&(y-x))
However, pay attention to things like:
c[i][j] = min(c[i][j], c[i][k] + c[j][k]); // from Floyd-Warshal algorithm
even though no jumps are implied, is much slower than
int tmp = c[i][k] + c[j][k];
if (tmp < c[i][j])
c[i][j] = tmp;
My best guess is that in the first snippet you pollute the cache more often, while in the second you don't.
In my opinion if you're reaching down to this level of optimization, it's probably time to drop right into assembly language.
Essentially you're counting on the compiler generating a specific pattern of assembly to take advantage of this optimization in C anyway. It's difficult to guess exactly what code a compiler is going to generate, so you'd have to look at it anytime a small change is made - why not just do it in assembly and be done with it?
Most processors provide branch prediction that is better than 50%. In fact, if you get a 1% improvement in branch prediction then you can probably publish a paper. There are a mountain of papers on this topic if you are interested.
You're better off worrying about cache hits and misses.
This level of optimization is unlikely to make a worthwhile difference in all but the hottest of hotspots. Assuming it does (without proving it in a specific case) is a form of guessing, and the first rule of optimization is don't act on guesses.
