Can anybody help me with the quickest method for evaluating three conditions in the minimum number of steps?
I have three conditions, and if any two of them come out to be true, the whole expression becomes true, else false.
I have tried two methods:
if ((condition1 && condition2) ||
(condition1 && condition3) ||
(condition2 && condition3))
Another way is by introducing a variable i and
i = 0;
if (condition1) i++;
if (condition2) i++;
if (condition3) i++;
if (i >= 2)
//do something
I want any other effective method better than the above two.
I am working in a memory-constrained environment (ATmega8 with 8 KB of flash memory) and need a solution that works in C.
This can be reduced to:
if((condition1 && (condition2 || condition3)) || (condition2 && condition3))
//do something
Depending on the likelihood of each condition, you may be able to optimize the ordering to get faster short-circuits (although this would probably be premature optimization...)
It is always hard to give a just "better" solution (better in what regard -- lines of code, readability, execution speed, number of bytes of machine code instructions, ...?) but since you are asking about execution speed in this case, we can focus on that.
You can introduce that variable you suggest, and use it to reduce the conditions to a simple less-than condition once the answer is known. Less-than conditions trivially translate to two machine code instructions on most architectures (for example, CMP (compare) followed by JL (jump if less than) or JNL (jump if not less than) on Intel IA-32). With a little luck, the compiler will notice (or you can do it yourself, but I prefer the clarity that comes with having the same pattern everywhere) that trues < 2 will always be true in the first two if() statements, and optimize it out.
int trues = 0;
if (trues < 2 && condition1) trues++;
if (trues < 2 && condition2) trues++;
if (trues < 2 && condition3) trues++;
// ...
if (trues >= 2)
{
// do something
}
This, once an answer is known, reduces the possibly complex evaluation of conditionN to a simple less-than comparison, because of the boolean short-circuiting behavior of most languages.
Another possible variant, if your language allows you to cast a boolean condition to an integer, is to take advantage of that to reduce the number of source code lines. You will still be evaluating each condition, however.
if( (int)(condition1)
+ (int)(condition2)
+ (int)(condition3)
>= 2)
{
// do something
}
This works based on the assumption that casting a boolean FALSE value to an integer results in 0, and casting TRUE results in 1. You can also use the conditional operator for the same effect, although be aware that it may introduce additional branching.
if( ((condition1) ? 1 : 0)
+ ((condition2) ? 1 : 0)
+ ((condition3) ? 1 : 0)
>= 2)
{
// do something
}
Depending on how smart the compiler's optimizer is, it may be able to determine that once any two conditions have evaluated to true the entire condition will always evaluate to true, and optimize based on that.
Note that unless you have actually profiled your code and determined this to be the culprit, this is likely a case of premature optimization. Always strive for code to be readable by human programmers first, and fast to execute by the computer second, unless you can show definitive proof that the particular piece of code you are looking at is an actual performance bottleneck. Learn how that profiler works and put it to good use. Keep in mind that in most cases, programmer time is an awful lot more expensive than CPU time, and clever techniques take longer for the maintenance programmer to parse.
Also, compilers are really clever pieces of software; sometimes they will actually detect the intent of the code written and be able to use specific constructs meant to make those operations faster, but that relies on it being able to determine what you are trying to do. A perfect example of this is swapping two variables using an intermediary variable, which on IA-32 can be done using XCHG eliminating the intermediary variable, but the compiler has to be able to determine that you are actually doing that and not something clever which may give another result in some cases.
Since the vast majority of the non-explicitly-throwaway software written spends the vast majority of its lifetime in maintenance mode (and lots of throwaway software written is alive and well long past its intended best-before date), it makes sense to optimize for maintainability unless that comes at an unacceptable cost in other respects. Of course, if you are evaluating those conditions a trillion times inside a tight loop, targeted optimization very well might make sense. But the profiler will tell you exactly which portions of your code need to be scrutinized more closely from a performance point of view, meaning that you avoid complicating the code unnecessarily.
And the above caveats said, I have been working on code recently making changes that at first glance would almost certainly be considered premature detail optimization. If you have a requirement for high performance and use the profiler to determine which parts of the code are the bottlenecks, then the optimizations aren't premature. (They may still be ill-advised, however, depending on the exact circumstances.)
Depending on your language, I might resort to something like this (PHP here):
$cond = array(true, true, false);
if (count(array_filter($cond)) >= 2)
or
if (array_reduce($cond, function ($carry, $c) { return $carry + (int)$c; }, 0) >= 2)
There is no absolute answer to this; it depends very much on the underlying architecture. E.g. if you program some hardware circuit in VHDL or Verilog, then for sure the first would give you the fastest result. I assume that you target some kind of CPU, but even here a lot will depend on the target CPU, the instructions it supports, and how long they take. Also, you don't specify your target language (e.g. your first approach can be short-circuited, which can heavily impact speed).
Knowing nothing else, I would recommend the second solution, just because your intention (at least 2 conditions should be true) is better reflected in the code.
The speed difference between the two solutions would not be very high; if this is just some logic and not part of some innermost loop that is executed many, many times, I would even call this premature optimization and try to optimize somewhere else.
You may consider simply adding them. If you use the macros from the standard stdbool.h, then true is 1, and (condition1 + condition2 + condition3) >= 2 is what you want.
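A minimal sketch of that addition, assuming C99's <stdbool.h> (the function wrapper is just for illustration):
#include <stdbool.h>

/* Each conditionN is assumed to already be a bool (0 or 1). */
static bool at_least_two(bool condition1, bool condition2, bool condition3)
{
    /* bool promotes to int, so the sum is the number of true conditions. */
    return (condition1 + condition2 + condition3) >= 2;
}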
But it is still a mere micro-optimization; you usually won't gain much performance with this kind of trick.
Since we're not on a deeply pipelined architecture, there's probably no value in branch avoidance, which would normally steer the optimisations offered by desktop developers. Short-circuits are golden here.
If you go for:
if ((condition1 && (condition2 || condition3)) || (condition2 && condition3))
then you probably have the best chance, without depending on any further information, of getting the best machine code out of the compiler. It's possible, in assembly, to do things like have the second evaluation of condition2 branch back to the first evaluation of condition3 to reduce code size, but there's no reliable way to express this in C.
If you know that you will usually fail the test, and you know which two conditions usually cause that, then you might prefer to write:
if ((rare1 || rare2) && (common3 || (rare1 && rare2)))
but there's still a fairly good chance the compiler will completely rearrange that and use its own shortcut arrangement.
You might like to annotate things with __builtin_expect() or _Rarely() or whatever your compiler provides to indicate the likely outcome of a condition.
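For example, with GCC or Clang (a sketch, using the hypothetical rare1/rare2/common3 conditions from above):
/* __builtin_expect(expr, expected) hints the likely value of expr;
   here we tell the compiler the whole test usually fails. */
#define UNLIKELY(x) __builtin_expect(!!(x), 0)

if (UNLIKELY((rare1 || rare2) && (common3 || (rare1 && rare2))))
{
    /* do something */
}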
However, what's far more likely to meaningfully improve performance is recognising any common factors between the conditions or any way in which the conditions can be tested in a way that simplifies the overall test.
For example, if the tests are simple then in assembly you could almost certainly do some basic trickery with carry to accumulate the conditions quickly. Porting that back to C is sometimes viable.
You seem willing to evaluate all the conditions, as you proposed such a solution yourself in your question. If the conditions are very complex formulas that take many CPU cycles to compute (on the order of hundreds of milliseconds), then you may consider evaluating all three conditions simultaneously with threads to get a speed-up. Something like:
pthread_create(&t1, detached, eval_condition1, &status);
pthread_create(&t2, detached, eval_condition2, &status);
pthread_create(&t3, detached, eval_condition3, &status);
pthread_mutex_lock(&status.lock);
while (status.trues < 2 && status.falses < 2) {
pthread_cond_wait(&status.cond, &status.lock);
}
pthread_mutex_unlock(&status.lock);
if (status.trues > 1) {
/* do something */
}
Whether this gives you a speed up depends on how expensive it is to compute the conditions. The compute time has to dominate the thread creation and synchronization overheads.
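The snippet above leaves out the shared state and the worker functions; a rough sketch of what they might look like (the struct layout and the condition1() body are assumptions, not part of the original):
#include <pthread.h>
#include <stdbool.h>

struct eval_status {
    pthread_mutex_t lock;
    pthread_cond_t  cond;
    int trues;
    int falses;
};

/* Hypothetical expensive test; the real condition goes here. */
static bool condition1(void) { return true; }

static void *eval_condition1(void *arg)
{
    struct eval_status *status = arg;
    bool result = condition1();          /* the expensive part, done unlocked */

    pthread_mutex_lock(&status->lock);
    if (result)
        status->trues++;
    else
        status->falses++;
    pthread_cond_signal(&status->cond);  /* wake the waiting thread */
    pthread_mutex_unlock(&status->lock);
    return NULL;
}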
Try this one:
unsigned char i;
i = condition1;
i += condition2;
i += condition3;
if (i & (unsigned char)0x02)
{
/*
At least 2 conditions are true:
0b00 - 0 conditions are true
0b01 - 1 condition is true
0b10 - 2 conditions are true
0b11 - 3 conditions are true
So, checking the 2nd LS bit is good enough.
*/
}
Throughout all the tutorials and books I've read whilst learning programming, the practice for iterating through arrays has always been a for loop like:
int len = array.length();
for(int i = 0; i < len; i++) {//loop code here}
Is there a reason why people don't use
for(int i = length(); i > -1; i--) {//loop code here}
From what I can see the code is shorter, easier to read and doesn't create unnecessary variables. I can see that iterating through arrays from 0 to end may be needed in some situations, but direction doesn't make a difference in most cases.
direction doesn't make a difference in most cases
True, in many cases you'll get the same result in the end, assuming there aren't any exceptions.
But in terms of thinking about what the code does, it's usually a lot simpler to think about it from start to finish. That's what we do all the time in the rest of our lives.
Oh, and you've got a bug in your code - you almost certainly don't want length() as the initial value - you probably want array.length() - 1 as otherwise it starts off being an invalid index into the array (and that's only after fixing length() to array.length()). The fact that you've got this bug demonstrates my point: your code is harder to reason about quickly than the code you dislike.
Making code easier to read and understand is much, much more important in almost every case than the tiny, almost-always-insignificant cost of an extra variable. (You haven't specified the language, but in many cases in my own code I'd just call array.length() on every iteration, and let optimization sort it out.)
The reason is readability. Enumerating from 0 and up makes most sense to the most of us, therefore it is the default choice for most developers, unless the algorithm explicitly needs reverse iteration.
Declaring an extra integer variable costs virtually nothing in all common programming platforms today.
Depends on the platform you use. In Java for instance, compiler optimization is so good, I wouldn't be surprised if it doesn't make any noticeable difference. I used to benchmark all kinds of different tricks to see what really is faster and what isn't. Rule of thumb is, if you don't have strong evidence that you are wasting resources, don't try to outsmart Java.
First reason is how you think about it. Generally, as humans, we do iterate from the beginning to the end, not the other way around.
Second reason is that this syntax stems from languages like C. In such languages, going from the end to the beginning was downright impossible in some cases, the most obvious example being the string (char*), for which you usually were supplied with the start address and you had to figure out its length by enumerating until you found 0.
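A small sketch of that situation in C (a strlen-like walk; the function name is made up):
#include <stddef.h>

/* With only a char*, you naturally scan forward until the terminating '\0';
   you cannot start from the end without finding it first. */
static size_t count_chars(const char *s)
{
    size_t n = 0;
    while (s[n] != '\0')
        n++;
    return n;
}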
Wikipedia, the one true source of knowledge, states:
On most older microprocessors, bitwise operations are slightly faster than addition and subtraction operations and usually significantly faster than multiplication and division operations. On modern architectures, this is not the case: bitwise operations are generally the same speed as addition (though still faster than multiplication).
Is there a practical reason to learn bitwise operation hacks or it is now just something you learn for theory and curiosity?
Bitwise operations are worth studying because they have many applications. It is not their main use to substitute arithmetic operations. Cryptography, computer graphics, hash functions, compression algorithms, and network protocols are just some examples where bitwise operations are extremely useful.
The lines you quoted from the Wikipedia article just tried to give some clues about the speed of bitwise operations. Unfortunately the article fails to provide some good examples of applications.
Bitwise operations are still useful. For instance, they can be used to create "flags" using a single variable, and save on the number of variables you would use to indicate various conditions. Concerning performance on arithmetic operations, it is better to leave the compiler do the optimization (unless you are some sort of guru).
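As a small sketch of the flags idea (the OPT_* names are made up for illustration):
#include <stdio.h>

/* Each option occupies one bit of a single unsigned value. */
#define OPT_VERBOSE  (1u << 0)
#define OPT_DRY_RUN  (1u << 1)
#define OPT_FORCE    (1u << 2)

int main(void)
{
    unsigned flags = 0;

    flags |= OPT_VERBOSE | OPT_FORCE;   /* set two flags  */
    flags &= ~OPT_FORCE;                /* clear one flag */

    if (flags & OPT_VERBOSE)            /* test a flag    */
        printf("verbose mode on\n");

    return 0;
}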
They're useful for getting to understand how binary "works"; otherwise, no. In fact, I'd say that even if the bitwise hacks are faster on a given architecture, it's the compiler's job to make use of that fact — not yours. Write what you mean.
The only case where it makes sense to use them is if you're actually using your numbers as bitvectors. For instance, if you're modeling some sort of hardware and the variables represent registers.
If you want to perform arithmetic, use the arithmetic operators.
Depends what your problem is. If you are controlling hardware you need ways to set single bits within an integer.
Buy an OGD1 PCI board (open graphics card) and talk to it using libpci. http://en.wikipedia.org/wiki/Open_Graphics_Project
It is true that in most cases, when you multiply an integer by a constant that happens to be a power of two, the compiler optimises it to use a bit-shift. However, when the shift amount is a variable, the compiler cannot deduce it unless you explicitly use the shift operation.
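For example (a sketch; whether a given compiler performs the constant-case transformation depends on the compiler and options, and n is assumed to be smaller than the bit width of unsigned):
/* Multiplying by a constant power of two is usually turned into a shift by
   the compiler; with a variable exponent you can write the shift yourself. */
unsigned times_eight(unsigned x)              { return x * 8u; }  /* typically compiled as x << 3 */
unsigned times_2_to_n(unsigned x, unsigned n) { return x << n; }  /* explicitly x * 2^n */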
Funny nobody saw fit to mention the ctype[] array in C/C++ - also implemented in Java. This concept is extremely useful in language processing, especially when using different alphabets, or when parsing a sentence.
ctype[] is an array of 256 short integers, and in each integer there are bits representing different character types. For example, ctype['A'] - ctype['Z'] have bits set to show they are upper-case letters of the alphabet; ctype['0'] - ctype['9'] have bits set to show they are numeric. To see if a character x is alphanumeric, you can write something like 'if (ctype[x] & (UC | LC | NUM))', which is somewhat faster and much more elegant than writing 'if ('A' <= x && x <= 'Z' || ...)'.
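A simplified sketch of such a table in C (the flag names, table, and initialization are illustrative and assume an ASCII execution character set; the real <ctype.h> machinery differs):
#include <limits.h>

enum { UC = 1 << 0, LC = 1 << 1, NUM = 1 << 2 };   /* character class bits */

static unsigned short ctype_tab[UCHAR_MAX + 1];

static void init_ctype_tab(void)
{
    for (int c = 'A'; c <= 'Z'; c++) ctype_tab[c] |= UC;
    for (int c = 'a'; c <= 'z'; c++) ctype_tab[c] |= LC;
    for (int c = '0'; c <= '9'; c++) ctype_tab[c] |= NUM;
}

/* One table lookup and one AND replaces a chain of range comparisons. */
static int is_alnum_tab(unsigned char x)
{
    return ctype_tab[x] & (UC | LC | NUM);
}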
Once you start thinking bitwise, you find lots of places to use it. For instance, I had two text buffers. I wrote one to the other, replacing all occurrences of FINDstring with REPLACEstring as I went. Then for the next find-replace pair, I simply switched the buffer indices, so I was always writing from buffer[in] to buffer[out]. 'in' started as 0, 'out' as 1. After completing a copy I simply wrote 'in ^= 1; out ^= 1;'. And after handling all the replacements I just wrote buffer[out] to disk, not needing to know what 'out' was at that time.
If you think this is low-level, consider that certain mental errors such as deja-vu and its twin jamais-vu are caused by cerebral bit errors!
Working with IPv4 addresses frequently requires bit-operations to discover if a peer's address is within a routable network or must be forwarded onto a gateway, or if the peer is part of a network allowed or denied by firewall rules. Bit operations are required to discover the broadcast address of a network.
Working with IPv6 addresses requires the same fundamental bit-level operations, but because they are so long, I'm not sure how they are implemented. I'd wager money that they are still implemented using the bit operators on pieces of the data, sized appropriately for the architecture.
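As a hedged sketch of the IPv4 checks described above (addresses and masks taken as uint32_t values in host byte order; the function names are made up):
#include <stdint.h>
#include <stdbool.h>

/* Is the peer on the same (directly routable) network as we are? */
static bool same_subnet(uint32_t peer, uint32_t local, uint32_t netmask)
{
    return (peer & netmask) == (local & netmask);
}

/* Broadcast address: network part unchanged, all host bits set. */
static uint32_t broadcast_address(uint32_t network, uint32_t netmask)
{
    return network | ~netmask;
}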
Of course (to me) the answer is yes: there can be practical reasons to learn them. The fact that nowadays, e.g., an add instruction on typical processors is as fast as an or/xor or an and means just that: an add is as fast as, say, an or on those processors.
The improvements in the speed of instructions like add, divide, and so on just mean that you can now use them on those processors and be less worried about the performance impact; but it is as true now as in the past that you usually won't change every add to a bitwise operation. That is, it may depend on which hacks: some hacks must now be considered educational and no longer practical; others still have their practical applications.
From a long time ago I have a memory which has stuck with me that says comparisons against zero are faster than any other value (ahem Z80).
In some C code I'm writing I want to skip values which have all their bits set. Currently the type of these values is char but may change. I have two different alternatives to perform the test:
if (!~b)
/* skip */
and
if (b == 0xff)
/* skip */
Apart from the latter making the assumption that b is an 8-bit char whereas the former does not, would the former ever be faster due to the old compare-to-zero optimization trick, or are the CPUs of today way beyond this kind of thing?
If it is faster, the compiler will substitute it for you.
In general, you can't write C better than the compiler can optimize it. And it is architecture specific anyway.
In short, don't worry about it unless that sub-micro-nano-second is ultra important
From what I recall in my architecture classes, I believe they should be equally fast. Both have 2 instructions.
First example
1. Complement b (bitwise NOT) into a temp register
2. Compare temp register equal 0
Second example
1. Subtract 0xff from b into a temp register
2. Compare temp register equal to 0
These are basically identical, and besides, even if your particular architecture requires more or less than this, is it really worth the fraction of a nanosecond? Several minutes have been spent just answering this question.
I would say it's not so much that the CPUs are beyond these kinds of tricks as it is the compilers.
The CPUs of today are, however, beyond simple tricks which pull an extra clock-tick or two of speed. Even if you do this 100,000 times a second, we are still only talking about an increase in speed of 0.00003 seconds on a single-core 3Ghz computer - it is simply not worth your time to worry about things like this.
Go with the one that will be easier for the person who is maintaining your code to understand. If you have a successful product, most of the expense in software is in maintenance. If you write cryptic code you add to that expense. If you don't have a successful product, it doesn't matter because no one will have to maintain it. I have been in situations where I had to save every byte I could, and had to resort to tricks like the one you gave, but I only do it as the very very very last resort.
I understand most of the micro-optimizations out there but are they really useful?
Exempli gratia: does doing ++i instead of i++, or using while(1) rather than for(;;), really result in performance improvements (either in memory footprint or CPU cycles)?
So the question is, what micro-optimizations can be done in C? Are they really useful?
You should rely on your compiler to optimise this stuff. Concentrate on using appropriate algorithms and writing reliable, readable and maintainable code.
The day tclhttpd, a webserver written in Tcl, one of the slowest scripting languages, managed to outperform Apache, a webserver written in C, supposedly one of the fastest compiled languages, was the day I was convinced that micro-optimizations pale in comparison to using a faster algorithm/technique*.
Never worry about micro-optimizations until you can prove in a debugger that it is the problem. Even then, I would recommend first coming here to SO and ask if it is a good idea hoping someone would convince you not to do it.
It is counter-intuitive, but very often code, especially tight nested loops or recursion, is optimized by adding code rather than removing it. The gaming industry has come up with countless tricks to speed up nested loops, using filters to avoid unnecessary processing. Those filters add significantly more instructions than the difference between i++ and ++i.
*note: We have learned a lot since then. The realization that a slow scripting language can outperform compiled machine code because spawning threads is expensive led to the development of lighttpd, NginX and Apache2.
There's a difference, I think, between a micro-optimization, a trick, and alternative means of doing something. It can be a micro-optimization to use ++i instead of i++, though I would think of it as merely avoiding a pessimization, because when you pre-increment (or decrement) the compiler need not insert code to keep track of the current value of the variable for use in the expression. If using pre-increment/decrement doesn't change the semantics of the expression, then you should use it and avoid the overhead.
A trick, on the other hand, is code that uses a non-obvious mechanism to achieve a result faster than a straightforward mechanism would. Tricks should be avoided unless absolutely needed. Gaining a small percentage of speed-up is generally not worth the damage to code readability unless that small percentage reflects a meaningful amount of time. Extremely long-running programs, especially calculation-heavy ones, or real-time programs are often candidates for tricks, because the amount of time saved may be necessary to meet the system's performance goals. Tricks should be clearly documented if used.
Alternatives, are just that. There may be no performance gain or little; they just represent two different ways of expressing the same intent. The compiler may even produce the same code. In this case, choose the most readable expression. I would say to do so even if it results in some performance loss (though see the preceding paragraph).
I think you do not need to think about these micro-optimizations, because most of them are done by the compiler. These things can only make code more difficult to read.
Remember, premature optimization is evil.
To be honest, that question, while valid, is not relevant today - why?
Compiler writers are a lot smarter than they were 20 years ago. Rewind back in time and these optimizations would have been very relevant; we were all working with old 80286/386 processors, and coders would often resort to tricks to squeeze even more bytes out of the compiled code.
Today, processors are too fast, and compiler writers know the intimate details of operand instructions to make everything work, considering that there is pipelining, multiple cores, acres of RAM - remember, with an 80386 processor, there would be 4 MB of RAM, and if you were lucky, 8 MB was considered superior!
The paradigm has shifted: it was about squeezing every byte out of compiled code; now it is more about programmer productivity and getting the release out the door sooner.
In the above I have described the nature of the processors and compilers; I was talking about the Intel 80x86 processor family and Borland/Microsoft compilers.
Hope this helps,
Best regards,
Tom.
If you can easily see that two different code sequences produce identical results, without making assumptions about the data other than what's present in the code, then the compiler can too, and generally will.
It's only when the transformation from one to the other is highly non-obvious or requires assuming something that you may know to be true but the compiler has no way to infer (eg. that an operation cannot overflow or that two pointers will never alias, even though they aren't declared with the restrict keyword) that you should spend time thinking about these things. Even then, the best thing to do is usually to find a way to inform the compiler about the assumptions that it can make.
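For instance, the restrict qualifier mentioned above is one way of handing the compiler an aliasing assumption it cannot infer on its own; a minimal sketch:
#include <stddef.h>

/* Promising that dst and src never alias lets the compiler reorder or
   vectorize the loads and stores more aggressively. */
void add_arrays(float *restrict dst, const float *restrict src, size_t n)
{
    for (size_t i = 0; i < n; i++)
        dst[i] += src[i];
}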
If you do find specific cases where the compiler misses simple transformations, 99% of the time you should just file a bug against the compiler and get on with working on more important things.
Keeping the fact that memory is the new disk in mind will likely improve your performance far more than applying any of those micro-optimizations.
For a slightly more pragmatic take on the question of ++i vs. i++ (at least in a C++ context) see http://llvm.org/docs/CodingStandards.html#micro_preincrement.
If Chris Lattner says it, I've got to pay attention. ;-)
You would do better to consider every program you write primarily as a language in which you communicate your ideas, intentions and reasoning to other human beings who will have to bug-fix, reuse and understand it. They will spend more time on decoding garbled code than any compiler or runtime system will do executing it.
To summarise, say what you mean in the clearest way, using the common idioms of the language in question.
For these specific examples in C, for(;;) is the idiom for an infinite loop and "i++" is the usual idiom for "add one to i" unless you use the value in an expression, in which case it depends whether the value with the clearest meaning is the one before or after the increment.
Here's real optimization, in my experience.
Someone on SO once remarked that micro-optimization was like "getting a haircut to lose weight". On American TV there is a show called "The Biggest Loser" where obese people compete to lose weight. If they were able to get their body weight down to a few grams, then getting a haircut would help.
Maybe that's overstating the analogy to micro-optimization, because I have seen (and written) code where micro-optimization actually did make a difference, but when starting off there is a lot more to be gained by simply not solving problems you don't have.
/* The classic XOR swap: exchanges x and y without a temporary
   (note it fails if x and y refer to the same object). */
x ^= y;
y ^= x;
x ^= y;
++i should be preferred over i++ for situations where you don't use the return value, because it better represents the semantics of what you are trying to do (increment i) rather than any possible optimisation (it might be slightly faster, and is probably not worse).
Generally, loops that count towards zero are faster than loops that count towards some other number. I can imagine a situation where the compiler can't make this optimization for you, but you can make it yourself.
Say that you have an array of length x, where x is some very big number, and that you need to perform some operation on each element of the array. Further, let's say that you don't care what order these operations occur in. You might do this...
int i;
for (i = 0; i < x; i++)
doStuff(array[i]);
But, you could get a little optimization by doing it this way instead -
int i;
for (i = x-1; i != 0; i--)
{
doStuff(array[i]);
}
doStuff(array[0]);
The compiler doesn't do it for you because it can't assume that order is unimportant.
MaR's example code is better. Consider this, assuming doStuff() returns an int:
int i = x;
while (i != 0)
{
--i;
printf("%d\n",doStuff(array[i]));
}
This is ok as long as printing the array contents in reverse order is acceptable, but the compiler can't decide that for you.
Whether this is an optimization is hardware-dependent. From what I remember about writing assembler (many, many years ago), counting up rather than counting down to zero requires an extra machine instruction each time you go through the loop.
If your test is something like (x < y), then evaluation of the test goes something like this:
subtract y from x, storing the result in some register r1
test r1, to set the n and z flags
branch based on the values of the n and z flags
If your test is ( x != 0), you can do this:
test x, to set the z flag
branch based on the value of the z flag
You get to skip a subtract instruction for each iteration.
There are architectures where the subtract instruction does not set the flags based on the result, forcing a separate test; on x86, however, SUB and CMP do set the flags, which is part of why count-down-to-zero loops compile so compactly there.