Are If Thens faster than multiplication and assignment? - c

I have a quick question: suppose I have the following code, and it's repeated in a similar way 10 times, for example.
if blah then
number = number + 2^n
end if
Would it be faster to evaluate:
number = number + blah*2^n?
Which also raises the question: can you multiply a boolean value by an integer? (Although I am not sure what type 2^n returns - is it an integer, unsigned, etc.?) (I'm working in Ada, but let's try to generalize this maybe?)
EDIT: Sorry, I should clarify: I am looking at 2 to the power of n. I put C in there because I was interested, for my own learning, in case I ever run into this problem in C, and I think there are more C programmers than Ada programmers on these boards (I'm assuming, and you know what that means). However, my current problem is in the Ada language, but the question should be fairly language independent (I hope).

There is no general answer to such a question; it depends a lot on your compiler and CPU. Modern CPUs have conditional move instructions, so anything is possible.
The only ways to know are to inspect the assembler that is produced (usually with the -S compiler option) and to measure.

If we are talking about C and blah is not within your control, then just do this:
if(blah) number += (1<<n);
There is not really a boolean type in C, and there does not need to be: false is zero and true is non-zero. So you cannot assume that a non-zero value is 1, which is what you would need for your solution, nor can you assume that any particular bit in blah is set. For example:
number += (blah&1)<<n;
Would not necessarily work either, because 0x2 or 0x4 or anything non-zero with bit zero clear is still considered true. Typically you will find 0xFFF...FFFF (minus one, or all ones) used as true, but you cannot rely on typical.
Now, if you are in complete control of the value in blah, and keep it strictly to 0 for false and 1 for true, then you could do what you were asking about:
number += blah<<n;
And avoid the potential for a branch, extra cache line fill, etc.
Back to the generic case though, taking this generic solution:
unsigned int fun ( int blah, unsigned int n, unsigned int number )
{
if(blah) number += (1<<n);
return(number);
}
And compiling for the two most popular/used platforms:
testl %edi, %edi
movl %edx, %eax
je .L2
movl $1, %edx
movl %esi, %ecx
sall %cl, %edx
addl %edx, %eax
.L2:
The above uses a conditional branch.
The one below uses conditional execution, no branch, no pipeline flush, is deterministic.
cmp r0,#0
movne r3,#1
addne r2,r2,r3,asl r1
mov r0,r2
bx lr
Could have saved the mov r0,r2 instruction by re-arranging the arguments in the function call, but that is academic; you wouldn't burn a function call on this normally.
EDIT:
As suggested:
unsigned int fun ( int blah, unsigned int n, unsigned int number )
{
number += ((blah!=0)&1)<<n;
return(number);
}
subs r0, r0, #0
movne r0, #1
add r0, r2, r0, asl r1
bx lr
Certainly cheaper, and the code looks good, but I wouldn't make the assumption that the result of blah!=0, which is zero or whatever the compiler has defined as true, always has the lsbit set. It doesn't have to have that bit set for the compiler to generate working code. Perhaps the standard dictates the specific value for true. By re-arranging the function parameters, the if(blah) number += ... version will also result in three single-clock instructions and not rely on assumptions.
EDIT2:
Looking at what I understand to be the C99 standard:
The == (equal to) and != (not equal
to) operators are analogous to the
relational operators except for their
lower precedence. Each of the
operators yields 1 if the specified
relation is true and 0 if it is false.
Which explains why the above edit works and why you get the movne r0,#1 and not some other random number.
The poster was asking the question with regard to C but also noted that Ada was the current language. From a language-independent perspective you should not assume "features" like the C feature above, and should use if(blah) number = number + (1<<n). But this was asked with a C tag, so the generically (processor-independent) fastest result for C is, I think, number += (blah!=0)<<n; so Steven Wright's comment had it right and he should get credit for this.
The poster's assumption is also basically correct: if you can get blah into a 0 or 1 form, then using it in the math is faster in the sense that there is no branch. Getting it into that form without it being more expensive than a branch is the trick.

In Ada...
The original formulation:
if Blah then
Number := Number + (2 ** N);
end if;
The alternative general formulation, assuming Blah is of type Boolean and Number and N are of suitable types:
Number := Number + (Boolean'pos(Blah) * (2 ** N));
(For N and Number of user-defined integer or floating point types, suitable definitions and type conversions may be required, the key point here is the Boolean'pos() construct, which Ada guarantees will give you a 0 or 1 for the predefined Boolean type.)
As for whether this is faster or not, I concur with @Cthutu:
I would keep it with the conditional. You shouldn't worry about low-level optimisation details at this point. Write the code that describes your algorithm best and trust your compiler.

I would keep it with the conditional. You shouldn't worry about low-level optimisation details at this point. Write the code that describes your algorithm best and trust your compiler. On some CPUs the multiplication is slower (e.g. ARM processors that have conditionals on each instruction). You could also use the ?: expression which optimises better under some compilers. For example:
number += (blah ? 2^n : 0);
(Here 2^n is shorthand for 2 to the power of n, as in the question; in actual C that would be 1 << n, since ^ is XOR.)
If for some reason this little calculation is the bottleneck of your application after profiling then worry about low-level optimisation.

In C, regarding blah*2^n: do you have any reason to believe that blah takes only the values 0 and 1? The language only promises that 0 <-> FALSE and (everything else) <-> TRUE. C allows you to multiply a "boolean" temporary with another number, but the result is not defined except insofar as result == 0 <=> (the bool was false or the number was zero).
In Ada, regarding blah*2^n: The language does not define a multiplication operator on type Boolean. Thus blah cannot be a bool and be multiplied.

If your language allows multiplication between a boolean and a number, then yes, that is faster than a conditional. Conditionals require branching, which can invalidate the CPU's pipeline. Also if the branch is big enough, it can even cause a cache miss in the instructions, though that's unlikely in your small example.

Generally, and particularly when working with Ada, you should not worry about micro-optimization issues like this. Write your code so that it is clear to a reader, and only worry about performance when you have a problem with performance, and have tracked it down to that portion of the code.
Different CPUs have different needs, and they can be insanely complex. For example, in this case which is faster depends a lot on your CPU's pipeline setup, what's in cache at the time, and how its branch prediction unit works. Part of your compiler's job is to be an expert in those things, and it will do a better job than all but the very best assembly programmers. Certainly better than you (or me).
So you just worry about writing good code, and let the compiler worry about making efficient machine code out of it.

For the problem stated, there are indeed simple expressions in C that may produce efficient code.
The nth power of 2 can be computed with the << operator as 1 << n, provided n is less than the number of value bits in an int.
If blah is a boolean, namely an int with a value of 0 or 1, your code fragment can be written:
number += blah << n;
If blah is any scalar type that can be tested for its truth value as if (blah), the expression is slightly more elaborate:
number += !!blah << n;
which is equivalent to number += (blah != 0) << n;
The test is still present but, for modern architectures, the generated code will not have any jumps, as can be verified online using Godbolt's compiler explorer.

In either case, you can't avoid a branch (internally), so don't try!
In
number = number + blah*2^n
the full expression will always have to be evaluated, unless the compiler is smart enough to stop when blah is 0. If it is, you'll get a branch if blah is 0. If it's not, you always get an expensive multiply. In case blah is false, you'll also get the unnecessary add and assignment.
In the "if then" statement, the statement will only do the add and assignment when blah is true.
In short, the answer to your question in this case is "yes".

This code shows they perform similarly, but multiplication is usually slightly faster.
@Test
public void manual_time_trial()
{
Date beforeIfElse = new Date();
if_else_test();
Date afterIfElse = new Date();
long ifElseDifference = afterIfElse.getTime() - beforeIfElse.getTime();
System.out.println("If-Else Diff: " + ifElseDifference);
Date beforeMultiplication = new Date();
multiplication_test();
Date afterMultiplication = new Date();
long multiplicationDifference = afterMultiplication.getTime() - beforeMultiplication.getTime();
System.out.println("Mult Diff : " + multiplicationDifference);
}
private static long loopFor = 100000000000L;
private static short x = 200;
private static short y = 195;
private static int z;
private static void if_else_test()
{
short diff = (short) (y - x);
for(long i = 0; i < loopFor; i++)
{
if (diff < 0)
{
z = -diff;
}
else
{
z = diff;
}
}
}
private static void multiplication_test()
{
for(long i = 0; i < loopFor; i++)
{
short diff = (short) (y - x);
z = diff * diff;
}
}

Related

Optimizing horizontal boolean reduction in ARM NEON

I'm experimenting with a cross-platform SIMD library ala ecmascript_simd aka SIMD.js, and part of this is providing a few "horizontal" SIMD operations. In particular, the API that library offers includes any(<boolN x M>) -> bool and all(<boolN x M>) -> bool functions, where <T x K> is a vector of K elements of type T and boolN is an N-bit boolean, i.e. all ones or all zeros, as SSE and NEON return for their comparison operations.
For example, let v be a <bool32 x 4> (a 128-bit vector), it could be the result of VCLT.S32 or something. I'd like to compute all(v) = v[0] && v[1] && v[2] && v[3] and any(v) = v[0] || v[1] || v[2] || v[3].
This is easy with SSE, e.g. movmskps will extract the high bit of each element, so all for the type above becomes (with C intrinsics):
#include<xmmintrin.h>
int all(__m128 x) {
return _mm_movemask_ps(x) == 8 + 4 + 2 + 1;
}
and similarly for any.
I'm struggling to find obvious/nice/efficient ways to implement this with NEON, which doesn't support an instruction like movmskps. There's the approach of simply extracting each element and computing with scalars. E.g. there's the naive method but there's also the approach of using the "horizontal" operations NEON supports, like VPMAX and VPMIN.
#include<arm_neon.h>
int all_naive(uint32x4_t v) {
return v[0] && v[1] && v[2] && v[3];
}
int all_horiz(uint32x4_t v) {
uint32x2_t x = vpmin_u32(vget_low_u32(v),
vget_high_u32(v));
uint32x2_t y = vpmin_u32(x, x);
return y[0] != 0;
}
(One can do a similar thing for the latter with VPADD, which may be faster, but it's fundamentally the same idea.)
Are there are other tricks one can use to implement this?
Yes, I know that horizontal operations are not great with SIMD vector units. But sometimes it is useful, e.g. many SIMD implementations of Mandelbrot will operate on 4 points at once, and bail out of the inner loop when all of them are out of range... which requires doing a comparison and then a horizontal AND.
This is my current solution that is implemented in eve library.
If your backend has C++20 support, you can just use the library: it has implementations for arm-v7, arm-v8 (only little endian at the moment) and all x86 from sse2 to avx-512. It's open source and MIT licensed. In beta at the moment. Feel free to reach out (for example with an issue) if you are trying out the library.
Take everything with a grain of salt - I don't yet have the arm benchmarks set up.
NOTE: On top of basic all and any we also have a movemask equivalent to do more complex operations like first_true. That wasn't part of the question and it's not amazing but the code can be found here
ARM-V7, 8 bytes register
Now, arm-v7 is 32 bit architecture, so we try to get to 32 bit elements where we can.
any
Use pairwise 32 bit max. If any element is true, the max is true.
// cast to dwords
dwords = vpmax_u32(dwords, dwords);
return vget_lane_u32(dwords, 0);
all
Pairwise min instead of max. Also what you test against changes.
If you have 4-byte elements, just test for truth. If shorts or chars, you need to test for -1.
// cast to dwords
dwords = vpmin_u32(dwords, dwords);
std::uint32_t combined = vget_lane_u32(dwords, 0);
// Assuming T is your scalar type
if constexpr ( sizeof(T) >= 4 ) return combined;
// I decided that !~ is better than -1, compiler will figure it out.
return !~combined;
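Putting the two reductions together for the 32-bit-element case, a minimal self-contained sketch (function names are mine, assuming GCC/Clang NEON intrinsics) might look like:
#include <arm_neon.h>

/* Hedged sketch of the pairwise reductions described above, for an 8-byte
   register holding two 32-bit lanes, each all-ones or all-zeros. */
int any_u32x2(uint32x2_t v)
{
    v = vpmax_u32(v, v);               /* lane 0 = max of the two lanes */
    return vget_lane_u32(v, 0) != 0;   /* non-zero if any lane was true */
}

int all_u32x2(uint32x2_t v)
{
    v = vpmin_u32(v, v);               /* lane 0 = min of the two lanes */
    return vget_lane_u32(v, 0) != 0;   /* non-zero only if both lanes were true */
}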
ARM-V7, 16 bytes register
For anything bigger than chars, just do a conversion to a 64 bit one. Here is the list of vector narrow integer conversions.
For chars, the best I found is to reinterpret as uint32 and do an extra check.
So compare for == -1 for all and > 0 for any.
The asm seemed nicer than splitting into two 8-byte registers.
Then just do all/any on that dword register.
ARM-v8, 8 byte
ARM-v8 has 64 bit support, so you can just get a 64 bit lane. That one is trivially testable.
ARM-v8, 16 byte
We use vmaxvq_u32 since there is not a 64 bit one for any and vminvq_u32, vminvq_u16 or vminvq_u8 for all depending on the element size.
(Which is similar to glibc strlen)
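For 32-bit lanes, a minimal sketch of those across-vector reductions (my own illustration, not the library code):
#include <arm_neon.h>

/* Hedged sketch using the AArch64 across-vector reductions mentioned above. */
int any_u32x4(uint32x4_t v) { return vmaxvq_u32(v) != 0; }  /* max over all lanes */
int all_u32x4(uint32x4_t v) { return vminvq_u32(v) != 0; }  /* min over all lanes */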
Conclusion
Lack of benchmarks definitely makes me worried, some instructions are problematic sometimes and I don't know about it.
Regardless, that's the best I've got, so far at least.
NOTE: first time looking at arm today, I might be wrong about things.
UPD: Removed ARM-V7 and will write up what we ended up doing in a separate answer
ARM-V8.
For ARM-V8, have a look at this strlen implementation from glibc:
https://code.woboq.org/userspace/glibc/sysdeps/aarch64/multiarch/strlen_asimd.S.html
ARM-V8 introduced reductions across registers. Here they use min to compare with 0
uminv datab2, datav.16b
mov tmp1, datav2.d[0]
cbnz tmp1, L(main_loop)
Find the smallest char, compare with 0 - take the next 16 bytes.
There are a few other reductions in ARM-V8 like vaddvq_u8.
I'm pretty sure you can do most of the things you'd want from movemask and alike with this.
Another interesting thing here is how they find the first_true
/* Set the NULL byte as 0xff and the rest as 0x00, move the data into a
pair of scalars and then compute the length from the earliest NULL
byte. */
cmeq datav.16b, datav.16b, #0
mov data1, datav.d[0]
mov data2, datav.d[1]
cmp data1, 0
csel data1, data1, data2, ne
sub len, src, srcin
rev data1, data1
add tmp2, len, 8
clz tmp1, data1
csel len, len, tmp2, ne
add len, len, tmp1, lsr 3
Looks a bit intimidating, but my understanding is:
they narrow it down to a 64-bit number just by doing an if/else (if the first half doesn't have the zero, the second half does);
use count leading zeroes to find the position (didn't quite understand all of the endianness stuff here but it's libc - so this is the correct one).
So - if you only need V8 - there is a solution.

Fast and efficient array lookup and modification

I have been wondering for a while which of the two following methods are faster or better.
MY CURRENT METHOD
I'm developing a chess game and the pieces are stored as numbers (really bytes to preserve memory) into a one-dimensional array. There is a position for the cursor corresponding to the index in the array. To access the piece at the current position in the array is easy (piece = pieces[cursorPosition]).
The problem is that getting the x and y values for checking whether a move is valid requires division and modulo operators (x = cursorPosition % 8; y = cursorPosition / 8).
Likewise when using x and y to check if moves are valid (you have to do it this way for reasons that would fill the entire page), you have to do something like - purely as an example - if pieces[y * 8 + x] != 0: movePiece = False. The obvious problem is having to do y * 8 + x a bunch of times to access the array.
Ultimately, this means that getting a piece is trivial but then getting the x and y requires another bit of memory and a very small amount of time to compute it each round.
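To make the trade-off concrete, a minimal sketch of the 1-D scheme described above (BOARD_W and the macro names are illustrative, not from my actual code):
/* Hedged sketch: 1-D board with index <-> coordinate conversion. */
#define BOARD_W 8
#define IDX(x, y)  ((y) * BOARD_W + (x))   /* coordinates -> array index */
#define POS_X(i)   ((i) % BOARD_W)         /* array index -> x */
#define POS_Y(i)   ((i) / BOARD_W)         /* array index -> y */

unsigned char pieces[BOARD_W * BOARD_W];
/* piece under the cursor: pieces[cursorPosition]                        */
/* its coordinates:        POS_X(cursorPosition), POS_Y(cursorPosition)  */
/* piece at (x, y):        pieces[IDX(x, y)]                             */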
A MORE TRADITIONAL METHOD
Using a two-dimensional array, one can implement the above process a little easier except for the fact that piece lookup is now a little harder and more memory is used. (I.e. piece = pieces[cursorPosition[0]][cursorPosition[1]] or piece = pieces[x][y]).
I don't think this is faster and it definitely doesn't look less memory intensive.
GOAL
My end goal is to have the fastest possible code that uses the least amount of memory. This will be developed for the Unix terminal (and potentially Windows CMD, if I can figure out how to represent the pieces without color using ANSI escape sequences). I will either be using a secure (encrypted, with protocol and structure) TCP connection to connect people p2p to play chess, or something else, and I don't know how much memory people will have, how fast their computers will be, or how strong their internet connections will be.
I also just want to learn to do this the best way possible and see if it can be done.
-
I suppose my question is one of the following:
Which of the above methods is better assuming that there are slightly more computations involving move validation (which means that the y * 8 + x has to be used a lot)?
or
Is there perhaps a method that includes both of the benefits of 1d and 2d arrays with not as many draw backs as I described?
First, you should profile your code to make sure that this is really a bottleneck worth spending time on.
Second, if you're representing your position as an unsigned byte decomposing it into X and Y coordinates will be very fast. If we use the following C code:
int getX(unsigned char pos) {
return pos%8;
}
We get the following assembly with gcc 4.8 -O2:
getX(unsigned char):
movl %edi, %eax
andl $7, %eax
ret
If we get the Y coordinate with:
int getY(unsigned char pos) {
return pos/8;
}
We get the following assembly with gcc 4.8 -O2:
getY(unsigned char):
shrb $3, %dil
movzbl %dil, %eax
ret
There is no short answer to this question; it all depends on how much time you spend optimizing.
On some architectures, two-dimensional arrays might work better than one-dimensional. On other architectures, bitmapped integers might be the best.
Do not worry about division and multiplication.
You're dividing, taking the modulus, and multiplying by 8.
This number is a power of two, so any computer can use bitwise operations to achieve the result.
(x * 8) is the same as (x << 3)
(x % 8) is the same as (x & (8 - 1))
(x / 8) is the same as (x >> 3)
Those operations are normally performed in a single clock cycle. On many modern architectures, they can be performed in less than a single clock cycle (including ARM architectures).
Do not worry about using bitwise operators instead of *, % and /. If you're using a compiler that's less than a decade old, it'll optimize it for you and use bitwise operations.
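If you want to convince yourself of the identities, here is a throwaway check for unsigned values (my own example; note that for signed negative values, / and % round differently from the shift-and-mask forms):
#include <assert.h>

int main(void)
{
    /* The three power-of-two identities above, checked for unsigned x. */
    for (unsigned x = 0; x < 1000; x++) {
        assert((x * 8) == (x << 3));
        assert((x % 8) == (x & (8 - 1)));
        assert((x / 8) == (x >> 3));
    }
    return 0;
}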
What you should focus on instead, is how easy it will be for you to find out whether or not a move is legal, for instance. This will help your computer-player to "think quickly".
If you're using an 8*8 array, then it's easy for you to see where a castle can move by checking if only x or y is changed. If checking the queen, then X must either be the same or move the same number of steps as the Y position.
If you use a one-dimensional array, you also have advantages.
But performance-wise, it might be a really good idea to use a 16x16 array or a 1x256 array.
Fill the entire array with 0x80 values (eg. "illegal position"). Then fill the legal fields with 0x00.
If using a 1x256 array, you can check bit 3 and 7 of the index. If any of those are set, then the position is outside the board.
Testing can be done this way:
if(position & 0x88)
{
/* move is illegal */
}
else
{
/* move is legal */
}
... or ...
if(0 == (position & 0x88))
{
/* move is legal */
}
'position' (the index) should be an unsigned byte (uint8_t in C). This way, you'll never have to worry about pointing outside the buffer.
Some people optimize their chess-engines by using 64-bit bitmapped integers.
While this is good for quickly comparing the positions, it has other disadvantages; for instance checking if the knight's move is legal.
It's not easy to say which is better, though.
Personally, I think the one-dimensional array in general might be the best way to do it.
I recommend getting familiar (very familiar) with AND, OR, XOR, bit-shifting and rotating.
See Bit Twiddling Hacks for more information.

Quickly find whether a value is present in a C array?

I have an embedded application with a time-critical ISR that needs to iterate through an array of size 256 (preferably 1024, but 256 is the minimum) and check if a value matches the array's contents. A bool will be set to true if this is the case.
The microcontroller is an NXP LPC4357, ARM Cortex M4 core, and the compiler is GCC. I have already combined optimisation level 2 (3 is slower) with placing the function in RAM instead of flash. I also use pointer arithmetic and a for loop which counts down instead of up (checking if i!=0 is faster than checking if i<256). All in all, I end up with a duration of 12.5 µs, which has to be reduced drastically to be feasible. This is the (pseudo) code I use now:
uint32_t i;
uint32_t *array_ptr = &theArray[0];
uint32_t compareVal = 0x1234ABCD;
bool validFlag = false;
for (i=256; i!=0; i--)
{
if (compareVal == *array_ptr++)
{
validFlag = true;
break;
}
}
What would be the absolute fastest way to do this? Using inline assembly is allowed. Other 'less elegant' tricks are also allowed.
In situations where performance is of utmost importance, the C compiler will most likely not produce the fastest code compared to what you can do with hand tuned assembly language. I tend to take the path of least resistance - for small routines like this, I just write asm code and have a good idea how many cycles it will take to execute. You may be able to fiddle with the C code and get the compiler to generate good output, but you may end up wasting lots of time tuning the output that way. Compilers (especially from Microsoft) have come a long way in the last few years, but they are still not as smart as the compiler between your ears because you're working on your specific situation and not just a general case. The compiler may not make use of certain instructions (e.g. LDM) that can speed this up, and it's unlikely to be smart enough to unroll the loop.
Here's a way to do it which incorporates the 3 ideas I mentioned in my comment: loop unrolling, cache prefetch and making use of the multiple load (ldm) instruction. The instruction cycle count comes out to about 3 clocks per array element, but this doesn't take into account memory delays.
Theory of operation: ARM's CPU design executes most instructions in one clock cycle, but the instructions are executed in a pipeline. C compilers will try to eliminate the pipeline delays by interleaving other instructions in between. When presented with a tight loop like the original C code, the compiler will have a hard time hiding the delays because the value read from memory must be immediately compared. My code below alternates between 2 sets of 4 registers to significantly reduce the delays of the memory itself and the pipeline fetching the data. In general, when working with large data sets and your code doesn't make use of most or all of the available registers, then you're not getting maximum performance.
; r0 = count, r1 = source ptr, r2 = comparison value
stmfd sp!,{r4-r11} ; save non-volatile registers
mov r3,r0,LSR #3 ; loop count = total count / 8
pld [r1,#128]
ldmia r1!,{r4-r7} ; pre load first set
loop_top:
pld [r1,#128]
ldmia r1!,{r8-r11} ; pre load second set
cmp r4,r2 ; search for match
cmpne r5,r2 ; use conditional execution to avoid extra branch instructions
cmpne r6,r2
cmpne r7,r2
beq found_it
ldmia r1!,{r4-r7} ; use 2 sets of registers to hide load delays
cmp r8,r2
cmpne r9,r2
cmpne r10,r2
cmpne r11,r2
beq found_it
subs r3,r3,#1 ; decrement loop count
bne loop_top
mov r0,#0 ; return value = false (not found)
ldmia sp!,{r4-r11} ; restore non-volatile registers
bx lr ; return
found_it:
mov r0,#1 ; return true
ldmia sp!,{r4-r11}
bx lr
Update:
There are a lot of skeptics in the comments who think that my experience is anecdotal/worthless and require proof. I used GCC 4.8 (from the Android NDK 9C) to generate the following output with optimization -O2 (all optimizations turned on including loop unrolling). I compiled the original C code presented in the question above. Here's what GCC produced:
.L9: cmp r3, r0
beq .L8
.L3: ldr r2, [r3, #4]!
cmp r2, r1
bne .L9
mov r0, #1
.L2: add sp, sp, #1024
bx lr
.L8: mov r0, #0
b .L2
GCC's output not only doesn't unroll the loop, but also wastes a clock on a stall after the LDR. It requires at least 8 clocks per array element. It does a good job of using the address to know when to exit the loop, but all of the magical things compilers are capable of doing are nowhere to be found in this code. I haven't run the code on the target platform (I don't own one), but anyone experienced in ARM code performance can see that my code is faster.
Update 2:
I gave Microsoft's Visual Studio 2013 SP2 a chance to do better with the code. It was able to use NEON instructions to vectorize my array initialization, but the linear value search as written by the OP came out similar to what GCC generated (I renamed the labels to make it more readable):
loop_top:
ldr r3,[r1],#4
cmp r3,r2
beq true_exit
subs r0,r0,#1
bne loop_top
false_exit: xxx
bx lr
true_exit: xxx
bx lr
As I said, I don't own the OP's exact hardware, but I will be testing the performance on an nVidia Tegra 3 and Tegra 4 of the 3 different versions and post the results here soon.
Update 3:
I ran my code and Microsoft's compiled ARM code on a Tegra 3 and Tegra 4 (Surface RT, Surface RT 2). I ran 1000000 iterations of a loop which fails to find a match so that everything is in cache and it's easy to measure.
              My Code   MS Code
Surface RT    297ns     562ns
Surface RT 2  172ns     296ns
In both cases my code runs almost twice as fast. Most modern ARM CPUs will probably give similar results.
There's a trick for optimizing it (I was asked this on a job-interview once):
If the last entry in the array holds the value that you're looking for, then return true
Write the value that you're looking for into the last entry in the array
Iterate the array until you encounter the value that you're looking for
If you've encountered it before the last entry in the array, then return true
Return false
bool check(uint32_t theArray[], uint32_t compareVal)
{
uint32_t i;
uint32_t x = theArray[SIZE-1];
if (x == compareVal)
return true;
theArray[SIZE-1] = compareVal;
for (i = 0; theArray[i] != compareVal; i++);
theArray[SIZE-1] = x;
return i != SIZE-1;
}
This yields one branch per iteration instead of two branches per iteration.
UPDATE:
If you're allowed to allocate the array to SIZE+1, then you can get rid of the "last entry swapping" part:
bool check(uint32_t theArray[], uint32_t compareVal)
{
uint32_t i;
theArray[SIZE] = compareVal;
for (i = 0; theArray[i] != compareVal; i++);
return i != SIZE;
}
You can also get rid of the additional arithmetic embedded in theArray[i], using the following instead:
bool check(uint32_t theArray[], uint32_t compareVal)
{
uint32_t *arrayPtr;
theArray[SIZE] = compareVal;
for (arrayPtr = theArray; *arrayPtr != compareVal; arrayPtr++);
return arrayPtr != theArray+SIZE;
}
If the compiler doesn't already apply it, then this function will do so for sure. On the other hand, it might make it harder on the optimizer to unroll the loop, so you will have to verify that in the generated assembly code...
Keep the table in sorted order, and use Bentley's unrolled binary search:
i = 0;
if (key >= a[i+512]) i += 512;
if (key >= a[i+256]) i += 256;
if (key >= a[i+128]) i += 128;
if (key >= a[i+ 64]) i += 64;
if (key >= a[i+ 32]) i += 32;
if (key >= a[i+ 16]) i += 16;
if (key >= a[i+ 8]) i += 8;
if (key >= a[i+ 4]) i += 4;
if (key >= a[i+ 2]) i += 2;
if (key >= a[i+ 1]) i += 1;
return (key == a[i]);
The point is,
if you know how big the table is, then you know how many iterations there will be, so you can fully unroll it.
Then, there's no point testing for the == case on each iteration because, except on the last iteration, the probability of that case is too low to justify spending time testing for it.**
Finally, by expanding the table to a power of 2, you add at most one comparison, and at most a factor of two storage.
** If you're not used to thinking in terms of probabilities, every decision point has an entropy, which is the average information you learn by executing it.
For the >= tests, the probability of each branch is about 0.5, and -log2(0.5) is 1, so that means if you take one branch you learn 1 bit, and if you take the other branch you learn one bit, and the average is just the sum of what you learn on each branch times the probability of that branch.
So 1*0.5 + 1*0.5 = 1, so the entropy of the >= test is 1. Since you have 10 bits to learn, it takes 10 branches.
That's why it's fast!
On the other hand, what if your first test is if (key == a[i+512])? The probability of it being true is 1/1024, while the probability of false is 1023/1024. So if it's true you learn all 10 bits!
But if it's false you learn -log2(1023/1024) = .00141 bits, practically nothing!
So the average amount you learn from that test is 10/1024 + .00141*1023/1024 = .0098 + .00141 = .0112 bits. About one hundredth of a bit.
That test is not carrying its weight!
You're asking for help with optimising your algorithm, which may push you to assembler. But your algorithm (a linear search) is not so clever, so you should consider changing your algorithm. E.g.:
perfect hash function
binary search
Perfect hash function
If your 256 "valid" values are static and known at compile time, then you can use a perfect hash function. You need to find a hash function that maps your input value to a value in the range 0..n, where there are no collisions for all the valid values you care about. That is, no two "valid" values hash to the same output value. When searching for a good hash function, you aim to:
Keep the hash function reasonably fast.
Minimise n. The smallest you can get is 256 (minimal perfect hash function), but that's probably hard to achieve, depending on the data.
Note that for efficient hash functions, n is often a power of 2, so that the modulo is equivalent to a bitwise mask of the low bits (AND operation). Example hash functions:
CRC of input bytes, modulo n.
((x << i) ^ (x >> j) ^ (x << k) ^ ...) % n (picking as many i, j, k, ... as needed, with left or right shifts)
Then you make a fixed table of n entries, where the hash maps the input values to an index i into the table. For valid values, table entry i contains the valid value. For all other table entries, ensure that each entry of index i contains some other invalid value which doesn't hash to i.
Then in your interrupt routine, with input x:
Hash x to index i (which is in the range 0..n)
Look up entry i in the table and see if it contains the value x.
This will be much faster than a linear search of 256 or 1024 values.
I've written some Python code to find reasonable hash functions.
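A minimal sketch of what the lookup then looks like (the hash function, table size and table contents here are placeholders; a real perfect hash and its table would be generated offline for your actual valid values):
#include <stdint.h>
#include <stdbool.h>

#define TABLE_SIZE 1024                                        /* n, a power of two */
#define HASH(x)    (((x) ^ ((x) >> 13)) & (TABLE_SIZE - 1))    /* placeholder hash */

/* Filled offline so that every valid value sits at table[HASH(value)],
   and every other slot holds some value that does not hash to its own index. */
static const uint32_t table[TABLE_SIZE];

bool is_valid(uint32_t x)
{
    return table[HASH(x)] == x;    /* one probe, no loop */
}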
Binary search
If you sort your array of 256 "valid" values, then you can do a binary search, rather than a linear search. That means you should be able to search a 256-entry table in only 8 steps (log2(256)), or a 1024-entry table in 10 steps. Again, this will be much faster than a linear search of 256 or 1024 values.
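A minimal sketch of that binary search over a sorted table (illustrative, not tuned for the ISR):
#include <stdint.h>
#include <stdbool.h>

bool contains_sorted(const uint32_t *a, uint32_t n, uint32_t key)
{
    uint32_t lo = 0, hi = n;            /* search the half-open range [lo, hi) */
    while (lo < hi) {
        uint32_t mid = lo + (hi - lo) / 2;
        if (a[mid] < key)       lo = mid + 1;
        else if (a[mid] > key)  hi = mid;
        else                    return true;
    }
    return false;
}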
If the set of constants in your table is known in advance, you can use perfect hashing to ensure that only one access is made to the table. Perfect hashing determines a hash function
that maps every interesting key to a unique slot (that table isn't always dense, but you can decide how un-dense a table you can afford, with less dense tables typically leading to simpler hashing functions).
Usually, the perfect hash function for the specific set of keys is relatively easy to compute; you don't want that to be long and complicated because that competes for time perhaps better spent doing multiple probes.
Perfect hashing is a "1-probe max" scheme. One can generalize the idea, with the thought that one should trade simplicity of computing the hash code with the time it takes to make k probes. After all, the goal is "least total time to look up", not fewest probes or simplest hash function. However, I've never seen anybody build a k-probes-max hashing algorithm. I suspect one can do it, but that's likely research.
One other thought: if your processor is extremely fast, the one probe to memory from a perfect hash probably dominates the execution time. If the processor is not very fast, then k>1 probes might be practical.
Use a hash set. It will give O(1) lookup time.
The following code assumes that you can reserve value 0 as an 'empty' value, i.e. not occurring in actual data.
The solution can be expanded for a situation where this is not the case.
#define HASH(x) (((x >> 16) ^ x) & 1023)
#define HASH_LEN 1024
uint32_t my_hash[HASH_LEN];
int lookup(uint32_t value)
{
int i = HASH(value);
while (my_hash[i] != 0 && my_hash[i] != value) i = (i + 1) % HASH_LEN;
return i;
}
void store(uint32_t value)
{
int i = lookup(value);
if (my_hash[i] == 0)
my_hash[i] = value;
}
bool contains(uint32_t value)
{
return (my_hash[lookup(value)] == value);
}
In this example implementation, the lookup time will typically be very low, but in the worst case it can be up to the number of entries stored. For a realtime application, you can also consider an implementation using binary trees, which will have a more predictable lookup time.
In this case, it might be worthwhile investigating Bloom filters. They're capable of quickly establishing that a value is not present, which is a good thing since most of the 2^32 possible values are not in that 1024 element array. However, there are some false positives that will need an extra check.
Since your table is apparently static, you can determine which false positives exist for your Bloom filter and put those in a perfect hash.
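A minimal sketch of the idea, using a tiny 64-bit filter and two ad-hoc hash mixes (the constants are illustrative, not tuned):
#include <stdint.h>
#include <stdbool.h>

static uint64_t bloom;    /* the filter itself */

static unsigned h1(uint32_t x) { return (x * 2654435761u) >> 26; }   /* bit index 0..63 */
static unsigned h2(uint32_t x) { return (x ^ (x >> 16)) & 63u; }     /* bit index 0..63 */

void bloom_add(uint32_t x)
{
    bloom |= (1ull << h1(x)) | (1ull << h2(x));    /* set both bits for x */
}

bool bloom_maybe_contains(uint32_t x)
{
    uint64_t m = (1ull << h1(x)) | (1ull << h2(x));
    return (bloom & m) == m;    /* false: definitely absent; true: check the real table */
}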
Assuming your processor runs at 204 MHz which seems to be the maximum for the LPC4357, and also assuming your timing result reflects the average case (half of the array traversed), we get:
CPU frequency: 204 MHz
Cycle period: 4.9 ns
Duration in cycles: 12.5 µs / 4.9 ns = 2551 cycles
Cycles per iteration: 2551 / 128 = 19.9
So, your search loop spends around 20 cycles per iteration. That doesn't sound awful, but I guess that in order to make it faster you need to look at the assembly.
I would recommend dropping the index and using a pointer comparison instead, and making all the pointers const.
bool arrayContains(const uint32_t *array, size_t length)
{
const uint32_t * const end = array + length;
while(array != end)
{
if(*array++ == 0x1234ABCD)
return true;
}
return false;
}
That's at least worth testing.
Other people have suggested reorganizing your table, adding a sentinel value at the end, or sorting it in order to provide a binary search.
You state "I also use pointer arithmetic and a for loop, which does down-counting instead of up (checking if i != 0 is faster than checking if i < 256)."
My first advice is: get rid of the pointer arithmetic and the downcounting. Stuff like
for (i=0; i<256; i++)
{
if (compareVal == the_array[i])
{
[...]
}
}
tends to be idiomatic to the compiler. The loop is idiomatic, and the indexing of an array over a loop variable is idiomatic. Juggling with pointer arithmetic and pointers will tend to obfuscate the idioms to the compiler and make it generate code related to what you wrote rather than what the compiler writer decided to be the best course for the general task.
For example, the above code might be compiled into a loop running from -256 or -255 to zero, indexing off &the_array[256]. Possibly stuff that is not even expressible in valid C but matches the architecture of the machine you are generating for.
So don't microoptimize. You are just throwing spanners into the works of your optimizer. If you want to be clever, work on the data structures and algorithms but don't microoptimize their expression. It will just come back to bite you, if not on the current compiler/architecture, then on the next.
In particular using pointer arithmetic instead of arrays and indexes is poison for the compiler being fully aware of alignments, storage locations, aliasing considerations and other stuff, and for doing optimizations like strength reduction in the way best suited to the machine architecture.
Vectorization can be used here, as it is often is in implementations of memchr. You use the following algorithm:
Create a mask of your query repeated, equal in length to your machine's word size (64-bit, 32-bit, etc.). On a 64-bit system you would repeat the 32-bit query twice.
Process the list as a list of multiple pieces of data at once, simply by casting the list to a list of a larger data type and pulling values out. For each chunk, XOR it with the mask, then XOR with 0b0111...1, then add 1, then & with a mask of 0b1000...0 repeating. If the result is 0, there is definitely not a match. Otherwise, there may (usually with very high probability) be a match, so search the chunk normally.
Example implementation: https://sourceware.org/cgi-bin/cvsweb.cgi/src/newlib/libc/string/memchr.c?rev=1.3&content-type=text/x-cvsweb-markup&cvsroot=src
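A hedged C sketch of the word-at-a-time test described above, reduced to the classic "does this 64-bit word contain a zero 32-bit lane?" check (my own illustration, not the linked memchr code):
#include <stdint.h>
#include <stdbool.h>

bool chunk_may_contain(uint64_t chunk, uint32_t query)
{
    uint64_t mask = ((uint64_t)query << 32) | query;    /* query repeated twice */
    uint64_t x = chunk ^ mask;                          /* a matching lane becomes zero */
    /* per-lane zero detection (the standard has-zero trick) */
    uint64_t hit = (x - 0x0000000100000001ull) & ~x & 0x8000000080000000ull;
    return hit != 0;    /* if set, verify the chunk with a normal per-element check */
}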
If you can accommodate the domain of your values with the amount of memory that's available to your application, then, the fastest solution would be to represent your array as an array of bits:
bool theArray[MAX_VALUE]; // of which 1024 values are true, the rest false
uint32_t compareVal = 0x1234ABCD;
bool validFlag = theArray[compareVal];
EDIT
I'm astounded by the number of critics. The title of this thread is "How do I quickly find whether a value is present in a C array?" for which I will stand by my answer because it answers precisely that. I could argue that this has the most speed efficient hash function (since address === value). I've read the comments and I'm aware of the obvious caveats. Undoubtedly those caveats limit the range of problems this can be used to solve, but, for those problems that it does solve, it solves very efficiently.
Rather than reject this answer outright, consider it as the optimal starting point from which you can evolve by using hash functions to achieve a better balance between speed and memory.
I'm sorry if my answer was already given - I'm just a lazy reader. Feel free to downvote then ))
1) You could remove the counter 'i' altogether - just compare pointers, i.e.
for (ptr = &the_array[0]; ptr < the_array+1024; ptr++)
{
if (compareVal == *ptr)
{
break;
}
}
... compare ptr and the_array+1024 here - you do not need validFlag at all.
All that won't give any significant improvement though; such optimization could probably be achieved by the compiler itself.
2) As already mentioned in other answers, almost all modern CPUs are RISC-based, for example ARM. Even modern Intel x86 CPUs use RISC cores inside, as far as I know (translating from x86 on the fly). A major optimization for RISC (and for Intel and other CPUs as well) is pipeline optimization, minimizing code jumps. One type of such optimization (probably a major one) is "cycle rollback". It's incredibly simple, and efficient; even the Intel compiler can do that AFAIK. It looks like:
if (compareVal == the_array[0]) { validFlag = true; goto end_of_compare; }
if (compareVal == the_array[1]) { validFlag = true; goto end_of_compare; }
...and so on...
end_of_compare:
This way, the optimization is that the pipeline is not broken in the worst case (when compareVal is absent from the array), so it is as fast as possible (of course not counting algorithm optimizations such as hash tables, sorted arrays and so on, mentioned in other answers, which may give better results depending on array size; the cycle rollback approach can be applied there as well, by the way - I'm writing here about what I think I didn't see in the other answers).
The second part of this optimization is that the array item is accessed by a direct address (calculated at compile time; make sure you use a static array), so no additional ADD op is needed to compute the pointer from the array's base address. This optimization may not have a significant effect, since AFAIK the ARM architecture has special features to speed up array addressing. But anyway, it's always better to know that you did your best directly in the C code, right?
Cycle rollback may look awkward due to the waste of ROM (yep, you did right placing it in the fast part of RAM, if your board supports this feature), but actually it's a fair price for speed, being based on the RISC concept. This is just a general point of computational optimization - you sacrifice space for the sake of speed, and vice versa, depending on your requirements.
If you think that a rollback for an array of 1024 elements is too large a sacrifice for your case, you can consider a 'partial rollback', for example dividing the array into 2 parts of 512 items each, or 4x256, and so on.
3) Modern CPUs often support SIMD ops, for example the ARM NEON instruction set - it allows executing the same op on multiple data in parallel. Frankly speaking, I do not remember whether it is suitable for comparison ops, but I feel it may be; you should check that. Googling shows that there may be some tricks as well to get max speed, see https://stackoverflow.com/a/5734019/1028256
I hope it can give you some new ideas.
This is more like an addendum than an answer.
I've had a similar case in the past, but my array was constant over a considerable number of searches.
In half of them, the searched value was NOT present in array. Then I realized I could apply a "filter" before doing any search.
This "filter" is just a simple integer number, calculated ONCE and used in each search.
It's in Java, but it's pretty simple:
binaryfilter = 0;
for (int i = 0; i < array.length; i++)
{
// just apply "Binary OR Operator" over values.
binaryfilter = binaryfilter | array[i];
}
So, before doing a binary search, I check binaryfilter:
// Check binaryfilter vs value with a "Binary AND Operator"
if ((binaryfilter & valuetosearch) != valuetosearch)
{
// valuetosearch is not in the array!
return false;
}
else
{
// valuetosearch MAYBE in the array, so let's check it out
// ... do binary search stuff ...
}
You can use a 'better' hash algorithm, but this one can be very fast, especially for large numbers.
Maybe this could save you even more cycles.
Make sure the instructions ("the pseudo code") and the data ("theArray") are in separate (RAM) memories so CM4 Harvard architecture is utilized to its full potential. From the user manual:
To optimize the CPU performance, the ARM Cortex-M4 has three buses for Instruction (code) (I) access, Data (D) access, and System (S) access. When instructions and data are kept in separate memories, then code and data accesses can be done in parallel in one cycle. When code and data are kept in the same memory, then instructions that load or store data may take two cycles.
Following this guideline I observed ~30% speed increase (FFT calculation in my case).
I'm a great fan of hashing. The problem of course is to find an efficient algorithm that is both fast and uses a minimum amount of memory (especially on an embedded processor).
If you know beforehand the values that may occur you can create a program that runs through a multitude of algorithms to find the best one - or, rather, the best parameters for your data.
I created such a program that you can read about in this post and achieved some very fast results. 16000 entries translates roughly to 2^14 or an average of 14 comparisons to find the value using a binary search. I explicitly aimed for very fast lookups - on average finding the value in <=1.5 lookups - which resulted in greater RAM requirements. I believe that with a more conservative average value (say <=3) a lot of memory could be saved. By comparison the average case for a binary search on your 256 or 1024 entries would result in an average number of comparisons of 8 and 10, respectively.
My average lookup required around 60 cycles (on a laptop with an intel i5) with a generic algorithm (utilizing one division by a variable) and 40-45 cycles with a specialized (probably utilizing a multiplication). This should translate into sub-microsecond lookup times on your MCU, depending of course on the clock frequency it executes at.
It can be real-life-tweaked further if the entry array keeps track of how many times an entry was accessed. If the entry array is sorted from most to least accessed before the indices are computed, then it'll find the most commonly occurring values with a single comparison.

Find out max & min of two numbers without using if-else?

I am able to find out the logic from: Here
r = y ^ ((x ^ y) & -(x < y)); // min(x, y)
r = x ^ ((x ^ y) & -(x < y)); // max(x, y)
It says it is faster than doing
r = (x < y) ? x : y
Can someone explain it a bit more, with an example, to help understand it?
How could it be faster?
Discussing optimization without specific hardware in mind doesn't make any sense. You really can't tell which alternative is fastest without going into the details of a specific system. Boldly stating that the first alternative is fastest without any specific hardware in mind is just premature optimization.
The obscure XOR solution might be faster than the comparison alternative if the given CPU's performance relies heavily on branch prediction - in other words, if it executes regular instructions such as arithmetic ones very fast, but hits a performance bottleneck at any conditional statement (such as an if), where the code might branch several ways. Other factors such as the amount of instruction cache memory etc. also matter.
Many CPUs will however execute the second alternative much faster, because it involves fewer operations.
So to sum it up, you'll have to be an expert of the given CPU to actually tell in theory which code that will be the fastest. If you aren't such an expert, simply benchmark it and see. Or look at the disassembly for notable differences.
In the link that you provided, it is explicitly stated:
On some rare machines where branching is very expensive and no condition move instructions exist, the [code] might be faster than the obvious approach, r = (x < y) ? x : y
Later on, it says:
On some machines, evaluating (x < y) as 0 or 1 requires a branch instruction, so there may be no advantage.
In short, the bit manipulation solution is only faster on machines that have poor branch execution, as it operates solely on the numerical values of the operands. On most machines the branching approach is just as fast (and sometimes even faster) and should be preferred for its readability.
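For reference, here is a short walk-through of how the quoted expressions produce min and max (my own example, assuming that x < y evaluates to 0 or 1 and integers are two's complement):
#include <stdio.h>

int main(void)
{
    int x = 3, y = 7;
    /* -(x < y) is -1 (all bits set) when x < y, and 0 otherwise. */
    int mask = -(x < y);                 /* here: -1 */
    /* The mask either keeps (x ^ y) or clears it to 0. */
    int min = y ^ ((x ^ y) & mask);      /* x < y:  y ^ (x ^ y) = x = 3; otherwise y */
    int max = x ^ ((x ^ y) & mask);      /* x < y:  x ^ (x ^ y) = y = 7; otherwise x */
    printf("min=%d max=%d\n", min, max);
    return 0;
}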
Using bit manipulation :
void func(int a,int b){
int c = a - b;
int k = (c >> 31) & 0x1;
int max = a - k * c;
int min = b + k * c;
printf("max = %d\nmin = %d",max,min);
}
The question does not specify the hardware this will run on. My answer will address the case where this is running on x86 (for instance any PC). Let's look at the assembly generated by each.
; r = y ^ ((x ^ y) & -(x < y))
xor edx,edx
cmp ebx,eax
mov ecx,eax
setl dl
xor ecx,ebx
neg edx
and edx,ecx
xor eax,edx
; r = (x < y) ? x : y
cmp ebx,eax
cmovl eax,ebx
The XOR version has to zero registers and move values around on top of the operations it inherently needs to perform, adding up to 8 instructions. However x86 has a cmov or conditional move instruction. So the ?: version compiles to a comparison and a cmovl, just 2 instructions. However this doesn't necessarily make the ?: version 4 times faster, since different instructions may have different latencies and different dependency chains. But you can certainly see how ?: will very likely be faster than the XOR version.
It's also worth noting that neither version requires a branch, and so there is no branch misprediction penalty.
The ?: risks being implemented with a conditional branch (instead of a conditional assignment).
Conditional branching is a small "catastrophe" for a processor, as it cannot guess what instruction will be fetched later. This breaks the pipeline organization of the ALU (several instructions being in progress concurrently to increase throughput), and causes pipeline re-initialization delays. To alleviate this, processors resort to branch prediction, i.e. they bet on the branch that will be taken, but they can't be successful all the time.
In conclusion: conditional branches can be slloooowwwwwwww...

Multiple assignment in one line

I just came across this statement in embedded C (dsPIC33):
sample1 = sample2 = 0;
Would this mean
sample1 = 0;
sample2 = 0;
Why do they type it this way? Is this good or bad coding?
Remember that assignment is done right to left, and that these are normal expressions. So from the compiler's perspective the line
sample1 = sample2 = 0;
is the same as
sample1 = (sample2 = 0);
which is the same as
sample2 = 0;
sample1 = sample2;
That is, sample2 is assigned zero, then sample1 is assigned the value of sample2. In practice the same as assigning both to zero as you guessed.
Formally, for two variables t and u of type T and U respectively
T t;
U u;
the assignment
t = u = X;
(where X is some value) is interpreted as
t = (u = X);
and is equivalent to a pair of independent assignments
u = X;
t = (U) X;
Note that the value of X is supposed to reach variable t "as if" it has passed through variable u first, but there's no requirement for it to literally happen that way. X simply has to get converted to type of u before being assigned to t. The value does not have to be assigned to u first and then copied from u to t. The above two assignments are actually not sequenced and can happen in any order, meaning that
t = (U) X;
u = X;
is also a valid execution schedule for this expression. (Note that this sequencing freedom is specific to the C language, in which the result of an assignment is an rvalue. In C++ assignment evaluates to an lvalue, which requires "chained" assignments to be sequenced.)
There's no way to say whether it is a good or bad programming practice without seeing more context. In cases when the two variables are tightly related (like x and y coordinate of a point), setting them to some common value using "chained" assignment is actually perfectly good practice (I'd even say "recommended practice"). But when the variables are completely unrelated, then mixing them in a single "chained" assignment is definitely not a good idea. Especially if these variables have different types, which can lead to unintended consequences.
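A small illustration of that last point (my own example): with mixed types, the value really does pass through the type of the middle variable.
#include <stdio.h>

int main(void)
{
    double t;
    int    u;

    t = u = 3.7;    /* u = (int)3.7 = 3; t gets that converted value, so 3.0, not 3.7 */
    printf("t = %f, u = %d\n", t, u);   /* prints t = 3.000000, u = 3 */
    return 0;
}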
I think there is no good answer about the C language without an actual assembly listing :)
So for a simplistic program:
int main() {
int a, b, c, d;
a = b = c = d = 0;
return a;
}
I've got this assembly (Kubuntu, gcc 4.8.2, x86_64) with the -O0 option of course ;)
main:
pushq %rbp
movq %rsp, %rbp
movl $0, -16(%rbp) ; d = 0
movl -16(%rbp), %eax ;
movl %eax, -12(%rbp) ; c = d
movl -12(%rbp), %eax ;
movl %eax, -8(%rbp) ; b = c
movl -8(%rbp), %eax ;
movl %eax, -4(%rbp) ; a = b
movl -4(%rbp), %eax ;
popq %rbp
ret ; return %eax, ie. a
So gcc is actually chaining all the stuff.
sample1 = sample2 = 0;
does mean
sample1 = 0;
sample2 = 0;
if and only if sample2 is declared earlier.
You can't do this way:
int sample1 = sample2 = 0; //sample2 must be declared before 0 can be assigned to it
You can decide for yourself whether this way of coding is good or bad.
Simply see the assembly code for the following lines in your IDE.
Then change the code to two separate assignments, and see the differences.
In addition to this, you can also try turning off/on optimizations (both Size & Speed Optimizations) in your compiler to see how that affects the assembly code.
The results are the same. Some people prefer chaining assignments if they are all to the same value. There is nothing wrong with this approach. Personally, I find this preferable if the variables have closely related meanings.
As noticed earlier,
sample1 = sample2 = 0;
is equal to
sample2 = 0;
sample1 = sample2;
The problem is that riscy asked about embedded C, which is often used to drive registers directly. Many of a microcontroller's registers have different purposes for read and write operations. So, in the general case, it is not the same as
sample2 = 0;
sample1 = 0;
For example, let UDR be a UART data register. Reading from UDR means getting the received value from the input buffer, while writing to UDR means putting the desired value into the transmit buffer and starting the transmission. In that case,
sample = UDR = 0;
means the following: a) transmit a value of zero using the UART (UDR = 0;) and b) read the input buffer and place the data into sample (sample = UDR;).
As you can see, the behavior of an embedded system can be much more complicated than the code writer might expect. Use this notation carefully when programming MCUs.
Regarding coding style and various coding recommendations see here:
Readability a=b=c or a=c; b=c;?
I believe that by using
sample1 = sample2 = 0;
some compilers will produce assembly that is slightly faster than two separate assignments:
sample1 = 0;
sample2 = 0;
especially if you are initializing to a non-zero value, because the multiple assignment translates to:
sample2 = 0;
sample1 = sample2;
So instead of two initializations you do only one initialization and one copy. The speed-up (if any) will be tiny, but in the embedded case every tiny bit counts!
As others have said, the order in which this gets executed is deterministic. The right-to-left associativity of the = operator guarantees that this is executed right-to-left. In other words, it guarantees that sample2 is given a value before sample1.
However, multiple assignments on one row are bad practice and banned by many coding standards (*). First of all, it is not particularly readable (or you wouldn't be asking this question). Second, it is dangerous. If we have for example
sample1 = func() + (sample2 = func());
then operator precedence guarantees the same order of execution as before (+ has higher precedence than =, therefore the parenthesis). sample2 will get assigned a value before sample1. But unlike operator precedence, the order of evaluation of operators is not deterministic, it is unspecified behavior. We can't know that the right-most function call is evaluated before the left-most one.
The compiler is free to translate the above to machine code like this:
int tmp1 = func();
int tmp2 = func();
sample2 = tmp2;
sample1 = tmp1 + tmp2;
If the code depends on func() getting executed in a particular order, then we have created a nasty bug. It may work fine in one place of the program, but break in another part of the same program, even though the code is identical. Because the compiler is free to evaluate sub-expressions in any order it likes.
(*) MISRA-C:2004 12.2, MISRA-C:2012 13.4, CERT-C EXP10-C.
Take care of this special case ...
suppose b is an array of a structure of the form
{
int foo;
}
and let i be an offset in b. Consider the function realloc_b() returning an int and performing the reallocation of array b. Consider this multiple assignment:
a = (b + i)->foo = realloc_b();
In my experience, (b + i) is resolved first; let us say it is b_i in RAM. Then realloc_b() is executed. As realloc_b() changes b in RAM, the result is that b_i is no longer allocated.
Variable a is assigned correctly, but (b + i)->foo is not, because b has been changed
by the execution of the right-most term of the assignment, i.e. realloc_b().
This may cause a segmentation fault since b_i is potentially in an unallocated RAM location.
To be bug free, and to have (b + i)->foo equal to a, the one-line assignment must be split into two assignments:
a = realloc_b();
(b + i)->foo = a;
