Quickly find whether a value is present in a C array?

I have an embedded application with a time-critical ISR that needs to iterate through an array of size 256 (preferably 1024, but 256 is the minimum) and check if a value matches the array's contents. A bool will be set to true if this is the case.
The microcontroller is an NXP LPC4357, ARM Cortex-M4 core, and the compiler is GCC. I have already combined optimisation level 2 (3 is slower) with placing the function in RAM instead of flash. I also use pointer arithmetic and a for loop that counts down instead of up (checking if i != 0 is faster than checking if i < 256). All in all, I end up with a duration of 12.5 µs, which has to be reduced drastically to be feasible. This is the (pseudo) code I use now:
uint32_t i;
uint32_t *array_ptr = &theArray[0];
uint32_t compareVal = 0x1234ABCD;
bool validFlag = false;
for (i = 256; i != 0; i--)
{
    if (compareVal == *array_ptr++)
    {
        validFlag = true;
        break;
    }
}
What would be the absolute fastest way to do this? Using inline assembly is allowed. Other 'less elegant' tricks are also allowed.

In situations where performance is of utmost importance, the C compiler will most likely not produce the fastest code compared to what you can do with hand tuned assembly language. I tend to take the path of least resistance - for small routines like this, I just write asm code and have a good idea how many cycles it will take to execute. You may be able to fiddle with the C code and get the compiler to generate good output, but you may end up wasting lots of time tuning the output that way. Compilers (especially from Microsoft) have come a long way in the last few years, but they are still not as smart as the compiler between your ears because you're working on your specific situation and not just a general case. The compiler may not make use of certain instructions (e.g. LDM) that can speed this up, and it's unlikely to be smart enough to unroll the loop. Here's a way to do it which incorporates the 3 ideas I mentioned in my comment: Loop unrolling, cache prefetch and making use of the multiple load (ldm) instruction. The instruction cycle count comes out to about 3 clocks per array element, but this doesn't take into account memory delays.
Theory of operation: ARM's CPU design executes most instructions in one clock cycle, but the instructions are executed in a pipeline. C compilers will try to eliminate the pipeline delays by interleaving other instructions in between. When presented with a tight loop like the original C code, the compiler will have a hard time hiding the delays because the value read from memory must be immediately compared. My code below alternates between 2 sets of 4 registers to significantly reduce the delays of the memory itself and the pipeline fetching the data. In general, when working with large data sets and your code doesn't make use of most or all of the available registers, then you're not getting maximum performance.
; r0 = count, r1 = source ptr, r2 = comparison value
stmfd sp!,{r4-r11} ; save non-volatile registers
mov r3,r0,LSR #3 ; loop count = total count / 8
pld [r1,#128]
ldmia r1!,{r4-r7} ; pre load first set
loop_top:
pld [r1,#128]
ldmia r1!,{r8-r11} ; pre load second set
cmp r4,r2 ; search for match
cmpne r5,r2 ; use conditional execution to avoid extra branch instructions
cmpne r6,r2
cmpne r7,r2
beq found_it
ldmia r1!,{r4-r7} ; use 2 sets of registers to hide load delays
cmp r8,r2
cmpne r9,r2
cmpne r10,r2
cmpne r11,r2
beq found_it
subs r3,r3,#1 ; decrement loop count
bne loop_top
mov r0,#0 ; return value = false (not found)
ldmia sp!,{r4-r11} ; restore non-volatile registers
bx lr ; return
found_it:
mov r0,#1 ; return true
ldmia sp!,{r4-r11}
bx lr
Update:
There are a lot of skeptics in the comments who think that my experience is anecdotal/worthless and require proof. I used GCC 4.8 (from the Android NDK 9C) to generate the following output with optimization -O2 (all optimizations turned on including loop unrolling). I compiled the original C code presented in the question above. Here's what GCC produced:
.L9: cmp r3, r0
beq .L8
.L3: ldr r2, [r3, #4]!
cmp r2, r1
bne .L9
mov r0, #1
.L2: add sp, sp, #1024
bx lr
.L8: mov r0, #0
b .L2
GCC's output not only doesn't unroll the loop, but also wastes a clock on a stall after the LDR. It requires at least 8 clocks per array element. It does a good job of using the address to know when to exit the loop, but all of the magical things compilers are capable of doing are nowhere to be found in this code. I haven't run the code on the target platform (I don't own one), but anyone experienced in ARM code performance can see that my code is faster.
Update 2:
I gave Microsoft's Visual Studio 2013 SP2 a chance to do better with the code. It was able to use NEON instructions to vectorize my array initialization, but the linear value search as written by the OP came out similar to what GCC generated (I renamed the labels to make it more readable):
loop_top:
ldr r3,[r1],#4
cmp r3,r2
beq true_exit
subs r0,r0,#1
bne loop_top
false_exit: xxx
bx lr
true_exit: xxx
bx lr
As I said, I don't own the OP's exact hardware, but I will be testing the performance on an nVidia Tegra 3 and Tegra 4 of the 3 different versions and post the results here soon.
Update 3:
I ran my code and Microsoft's compiled ARM code on a Tegra 3 and Tegra 4 (Surface RT, Surface RT 2). I ran 1000000 iterations of a loop which fails to find a match so that everything is in cache and it's easy to measure.
               My Code    MS Code
Surface RT     297 ns     562 ns
Surface RT 2   172 ns     296 ns
In both cases my code runs almost twice as fast. Most modern ARM CPUs will probably give similar results.

There's a trick for optimizing it (I was asked this on a job-interview once):
If the last entry in the array holds the value that you're looking for, then return true
Write the value that you're looking for into the last entry in the array
Iterate the array until you encounter the value that you're looking for
If you've encountered it before the last entry in the array, then return true
Return false
bool check(uint32_t theArray[], uint32_t compareVal)
{
    uint32_t i;
    uint32_t x = theArray[SIZE-1];
    if (x == compareVal)
        return true;
    theArray[SIZE-1] = compareVal;
    for (i = 0; theArray[i] != compareVal; i++);
    theArray[SIZE-1] = x;
    return i != SIZE-1;
}
This yields one branch per iteration instead of two branches per iteration.
UPDATE:
If you're allowed to allocate the array to SIZE+1, then you can get rid of the "last entry swapping" part:
bool check(uint32_t theArray[], uint32_t compareVal)
{
    uint32_t i;
    theArray[SIZE] = compareVal;
    for (i = 0; theArray[i] != compareVal; i++);
    return i != SIZE;
}
You can also get rid of the additional arithmetic embedded in theArray[i], using the following instead:
bool check(uint32_t theArray[], uint32_t compareVal)
{
    uint32_t *arrayPtr;
    theArray[SIZE] = compareVal;
    for (arrayPtr = theArray; *arrayPtr != compareVal; arrayPtr++);
    return arrayPtr != theArray+SIZE;
}
If the compiler doesn't already apply this transformation itself, this version forces it. On the other hand, it might make it harder for the optimizer to unroll the loop, so you will have to verify that in the generated assembly code...

Keep the table in sorted order, and use Bentley's unrolled binary search:
i = 0;
if (key >= a[i+512]) i += 512;
if (key >= a[i+256]) i += 256;
if (key >= a[i+128]) i += 128;
if (key >= a[i+ 64]) i += 64;
if (key >= a[i+ 32]) i += 32;
if (key >= a[i+ 16]) i += 16;
if (key >= a[i+ 8]) i += 8;
if (key >= a[i+ 4]) i += 4;
if (key >= a[i+ 2]) i += 2;
if (key >= a[i+ 1]) i += 1;
return (key == a[i]);
The point is,
if you know how big the table is, then you know how many iterations there will be, so you can fully unroll it.
Then, there's no point testing for the == case on each iteration because, except on the last iteration, the probability of that case is too low to justify spending time testing for it.**
Finally, by expanding the table to a power of 2, you add at most one comparison, and at most a factor of two storage.
** If you're not used to thinking in terms of probabilities, every decision point has an entropy, which is the average information you learn by executing it.
For the >= tests, the probability of each branch is about 0.5, and -log2(0.5) is 1, so that means if you take one branch you learn 1 bit, and if you take the other branch you learn one bit, and the average is just the sum of what you learn on each branch times the probability of that branch.
So 1*0.5 + 1*0.5 = 1, so the entropy of the >= test is 1. Since you have 10 bits to learn, it takes 10 branches.
That's why it's fast!
On the other hand, what if your first test is if (key == a[i+512])? The probability of it being true is 1/1024, while the probability of it being false is 1023/1024. So if it's true you learn all 10 bits!
But if it's false you learn -log2(1023/1024) = .00141 bits, practically nothing!
So the average amount you learn from that test is 10/1024 + .00141*1023/1024 = .0098 + .00141 = .0112 bits. About one hundredth of a bit.
That test is not carrying its weight!
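For completeness, here is the same search wrapped into a self-contained function - a sketch that assumes a sorted table padded to exactly 1024 entries (for 256 entries you would start the chain at 128 instead of 512):

#include <stdbool.h>
#include <stdint.h>

bool contains_sorted_1024(const uint32_t a[1024], uint32_t key)
{
    unsigned i = 0;
    if (key >= a[i + 512]) i += 512;
    if (key >= a[i + 256]) i += 256;
    if (key >= a[i + 128]) i += 128;
    if (key >= a[i +  64]) i +=  64;
    if (key >= a[i +  32]) i +=  32;
    if (key >= a[i +  16]) i +=  16;
    if (key >= a[i +   8]) i +=   8;
    if (key >= a[i +   4]) i +=   4;
    if (key >= a[i +   2]) i +=   2;
    if (key >= a[i +   1]) i +=   1;
    return key == a[i];
}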

You're asking for help with optimising your algorithm, which may push you to assembler. But your algorithm (a linear search) is not so clever, so you should consider changing your algorithm. E.g.:
perfect hash function
binary search
Perfect hash function
If your 256 "valid" values are static and known at compile time, then you can use a perfect hash function. You need to find a hash function that maps your input value to a value in the range 0..n, where there are no collisions for all the valid values you care about. That is, no two "valid" values hash to the same output value. When searching for a good hash function, you aim to:
Keep the hash function reasonably fast.
Minimise n. The smallest you can get is 256 (minimal perfect hash function), but that's probably hard to achieve, depending on the data.
Note for efficient hash functions, n is often a power of 2, which is equivalent to a bitwise mask of low bits (AND operation). Example hash functions:
CRC of input bytes, modulo n.
((x << i) ^ (x >> j) ^ (x << k) ^ ...) % n (picking as many i, j, k, ... as needed, with left or right shifts)
Then you make a fixed table of n entries, where the hash maps the input values to an index i into the table. For valid values, table entry i contains the valid value. For all other table entries, ensure that each entry of index i contains some other invalid value which doesn't hash to i.
Then in your interrupt routine, with input x:
Hash x to index i (which is in the range 0..n)
Look up entry i in the table and see if it contains the value x.
This will be much faster than a linear search of 256 or 1024 values.
I've written some Python code to find reasonable hash functions.
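To make the lookup itself concrete, here is a rough sketch of the table probe. The HASH() macro and the table contents are placeholders - a real perfect hash function, and a table with no collisions over your valid values, would have to be generated offline for your specific data:

#include <stdbool.h>
#include <stdint.h>

#define N 1024u
#define HASH(x) ((((x) >> 16) ^ (x)) & (N - 1))   /* placeholder hash, not a real perfect hash */

static uint32_t table[N];   /* built offline so that table[HASH(v)] == v for every valid v */

static inline bool is_valid(uint32_t x)
{
    return table[HASH(x)] == x;    /* exactly one table access per query */
}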
Binary search
If you sort your array of 256 "valid" values, then you can do a binary search, rather than a linear search. That means you should be able to search 256-entry table in only 8 steps (log2(256)), or a 1024-entry table in 10 steps. Again, this will be much faster than a linear search of 256 or 1024 values.
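For reference, a plain iterative binary search looks like the sketch below; it performs about log2(count) iterations, so 8 for 256 entries and 10 for 1024:

#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

bool contains_sorted(const uint32_t *a, size_t count, uint32_t key)
{
    size_t lo = 0, hi = count;              /* search the half-open range [lo, hi) */
    while (lo < hi) {
        size_t mid = lo + (hi - lo) / 2;
        if (a[mid] == key)
            return true;
        if (a[mid] < key)
            lo = mid + 1;
        else
            hi = mid;
    }
    return false;
}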

If the set of constants in your table is known in advance, you can use perfect hashing to ensure that only one access is made to the table. Perfect hashing determines a hash function
that maps every interesting key to a unique slot (that table isn't always dense, but you can decide how un-dense a table you can afford, with less dense tables typically leading to simpler hashing functions).
Usually, the perfect hash function for the specific set of keys is relatively easy to compute; you don't want that to be long and complicated because that competes for time perhaps better spent doing multiple probes.
Perfect hashing is a "1-probe max" scheme. One can generalize the idea, with the thought that one should trade simplicity of computing the hash code with the time it takes to make k probes. After all, the goal is "least total time to look up", not fewest probes or simplest hash function. However, I've never seen anybody build a k-probes-max hashing algorithm. I suspect one can do it, but that's likely research.
One other thought: if your processor is extremely fast, the one probe to memory from a perfect hash probably dominates the execution time. If the processor is not very fast, then k>1 probes might be practical.

Use a hash set. It will give O(1) lookup time.
The following code assumes that you can reserve value 0 as an 'empty' value, i.e. not occurring in actual data.
The solution can be expanded for a situation where this is not the case.
#include <stdbool.h>
#include <stdint.h>

#define HASH(x) (((x >> 16) ^ x) & 1023)
#define HASH_LEN 1024

uint32_t my_hash[HASH_LEN];

/* open addressing with linear probing: returns the slot holding 'value',
   or the first empty slot if 'value' has not been stored */
int lookup(uint32_t value)
{
    int i = HASH(value);
    while (my_hash[i] != 0 && my_hash[i] != value)
        i = (i + 1) % HASH_LEN;
    return i;
}

void store(uint32_t value)
{
    int i = lookup(value);
    if (my_hash[i] == 0)
        my_hash[i] = value;
}

bool contains(uint32_t value)
{
    return (my_hash[lookup(value)] == value);
}
In this example implementation, the lookup time will typically be very low, but at the worst case can be up to the number of entries stored. For a realtime application, you can consider also an implementation using binary trees, which will have a more predictable lookup time.

In this case, it might be worthwhile investigating Bloom filters. They're capable of quickly establishing that a value is not present, which is a good thing since most of the 2^32 possible values are not in that 1024 element array. However, there are some false positives that will need an extra check.
Since your table is apparently static, you can determine which false positives exist for your Bloom filter and put those in a perfect hash.
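As an illustration, a minimal Bloom filter might look like the sketch below. The two hash mixes and the 1024-bit size are arbitrary choices rather than a tuned design, and a 'true' result only means 'maybe present' - it still needs the follow-up check described above:

#include <stdbool.h>
#include <stdint.h>

#define BLOOM_BITS 1024u
static uint32_t bloom[BLOOM_BITS / 32];          /* 1024-bit array */

static inline uint32_t hash1(uint32_t x) { return (x * 2654435761u) % BLOOM_BITS; }
static inline uint32_t hash2(uint32_t x) { return ((x >> 15) ^ x) % BLOOM_BITS; }

static inline void bloom_add(uint32_t x)         /* run once per table entry */
{
    uint32_t a = hash1(x), b = hash2(x);
    bloom[a >> 5] |= 1u << (a & 31);
    bloom[b >> 5] |= 1u << (b & 31);
}

static inline bool bloom_maybe_contains(uint32_t x)
{
    uint32_t a = hash1(x), b = hash2(x);
    return ((bloom[a >> 5] >> (a & 31)) & 1u) &&
           ((bloom[b >> 5] >> (b & 31)) & 1u);
}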

Assuming your processor runs at 204 MHz which seems to be the maximum for the LPC4357, and also assuming your timing result reflects the average case (half of the array traversed), we get:
CPU frequency: 204 MHz
Cycle period: 4.9 ns
Duration in cycles: 12.5 µs / 4.9 ns = 2551 cycles
Cycles per iteration: 2551 / 128 = 19.9
So, your search loop spends around 20 cycles per iteration. That doesn't sound awful, but I guess that in order to make it faster you need to look at the assembly.
I would recommend dropping the index and using a pointer comparison instead, and making all the pointers const.
bool arrayContains(const uint32_t *array, size_t length)
{
    const uint32_t * const end = array + length;
    while (array != end)
    {
        if (*array++ == 0x1234ABCD)
            return true;
    }
    return false;
}
That's at least worth testing.

Other people have suggested reorganizing your table, adding a sentinel value at the end, or sorting it in order to provide a binary search.
You state "I also use pointer arithmetic and a for loop, which does down-counting instead of up (checking if i != 0 is faster than checking if i < 256)."
My first advice is: get rid of the pointer arithmetic and the downcounting. Stuff like
for (i = 0; i < 256; i++)
{
    if (compareVal == the_array[i])
    {
        [...]
    }
}
tends to be idiomatic to the compiler. The loop is idiomatic, and the indexing of an array over a loop variable is idiomatic. Juggling with pointer arithmetic and pointers will tend to obfuscate the idioms to the compiler and make it generate code related to what you wrote rather than what the compiler writer decided to be the best course for the general task.
For example, the above code might be compiled into a loop running from -256 or -255 to zero, indexing off &the_array[256]. Possibly stuff that is not even expressible in valid C but matches the architecture of the machine you are generating for.
So don't microoptimize. You are just throwing spanners into the works of your optimizer. If you want to be clever, work on the data structures and algorithms but don't microoptimize their expression. It will just come back to bite you, if not on the current compiler/architecture, then on the next.
In particular using pointer arithmetic instead of arrays and indexes is poison for the compiler being fully aware of alignments, storage locations, aliasing considerations and other stuff, and for doing optimizations like strength reduction in the way best suited to the machine architecture.

Vectorization can be used here, as it often is in implementations of memchr. You use the following algorithm:
Create a mask of your query repeating, equal in length to your platform's word size (64-bit, 32-bit, etc.). On a 64-bit system you would repeat the 32-bit query twice.
Process the list as a list of multiple pieces of data at once, simply by casting the list to a list of a larger data type and pulling values out. For each chunk, XOR it with the mask, then XOR with 0b0111...1, then add 1, then & with a mask of 0b1000...0 repeating. If the result is 0, there is definitely not a match. Otherwise, there may (usually with very high probability) be a match, so search the chunk normally.
Example implementation: https://sourceware.org/cgi-bin/cvsweb.cgi/src/newlib/libc/string/memchr.c?rev=1.3&content-type=text/x-cvsweb-markup&cvsroot=src
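Here is a hedged sketch of the same idea in portable C, processing two 32-bit elements per 64-bit word. It uses the classic (v - lows) & ~v & highs zero-lane test rather than the exact XOR/add sequence described above, and it assumes an even element count and data that may legally be read through a 64-bit pointer (a production version would use memcpy to sidestep the aliasing/alignment issue):

#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

bool contains_u32_swar(const uint32_t *array, size_t count, uint32_t key)
{
    const uint64_t mask  = ((uint64_t)key << 32) | key;   /* key repeated in both lanes   */
    const uint64_t lows  = 0x0000000100000001ULL;         /* 1 in each 32-bit lane        */
    const uint64_t highs = 0x8000000080000000ULL;         /* top bit of each 32-bit lane  */

    const uint64_t *p = (const uint64_t *)array;          /* assumes suitable alignment   */
    for (size_t i = 0; i < count / 2; i++) {
        uint64_t x = p[i] ^ mask;          /* a lane becomes 0 iff that element matched   */
        if ((x - lows) & ~x & highs)       /* nonzero iff some 32-bit lane of x is zero   */
            return true;
    }
    return false;
}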

If you can accommodate the domain of your values with the amount of memory that's available to your application, then, the fastest solution would be to represent your array as an array of bits:
bool theArray[MAX_VALUE]; // of which 1024 values are true, the rest false
uint32_t compareVal = 0x1234ABCD;
bool validFlag = theArray[compareVal];
EDIT
I'm astounded by the number of critics. The title of this thread is "How do I quickly find whether a value is present in a C array?" for which I will stand by my answer because it answers precisely that. I could argue that this has the most speed efficient hash function (since address === value). I've read the comments and I'm aware of the obvious caveats. Undoubtedly those caveats limit the range of problems this can be used to solve, but, for those problems that it does solve, it solves very efficiently.
Rather than reject this answer outright, consider it as the optimal starting point from which you can evolve, using hash functions to achieve a better balance between speed and memory use.
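If the value domain is small enough for this to be viable but a full bool-per-value table is too large, a bit-packed variant cuts the memory by a factor of 8 at the cost of a shift and a mask per lookup. The sketch below assumes a hypothetical domain of 65536 possible values:

#include <stdbool.h>
#include <stdint.h>

#define MAX_VALUE 65536u                       /* assumed domain size, for illustration */
static uint32_t bitmap[MAX_VALUE / 32];        /* one bit per possible value */

static inline void set_present(uint32_t v)     /* run once per valid value */
{
    bitmap[v >> 5] |= 1u << (v & 31);
}

static inline bool is_present(uint32_t v)
{
    return (bitmap[v >> 5] >> (v & 31)) & 1u;
}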

I'm sorry if my answer duplicates one that's already here - I'm a lazy reader. Feel free to downvote in that case. ))
1) You could remove the counter 'i' entirely - just compare pointers, i.e.
for (ptr = &the_array[0]; ptr < the_array + 1024; ptr++)
{
    if (compareVal == *ptr)
    {
        break;
    }
}
... then compare ptr against the_array + 1024 here - you do not need validFlag at all.
All that won't give any significant improvement though; such an optimization could probably be achieved by the compiler itself.
2) As other answers have already mentioned, almost all modern CPUs are RISC-based, for example ARM. Even modern Intel x86 CPUs use RISC cores inside, as far as I know (compiling from x86 on the fly). A major optimization for RISC (and for Intel and other CPUs as well) is pipeline optimization, i.e. minimizing code jumps. One type of such optimization (probably a major one) is "cycle rollback". It's incredibly simple and efficient; even the Intel compiler can do it, AFAIK. It looks like:
if (compareVal == the_array[0]) { validFlag = true; goto end_of_compare; }
if (compareVal == the_array[1]) { validFlag = true; goto end_of_compare; }
...and so on...
end_of_compare:
The point of this optimization is that the pipeline is never broken in the worst case (when compareVal is absent from the array), so it is as fast as possible (not counting algorithmic optimizations such as hash tables, sorted arrays and so on, mentioned in other answers, which may give better results depending on array size; the cycle-rollback approach can be applied to those as well, by the way - I'm only describing here what I didn't see covered in other answers).
The second part of this optimization is that each array item is accessed by a direct address (calculated at compile time - make sure you use a static array), so no additional ADD is needed to compute the pointer from the array's base address. This may not have a significant effect, since AFAIK the ARM architecture has special features to speed up array addressing. But it's always better to know you did everything you could in the C code directly, right?
Cycle rollback may look like a waste of ROM (yes, you did right placing the code in a fast part of RAM, if your board supports that feature), but it's a fair price to pay for speed, in keeping with the RISC philosophy. It's a general point of computational optimization: you sacrifice space for speed, and vice versa, depending on your requirements.
If you think that a rollback for an array of 1024 elements is too large a sacrifice in your case, you can consider a 'partial rollback', for example dividing the array into 2 parts of 512 items each, or 4x256, and so on.
3) Modern CPUs often support SIMD operations, for example the ARM NEON instruction set, which allows the same operation to be executed on several elements in parallel (a sketch follows). Frankly speaking I do not remember whether it is suitable for comparison operations, but I feel it may be - you should check. Googling shows that there may be some tricks to get maximum speed as well, see https://stackoverflow.com/a/5734019/1028256
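For what it's worth, here is a rough sketch with NEON intrinsics, comparing four elements per iteration. Note that this only applies to Cortex-A class cores with a NEON unit - the Cortex-M4 in the question has no NEON - and it assumes the element count is a multiple of 4:

#include <arm_neon.h>
#include <stdbool.h>
#include <stdint.h>

bool contains_u32_neon(const uint32_t *array, unsigned count, uint32_t key)
{
    uint32x4_t vkey = vdupq_n_u32(key);               /* broadcast key to all 4 lanes */
    for (unsigned i = 0; i < count; i += 4) {
        uint32x4_t v  = vld1q_u32(&array[i]);         /* load 4 elements */
        uint32x4_t eq = vceqq_u32(v, vkey);           /* 0xFFFFFFFF in lanes that match */
        /* fold the 4 lanes down: any nonzero lane means a match */
        uint32x2_t folded = vorr_u32(vget_low_u32(eq), vget_high_u32(eq));
        if (vget_lane_u32(vorr_u32(folded, vrev64_u32(folded)), 0))
            return true;
    }
    return false;
}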
I hope it can give you some new ideas.

This is more like an addendum than an answer.
I've had a similar case in the past, but my array was constant over a considerable number of searches.
In half of them, the searched value was NOT present in the array. Then I realized I could apply a "filter" before doing any search.
This "filter" is just a simple integer number, calculated ONCE and used in each search.
It's in Java, but it's pretty simple:
binaryfilter = 0;
for (int i = 0; i < array.length; i++)
{
    // just apply "Binary OR Operator" over values.
    binaryfilter = binaryfilter | array[i];
}
So, before doing a binary search, I check binaryfilter:
// Check binaryfilter vs value with a "Binary AND Operator"
if ((binaryfilter & valuetosearch) != valuetosearch)
{
    // valuetosearch is not in the array!
    return false;
}
else
{
    // valuetosearch MAYBE in the array, so let's check it out
    // ... do binary search stuff ...
}
You can use a 'better' hash algorithm, but this can be very fast, especially for large numbers.
Maybe this could save you even more cycles.
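Since the rest of this thread is C, the same filter in C might look like the sketch below (the function names are just for illustration); build_filter runs once, and every search then starts with the cheap mask test:

#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

static uint32_t build_filter(const uint32_t *array, size_t count)
{
    uint32_t filter = 0;
    for (size_t i = 0; i < count; i++)
        filter |= array[i];              /* OR all values together, done once */
    return filter;
}

static bool maybe_contains(uint32_t filter, uint32_t value)
{
    /* if value has a bit set that no array element has, it cannot be present */
    return (filter & value) == value;
}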

Make sure the instructions ("the pseudo code") and the data ("theArray") are in separate (RAM) memories so the Cortex-M4's Harvard architecture is utilized to its full potential. From the user manual:
To optimize the CPU performance, the ARM Cortex-M4 has three buses for Instruction (code) (I) access, Data (D) access, and System (S) access. When instructions and data are kept in separate memories, then code and data accesses can be done in parallel in one cycle. When code and data are kept in the same memory, then instructions that load or store data may take two cycles.
Following this guideline I observed ~30% speed increase (FFT calculation in my case).
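One way to apply this with GCC is to pin the lookup table into a different RAM block than the code, using a section attribute. This is only a sketch: the section name below is hypothetical and has to match a memory region that your linker script actually defines for the LPC4357's RAM banks.

#include <stdint.h>

/* hypothetical section name - must match a region in your linker script */
__attribute__((section(".ram_block2")))
static uint32_t theArray[1024];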

I'm a great fan of hashing. The problem of course is to find an efficient algorithm that is both fast and uses a minimum amount of memory (especially on an embedded processor).
If you know beforehand the values that may occur you can create a program that runs through a multitude of algorithms to find the best one - or, rather, the best parameters for your data.
I created such a program that you can read about in this post and achieved some very fast results. 16000 entries translates roughly to 2^14 or an average of 14 comparisons to find the value using a binary search. I explicitly aimed for very fast lookups - on average finding the value in <=1.5 lookups - which resulted in greater RAM requirements. I believe that with a more conservative average value (say <=3) a lot of memory could be saved. By comparison the average case for a binary search on your 256 or 1024 entries would result in an average number of comparisons of 8 and 10, respectively.
My average lookup required around 60 cycles (on a laptop with an intel i5) with a generic algorithm (utilizing one division by a variable) and 40-45 cycles with a specialized (probably utilizing a multiplication). This should translate into sub-microsecond lookup times on your MCU, depending of course on the clock frequency it executes at.
It can be tweaked further for real-life use if the entry array keeps track of how many times an entry was accessed. If the entry array is sorted from most to least accessed before the indices are computed, then it will find the most commonly occurring values with a single comparison.

Related

Fast hashing of 32 bit values to between 0 and 254 inclusive

I'm looking for a fast way in C to hash 32-bit numbers more or less uniformly between 0 and 254. 255 is reserved for a special purpose.
As an added constraint, I'm looking for a method that would map well to being used with ISA-specific vector intrinsics or to a language like OpenCL or CUDA without introducing control flow divergence between the vector lanes/threads.
Ordinarily, I would just use the following code to hash the number between 0 and 255, as this is just a fast way of doing x mod 256.
inline uint8_t hash(uint32_t x){ return x & 255; }
I could just give in and use the following:
inline uint8_t hash(uint32_t x){ return x % 255; }
However, this solution seems unimaginative and unlikely to be the highest performing solution. I found code at this site (http://homepage.cs.uiowa.edu/~jones/bcd/mod.shtml#exmod15) that appears to provide a reasonable solution for scalar code and have inserted it here for your convenience.
uint32_t mod255( uint32_t a ) {
    a = (a >> 16) + (a & 0xFFFF); /* sum base 2**16 digits */
    a = (a >> 8) + (a & 0xFF);    /* sum base 2**8 digits */
    if (a < 255) return a;
    if (a < (2 * 255)) return a - 255;
    return a - (2 * 255);
}
I see two potential performance issues with this code:
The large number of if statements makes me question how easy it will be for a compiler or human :) to effectively vectorize the code without leading to control flow divergence within a warp/wavefront on a SIMT architecture or vectorized execution on a multicore CPU. If such divergence does occur, it will reduce parallel efficiency, as the divergent paths will have to be run in series.
It looks like it could be troublesome for a branch predictor (not applicable on common GPU architectures) as the code path that executes depends on the value of the input. Therefore, if there is a mix of small and large values interspersed with one another, this code will likely sacrifice some performance due to a moderate number of branch mispredictions.
Any recommendations on alternatives that I could use are most welcome. Alternatively, let me know if what I am asking for is unreasonable.
The "if statements on GPU kill performance" is a popular misconception which desperately wants to live on, it seems.
The large number of if statements makes me question how easy it will be for a compiler or human :) to vectorize the code.
First of all I wouldn't consider 2 if statements a "large number of if statements", and those are so short and trivial that I'm willing to bet the compiler will turn them into branchless conditional moves or predicated instructions. There will be no performance penalty at all. (Do check the generated assembly, however).
It looks like it could be troublesome for a branch predictor as the code path that executes depends on the value of the input. Therefore, if there is a mix of small and large values interspersed with one another, this code will likely sacrifice some performance due to a moderate number of branch mispredictions.
Current GPUs do not have branch predictors. Note however that depending on the underlying hardware, operation on integers (and notably shifting) may be quite costly.
I would just do this:
uchar fast_mod255( uint a32 ) {
    uint a16 = (a32 >> 16) + (a32 & 0xFFFF); /* sum base 2**16 digits; kept wide to avoid wrap-around */
    uint a8  = (a16 >> 8) + (a16 & 0xFF);    /* sum base 2**8 digits; kept wide to avoid wrap-around */
    return (a8 % 255);
}
Another option is to just do:
uchar fast_mod255( uchar4 a ) {
return (dot(a) % 255); // or return (distance(a) % 255);
}
GPUs are very efficient at computing distances and dot products, even in 4 dimensions. And it is a valid way of hashing as well, discarding the overflowed values.
No branching, and a clever compiler can even optimize it out. Or do you really need the values that fall in the 255 zone to have a scattered pattern instead of 1?
I wanted to answer my own question because over the last 2 years I have seen ways to get around a slow integer divide instruction. The easiest way is to make the integer a compile-time constant. Any decent modern compiler should replace the integer divide with an equivalent set of other instructions with typically higher throughput (how many such instructions can be retired per cycle) and reduced latency (how many cycles it takes the instruction to execute). If you're curious, check out Hacker's Delight (an excellent book on low-level computer arithmetic).
I wanted to share another finding, which I found on Daniel Lemire's blog (located here). The code that follows doesn't compute mod 255 but does something similar, which is equally useful in a number of applications and much faster.
Suppose that you have a set of numbers S that are uniformly randomly picked from the range 0 to 2^k - 1 inclusive, where k >= 0. In this case, if you care only about mapping numbers roughly uniformly from 0 to 254 inclusive, you may do the following:
For each number n in a set S, you may map n to one of the 255 candidate values by multiplying n by 255 and then arithmetically shifting the result to the right by k digits.
Here is the function that you call on each n for a fixed value of k:
int map_to_0_to_254(int n, int k){
    return (n * 255) >> k;
}
As an example, if the values for the argument n range uniformly randomly from 0 to 4095 (2^12 - 1), then map_to_0_to_254(n, 12) will return a value in the range 0 to 254 inclusive.
Here is a more general templated version in C++ for mapping to range from 0 to range_size - 1 inclusive:
template<typename T>
T map_to_0_to_range_size_minus_1(T n, T range_size, T k){
    return (n * range_size) >> k;
}
REMEMBER that this code assumes that the inputs for n are roughly uniformly randomly distributed between 0 and 2^k - 1 inclusive. If that property holds, then the outputs will be roughly uniformly distributed between 0 and range_size - 1 inclusive. The larger 2^k is relative to range_size, the more uniform the mapping will be for a fixed set of inputs.
Why This is Useful
This approach has applications to computing hash functions for hash tables where the number of bins is not a power of 2. Those operations would ordinarily require a long-latency integer divide instruction, which is often an order of magnitude slower to execute than an integer multiply, because you often do not know the number of bins in the hash table at compile time.
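A common way to write this reduction for a full-width 32-bit hash (the k = 32 case, sometimes called a 'fast range' reduction) is a widening multiply followed by a shift; as above, it assumes the hash value is roughly uniform over the whole 32-bit range:

#include <stdint.h>

/* maps a roughly uniform 32-bit hash into [0, nbins) without a divide */
static inline uint32_t fast_range(uint32_t hash, uint32_t nbins)
{
    return (uint32_t)(((uint64_t)hash * nbins) >> 32);
}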

Fast and efficient array lookup and modification

I have been wondering for a while which of the two following methods are faster or better.
MY CURRENT METHOD
I'm developing a chess game and the pieces are stored as numbers (really bytes to preserve memory) into a one-dimensional array. There is a position for the cursor corresponding to the index in the array. To access the piece at the current position in the array is easy (piece = pieces[cursorPosition]).
The problem is that to get the x and y values for checking if the move is a valid move requires the division and a modulo operators (x = cursorPosition % 8; y = cursorPosition / 8).
Likewise when using x and y to check if moves are valid (you have to do it this way for reasons that would fill the entire page), you have to do something like - purely as an example - if pieces[y * 8 + x] != 0: movePiece = False. The obvious problem is having to do y * 8 + x a bunch of times to access the array.
Ultimately, this means that getting a piece is trivial but then getting the x and y requires another bit of memory and a very small amount of time to compute it each round.
A MORE TRADITIONAL METHOD
Using a two-dimensional array, one can implement the above process a little easier except for the fact that piece lookup is now a little harder and more memory is used. (I.e. piece = pieces[cursorPosition[0]][cursorPosition[1]] or piece = pieces[x][y]).
I don't think this is faster and it definitely doesn't look less memory intensive.
GOAL
My end goal is to have the fastest possible code that uses the least amount of memory. This will be developed for the unix terminal (and potentially Windows CMD if I can figure out how to represent the pieces without color using Ansi escape sequences) and I will either be using a secure (encrypted with protocol and structure) TCP connection to connect people p2p to play chess or something else and I don't know how much memory people will have or how fast their computer will be or how strong of an internet connection they will have.
I also just want to learn to do this the best way possible and see if it can be done.
-
I suppose my question is one of the following:
Which of the above methods is better assuming that there are slightly more computations involving move validation (which means that the y * 8 + x has to be used a lot)?
or
Is there perhaps a method that includes both of the benefits of 1d and 2d arrays with not as many draw backs as I described?
First, you should profile your code to make sure that this is really a bottleneck worth spending time on.
Second, if you're representing your position as an unsigned byte decomposing it into X and Y coordinates will be very fast. If we use the following C code:
int getX(unsigned char pos) {
    return pos%8;
}
We get the following assembly with gcc 4.8 -O2:
getX(unsigned char):
    movl    %edi, %eax
    andl    $7, %eax
    ret
If we get the Y coordinate with:
int getY(unsigned char pos) {
    return pos/8;
}
We get the following assembly with gcc 4.8 -O2:
getY(unsigned char):
    shrb    $3, %dil
    movzbl  %dil, %eax
    ret
There is no short answer to this question; it all depends on how much time you spend optimizing.
On some architectures, two-dimensional arrays might work better than one-dimensional. On other architectures, bitmapped integers might be the best.
Do not worry about division and multiplication.
You're dividing, taking the modulus of, and multiplying by 8.
This number is a power of two, so any computer can use bitwise operations to achieve the result.
(x * 8) is the same as (x << 3)
(x % 8) is the same as (x & (8 - 1))
(x / 8) is the same as (x >> 3)
Those operations are normally performed in a single clock cycle. On many modern architectures, they can be performed in less than a single clock cycle (including ARM architectures).
Do not worry about using bitwise operators instead of *, % and /. If you're using a compiler that's less than a decade old, it'll optimize it for you and use bitwise operations.
What you should focus on instead, is how easy it will be for you to find out whether or not a move is legal, for instance. This will help your computer-player to "think quickly".
If you're using an 8*8 array, then it's easy for you to see where a rook can move by checking if only x or y is changed. If checking the queen, then X must either stay the same or change by the same number of steps as the Y position.
If you use a one-dimensional array, you also have advantages.
But performance-wise, it might be a real good idea to use a 16x16 array or a 1x256 array.
Fill the entire array with 0x80 values (eg. "illegal position"). Then fill the legal fields with 0x00.
If using a 1x256 array, you can check bit 3 and 7 of the index. If any of those are set, then the position is outside the board.
Testing can be done this way:
if (position & 0x88)
{
    /* move is illegal */
}
else
{
    /* move is legal */
}
... or ...
if (0 == (position & 0x88))
{
    /* move is legal */
}
'position' (the index) should be an unsigned byte (uint8_t in C). This way, you'll never have to worry about pointing outside the buffer.
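As a sketch of the 1x256 layout described above (the names are illustrative, not taken from an existing engine): squares are indexed as rank*16 + file, so bit 3 of the file nibble and bit 3 of the rank nibble flag off-board squares, which is exactly what the & 0x88 test checks.

#include <stdbool.h>
#include <stdint.h>

static uint8_t board[256];        /* 16x16 layout; only 64 entries are real squares */

static inline uint8_t square_index(uint8_t rank, uint8_t file)
{
    return (uint8_t)(rank * 16 + file);
}

static inline bool on_board(uint8_t position)
{
    return (position & 0x88) == 0;    /* both nibbles must stay in 0..7 */
}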
Some people optimize their chess-engines by using 64-bit bitmapped integers.
While this is good for quickly comparing the positions, it has other disadvantages; for instance checking if the knight's move is legal.
It's not easy to say which is better, though.
Personally, I think the one-dimensional array in general might be the best way to do it.
I recommend getting familiar (very familiar) with AND, OR, XOR, bit-shifting and rotating.
See Bit Twiddling Hacks for more information.

Performance of modulus operator in C [closed]

From the execution time perspective, is using the modulus operator more beneficial, or the manual way of doing it, if I am supposed to do the modulus operation a large number of times, about 10^6 times?
Manually doing (number % mod_number) :
while (number >= mod_number) {
    number = number - mod_number;
}
Doing the same thing using % operator :
number = number % mod_number;
From what I have tested, manually doing it gives better time performance.
How is the modulus operator defined? I know the outputs for negative numbers are implementation defined; I am asking about the working of the operator, i.e. its complexity, so that I can justify the better manual performance.
Note : The question is specifically for implementation in C.
The code snippet:
for (j = 0; j < idx; j++) {
    num = mark[j];
    dif = k - num;
    if (dif < 0) dif = (-1 * dif) + 100;
    many = count[num];
    prev = ap[dif][k];
    ap[dif][k] = ap[dif][k] + ap[dif][num];
    // the manual way here works faster than %
    if (ap[dif][k] >= mod) ap[dif][k] -= mod;
    ap[dif][k] += many;
    if (ap[dif][k] >= mod) ap[dif][k] -= mod;
    sum = (sum + ap[dif][k]);
    if (sum >= mod) sum -= mod;
    sum = sum - prev;
}
The above loop is executed 2*(10^5)*t times with 'idx' gradually increasing till 100 for each 't'. Used t = 10.
I would be very surprised if the loop were more efficient when number is many times larger than mod_number. Any CPU you're likely to use has a built-in division operation that returns both the quotient and the remainder in constant time, and this will be used to implement the % operator. Your loop takes O(number/mod_number) time.
I suggest you take a look at the generated assembly code for the two versions and you'll see this.
It depends on the implementation. It is pointless to discuss performance without a given system in mind.
The modulus operator will likely be implemented through the CPU's division instruction, which on most CPUs is relatively slow in comparison to other CPU instructions. However, it seems highly unlikely that a loop like the one in your example will be more efficient.
More likely, the performance difference you are experiencing is either related to wrong optimization settings or incorrect benchmarking.
According to my experience, using the modulus operator should give you better performance; the people who write C compilers have presumably considered how to optimize the operation it performs.
But your test results show otherwise, so it may depend on the code you have written. It would be easier to find out why if you showed your code...
The example you have shown (not that while loop at the top, the snippet at the bottom) is a case where the "divisor" is only subtracted at most once. That is essentially the one case in which "repeated" subtraction (0 or 1 times, a special case of repeated subtraction) can be (and commonly is, but not necessarily) faster than division-based modulo. Obviously it depends on how fast division is on the target, how fast a test/branch (or test/predicated instruction) is on the target, and in the case of branches it even depends on how predictable the branch will be.
A compiler is unlikely to make that optimization (but it's not impossible), because it only makes sense if it is known that the subtraction will only happen at most once (or perhaps more than one, if division is especially slow on the target, but some lowish bound is still needed), which is in general a hard thing to find out for a compiler.
To give some real life numbers, on a Haswell 32bit signed division (and therefore also modulo) would take 22 to 29 cycles, and a branch misprediction might take up to 20 cycles, but that's a worst case and the branch should not be mispredicted all the time. Also, you could avoid the branch (if it's badly predicted) and do something like this (not tested, just to give you some idea)
sub eax, edx
lea edx, [eax + edx]
cmovl eax, edx
Which should only take about 4 cycles, independent of any predictability. Using a branch may be faster if it can be predicted well.
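In C, the equivalent of that branchless sequence is a single conditional subtraction, valid only when the value is already below 2*mod; compilers will usually turn the ternary into a cmov rather than a branch (worth confirming in the generated assembly):

#include <stdint.h>

/* branchless form of "if (x >= mod) x -= mod;", assuming 0 <= x < 2*mod */
static inline uint32_t reduce_once(uint32_t x, uint32_t mod)
{
    uint32_t t = x - mod;          /* wraps around if x < mod */
    return (x >= mod) ? t : x;
}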

some mandelbrot drawing routine from c to sse2

I want to rewrite this simple routine as SSE2 code (preferably in NASM), and I am not totally sure how to do it. Two things are unclear: how to express the calculations (the inner loop, and those from the outer loop too), and how to call the C function "SetPixelInDibInt(i, j, palette[n]);" from statically linked asm code.
void DrawMandelbrotD(double ox, double oy, double lx, int N_ITER)
{
    double ly = lx * double(CLIENT_Y)/double(CLIENT_X);
    double dx = lx / CLIENT_X;
    double dy = ly / CLIENT_Y;
    double ax = ox - lx * 0.5 + dx * 0.5;
    double ay = oy - ly * 0.5 + dy * 0.5;
    static double re, im, re_n, im_n, c_re, c_im, rere, imim;
    static int n;
    for (int j = 0; j < CLIENT_Y; j += 1)
    {
        for (int i = 0; i < CLIENT_X; i += 1)
        {
            c_re = ax + i * dx;
            c_im = ay + j * dy;
            re = c_re;
            im = c_im;
            rere = re*re;
            imim = im*im;
            n = 1;
            for (int k = 0; k < N_ITER; k++)
            {
                im = (re+re)*im + c_im;
                re = rere - imim + c_re;
                rere = re*re;
                imim = im*im;
                if ( (rere + imim) > 4.0 ) break;
                n++;
            }
            SetPixelInDibInt(i, j, palette[n]);
        }
    }
}
Could someone help? I would prefer not to see other code implementations, just a NASM SSE translation of the routine above - that would be most helpful in my case.
Intel has a complete implementation as an AVX example. See below.
What makes Mandelbrot tricky is that the early-out condition for each point in the set (i.e. pixel) is different. You could keep a pair or quad of pixels iterating until the magnitude of both exceeds 2.0 (or you hit max iterations). To do otherwise would require tracking which pixel's points were in which vector element.
Anyway, a simplistic implementation to operate on a vector of 2 (or 4 with AVX) doubles at a time would have its throughput limited by the latency of the dependency chains. You'd need to do multiple dependency chains in parallel to keep both of Haswell's FMA units fed. So you'd duplicate your variables, and interleave operations for two iterations of the outer loop inside the inner loop.
Keeping track of which pixels are being calculated would be a little tricky. I think it might take less overhead to use one set of registers for one row of pixels, and another set of registers for another row. (So you can always just move 4 pixels to the right, rather than checking whether the other dep chain is already processing that vector.)
I suspect that only checking the loop exit condition every 4 iterations or so might be a win. Getting code to branch based on a packed vector comparison is slightly more expensive than in the scalar case. The extra FP add required is also expensive. (Haswell can do two FMAs per cycle (latency = 5). The lone FP add unit is on the same port as one of the FMA units. The two FP mul units are on the same ports that can run FMA.)
The loop condition can be checked with a packed-compare to generate a mask of zeros and ones, and a (V)PTEST of that register with itself to see if it's all zero. (edit: movmskps then test+jcc is fewer uops, but maybe higher latency.) Then obviously je or jne as appropriate, depending on whether you did a FP compare that leaves zeros when you should exit, or zeros when you shouldn't. NAN shouldn't be possible, but there's no reason not to choose your comparison op such that a NAN will result in the exit condition being true.
const __m256d const_four = _mm256_set1_pd(4.0);   // outside the loop
__m256d cmp_result = _mm256_cmp_pd(mag_squared, const_four, _CMP_LE_OQ); // vcmppd: a lane is all-ones if that element is still <= 4.0
if (_mm256_testz_si256(_mm256_castpd_si256(cmp_result), _mm256_castpd_si256(cmp_result))) // all lanes zero: every point has escaped
    break;
There MIGHT be some way to use PTEST directly on a packed-double, with some bit-hack AND-mask that will pick out bits that will be set iff the FP value is > 4.0. Like maybe some bits in the exponent? Maybe worth considering. I found a forum post about it, but didn't try it out.
Hmm, oh crap, this doesn't record WHEN the loop condition failed, for each vector element separately, for the purpose of coloring the points outside the Mandelbrot set. Maybe test for any element hitting the condition (instead of all), record the result, and then set that element (and c for that element) to 0.0 so it won't trigger the exit condition again. Or maybe scheduling pixels into vector elements is the way to go after all. This code might do fairly well on a hyperthreaded CPU, since there will be a lot of branch mispredicts with every element separately triggering the early-out condition.
That might waste a lot of your throughput, and given that 4 uops per cycle is doable, but only 2 of them can be FP mul/add/FMA, there's room for a significant amount of integer code to schedule points into vector elements. (On Sandybridge/Ivybridge, without FMA, FP throughput is lower. But there are only 3 ports that can handle integer ops, and 2 of those are the ports for the FP mul and FP add units.)
Since you don't have to read any source data, there's only 1 memory access stream for each dep chain, and it's a write stream. (And it's low bandwidth, since most points take a lot of iterations before you're ready to write a single pixel value.) So the number of hardware prefetch streams isn't a limiting factor for the number of dep chains to run in parallel. Cache misses latency should be hidden by write buffers.
I can write some code if anyone's still interested in this (just post a comment). I stopped at the high-level design stage since this is an old question, though.
==============
I also found that Intel already used the Mandelbrot set as an example for one of their AVX tutorials. They use the mask-off-vector-elements method for the loop condition. (using the mask generated directly by vcmpps to AND). Their results indicate that AVX (single-precision) gave a 7x speedup over scalar float, so apparently it's not common for neighbouring pixels to hit the early-out condition at very different numbers of iterations. (at least for the zoom / pan they tested with.)
They just let the FP results keep accumulating for elements that fail the early-out condition. They just stop incrementing the counter for that element. Hopefully most systems default to having the control word set to zero out denormals, if denormals still take extra cycles.
Their code is silly in one way, though: They track the iteration count for each vector element with a floating-point vector, and then convert it to int at the end before use. It'd be faster, and not occupy an FP execution unit, to use packed-integers for that. Oh, I know why they do that: AVX (without AVX2) doesn't support 256bit integer vector ops. They could have used packed 16bit int loop counters, but that could overflow. (And they'd have to compress the mask down from 256b to 128b).
They also test for all elements being > 4.0 with movmskps and then test that, instead of using ptest. I guess the test / jcc can macro-fuse, and run on a different execution unit than FP vector ops, so it's maybe not even slower. Oh, and of course AVX (without AVX2) doesn't have 256bit PTEST. Also, PTEST is 2 uops, so actually movmskps + test / jcc is fewer uops than ptest + jcc. (PTEST is 1 fused-domain uop on SnB, but still 2 unfused uops for the execution ports. On IvB/HSW, 2 uops even in the fused domain.) So it looks like movmskps is the optimal way, unless you can take advantage of the bitwise AND that's part of PTEST, or need to test more than just the high bit of each element. If a branch is unpredictable, ptest might be lower latency, and thus be worth it by catching mispredicts a cycle sooner.

32x32 Multiply and add optimization

I'm working on optimizing an application. I found that I need to optimize an inner loop for improved performance.
rgiFilter is a 16-bit array.
for (i = 0; i < iLen; i++) {
    iPredErr = (I32)*rgiResidue;
    rgiFilter = rgiFilterBuf;
    rgiPrevVal = rgiPrevValRdBuf + iRecent;
    rgiUpdate = rgiUpdateRdBuf + iRecent;
    iPred = iScalingOffset;
    for (j = 0; j < iOrder_Div_8; j++) {
        iPred += (I32) rgiFilter[0] * rgiPrevVal[0];
        rgiFilter[0] += rgiUpdate[0];
        iPred += (I32) rgiFilter[1] * rgiPrevVal[1];
        rgiFilter[1] += rgiUpdate[1];
        iPred += (I32) rgiFilter[2] * rgiPrevVal[2];
        rgiFilter[2] += rgiUpdate[2];
        iPred += (I32) rgiFilter[3] * rgiPrevVal[3];
        rgiFilter[3] += rgiUpdate[3];
        iPred += (I32) rgiFilter[4] * rgiPrevVal[4];
        rgiFilter[4] += rgiUpdate[4];
        iPred += (I32) rgiFilter[5] * rgiPrevVal[5];
        rgiFilter[5] += rgiUpdate[5];
        iPred += (I32) rgiFilter[6] * rgiPrevVal[6];
        rgiFilter[6] += rgiUpdate[6];
        iPred += (I32) rgiFilter[7] * rgiPrevVal[7];
        rgiFilter[7] += rgiUpdate[7];
        rgiFilter += 8;
        rgiPrevVal += 8;
        rgiUpdate += 8;
    }
    /* ... more code here ... */
}
Your only bet is to do more than one operation at a time, and that means one of these 3 options:
SSE instructions (SIMD). You process multiple memory locations with a single instruction.
Multi-threading (MIMD). This works best if you have more than one CPU core. Split your array into multiple, similarly sized strips that are independent of each other (dependencies will increase this option's complexity a lot, to the point of being slower than sequentially calculating everything if you need a lot of locks). Note that the array has to be big enough to offset the extra context switching and synchronization overhead (it's pretty small, but not negligible). Best for 4 cores or more.
Both at once. If your array is really big, you could gain a lot by combining both.
If rgiFilterBuf, rgiPrevValRdBuf and rgiUpdateRdBuf are function parameters that don't alias, declare them with the restrict qualifier. This will allow the compiler to optimise more aggressively.
As some others have commented, your inner loop looks like it may be a good fit for vector processing instructions (like SSE, if you're on x86). Check your compiler's intrinsics.
I don't think you can do much to optimize it in C. Your compiler might have options to generate SIMD code, but you probably need to just go and write your own SIMD assembly code if performance is critical...
You can replace the inner loop with very few SSE2 intrinsics:
see _mm_madd_epi16 to replace the eight
iPred += (I32) rgiFilter[] * rgiPrevVal[];
and _mm_add_epi16 or _mm_add_epi32 to replace the eight
rgiFilter[] += rgiUpdate[];
You should see a nice acceleration with that alone.
These intrinsics are specific to the Microsoft and Intel compilers.
I am sure equivalents exist for GCC, I just haven't used them.
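As a rough sketch of that suggestion (not the original code): one _mm_madd_epi16 covers eight of the 16x16 multiply-accumulates and one _mm_add_epi16 covers the eight filter updates. The pointers are assumed 16-byte aligned and iOrder a multiple of 8; the names mirror the original loop, but the function itself is hypothetical.

#include <emmintrin.h>   /* SSE2 intrinsics */
#include <stdint.h>

typedef int32_t I32;
typedef int16_t I16;

static I32 predict_block(I16 *rgiFilter, const I16 *rgiPrevVal,
                         const I16 *rgiUpdate, int iOrder, I32 iScalingOffset)
{
    __m128i acc = _mm_setzero_si128();
    for (int j = 0; j < iOrder; j += 8) {
        __m128i f = _mm_load_si128((const __m128i *)(rgiFilter + j));
        __m128i p = _mm_load_si128((const __m128i *)(rgiPrevVal + j));
        __m128i u = _mm_load_si128((const __m128i *)(rgiUpdate + j));

        /* eight 16x16 products, pairwise-summed into four 32-bit lanes */
        acc = _mm_add_epi32(acc, _mm_madd_epi16(f, p));

        /* rgiFilter[j..j+7] += rgiUpdate[j..j+7] */
        _mm_store_si128((__m128i *)(rgiFilter + j), _mm_add_epi16(f, u));
    }
    /* horizontal sum of the four 32-bit lanes */
    I32 lanes[4];
    _mm_storeu_si128((__m128i *)lanes, acc);
    return iScalingOffset + lanes[0] + lanes[1] + lanes[2] + lanes[3];
}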
EDIT: based on the comments below I would change the following...
If you have mixed types, the compiler is not always smart enough to figure it out. I would suggest the following to make it more obvious and give it a better chance at autovectorizing:
declare rgiFilter[] as I32 for the purposes of this function (you will pay one copy)
change iPred to iPred[] as I32 also
do the iPred[] summing outside the inner (or even outer) loop
pack similar instructions in groups of four:
iPred[0] += rgiFilter[0] * rgiPrevVal[0];
iPred[1] += rgiFilter[1] * rgiPrevVal[1];
iPred[2] += rgiFilter[2] * rgiPrevVal[2];
iPred[3] += rgiFilter[3] * rgiPrevVal[3];
rgiFilter[0] += rgiUpdate[0];
rgiFilter[1] += rgiUpdate[1];
rgiFilter[2] += rgiUpdate[2];
rgiFilter[3] += rgiUpdate[3];
This should be enough for the Intel compiler to figure it out
Ensure that iPred is held in a register (not read from memory before, and not written back into memory after, each += operation).
Optimize the memory layout for the 1st level cache. Ensure that the 3 arrays do not fight for the same cache entries. This depends on the CPU architecture and isn't simple at all.
Loop unrolling and vectorizing should be left to the compiler.
See Gcc Auto-vectorization
Start out by making sure that the data is laid out linearly in memory so that you get no cache misses. This doesn't seem to be an issue though.
If you can't SSE the operations (and if the compiler fails with it - look at the assembly), try to separate it into several smaller for-loops (one for each 0 .. 8). Compilers tend to be able to do better optimizations on loops that perform a smaller number of operations (except in cases like this where they might be able to do vectorization/SSE).
16-bit integers are more expensive for a 32/64-bit architecture to use (unless it has specific 16-bit registers). Try converting them to 32 bits before doing the loop (most 64-bit architectures have 32-bit registers as well, AFAIK).
Pretty good code.
At each step, you're basically doing three things, a multiplication and two additions.
The other suggestions are good. Also, I've sometimes found that I get faster code if I separate those activities into different loops, like
one loop to do the multiplication and save to a temporary array.
one loop to sum that array in iPred.
one loop to add rgiUpdate to rgiFilter.
With the unrolling, your loop overhead is negligible, but if the number of different things done inside each loop is minimized, the compiler can sometimes make better use of its registers.
There's lots of optimizations that you can do that involve introducing target specific code. I'll stick mostly with generic stuff, though.
First, if you are going to loop with index limits then you should usually try to loop downward.
Change:
for (i = 0; i < iLen; i++) {
to
for (i = iLen-1; i >= 0; i--) {
This can take advantage of the fact that many common processors essentially do a comparison with 0 for the results of any math operation, so you don't have to do an explicit comparison.
This only works, though, if going backwards through the loop has the same results and if the index is signed (though you can sneak around that).
Alternately you could try limiting by pointer math. This might eliminate the need for an explicit index (counter) variable, which could speed things up, especially if registers are in short supply.
for (p = rgiFilter; p < rgiFilter + 8; ) {
    iPred += (I32) (*p) * *rgiPrevVal++;
    *p++ += *rgiUpdate++;
    ....
}
This also gets rid of the odd updating at the end of your inner loop. The updating at the end of the loop could confuse the compiler and make it produce worse code. You may also find that the loop unrolling that you did do may produce worse or equally as good results as if you had only two statements in the body of the inner loop. The compiler is likely able to make good decisions about how this loop should be rolled/unrolled. Or you might just want to make sure that the loop is unrolled twice since rgiFilter is an array of 16 bit values and see if the compiler can take advantage of accessing it just twice to accomplish two reads and two writes -- doing one 32 bit load and one 32 bit store.
for (p = rgiFilter; p < rgiFilter + 8; ) {
    I16 x = *p;
    I16 y = *(p+1);                    // Hope that the compiler can combine these loads
    iPred += (I32) x * *rgiPrevVal++;
    iPred += (I32) y * *rgiPrevVal++;
    *p++ += *rgiUpdate++;
    *p++ += *rgiUpdate++;              // Hope that the compiler can combine these stores
    ....
}
If your compiler and/or target processor supports it you can also try issuing prefetch instructions. For instance gcc has:
__builtin_prefetch (const void * addr)
__builtin_prefetch (const void * addr, int rw)
__builtin_prefetch (const void * addr, int rw, int locality)
These can be used to tell the compiler that if the target has prefetch instructions, it should use them to try to get addr into the cache ahead of time. Optimally these should be issued once per cache line step per array you're working on. The rw argument tells the compiler whether you want to read or write the address. locality has to do with whether the data needs to stay in cache after you access it. The compiler just tries its best to generate the right instructions for this, but if it can't do what you ask for on a certain target it just does nothing, and it doesn't hurt anything.
Also, since the __builtin_ functions are special the normal rules about variable number of arguments don't really apply -- this is a hint to the compiler, not a call to a function.
You should also look into any vector operations your target supports as well as any generic or platform specific functions, builtins, or pragmas that your compiler supports for doing vector operations.
