What are good test cases for benchmarking & stress testing substring search algorithms? - c

I'm trying to evaluate different substring search (à la strstr) algorithms and implementations, and I'm looking for some well-crafted needle and haystack strings that will catch worst-case performance and possible corner-case bugs. I suppose I could work them out myself, but I figure someone has to have a good collection of test cases sitting around somewhere...

Some thoughts and a partial answer to myself:
Worst case for brute force algorithm:
a^(n+1) b in (a^n b)^m
e.g. aaab in aabaabaabaabaabaabaab (a small generator for this case is sketched at the end of this answer)
Worst case for SMOA:
Something like yxyxyxxyxyxyxx in (yxyxyxxyxyxyxy)^n. Needs further refinement. I'm trying to ensure that each advancement is only half the length of the partial match, and that maximal suffix computation requires the maximal amount of backtracking. I'm pretty sure I'm on the right track because this type of case is the only way I've found so far to make my implementation of SMOA (which is asymptotically 6n+5) run slower than glibc's Two-Way (which is asymptotically 2n-m but has moderately painful preprocessing overhead).
Worst case for anything rolling-hash based:
Whatever sequence of bytes causes hash collisions with the hash of the needle. For any reasonably-fast hash and a given needle, it should be easy to construct a haystack whose hash collides with the needle's hash at every point. However, it seems difficult to simultaneously create long partial matches, which are the only way to get the worst-case behavior. Naturally for worst-case behavior the needle must have some periodicity, and a way of emulating the hash by adjusting just the final characters.
Worst case for Two-Way:
Seems to be very short needle with nontrivial MS decomposition - something like bac - where the haystack contains repeated false positives in the right-half component of the needle - something like dacdacdacdacdacdacdac. The only way this algorithm can be slow (other than by glibc authors implementing it poorly...) is by making the outer loop iterate many times and repeatedly incur that overhead (and making the setup overhead significant).
Other algorithms:
I'm really only interested in algorithms that are O(1) in space and have low preprocessing overhead, so I haven't looked at their worst cases so much. At least Boyer-Moore (without the modifications to make it O(n)) has a nontrivial worst-case where it becomes O(nm).
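To make the brute-force worst case above concrete, here is a minimal generator sketch in C (the function name and the caller-frees convention are my own, hypothetical choices):

#include <stdlib.h>
#include <string.h>

/* Build the brute-force worst case described above:
     needle   = a^(n+1) b
     haystack = (a^n b)^m
   Returns 0 on allocation failure; otherwise the caller frees both strings. */
static int make_bruteforce_worst_case(size_t n, size_t m,
                                      char **needle, char **haystack)
{
    *needle = malloc(n + 3);
    *haystack = malloc(m * (n + 1) + 1);
    if (!*needle || !*haystack) { free(*needle); free(*haystack); return 0; }

    memset(*needle, 'a', n + 1);          /* n+1 copies of 'a', then 'b' */
    (*needle)[n + 1] = 'b';
    (*needle)[n + 2] = '\0';

    for (size_t i = 0; i < m; i++) {      /* m blocks of a^n b */
        memset(*haystack + i * (n + 1), 'a', n);
        (*haystack)[i * (n + 1) + n] = 'b';
    }
    (*haystack)[m * (n + 1)] = '\0';
    return 1;
}

With n=2 and m=7 this produces exactly the aaab / aabaabaabaabaabaabaab pair above; the point is that a naive search does about n comparisons at nearly every haystack position.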

Doesn't answer your question directly, but you may find the algorithms in the book Algorithms on Strings, Trees and Sequences: Computer Science and Computational Biology interesting (it has many novel algorithms for substring search). It is also a good source of special and complex cases.

A procedure that might give interesting statistics, though I have no time to test right now:
Randomize over string length,
then randomize over string contents of that length,
then randomize over offset/length of a substring (possibly something not in the string),
then randomly clobber the substring (possibly not at all),
repeat.
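Here is a rough sketch of that procedure as a C fuzz loop, assuming a candidate my_strstr() to be checked against libc strstr() as the reference (the function name, the alphabet and the clobber rate are hypothetical choices):

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

char *my_strstr(const char *haystack, const char *needle);  /* implementation under test */

static void fuzz_once(void)
{
    enum { MAXLEN = 256 };
    char hay[MAXLEN + 1], nee[MAXLEN + 1];

    size_t hlen = (size_t)rand() % MAXLEN + 1;         /* randomize over string length */
    for (size_t i = 0; i < hlen; i++)                  /* ...then over string contents */
        hay[i] = "ab"[rand() % 2];                     /* tiny alphabet hits more corner cases */
    hay[hlen] = '\0';

    size_t off = (size_t)rand() % hlen;                /* ...then over offset/length of a substring */
    size_t nlen = (size_t)rand() % (hlen - off) + 1;
    memcpy(nee, hay + off, nlen);
    nee[nlen] = '\0';

    for (size_t i = 0; i < nlen; i++)                  /* ...then randomly clobber it (possibly not at all) */
        if (rand() % 4 == 0)
            nee[i] = "ab"[rand() % 2];

    if (my_strstr(hay, nee) != strstr(hay, nee)) {
        printf("MISMATCH: haystack=\"%s\" needle=\"%s\"\n", hay, nee);
        abort();
    }
}

Run fuzz_once() in a loop, and seed with srand() so failures are reproducible. This catches corner-case bugs; it will not reliably find worst-case performance, which is what the crafted strings above are for.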

You can generate container strings (resp., contained test values) recursively by:
Starting with the empty string, repeatedly generate all strings obtained by augmenting a string already in the set with one character from an alphabet, added on the left or on the right (or both).
The alphabet for generating container strings is chosen by you.
You test 2 alphabets for contained strings. One is the one that makes up container strings, the other is its complement.
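A sketch of that recursive augmentation (the alphabet, the maximum length and the emit() callback are arbitrary illustrative choices; this version generates duplicates, which a real harness would filter out):

#include <stdio.h>
#include <string.h>

static void emit(const char *s) { printf("%s\n", s); }   /* hand s to your test driver */

/* Grow s by one alphabet character on the left and on the right, recursing up to maxlen. */
static void augment(const char *s, size_t maxlen, const char *alphabet)
{
    emit(s);
    size_t len = strlen(s);
    char buf[64];
    if (len >= maxlen || len + 2 > sizeof buf) return;

    for (const char *c = alphabet; *c; c++) {
        buf[0] = *c;                      /* add the character on the left */
        memcpy(buf + 1, s, len + 1);
        augment(buf, maxlen, alphabet);

        memcpy(buf, s, len);              /* add the character on the right */
        buf[len] = *c;
        buf[len + 1] = '\0';
        augment(buf, maxlen, alphabet);
    }
}

For example, augment("", 4, "ab") enumerates every string over {a, b} up to length 4 to use as container strings; contained test values are generated the same way over that alphabet and over its complement.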

Related

Is O(cn) at least as fast as O(n) in a non-asymptotic way?

So first of all let me talk about the motivation for this question. Let's suppose you have to find the minimum and the maximum values in an array. In this case, you have two ways of doing so.
The first one consists in iterating over the array and finding the maximum value, then doing the same thing to find the minimum value. This solution is O(2n).
The second one consists in iterating over the array just one time and finding both the minimum and maximum value at the same time. This solution is O(n).
Even though the time complexity has been halved, for each iteration of the O(n) solution you now have twice as many instructions (ignoring how the compiler may optimize these instructions), so I believe they should take the same amount of time to execute.
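For concreteness, the two variants look roughly like this (the array/length parameters are hypothetical; both assume n >= 1):

#include <stddef.h>

/* Two passes: the "O(2n)" version, one comparison per element per pass. */
void minmax_two_pass(const int *a, size_t n, int *min, int *max)
{
    *min = a[0];
    for (size_t i = 1; i < n; i++) if (a[i] < *min) *min = a[i];
    *max = a[0];
    for (size_t i = 1; i < n; i++) if (a[i] > *max) *max = a[i];
}

/* One pass: the "O(n)" version, but two comparisons per element. */
void minmax_one_pass(const int *a, size_t n, int *min, int *max)
{
    *min = *max = a[0];
    for (size_t i = 1; i < n; i++) {
        if (a[i] < *min) *min = a[i];
        if (a[i] > *max) *max = a[i];
    }
}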
Let me give you a second example. Now you need to reverse an array. Again, you have two ways of doing so.
The first one is to create an empty array, iterate over the data array filling the empty array. This solution is O(n).
The second one is to iterate over the data array, swapping the 0th and (n-1)th elements, then the 1st and (n-2)th elements and so on (using this strategy) until you reach the middle of the array. This solution is O((1/2)n).
Again, even though the time complexity has been cut in half, you have three times more instructions per iteration. You're iterating over (1/2)n elements, but for each iteration you have to perform three XOR instructions. If you were not to use XOR but an auxiliary variable, you would still need 2 more instructions to perform the variable swap, so now I believe that O((1/2)n) should actually be worse than O(n).
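Again for concreteness, the two reversal strategies (the XOR swap is shown only because the question mentions it; a temporary variable is usually at least as fast):

#include <stddef.h>

/* Copy into a second array: n iterations, one assignment each. */
void reverse_copy(const int *src, int *dst, size_t n)
{
    for (size_t i = 0; i < n; i++)
        dst[i] = src[n - 1 - i];
}

/* In place: n/2 iterations, three XORs per swap (i and j never alias here). */
void reverse_in_place(int *a, size_t n)
{
    if (n < 2) return;
    for (size_t i = 0, j = n - 1; i < j; i++, j--) {
        a[i] ^= a[j];
        a[j] ^= a[i];
        a[i] ^= a[j];
    }
}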
Having said these things, my question is the following:
Ignoring space complexity, garbage collection and possible compiler optimizations: given O(c1*n) and O(c2*n) algorithms with c1 > c2, can I be sure that the algorithm that gives me O(c1*n) is as fast as or faster than the one that gives me O(c2*n)?
This question is cool because it can make a difference in how I start writing code from here on. If the "more complex" (c1) way is as fast as the "less complex" (c2) one but more readable, I'm sticking with the "more complex" one.
c1 > c2, can I be sure that the algorithm that gives me O(c1n) is as fast or faster than the one that gives me O(c2n)?
The whole issue lies within the words "fast" or "faster". Computational complexity doesn't strictly measure what we intuitively understand as "fast". Without going into mathematical details (although it's a good idea: https://en.wikipedia.org/wiki/Big_O_notation), it answers the question "how much slower does it get as my input grows". So if you have O(n^2) complexity you can roughly expect that doubling the size of the input will make your algorithm take 4 times as long. Whereas for linear complexity, a 2 times bigger input only doubles the time. As you can see, it's relative, so any constants cancel out.
To sum up: from the way you ask your question, it doesn't seem the big-O notation is the correct tool here.
By definition, if c1 and c2 are constants, O(c1*n) === O(c2*n) === O(n). That is, the number of operations per element of your array of length n is completely irrelevant in this kind of complexity analysis.
All that it will tell you is that "it's linear". That is, if you have 1 bazillion operations for an array of length n, then you'll have 2 bazillion operations for an array of length 2*n (plus or minus something that grows slower than linear).
can I assume that, given O(c1n) and O(c2n) algorithms with c1 > c2, the algorithm that gives me O(c1n) is as fast as or faster than the one that gives me O(c2n)?
Nope, not at all.
First, because the constants there are meaningless in that analysis. There's no other way to put it: whatever restrictions you put on c1 and c2 are absolutely irrelevant for big-O analysis. The whole idea is that it discards those constants.
Second, because they don't tell you anything that would enable you to compare the two algorithms runtime for a specific value of n.
Such complexity analysis only enables you to compare the asymptotic behavior of algorithms. Real-world problems in general don't care about where the asymptotes are.
Assume that A1(n) is the number of operations Algorithm 1 needs for an input of length n, and A2(n) is the same for Algorithm 2. You could have:
A1(n) = 10n + 900
A2(n) = 100n
The complexity of both is O(A1) = O(A2) = O(n). For small inputs, A2 is faster. For large inputs, A1 is faster. The point where they change is n == 10.
This question is cool because it can make a difference in how I start writing code from here on. If the "more complex" (c1) way is as fast as the "less complex" (c2) one but more readable, I'm sticking with the "more complex" one.
Not only that, but also there's the fact that when you have 2 different algorithms that are really of different complexity classes (e.g., linear vs quadratic), it might still make sense to use the one of higher complexity as it may still be faster.
For example:
A3(n) = n^2
A4(n) = n + 10^20.
That is, Algorithm 3 is quadratic, while Algorithm 4 is linear but has a huge constant initialization time.
For inputs of size up to around n == 10^10, it will be faster to use the quadratic algorithm.
It may very well be the case that all relevant inputs for your specific problem fall within that range, meaning that the quadratic algorithm would be the better, faster choice.
The bottom line is: for analyzing the actual time it will take to run an algorithm on a given input (or a given bounded range of inputs, as nearly all real-world problems are) and compare it with another algorithm, big-O analysis is meaningless.
Another way to put it: you're asking a practical "engineering" question (i.e., which option is better / faster) but trying to answer the question with a tool that's only useful for "theoretical" analysis. That tool is important, yes. But it has no chance of giving you the answer you're looking for, by design.
By definition, time complexity ignores constants. So O((1/2)n) == O(n) == O(2n) == O(cn).
Your example of O((1/2)n) shows why this is the case, because the constants can measure units of anything, so comparing them is meaningless.
You can never tell which algorithm is faster based only on the time complexity. But, you can tell which one would be faster as n approaches infinity. Since constants are removed from the time complexity, they would be considered equal and therefore with O(c1n) and O(c2n) you still would not be able to tell which one is faster even as n approaches infinity.
(my theoretical computer science courses are a couple of decades ago)
O(cn) is O(n).
It's still a linear search over the array.

Cache Optimization - Hashmap vs QuickSort?

Suppose that I have N unsorted arrays, of integers. I'd like to find the intersection of those arrays.
There are two good ways to approach this problem.
One, I can sort the arrays in place with an n log n sort, like QuickSort or MergeSort. Then I can put a pointer at the start of each array, compare each array to the one below it, and advance the pointer of whichever array[pointer] is smaller; if they're all equal, you've found an element of the intersection (a sketch of this walk appears below).
This is an O(nlogn) solution, with constant memory (since everything is done in-place).
The second solution is to use a hash map, putting in the values that appear in the first array as keys, and then incrementing those values as you traverse through the remaining arrays (and then grabbing everything that had a value of N). This is an O(n) solution, with O(n) memory, where n is the total size of all of the arrays.
Theoretically, the former solution is O(nlogn), and the latter is O(n). However, hash maps do not have great locality, because collisions can scatter items randomly through the map. The other solution, although O(nlogn), traverses each array one element at a time, exhibiting excellent locality. Since a CPU will tend to pull the array values next to the current index into the cache, the O(nlogn) solution will be hitting the cache much more often than the hash map solution.
Therefore, given a significantly large array size (as the number of elements goes to infinity), is it feasible that the O(nlogn) solution is actually faster than the O(n) solution?
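For reference, a minimal sketch of the sort-then-walk intersection described in the question (argument names are hypothetical; it assumes each array has already been sorted ascending and has no internal duplicates):

#include <stddef.h>

/* Keep one index per sorted array; emit a value when every head agrees,
   otherwise advance whichever arrays are behind the current maximum.
   Returns the number of values written to out. */
size_t intersect_sorted(const int *const *arrays, const size_t *lens,
                        size_t n_arrays, int *out)
{
    enum { MAX_ARRAYS = 64 };                         /* arbitrary bound for this sketch */
    size_t idx[MAX_ARRAYS] = {0};
    if (n_arrays == 0 || n_arrays > MAX_ARRAYS) return 0;

    size_t found = 0;
    for (;;) {
        for (size_t i = 0; i < n_arrays; i++)
            if (idx[i] >= lens[i]) return found;      /* some array is exhausted: done */

        int max = arrays[0][idx[0]];                  /* largest value among the heads */
        for (size_t i = 1; i < n_arrays; i++)
            if (arrays[i][idx[i]] > max) max = arrays[i][idx[i]];

        int all_equal = 1;
        for (size_t i = 0; i < n_arrays; i++)
            if (arrays[i][idx[i]] != max) all_equal = 0;

        if (all_equal) {
            out[found++] = max;                       /* every head agrees: intersection element */
            for (size_t i = 0; i < n_arrays; i++) idx[i]++;
        } else {
            for (size_t i = 0; i < n_arrays; i++)     /* catch the laggards up to max */
                while (idx[i] < lens[i] && arrays[i][idx[i]] < max) idx[i]++;
        }
    }
}

The walk itself touches each element once, so it is linear in the total input; the preceding sorts are what make the whole approach O(n log n).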
For integers you can use a non-comparison sort (see counting sort, radix sort). A large set might be encoded, e.g. by encoding sequential runs as ranges. That would compress the data set and allow for skipping past large blocks (see RoaringBitmaps). There is the potential to be hardware friendly and have O(n) complexity.
Complexity theory does not account for constants. As you suspect, there is always the potential for an algorithm with a higher complexity to be faster than the alternative, due to the hidden constants. By exploiting the nature of the problem, e.g. limiting the solution to integers, there are potential optimizations not available to a general-purpose approach. Good algorithm design often requires understanding and leveraging those constraints.

Is it possible to allow mismatches in KMP algorithm?

I am looking for an efficient algorithm to allow mismatches (at most 3) when comparing a pattern with a text. Original KMP does this job efficiently on my data, but I was considering extending this algorithm to accommodate mismatches.
For my case: GACCCT is considered a match with GGGGGAGGTTTTTT at start position 4 in the second sequence.
I need to do pairwise comparison between two files. Each contains approximately 500,000 sequences. Sequences in one file are relatively short (~50 bases) while those in the other are longer (~200).
I tried the regex package in Python, the Levenshtein algorithm and edit distances, but they are slow and I would have to wait a couple of weeks for the work to finish.
I think your data isn't too large, so maybe this will work:
I think you should create a suffix tree for your data. Once you do this, finding substrings will be very easy, whether or not you want to count mismatches: you just traverse the tree with the characters you're looking for, until you've either found a substring, or hit the most number of mismatches you can tolerate.
If you want at most three mismatches, there's a simple but kind of daft algorithm that'll work on most real cases. Break your pattern into four contiguous parts arbitrarily. (It is probably useful for them to match a random text location with roughly the same probability.) Find all matches in the text of your four contiguous parts. See which of those completes to an at-most-three-mismatches match by brute force.
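A sketch of that split-and-verify filter (the naive exact scan for each piece is a placeholder where a real version would plug in a memmem()/strstr-style search; all names are hypothetical):

#include <stddef.h>
#include <string.h>

/* Count mismatches between pat and text+pos, bailing out once past the limit. */
static int mismatches(const char *text, size_t pos, const char *pat,
                      size_t m, int limit)
{
    int mm = 0;
    for (size_t i = 0; i < m && mm <= limit; i++)
        if (text[pos + i] != pat[i]) mm++;
    return mm;
}

/* Report every start position where pat matches text with at most `limit`
   mismatches: split pat into limit+1 pieces, find exact occurrences of each
   piece, and verify each candidate alignment by brute force. */
size_t approx_search(const char *text, size_t n, const char *pat, size_t m,
                     int limit, size_t *hits, size_t max_hits)
{
    size_t nhits = 0;
    int pieces = limit + 1;                     /* 4 pieces for at most 3 mismatches */
    size_t piece_len = m / pieces;
    if (piece_len == 0) return 0;               /* pattern shorter than the piece count: handle separately */

    for (int p = 0; p < pieces; p++) {
        size_t off = (size_t)p * piece_len;
        size_t plen = (p == pieces - 1) ? m - off : piece_len;

        for (size_t i = 0; i + plen <= n; i++) {           /* exact scan for this piece */
            if (memcmp(text + i, pat + off, plen) != 0) continue;
            if (i < off || i - off + m > n) continue;       /* full pattern would not fit here */
            size_t start = i - off;
            if (mismatches(text, start, pat, m, limit) > limit) continue;

            int dup = 0;                                    /* the same start can be found via several pieces */
            for (size_t h = 0; h < nhits; h++)
                if (hits[h] == start) { dup = 1; break; }
            if (!dup && nhits < max_hits) hits[nhits++] = start;
        }
    }
    return nhits;
}

The pigeonhole argument is what makes this complete: if an alignment has at most 3 mismatches, at least one of the 4 pieces must match exactly, so it is found by one of the exact scans.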
Mehrdad's solution of using a suffix tree is better in general, but it requires more programming effort.

Regarding in-place merge in an array

I came across the following question.
Given an array of n elements and an integer k where k < n. Elements {a0...ak} and
{ak+1...an} are already sorted. Give an algorithm to sort in O(n) time and O(1) space.
It does not seem to me like it can be done in O(n) time and O(1) space. The problem really seems to be asking how to do the merge step of mergesort but in-place. If it was possible, wouldn't mergesort be implemented that way? I am unable to convince myself though and need some opinion.
This seems to indicate that it is possible to do in O(lg^2 n) space. I cannot see how to prove that it is impossible to merge in constant space, but I cannot see how to do it either.
Edit:
Chasing references, Knuth Vol 3 - Exercise 5.5.3 says "A considerably more complicated algorithm of L. Trabb-Pardo provides the best possible answer to this problem: It is possible to do stable merging in O(n) time and stable sorting in O(n lg n) time, using only O(lg n) bits of auxiliary memory for a fixed number of index variables."
More references that I have not read. Thanks for an interesting problem.
Further edit:
This article claims that the article by Huang and Langston has an algorithm that merges two lists of size m and n in time O(m + n), so the answer to your question would seem to be yes. Unfortunately I do not have access to the article, so I must trust the second-hand information. I'm not sure how to reconcile this with Knuth's pronouncement that the Trabb-Pardo algorithm is optimal. If my life depended on it, I'd go with Knuth.
I now see that this has been asked as an earlier Stack Overflow question a number of times. I don't have the heart to flag it as a duplicate.
Huang B.-C. and Langston M. A., Practical in-place merging, Comm. ACM 31 (1988) 348-352
There are several algorithms for doing this, none of which are very easy to intuit. The key idea is to use a part of the arrays to be merged as a buffer, then do a standard merge using this buffer for auxiliary space. If you can then reposition the elements so that the buffer elements are in the right place, you're golden.
I have written up an implementation of one of these algorithms on my personal site if you're interested in looking at it. It's based on the paper "Practical In-Place Merging" by Huang and Langston. You probably will want to look over that paper for some insight.
I've also heard that there are good adaptive algorithms for this, which use some fixed-size buffer of your choosing (which could be O(1) if you wanted), but then scale elegantly with the buffer size. I don't know any of these off the top of my head, but I'm sure a quick search for "adaptive merge" might turn something up.
No it isn't possible, although my job would be much easier if it was :).
You have a O(log n) factor which you can't avoid. You can choose to take it as time or space, but the only way to avoid it is to not sort. With O(log n) space you can build a list of continuations that keep track of where you stashed the elements that didn't quite fit. With recursion this can be made to fit in O(1) heap, but that's only by using O(log n) stack frames instead.
Here is the progress of merge-sorting odds and evens from 1-9. Notice how you require log-space accounting to track the order inversions caused by the twin constraints of constant space and linear swaps.
. -
135792468
. -
135792468
: .-
125793468
: .-
123795468
#.:-
123495768
:.-
123459768
.:-
123456798
.-
123456789
123456789
There are some delicate boundary conditions, slightly harder than binary search to get right even in this (possible?) form, which makes it a bad homework problem; but it is a really good mental exercise.
Update
Apparently I am mistaken and there is an algorithm that provides O(n) time and O(1) space. I have downloaded the papers to enlighten myself, and withdraw this answer as incorrect.

What is the fastest substring search algorithm?

OK, so that I don't sound like an idiot, I'm going to state the problem/requirements more explicitly:
Needle (pattern) and haystack (text to search) are both C-style null-terminated strings. No length information is provided; if needed, it must be computed.
Function should return a pointer to the first match, or NULL if no match is found.
Failure cases are not allowed. This means any algorithm with non-constant (or large constant) storage requirements will need to have a fallback case for allocation failure (and performance in the fallback case thereby contributes to worst-case performance).
Implementation is to be in C, although a good description of the algorithm (or link to such) without code is fine too.
...as well as what I mean by "fastest":
Deterministic O(n) where n = haystack length. (But it may be possible to use ideas from algorithms which are normally O(nm) (for example rolling hash) if they're combined with a more robust algorithm to give deterministic O(n) results).
Never performs (measurably; a couple clocks for if (!needle[1]) etc. are okay) worse than the naive brute force algorithm, especially on very short needles which are likely the most common case. (Unconditional heavy preprocessing overhead is bad, as is trying to improve the linear coefficient for pathological needles at the expense of likely needles.)
Given an arbitrary needle and haystack, comparable or better performance (no worse than 50% longer search time) versus any other widely-implemented algorithm.
Aside from these conditions, I'm leaving the definition of "fastest" open-ended. A good answer should explain why you consider the approach you're suggesting "fastest".
My current implementation runs in roughly between 10% slower and 8 times faster (depending on the input) than glibc's implementation of Two-Way.
Update: My current optimal algorithm is as follows:
For needles of length 1, use strchr.
For needles of length 2-4, use machine words to compare 2-4 bytes at once, as follows: preload the needle into a 16- or 32-bit integer with bitshifts and, at each iteration, cycle the old byte out and the new haystack byte in (a sketch of the length-2 case appears after this list). Every byte of the haystack is read exactly once and incurs a check against 0 (end of string) and one 16- or 32-bit comparison.
For needles of length >4, use Two-Way algorithm with a bad shift table (like Boyer-Moore) which is applied only to the last byte of the window. To avoid the overhead of initializing a 1kb table, which would be a net loss for many moderate-length needles, I keep a bit array (32 bytes) marking which entries in the shift table are initialized. Bits that are unset correspond to byte values which never appear in the needle, for which a full-needle-length shift is possible.
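For illustration, here is a minimal sketch of the length-2 case of that word-comparison idea, written from the description above (so it is my reading of it, not necessarily the author's exact code):

#include <stdint.h>
#include <stddef.h>

/* Search for a 2-byte needle by rolling the haystack through a 16-bit word. */
static char *twobyte_search(const char *haystack, const char *needle)
{
    const unsigned char *h = (const unsigned char *)haystack;
    const unsigned char *n = (const unsigned char *)needle;
    if (!h[0]) return NULL;                          /* haystack shorter than the needle */

    uint16_t nw = (uint16_t)(n[0] << 8 | n[1]);      /* needle packed into one word      */
    uint16_t hw = (uint16_t)(h[0] << 8 | h[1]);      /* rolling window over the haystack */
    for (h++; *h && hw != nw; hw = (uint16_t)(hw << 8 | *++h))
        ;                                            /* one load, one 0-check, one compare per byte */
    return hw == nw ? (char *)(h - 1) : NULL;
}

The 3- and 4-byte cases presumably do the same with a 32-bit word, leaving the unused low byte(s) out of the comparison for length 3.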
The big questions left in my mind are:
Is there a way to make better use of the bad shift table? Boyer-Moore makes best use of it by scanning backwards (right-to-left) but Two-Way requires a left-to-right scan.
The only two viable candidate algorithms I've found for the general case (no out-of-memory or quadratic performance conditions) are Two-Way and String Matching on Ordered Alphabets. But are there easily-detectable cases where different algorithms would be optimal? Certainly many of the O(m) (where m is needle length) in space algorithms could be used for m<100 or so. It would also be possible to use algorithms which are worst-case quadratic if there's an easy test for needles which provably require only linear time.
Bonus points for:
Can you improve performance by assuming the needle and haystack are both well-formed UTF-8? (With characters of varying byte lengths, well-formed-ness imposes some string alignment requirements between the needle and haystack and allows automatic 2-4 byte shifts when a mismatching head byte is encountered. But do these constraints buy you much/anything beyond what maximal suffix computations, good suffix shifts, etc. already give you with various algorithms?)
Note: I'm well aware of most of the algorithms out there, just not how well they perform in practice. Here's a good reference so people don't keep giving me references on algorithms as comments/answers: http://www-igm.univ-mlv.fr/~lecroq/string/index.html
Build up a test library of likely needles and haystacks. Profile the tests on several search algorithms, including brute force. Pick the one that performs best with your data.
Boyer-Moore uses a bad character table with a good suffix table.
Boyer-Moore-Horspool uses a bad character table.
Knuth-Morris-Pratt uses a partial match table.
Rabin-Karp uses running hashes.
They all trade overhead for reduced comparisons to a different degree, so the real world performance will depend on the average lengths of both the needle and haystack. The more initial overhead, the better with longer inputs. With very short needles, brute force may win.
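As a reference point for the table-driven approaches above, here is a generic Boyer-Moore-Horspool sketch whose only preprocessing is a single 256-entry bad-character table (a textbook version, not tuned code; lengths are taken as parameters rather than computed from NUL terminators):

#include <stddef.h>
#include <string.h>

/* Boyer-Moore-Horspool: shift by the bad-character rule applied to the
   last byte of the current window. */
const char *bmh_search(const char *haystack, size_t n,
                       const char *needle, size_t m)
{
    if (m == 0) return haystack;
    if (n < m) return NULL;

    size_t shift[256];
    for (size_t i = 0; i < 256; i++) shift[i] = m;          /* byte absent from needle: skip the whole needle */
    for (size_t i = 0; i + 1 < m; i++)
        shift[(unsigned char)needle[i]] = m - 1 - i;        /* distance from the end of the needle */

    for (size_t pos = 0; pos + m <= n; ) {
        if (memcmp(haystack + pos, needle, m) == 0)
            return haystack + pos;
        pos += shift[(unsigned char)haystack[pos + m - 1]]; /* shift on the window's last byte */
    }
    return NULL;
}

The table build is exactly the kind of preprocessing overhead that has to be amortized over the search, which is why brute force can still win for very short needles.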
Edit:
A different algorithm might be best for finding base pairs, English phrases, or single words. If there were one best algorithm for all inputs, it would have been publicized.
Think about the following little table. Each question mark might have a different best search algorithm.
                  short needle    long needle
short haystack         ?               ?
long haystack          ?               ?
This should really be a graph, with a range of shorter to longer inputs on each axis. If you plotted each algorithm on such a graph, each would have a different signature. Some algorithms suffer with a lot of repetition in the pattern, which might affect uses like searching for genes. Some other factors that affect overall performance are searching for the same pattern more than once and searching for different patterns at the same time.
If I needed a sample set, I think I would scrape a site like google or wikipedia, then strip the html from all the result pages. For a search site, type in a word then use one of the suggested search phrases. Choose a few different languages, if applicable. Using web pages, all the texts would be short to medium, so merge enough pages to get longer texts. You can also find public domain books, legal records, and other large bodies of text. Or just generate random content by picking words from a dictionary. But the point of profiling is to test against the type of content you will be searching, so use real world samples if possible.
I left short and long vague. For the needle, I think of short as under 8 characters, medium as under 64 characters, and long as under 1k. For the haystack, I think of short as under 2^10, medium as under 2^20, and long as up to 2^30 characters.
I believe it may very well be the "Simple Real-Time Constant-Space String Matching" algorithm by Dany Breslauer, Roberto Grossi, and Filippo Mignosi, published in 2011.
Update:
In 2014 the authors published this improvement: Towards optimal packed string matching.
I was surprised to see our tech report cited in this discussion; I am one of the authors of the algorithm that was named Sustik-Moore above. (We did not use that term in our paper.)
I wanted here to emphasize that for me the most interesting feature of the algorithm is that it is quite simple to prove that each letter is examined at most once. For earlier Boyer-Moore versions it was proved that each letter is examined at most 3 times, and later at most 2 times, and those proofs were more involved (see citations in the paper). Therefore I also see a didactic value in presenting/studying this variant.
In the paper we also describe further variations that are geared toward efficiency while relaxing the theoretical guarantees. It is a short paper and the material should be understandable to an average high school graduate in my opinion.
Our main goal was to bring this version to the attention of others who can further improve on it. String searching has so many variations and we alone cannot possibly think of all where this idea could bring benefits. (Fixed text and changing pattern, fixed pattern different text, preprocessing possible/not possible, parallel execution, finding matching subsets in large texts, allow errors, near matches etc., etc.)
The http://www-igm.univ-mlv.fr/~lecroq/string/index.html link you point to is an excellent source and summary of some of the best known and researched string matching algorithms.
Solutions to most search problems involve trade-offs with respect to preprocessing overhead, time and space requirements. No single algorithm will be optimal or practical in all cases.
If your objective is to design a specific algorithm for string searching, then ignore the rest of what I have to say. If you want to develop a generalized string-searching service routine, then try the following:
Spend some time reviewing the specific strengths and weaknesses of the algorithms you have already referenced. Conduct the review with the objective of finding a set of algorithms that cover the range and scope of string searches you are interested in. Then, build a front-end search selector based on a classifier function to target the best algorithm for the given inputs. This way you may employ the most efficient algorithm to do the job. This is particularly effective when an algorithm is very good for certain searches but degrades poorly. For example, brute force is probably the best for needles of length 1 but quickly degrades as needle length increases; whereupon the Sustik-Moore algorithm may become more efficient (over small alphabets), and then for longer needles and larger alphabets, the KMP or Boyer-Moore algorithms may be better. These are just examples to illustrate a possible strategy.
The multiple-algorithm approach is not a new idea. I believe it has been employed by a few commercial sort/search packages (e.g. SYNCSORT, commonly used on mainframes, implements several sort algorithms and uses heuristics to choose the "best" one for the given inputs).
Each search algorithm comes in several variations that can make significant differences to its performance, as, for example, this paper illustrates.
Benchmark your service to categorize the areas where additional search strategies are needed or to more effectively tune your selector function. This approach is not quick or easy but, if done well, can produce very good results.
The fastest substring search algorithm is going to depend on the context:
the alphabet size (e.g. DNA vs English)
the needle length
The 2010 paper "The Exact String Matching Problem: a Comprehensive Experimental Evaluation" gives tables with runtimes for 51 algorithms (with different alphabet sizes and needle lengths), so you can pick the best algorithm for your context.
All of those algorithms have C implementations, as well as a test suite, here:
http://www.dmi.unict.it/~faro/smart/algorithms.php
A really good question. Just to add some tiny bits...
Someone was talking about DNA sequence matching. But for DNA sequences, what we usually do is build a data structure (e.g. suffix array, suffix tree or FM-index) for the haystack and match many needles against it. This is a different question.
It would be really great if someone would like to benchmark various algorithms. There are very good benchmarks on compression and the construction of suffix arrays, but I have not seen a benchmark on string matching. Potential haystack candidates could be from the SACA benchmark.
A few days ago I was testing the Boyer-Moore implementation from the page you recommended (EDIT: I need a function call like memmem(), but it is not a standard function, so I decided to implement it). My benchmarking program uses random haystacks. It seems that the Boyer-Moore implementation on that page is considerably faster than glibc's memmem() and Mac's strnstr(). In case you are interested, the implementation is here and the benchmarking code is here. This is definitely not a realistic benchmark, but it is a start.
A faster "Search for a single matching character" (ala strchr) algorithm.
Important notes:
These functions use a "count of leading/trailing zeros" gcc compiler intrinsic, __builtin_ctz. These functions are likely to be fast only on machines that have an instruction that performs this operation (i.e., x86, ppc, arm).
These functions assume the target architecture can perform 32 and 64 bit unaligned loads. If your target architecture does not support this, you will need to add some start up logic to properly align the reads.
These functions are processor neutral. If the target CPU has vector instructions, you might be able to do (much) better. For example, the strlen function below uses SSE3 and can be trivially modified to XOR the bytes scanned to look for a byte other than 0. Benchmarks performed on a 2.66GHz Core 2 laptop running Mac OS X 10.6 (x86_64):
843.433 MB/s for strchr
2656.742 MB/s for findFirstByte64
13094.479 MB/s for strlen
... a 32-bit version:
/* findFirstZeroByte32(x): 1-based index of the first zero byte in x, or 0 if none.
   Uses the classic "has a zero byte" bit trick; the zero check guards the
   clz/ctz builtins, whose result is undefined for a zero argument. */
#ifdef __BIG_ENDIAN__
#define findFirstZeroByte32(x) ({ uint32_t _x = (x); _x = ~(((_x & 0x7F7F7F7Fu) + 0x7F7F7F7Fu) | _x | 0x7F7F7F7Fu); (_x == 0u) ? 0 : (__builtin_clz(_x) >> 3) + 1; })
#else
#define findFirstZeroByte32(x) ({ uint32_t _x = (x); _x = ~(((_x & 0x7F7F7F7Fu) + 0x7F7F7F7Fu) | _x | 0x7F7F7F7Fu); (_x == 0u) ? 0 : ((__builtin_ctz(_x) + 1) >> 3); })
#endif
unsigned char *findFirstByte32(unsigned char *ptr, unsigned char byte) {
uint32_t *ptr32 = (uint32_t *)ptr, firstByte32 = 0u, byteMask32 = (byte) | (byte << 8);
byteMask32 |= byteMask32 << 16;
while((firstByte32 = findFirstZeroByte32((*ptr32) ^ byteMask32)) == 0) { ptr32++; }
return(ptr + ((((unsigned char *)ptr32) - ptr) + firstByte32 - 1));
}
... and a 64-bit version:
/* 64-bit variant of the same zero-byte finder. */
#ifdef __BIG_ENDIAN__
#define findFirstZeroByte64(x) ({ uint64_t _x = (x); _x = ~(((_x & 0x7F7F7F7F7F7F7F7Full) + 0x7F7F7F7F7F7F7F7Full) | _x | 0x7F7F7F7F7F7F7F7Full); (_x == 0ull) ? 0 : (__builtin_clzll(_x) >> 3) + 1; })
#else
#define findFirstZeroByte64(x) ({ uint64_t _x = (x); _x = ~(((_x & 0x7F7F7F7F7F7F7F7Full) + 0x7F7F7F7F7F7F7F7Full) | _x | 0x7F7F7F7F7F7F7F7Full); (_x == 0ull) ? 0 : ((__builtin_ctzll(_x) + 1) >> 3); })
#endif
unsigned char *findFirstByte64(unsigned char *ptr, unsigned char byte) {
uint64_t *ptr64 = (uint64_t *)ptr, firstByte64 = 0u, byteMask64 = (byte) | (byte << 8);
byteMask64 |= byteMask64 << 16;
byteMask64 |= byteMask64 << 32;
while((firstByte64 = findFirstZeroByte64((*ptr64) ^ byteMask64)) == 0) { ptr64++; }
return(ptr + ((((unsigned char *)ptr64) - ptr) + firstByte64 - 1));
}
Edit 2011/06/04: The OP points out in the comments that this solution has an "insurmountable bug":
it can read past the sought byte or null terminator, which could access an unmapped page or page without read permission. You simply cannot use large reads in string functions unless they're aligned.
This is technically true, but applies to virtually any algorithm that operates on chunks that are larger than a single byte, including the method suggested by the OP in the comments:
A typical strchr implementation is not naive, but quite a bit more efficient than what you gave. See the end of this for the most widely used algorithm: http://graphics.stanford.edu/~seander/bithacks.html#ZeroInWord
It also really has nothing to do with alignment per se. True, this could potentially cause the behavior discussed on the majority of common architectures in use, but this has more to do with microarchitecture implementation details: if the unaligned read straddles a 4K boundary (again, typical), then that read will cause a program-terminating fault if the next 4K page is unmapped.
But this isn't a "bug" in the algorithm given in the answer; that behavior arises because functions like strchr and strlen do not accept a length argument to bound the size of the search. Searching char bytes[1] = {0x55};, which for the purposes of our discussion just so happens to be placed at the very end of a 4K VM page with the next page unmapped, with strchr(bytes, 0xAA) (where strchr is a byte-at-a-time implementation) will crash exactly the same way. Ditto for strchr's related cousin, strlen.
Without a length argument, there is no way to tell when you should switch out of the high speed algorithm and back to a byte-by-byte algorithm. A much more likely "bug" would be to read "past the size of the allocation", which technically results in undefined behavior according to the various C language standards, and would be flagged as an error by something like valgrind.
In summary, anything that operates on larger-than-byte chunks to go faster, as this answer's code does and as the code pointed out by the OP does, yet must have byte-accurate read semantics, is likely to be "buggy" if there is no length argument to control the corner case(s) of "the last read".
The code in this answer is a kernel for being able to find the first byte in a natural CPU word size chunk quickly if the target CPU has a fast ctz like instruction. It is trivial to add things like making sure it only operates on correctly aligned natural boundaries, or some form of length bound, which would allow you to switch out of the high speed kernel and in to a slower byte-by-byte check.
The OP also states in the comments:
As for your ctz optimization, it only makes a difference for the O(1) tail operation. It could improve performance with tiny strings (e.g. strchr("abc", 'a'); but certainly not with strings of any major size.
Whether or not this statement is true depends a great deal on the microarchitecture in question. Using the canonical 4-stage RISC pipeline model, it is almost certainly true. But it is extremely hard to tell if it is true for a contemporary out-of-order superscalar CPU, where the core speed can utterly dwarf the memory streaming speed. In this case, it is not only plausible but quite common for there to be a large gap between "the number of instructions that can be retired" and "the number of bytes that can be streamed", so that you have many instructions that can be retired for each byte that can be streamed. If this gap is large enough, the ctz + shift instructions can be done "for free".
I know it's an old question, but most bad shift tables are single-character. If it makes sense for your dataset (e.g. especially if it's written words), and if you have the space available, you can get a dramatic speedup by using a bad shift table made of n-grams rather than single characters.
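A sketch of that idea as a Horspool-style search whose skip table is indexed by the last two bytes of the window (a 2-gram); the 64K-entry table and its per-call initialization are exactly the extra space and preprocessing being traded away, so this only pays off for long needles or repeated searches (names are hypothetical):

#include <stddef.h>
#include <string.h>

/* Horspool variant: bad-shift table over 2-grams instead of single bytes. */
const char *bmh2_search(const char *hay, size_t n, const char *nd, size_t m)
{
    if (m == 0) return hay;
    if (m == 1) return memchr(hay, nd[0], n);
    if (n < m) return NULL;

    static size_t shift[65536];                  /* 2-gram -> safe shift; static, so not reentrant */
    for (size_t i = 0; i < 65536; i++) shift[i] = m - 1;
    for (size_t j = 1; j + 1 < m; j++) {         /* rightmost occurrence of each 2-gram, excluding
                                                    the one ending at the last byte (keeps shifts >= 1) */
        unsigned g = ((unsigned char)nd[j - 1] << 8) | (unsigned char)nd[j];
        shift[g] = m - 1 - j;
    }

    for (size_t pos = 0; pos + m <= n; ) {
        if (memcmp(hay + pos, nd, m) == 0)
            return hay + pos;
        unsigned g = ((unsigned char)hay[pos + m - 2] << 8)
                   | (unsigned char)hay[pos + m - 1];
        pos += shift[g];
    }
    return NULL;
}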
Here's Python's search implementation, used throughout the core. The comments indicate it uses a compressed Boyer-Moore delta 1 table.
I have done some pretty extensive experimentation with string searching myself, but it was for multiple search strings. Assembly implementations of Horspool and Bitap can often hold their own against algorithms like Aho-Corasick for low pattern counts.
Just search for "fastest strstr", and if you see something of interest just ask me.
In my view you impose too many restrictions on yourself (yes, we all want a sub-linear, at most linear, searcher); however, it takes a real programmer to step in, and until then I think the hash approach is simply a nifty limbo solution (well reinforced by BNDM for shorter 2..16 patterns).
Just a quick example:
Doing Search for Pattern(32bytes) into String(206908949bytes) as-one-line ...
Skip-Performance(bigger-the-better): 3041%, 6801754 skips/iterations
Railgun_Quadruplet_7Hasherezade_hits/Railgun_Quadruplet_7Hasherezade_clocks: 0/58
Railgun_Quadruplet_7Hasherezade performance: 3483KB/clock
Doing Search for Pattern(32bytes) into String(206908949bytes) as-one-line ...
Skip-Performance(bigger-the-better): 1554%, 13307181 skips/iterations
Boyer_Moore_Flensburg_hits/Boyer_Moore_Flensburg_clocks: 0/83
Boyer_Moore_Flensburg performance: 2434KB/clock
Doing Search for Pattern(32bytes) into String(206908949bytes) as-one-line ...
Skip-Performance(bigger-the-better): 129%, 160239051 skips/iterations
Two-Way_hits/Two-Way_clocks: 0/816
Two-Way performance: 247KB/clock
Regards,
Sanmayce
The Two-Way Algorithm that you mention in your question (which by the way is incredible!) has recently been improved to work efficiently on multibyte words at a time: Optimal Packed String Matching.
I haven't read the whole paper, but it seems they rely on a couple of new, special CPU instructions (included in e.g. SSE 4.2) being O(1) for their time complexity claim, though if they aren't available they can simulate them in O(log log w) time for w-bit words which doesn't sound too bad.
You could implement, say, 4 different algorithms. Every M minutes (to be determined empirically) run all 4 on current real data. Accumulate statistics over N runs (also TBD). Then use only the winner for the next M minutes.
Log stats on Wins so that you can replace algorithms that never win with new ones. Concentrate optimization efforts on the winningest routine. Pay special attention to the stats after any changes to the hardware, database, or data source. Include that info in the stats log if possible, so you won't have to figure it out from the log date/time-stamp.
I recently discovered a nice tool to measure the performance of the various available algos:
http://www.dmi.unict.it/~faro/smart/index.php
You might find it useful.
Also, if I have to take a quick call on substring search algorithm, I would go with Knuth-Morris-Pratt.
The fastest is currently EPSM, by S. Faro and O. M. Kulekci.
See https://smart-tool.github.io/smart/
"Exact Packed String Matching" optimized for SIMD SSE4.2 (x86_64 and aarch64). It performs stable and best on all sizes.
The site I linked to compares 199 fast string search algorithms, with the usual ones (BM, KMP, BMH) being pretty slow. EPSM outperforms all the others being mentioned here on these platforms. It's also the latest.
Update 2020: EPSM was recently optimized for AVX and is still the fastest.
You might also want to have diverse benchmarks with several types of strings, as this may have a great impact on performance. The algos will perform differently based on whether you are searching natural language (and even here there might still be fine-grained distinctions because of the different morphologies), DNA strings, random strings, etc.
Alphabet size will play a role in many algos, as will needle size. For instance, Horspool does well on English text but badly on DNA because of the different alphabet size, which makes life hard for the bad-character rule. Introducing the good-suffix rule alleviates this greatly.
Use stdlib strstr:
char *foundit = strstr(haystack, needle);
It was very fast, only took me about 5 seconds to type.
I don't know if it's the absolute best, but I've had good experience with Boyer-Moore.
