I would like to know the fastest search method for an array of structures.
typedef struct fault_table_type {
    fault_types_t fault_code;
    faultmanger_time_ process_time;
    fault_behavior_enum_type_t behavior;
    FAILUREMGR_ACTION fault_action;
    bool forward_fault;
} fault_table_type_t;

static const fault_table_type_t fault_table[] = {
    { COMMS_FAILURE, 60, BEHAVIOR_3, FAIL, false },
    { QUEUE_FAILURE, 10, BEHAVIOR_1, RESET, true },
};
If I am just given the fault_code, how do I search for that value in fault_table? Thanks.
Linear search via a good old for-loop. The compiler will probably unroll the loop for you. If not, write a macro that checks index i and invoke it for your two records. You could try lfind(), but it will probably be a little slower just due to function-call overhead.
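For example, a minimal sketch of that loop, reusing the fault_table and type definitions from the question:

#include <stddef.h>

/* Linear search: fine for a handful of records, trivially readable. */
static const fault_table_type_t *find_fault(fault_types_t code) {
    for (size_t i = 0; i < sizeof fault_table / sizeof fault_table[0]; i++) {
        if (fault_table[i].fault_code == code)
            return &fault_table[i];
    }
    return NULL;
}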
The next step (if you have 1000+ records) is to sort by your key (either the array itself or a separate index) and then use bsearch(). If you change from an array of structs to a struct of arrays, the sorted key array itself serves as the index.
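A sketch of the bsearch() variant, assuming the table has been sorted by fault_code beforehand and that fault_types_t (an enum in the question) compares with the ordinary relational operators:

#include <stdlib.h>

static int cmp_fault(const void *key, const void *elem) {
    fault_types_t k = *(const fault_types_t *)key;
    const fault_table_type_t *e = elem;
    return (k > e->fault_code) - (k < e->fault_code);
}

static const fault_table_type_t *find_fault_sorted(fault_types_t code) {
    return bsearch(&code, fault_table,
                   sizeof fault_table / sizeof fault_table[0],
                   sizeof fault_table[0], cmp_fault);
}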
In either case, you start with code optimized for readability, benchmark it, and then see if you need to make changes. You also want to be more precise about what fastest means: fewest CPU cycles, fewest cache misses, a particular access pattern (cold vs. hot cache), user time, etc.
Related
I'm making a simple ASCII chess game in C for practice, as I'm new to the language. I'd like to create a function that returns some sort of list of possible chess moves. I can store the data for a single move within an integer, as 32 bits is more than enough to encode all the information I need for a single move. Hence, it seems reasonable to return an integer array. No problem here.
I have no problem finding the legal moves. The problem is that, when generating this list, I don't know what size or what data type of array to initialize. This is, of course, because the number of possible legal moves per board state varies dramatically.
A simple solution would be to initialize an array that has a length greater than I'd ever need.
Pseudocode:
int[] legal_moves():
    int legals[1000];
    int index = 0;
    for (int move : moves):
        if (legal(move)) {
            legals[index] = move;
            index++;
        }
    return legals;
After filling this array with moves, there'd be many empty slots in the array, which seems highly memory-inefficient. Not that chess is a memory-intensive game; I'd simply like to get everything as memory- and time-efficient as possible (which is why I'm using C). As I'm used to programming in Python, where lists can have arbitrary lengths, I'd simply append a move there, which isn't trivial in C.
Memory-efficiency is much less of a priority than time-efficiency, so if the previous approach is the most time-efficient, I'm happy with that. I'd like to hear your thoughts on different methods, however.
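For what it's worth, a Python-style append can be sketched in C with a realloc()-backed buffer; the names here (move_list, push_move) are made up for illustration:

#include <stdlib.h>

typedef struct {
    int *data;
    size_t len, cap;
} move_list;

/* Append one move, doubling the buffer as needed; returns 0 on success. */
int push_move(move_list *l, int move) {
    if (l->len == l->cap) {
        size_t new_cap = l->cap ? 2 * l->cap : 64;
        int *p = realloc(l->data, new_cap * sizeof *p);
        if (!p) return -1;   /* out of memory; old buffer still valid */
        l->data = p;
        l->cap = new_cap;
    }
    l->data[l->len++] = move;
    return 0;
}

Start from a zero-initialized list (move_list moves = {0};) and free(moves.data) when done; realloc(NULL, n) behaves like malloc(n), so the first append allocates the buffer.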
I know that accessing an element of an array takes O(1) time if all elements are of one type, using the formula address of array[n] = start address of array + n * sizeof(type). But assume you have an array of objects. These objects could have any number of fields, including nested objects. Can we still consider the access time to be constant?
Edit: I am mainly asking about Java, but I would like to know if there is a difference in case I choose another mainstream language like Python, C++, or JavaScript.
For example in the below code
class tryInt {
    int a;
    int b;
    String s;
    public tryInt() {
        a = 1;
        b = 0;
        s = "Sdaas";
    }
}

class tryobject {
    public class tryObject1 {
        int a;
        int b;
        int c;
    }
    public tryobject() {
        tryObject1 o = new tryObject1();
        sss = "dsfsdf";
    }
    String sss;
}

public class Main {
    public static void main(String[] args) {
        System.out.println("Hello World!");
        Object[] arr = new Object[5];
        arr[0] = new tryInt();
        arr[1] = new tryobject();
        System.out.println(arr[0]);
        System.out.println(arr[1]);
    }
}
I want to know, since a tryInt object should take less space than a tryobject object, how the array can still use the formula address of array[n] = start address of array + n * sizeof(type), because the element type is no longer uniform and the formula should/will fail.
The answer to your question is it depends.
If it's possible to random-access the array when you know the index you want, then yes, it's an O(1) operation.
On the other hand, if each item in the array is a different length, or if the array is stored as a linked list, it's necessary to start looking for your element at the beginning of the array, skipping over elements until you find the one corresponding to your index. That's an O(n) operation.
In the real world of working programmers and collections of data, this O(x) stuff is inextricably bound up with the way the data is represented.
Many people reserve the word array to mean a randomly accessible O(1) collection. The pages of a book are an array. If you know the page number you can open the book to the page. (Flipping the book open to the correct page is not necessarily a trivial operation. You may have to go to your library and find the right book first. The analogy applies to multi-level computer storage ... hard drive / RAM / several levels of processor cache)
People use list for a sequentially accessible O(n) collection. The sentences of text on a page of a book are a list. To find the fifth sentence, you must read the first four.
I mention the meaning of the words list and array here for an important reason for professional programmers. Much of our work is maintaining existing code. In our justifiable rush to get things done, sometimes we grab the first collection class that comes to hand, and sometimes we grab the wrong one. For example, we might grab a list O(n) rather than an array O(1) or a hash O(1, maybe). The collection we grab works well for our tests. But, boom!, performance falls over just when the application gets successful and scales up to holding a lot of data. This happens all the time.
To remedy that kind of problem we need a practical understanding of these access issues. I once inherited a project with a homegrown hashed dictionary class that consumed O(n cubed) when inserting lots of items into the dictionary. It took a lot of digging to get past the snazzy collection-class documentation to figure out what was really going on.
In Java, the Object type is a reference to an object rather than an object itself. That is, a variable of type Object can be thought of as a pointer that says “here’s where you should go to find your Object” rather than “I am an actual, honest-to-goodness Object.” Importantly, the size of this reference - the number of bytes used up - is the same regardless of what type of thing the Object variable refers to.
As a result, if you have an Object[], then the cost of indexing into that array is indeed O(1), since the entries in that array are all the same size (namely, the size of an object reference). The sizes of the objects being pointed at might not all be the same, as in your example, but the pointers themselves are always the same size and so the math you’ve given provides a way to do array indexing in constant time.
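The same picture can be sketched in C terms (struct names invented for illustration): the array slots are pointers of one fixed size, no matter how large the pointed-to objects are.

#include <stdio.h>

struct small { int a; };
struct big   { int a, b, c; char text[64]; };

int main(void) {
    struct small s = { 1 };
    struct big   b = { 1, 2, 3, "hello" };
    void *arr[2] = { &s, &b };   /* each slot is exactly one pointer wide */

    /* arr[i] lives at address(arr) + i * sizeof(void *): O(1) to index,
       regardless of the size of the structs being pointed at. */
    printf("%p %p\n", arr[0], arr[1]);
    return 0;
}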
The answer depends on context.
It's really common in some textbooks to treat array access as O(1) because it simplifies analysis.
And in fairness, on today's architectures it is O(1) in CPU instructions.
But:
As the dataset grows toward infinity, it no longer fits in memory. If the "array" is implemented as a database structure spread across multiple machines, you'll end up with a tree structure and probably logarithmic worst-case access times.
If you don't care about data size going to infinity, then big-O notation may not be the right fit for your situation.
On real hardware, memory accesses are not all equal: there are many layers of caches, and cache misses cost hundreds or thousands of cycles. The O(1) model for memory access tends to ignore that.
In theory work, random-access machines access memory in O(1), but Turing machines cannot, and hierarchical cache effects tend to be ignored. Some models, like the transdichotomous RAM, try to account for this.
In short, this is a property of your model of computation. There are many valid and interesting models of computation and what to choose depends on your needs and your situation.
In general, array denotes a fixed-size memory range storing elements of the same size. If we stick to this usual concept of an array, then when objects are its members, under the hood the array stores object references; indexing the i'th element fetches the reference/pointer stored there in O(1), and resolving the address it points to is the language's job.
However, there are arrays which do not comply with this definition. For example, in JavaScript you can easily add items to an array, which suggests that JavaScript arrays are somewhat different from an allocated fixed-size range of elements of the same size/type. Also, in JavaScript you can add elements of any type to an array. So, in general I would say the complexity is O(1), but there are quite a few important exceptions to this rule, depending on the technology.
I'm implementing a hash table that handles collisions with Robin Hood hashing. However, previously I had chaining instead, and the process of inserting almost 1 million keys was pretty much instantaneous. The same doesn't happen with Robin Hood hashing, which I found strange since I had the impression it was much quicker. So what I want to ask is whether my insertion function is properly implemented. Here's the code:
typedef struct hashNode {
    char *word;
    int freq;               // not utilized in the insertion
    int probe;              // distance from the calculated index to the actual index it was inserted at
    struct hashNode *next;  // not utilized in the insertion
    struct hashNode *base;  // not utilized in the insertion
} hashNode;

typedef hashNode *hash_ptr;

hash_ptr hashTable[NUM_WORDS] = {NULL}; // NUM_WORDS = 1000000
// Number of actual entries = 994707
hash_ptr swap(hash_ptr node, int index) {
    hash_ptr temp = hashTable[index];
    hashTable[index] = node;
    return temp;
}

static void insertion(hash_ptr node, int index) {
    while (hashTable[index]) {
        if (node->probe > hashTable[index]->probe) {
            node = swap(node, index);
        }
        node->probe++;
        index++;
        if (index > NUM_WORDS) index = 0;
    }
    hashTable[index] = node;
}
To contextualize everything:
the node parameter is the new entry.
the index parameter is where the new entry will be, if it isn't occupied.
The Robin Hood algorithm is very clever, but it is just as dependent on having a good hash function as any other open-addressing technique.
As a worst case, consider the worst possible hash function:
int hash(const char* key) { return 0; }
Since this will map every item to the same slot, it is easy to see that the total number of probes is quadratic in the number of entries: the first insert succeeds on the first probe; the second insert requires two probes; the third one three probes; and so on, leading to a total of n(n+1)/2 probes for n inserts. This is true whether you use simple linear probing or Robin Hood probing.
Interestingly, this hash function might have no impact whatsoever on insertion into a chained hash table if -- and this is a very big if -- no attempt is made to verify that the inserted element is unique. (This is the case in the code you present, and it's not totally unreasonable; it is quite possible that the hash table is being built as a fixed lookup table and it is already known that the entries to be added are unique. More on this point later.)
In the chained hash implementation, the non-verifying insert function might look like this:
void insert(hashNode *node, int index) {
    node->next = hashTable[index];
    hashTable[index] = node;
}
Note that there is no good reason to use a doubly-linked list for a hash chain, even if you are planning to implement deletion. The extra link is just a waste of memory and cycles.
The fact that you can build the chained hash table in (practically) no time at all does not imply that the algorithm has built a good hash table. When it comes time to look a value up in the table, the problem will be discovered: the average number of probes to find the element will be half the number of elements in the table. The Robin Hood (or linear) open-addressed hash table has exactly the same performance, because all searches start at the beginning of the table. The fact that the open-addressed hash table was also slow to build is probably almost irrelevant compared to the cost of using the table.
We don't need a hash function quite as terrible as the "always use 0" function to produce quadratic performance. It's sufficient for the hash function to have an extremely small range of possible values (compared with the size of the hash table). If the possible values are equally likely, the chained hash will still be quadratic but the average chain length will be divided by the number of possible values. That's not the case for the linear/R.Hood probed hash, though, particularly if all the possible hash values are concentrated in a small range. Suppose, for example, the hash function is
int hash(const char *key) {
    unsigned char h = 0;
    while (*key) h += *key++;
    return h;
}
Here, the range of the hash is limited to [0, 256). If the table size is much larger than 256, this will rapidly reduce to the same situation as the constant hash function. Very soon the first 256 entries in the hash table will be filled, and every insert (or lookup) after that point will require a linear search over a linearly-increasing compact range at the beginning of the table. So the performance will be indistinguishable from the performance of the table with a constant hash function.
None of this is intended to motivate the use of chained hash tables. Rather, it is pointing to the importance of using a good hash function. (Good in the sense that the result of hashing a key is uniformly distributed over the entire range of possible node positions.) Nonetheless, it is generally the case that clever open-addressing schemes are more sensitive to bad hash functions than simple chaining.
Open-addressing schemes are definitely attractive, particularly in the case of static lookup tables. They are more attractive in the case of static lookup tables because deletion can really be a pain, so not having to implement key deletion removes a huge complication. The most common solution for deletion is to replace the deleted element with a DELETED marker element. Lookup probes must still skip over the DELETED markers, but if the lookup is going to be followed by an insertion, the first DELETED marker can be remembered during the lookup scan, and overwritten by the inserted node if the key is not found. That works acceptably, but the load factor has to be calculated with the expected number of DELETED markers, and if the usage pattern sometimes successively deletes a lot of elements, the real load factor for the table will go down significantly.
In the case where deletion is not an issue, though, open-addressed hash tables have some important advantages. In particular, they have much lower overhead when the payload (the key and associated value, if any) is small. In the case of a chained hash table, every node must contain a next pointer, and the hash table index must be a vector of pointers to node chains. For a hash table whose key occupies only the space of a pointer, the overhead is 100%, which means that a linear-probed open-addressed hash table with a load factor of 50% occupies a little less space than a chained table whose index vector is fully occupied and whose nodes are allocated on demand.
Not only is the linear probed table more storage efficient, it also provides much better reference locality, which means that the CPU's RAM caches will be used to much greater advantage. With linear probing, it might be possible to do eight probes using a single cacheline (and thus only one slow memory reference), which could be almost eight times as fast as probing through a linked list of randomly allocated table entries. (In practice, the speed up won't be this extreme, but it could well be more than twice as fast.) For string keys in cases where performance really matters, you might think about storing the length and/or the first few characters of the key in the hash entry itself, so that the pointer to the full character string is mostly only used once, to verify the successful probe.
But both the space and time benefits of open addressing are dependent on the hash table being an array of entries, not an array of pointers to entries as in your implementation. Putting the entries directly into the hash index avoids the possibly-significant overhead of a pointer per entry (or at least per chain), and permits the efficient use of memory caches. So that's something you might want to think about in your final implementation.
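As a sketch of that entries-directly-in-the-table layout (the field choices are illustrative; NUM_WORDS is taken from the question):

#include <stdint.h>

#define NUM_WORDS 1000000   /* as in the question */

typedef struct {
    char    *word;   /* NULL marks an empty slot */
    int      freq;
    uint32_t hash;   /* cached hash value; see the last comments below */
} hashEntry;

/* Probing walks this array directly: several slots share a cache line,
   and there is no per-entry pointer chasing. */
static hashEntry table[NUM_WORDS];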
Finally, it's not necessarily the case that open addressing makes deletion complicated. In a cuckoo hash (and the various algorithms which it has inspired in recent years), deletion is no more difficult than deletion in a chained hash, and possibly even easier. In a cuckoo hash, any given key can only be in one of two places in the table (or, in some variants, one of k places for some small constant k) and a lookup operation only needs to examine those two places. (Insertion can take longer, but it is still expected O(1) for load factors less than 50%.) So you can delete an entry simply by removing it from where it is; that will have no noticeable effect on lookup/insertion speed, and the slot will be transparently reused without any further intervention being necessary. (On the down side, the two possible locations for a node are not adjacent and they are likely to be on separate cache lines. But there are only two of them for a given lookup. Some variations have better locality of reference.)
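A small sketch of that two-slot property (not the poster's code; the two hash functions are arbitrary illustrative choices):

#include <string.h>

#define SIZE 1024u
static char *slots[SIZE];

/* Two simple, independent hash functions (illustrative: djb2 and FNV-1a). */
static unsigned h1(const char *k) {
    unsigned h = 5381;
    while (*k) h = h * 33 + (unsigned char)*k++;
    return h % SIZE;
}
static unsigned h2(const char *k) {
    unsigned h = 2166136261u;
    while (*k) { h ^= (unsigned char)*k++; h *= 16777619u; }
    return h % SIZE;
}

/* A key can only live in slot h1(key) or h2(key), so deletion touches
   at most two slots and leaves no tombstone behind. */
int cuckoo_delete(const char *key) {
    unsigned a = h1(key), b = h2(key);
    if (slots[a] && strcmp(slots[a], key) == 0) { slots[a] = NULL; return 1; }
    if (slots[b] && strcmp(slots[b], key) == 0) { slots[b] = NULL; return 1; }
    return 0;   /* not present */
}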
A few last comments on your Robin Hood implementation:
I'm not totally convinced that a 99.5% load factor is reasonable. Maybe it's OK, but the difference between 99% and 99.5% is so tiny that there is no obvious reason to tempt fate. Also, the rather slow remainder operation during the hash computation could be eliminated by making the size of the table a power of two (in this case 1,048,576) and computing the remainder with a bit mask. The end result might well be noticeably faster.
Caching the probe count in the hash entry does work (in spite of my earlier doubts) but I still believe that the suggested approach of caching the hash value instead is superior. You can easily compute the probe distance; it's the difference between the current index in the search loop and the index computed from the cached hash value (or the cached starting index location itself, if that's what you choose to cache). That computation does not require any modification to the hash table entry, so it's cache friendlier and slightly faster, and it doesn't require any more space. (But either way, there is a storage overhead, which also reduces cache friendliness.)
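A sketch of those two suggestions together, with illustrative names: a power-of-two table of 1 << 20 slots, remainder via bit mask, and the probe distance recomputed from a cached hash rather than stored per entry.

#include <stdint.h>

#define TABLE_BITS 20
#define TABLE_SIZE (1u << TABLE_BITS)   /* 1,048,576 */
#define MASK       (TABLE_SIZE - 1)

static inline uint32_t home_index(uint32_t hash) {
    return hash & MASK;                 /* replaces hash % TABLE_SIZE */
}

static inline uint32_t probe_distance(uint32_t hash, uint32_t i) {
    /* How far slot i is from the entry's home slot, with wraparound;
       unsigned arithmetic makes the subtraction safe. */
    return (i - home_index(hash)) & MASK;
}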
Finally, as noted in a comment, you have an off-by-one error in your wraparound code; it should be
if(index >= NUM_WORDS) index = 0;
With the strict greater-than test as written, your next iteration will try to use the entry at index NUM_WORDS, which is out of bounds.
Just to leave it here: the 99% fill rate is not reasonable. Neither is 95%, nor 90%. I know they said it in the paper; they are wrong. Very wrong. Use 60%-80%, as you should with open addressing.
Robin Hood hashing does not change the number of collisions when you are inserting; the average (and the total) number of collisions remains the same. Only their distribution changes: Robin Hood improves the worst cases. But for the averages it's the same as linear, quadratic, or double hashing.
at 75% you get about 1.5 collisions before a hit
at 80% about 2 collisions
at 90% about 4.5 collisions
at 95% about 9 collisions
at 99% about 30 collisions
I tested on a 10,000-element random table. Robin Hood hashing cannot change this average, but it improves the 1% worst-case number of collisions from 150-250 misses (at a 95% fill rate) to about 30-40.
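(Those averages are consistent with the classical estimate for a successful search under linear probing, roughly (1/2)(1 + 1/(1 - a)) probes at load factor a, i.e. about a/(2(1 - a)) collisions: 1.5 at 75%, 2 at 80%, 4.5 at 90%, 9.5 at 95%. That formula is why the cost blows up so sharply as the table approaches full.)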
I think it's probably a simple answer but I thought I'd quickly check...
Let's say I'm adding Ints to an array at various points in my code, and then I want to find out if the array contains a certain Int in the future...
var array = [Int]()
array.append(2)
array.append(4)
array.append(5)
array.append(7)
if array.contains(7) { print("There's a 7 alright") }
Is this heavier, performance-wise, than if I created a dictionary?
var dictionary = [Int:Int]()
dictionary[7] = 7
if dictionary[7] != nil { print("There's a value for key 7")}
Obviously there are reasons like wanting to eliminate the possibility of duplicate entries of the same number... but I could also do that with a Set. I'm mainly just wondering about the performance of dictionary[key] vs array.contains(value).
Thanks for your time
Generally speaking, dictionaries provide constant-time, i.e. O(1), access on average, which means checking whether a value exists and updating it are faster than with an Array, where, depending on implementation, the same operations can be O(n). If those are things you need to optimize for, then a Dictionary is a good choice. However, since dictionaries enforce uniqueness of keys, you cannot insert multiple values under the same key.
Based on the question, I would recommend for you to read Ray Wenderlich's Collection Data Structures to get a more holistic understanding of data structures than I can provide here.
I did some sampling!
I edited your code so that the print statements are empty.
I ran the code 1,000,000 times. Each time I measured how long it took to access the dictionary and the array separately, then took the difference (arrTime - dictTime) and saved it.
Once it finished I took the average of the results.
The result is 23150, meaning that over 1,000,000 tries the dictionary was faster to access by 23150 ns on average (arrTime - dictTime was positive).
The max difference was 2426737 and the min was -5711121.
Is there a faster way to do this:
Vector3 *points = malloc(maxBufferCount * sizeof(Vector3));
// put content into the buffer and increment bufferCount
...

// remove one point at index `removeIndex`
bufferCount--;
for (int j = removeIndex; j < bufferCount; j++) {
    points[j] = points[j + 1];
}
I'm asking because I have a huge buffer from which I remove elements quite often.
No, sorry - removing elements from the middle of an array takes O(n) time. If you really want to modify the elements often (i.e. remove certain items and/or add others), use a linked list instead; that has constant-time removal and addition. In contrast, arrays have constant lookup time, while linked lists can only be accessed (read) in linear time. So decide what you will do more frequently (reading or writing) and choose the appropriate data structure based upon that decision.
Note, however, that I (kindly) assumed you are not trying to commit the crime of premature optimization. If you haven't benchmarked that this is the bottleneck, then probably just don't worry about it.
Unless you know it's a bottleneck you can probably let the compiler optimize for you, but you could try memmove.
The selected answer here is pretty comprehensive: When to use strncpy or memmove?
A description is here: http://www.kernel.org/doc/man-pages/online/pages/man3/memmove.3.html
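Reusing the variables from the question, the copy loop collapses to a single call (memmove rather than memcpy, because the source and destination ranges overlap):

#include <string.h>

bufferCount--;
memmove(&points[removeIndex], &points[removeIndex + 1],
        (bufferCount - removeIndex) * sizeof(Vector3));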
A few things to say. The memmove function will probably copy faster than you can; it is often optimised by the writers of your particular compiler to use special instructions which aren't available in the C language without inline assembler. I believe these are called SIMD instructions (Single Instruction, Multiple Data)? Somebody correct me if I am wrong.
If you can save up items to be removed, then you can optimise by sorting the list of items you wish to remove and doing a single pass, as sketched below. It isn't hard, but it takes some funny arithmetic.
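A sketch of that single pass, assuming the Vector3 type from the question; the names are illustrative, and remove_idx must be sorted ascending with no duplicates:

#include <stddef.h>

/* Compacts points[] in one pass, skipping the indices listed in
   remove_idx; returns the new element count. */
size_t remove_batch(Vector3 *points, size_t count,
                    const size_t *remove_idx, size_t n_remove) {
    size_t src = 0, dst = 0, r = 0;
    for (; src < count; src++) {
        if (r < n_remove && src == remove_idx[r]) {
            r++;                         /* drop this element */
        } else {
            points[dst++] = points[src]; /* keep it, shifted left */
        }
    }
    return dst;
}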
Also, you could just store each item in a linked list; removing an item is trivial, but you lose random access to your array.
Finally, you can have an additional array of pointers, the same size as your array, with each pointer pointing to an element. Then you can access the array through double indirection, sort the array by swapping pointers, and delete items by setting their pointer to NULL.
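A sketch of that double-indirection setup, again assuming the Vector3 type from the question:

#include <stdlib.h>

/* Builds the pointer array over points[]; access elements as ptrs[i],
   sort by swapping ptrs[i] values, and "delete" with ptrs[i] = NULL. */
Vector3 **build_index(Vector3 *points, size_t count) {
    Vector3 **ptrs = malloc(count * sizeof *ptrs);
    if (!ptrs) return NULL;
    for (size_t i = 0; i < count; i++)
        ptrs[i] = &points[i];
    return ptrs;
}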
Hope this gives you some ideas. There usually is a way to optimise things, but then it becomes more application-specific.