I have a large, 80-bit index and its corresponding data to be stored in a data structure I need to search. Can I use the 80-bit index as the key in a hash table? Or is there a better alternative data structure that gives constant-time lookup (search)?
EDIT:
I think my question was not clear. Here is the setup: I have millions of files, and for each one I will produce an 80-bit cryptographic hash trapdoor (to represent the file securely). Each 80-bit trapdoor is to be stored with its data in a data structure such as a hash table. Since the domain of 80-bit trapdoors is larger than the range of the hash table, there will certainly be collisions, yet I need unique <80-bit trapdoor, data> pairs to be stored in the data structure. How can I achieve this with a hash table? Or is there another data structure that would work?
EDIT 2:
Say I created a hash table and a collision occurred when adding the keys x and y (in that order) because the hash function generated the same index i for both. Using a collision resolution technique (e.g. double hashing), y is inserted at a different location j, not i. I understand up to this point. Now if I search for the key y, does the hash table probe location i or j? If not i, how does it find j (the exact desired record)? Does it store a counter for the number of collisions (probes)?
You should probably review how hash tables work.
The object you want to use as an index is passed through a hash function, and the resulting value is used to find the memory position where you should place, or look for, the data associated with that index value.
If you need constant-time lookups, go for a hash table. Just be sure to use an appropriate hash function.
You can use whatever you want as the index of a hash table as long as you provide a hash function for it. I don't think there is a better alternative if you want constant-time access.
Related
I have a series of fixed-size arrays of binary values (individuals from a genetic algorithm) that I would like to associate with a floating-point value (a fitness value). Such a lookup table would be fairly large, constrained by available memory. Given the nature of the keys, is there a hash function that would guarantee no collisions? I tried a few things, but they all result in collisions. What other data structure could I use to build this lookup system?
To answer your questions:
There is no hash function that guarantees no collisions unless the hash function encodes the bit array completely, meaning that given the hash you can reconstruct the bit array. Such a function would be a compression function. If your arrays contain a lot of redundant information (for example, most of the values are zeros), compressing them could be useful to reduce the total size of the lookup table.
A question on compressing bit array in C is answered here: Compressing a sparse bit array
Since you have most of the bits set to zero, the easiest solution would be to write a function that converts your bit array into an integer array recording the positions of the bits that are set to 1, plus a function that does the opposite if you need the bit array again. You can then store only the encoded array in the hashmap, as sketched below.
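A minimal sketch of that encoding in C (the bit layout and function names are illustrative; the positions buffer is assumed large enough to hold every set bit):

#include <stddef.h>

/* Writes the positions of the set bits into 'positions'; returns how many. */
size_t encode(const unsigned char *bits, size_t nbits, size_t *positions) {
    size_t count = 0;
    for (size_t i = 0; i < nbits; i++)
        if (bits[i / 8] & (1u << (i % 8)))
            positions[count++] = i;
    return count;
}

/* Rebuilds the bit array from the position list ('bits' must be zeroed first). */
void decode(const size_t *positions, size_t count, unsigned char *bits) {
    for (size_t i = 0; i < count; i++)
        bits[positions[i] / 8] |= 1u << (positions[i] % 8);
}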
Another option to reduce the total size of the lookup table is to erase the old values. Since you are using a genetic algorithm, the population should change over time and old values should become useless, you could periodically remove the older values from the lookup table.
I have a requirement to do a lookup based on a large number, which could fall anywhere in the range 1 to 2^32. Based on the input, I need to return some other data structure. My question is: what data structure should I use to hold this effectively?
I would have used an array, giving me O(1) lookup, if the numbers were in a small range, say 1 to 5000. But when the input number gets large, an array becomes unrealistic because the memory requirements would be huge.
I am hence looking for a data structure that yields results quickly and is not very heavy on memory.
Any clues anybody?
EDIT:
It would not make sense to use an array, since I may have only 100 or 200 indices to store.
Abhishek
unordered_map or map, depending on what version of C++ you are using.
http://www.cplusplus.com/reference/unordered_map/unordered_map/
http://www.cplusplus.com/reference/map/map/
A simple solution in C, given you've stated at most 200 elements, is just an array of structs, each holding an index and a data pointer (or two parallel arrays, one of indices and one of data pointers, where index[i] corresponds to data[i]). Linearly search the array for the index you want. With a small number of elements (200), that will be very fast.
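A minimal sketch of that approach, with illustrative names:

#include <stddef.h>

struct record {
    unsigned index;   /* the lookup number */
    void *data;       /* pointer to the associated data structure */
};

/* Linear search over at most ~200 records; returns NULL if not found. */
void *find(const struct record *records, size_t n, unsigned index) {
    for (size_t i = 0; i < n; i++)
        if (records[i].index == index)
            return records[i].data;
    return NULL;
}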
One possibility is a Judy array, which is a sparse associative array. There is a C implementation available. I don't have any direct experience with them, although they look interesting and could be worth experimenting with if you have the time.
Another (probably more orthodox) choice is a hash table. Hash tables are data structures which map keys to values, and provide fast lookup and insertion times (provided a good hash function is chosen). One thing they do not provide, however, is ordered traversal.
There are many C implementations. A quick Google search turned up uthash, which appears suitable, particularly because it allows you to use any value type as the key (many implementations assume a string key). In your case you want an integer key.
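A hedged sketch of how that might look with uthash's documented integer-key macros (HASH_ADD_INT and HASH_FIND_INT); for keys wider than an int, uthash also provides generic HASH_ADD/HASH_FIND macros that take an explicit key length:

#include <stdlib.h>
#include "uthash.h"

struct item {
    int key;            /* the number to look up (int-sized here for simplicity) */
    void *data;         /* the associated data structure */
    UT_hash_handle hh;  /* makes this struct hashable by uthash */
};

struct item *table = NULL;   /* the table is just a head pointer in uthash */

void put(int key, void *data) {
    struct item *it = malloc(sizeof *it);
    it->key = key;
    it->data = data;
    HASH_ADD_INT(table, key, it);
}

struct item *get(int key) {
    struct item *it;
    HASH_FIND_INT(table, &key, it);
    return it;  /* NULL if the key is absent */
}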
I want to construct a data structure such that, given an 80-bit key (an unsigned char array), it stores that key and its corresponding value at a unique index (dense indexing), since an index is usually 32 bits, not 80. It should also provide constant-time (worst-case) search.
If I am correct, a hash table with an open-addressing collision mechanism can achieve this, right?
Or is there any better data structure for these objectives?
Note: My 80-bit key is of type unsigned char since I'm working in C.
I understand that some hash tables use "buckets", where each bucket is a linked list of "entries".
HashTable
-size // total number of buckets available
-count // number of buckets currently in use
-buckets // array of buckets, each a linked list of entries
Entry
-key // key identifier
-value // the object you are storing for reference
-next // the next entry in the same bucket
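In C-like terms, that layout might look like this (a sketch; the field types and names are illustrative):

#include <stddef.h>

struct entry {
    char *key;              /* key identifier */
    void *value;            /* the object being stored */
    struct entry *next;     /* next entry in the same bucket */
};

struct hash_table {
    size_t size;            /* total number of buckets */
    size_t count;           /* number of buckets in use */
    struct entry **buckets; /* array of bucket heads, each a linked list */
};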
In order to get the bucket by index, you have to call:
myBucket = someHashTable[hashIntValue]
Then, you iterate the linked list of entries until you find the one you are looking for, or you reach null.
Does the hash function always return NUMBER % HashTable.size, so that you stay within the limit? Is that how a hash function should work?
Mathematically speaking, a hash function is usually defined as a mapping from the universe of elements you want to store in the hash table to the range {0, 1, 2, ..., numBuckets - 1}. This means that in theory, there's no requirement whatsoever that you use the mod operator to map some integer hash code into the range of valid bucket indices.
However, in practice, programmers almost universally use a generic hash code that produces a uniformly distributed integer value and then mod it down so that it fits in the range of the buckets. This lets hash codes be developed independently of the number of buckets used in the hash table.
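As a sketch, here is the well-known djb2 string hash with the final modulo that maps the generic hash code into the bucket range (the function names are illustrative):

#include <stddef.h>

/* djb2: a simple, widely used string hash. */
unsigned long djb2(const char *s) {
    unsigned long h = 5381;
    while (*s)
        h = h * 33 + (unsigned char)*s++;
    return h;
}

/* Reduce the generic hash code to a valid bucket index. */
size_t bucket_index(const char *key, size_t num_buckets) {
    return djb2(key) % num_buckets;
}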
EDIT: Your description of a hash table is called a chained hash table and uses a technique called closed addressing. There are many other implementations of hash tables besides the one you've described. If you're curious - and I hope you are! :-) - you might want to check out the Wikipedia page on the subject.
What is a hash table?
A hash table, also known as a hash map, is a data structure used to implement an associative array: a structure that can map keys to values.
How does it work?
A hash table uses a hash function to compute an index into an array of buckets or slots, from which the correct value can be found.
Advantages:
In a well-dimensioned hash table, the average cost for each lookup is independent of the number of elements stored in the table.
Many hash table designs also allow arbitrary insertions and deletions of key-value pairs.
In many situations, hash tables turn out to be more efficient than search trees or any other table lookup structure.
Disadvantages:
Hash tables are not effective when the number of entries is very small. (In some cases, the high cost of computing the hash function can be mitigated by saving the hash value together with the key.)
Uses:
They are widely used in many kinds of computer software, particularly for associative arrays, database indexing, caches and sets.
There is no predefined rule for how a hash function should behave. You can have all of your values map to index 0 - a perfectly valid hash function (performs poorly, but works).
Of course, if your hash function returns a value outside the range of indices of your associated array, it won't work correctly. That's not to say, however, that you need to use the formula (number % TABLE_SIZE).
No, the table is typically an array of entries. You don't iterate until you find the same hash; you use the hash result (usually hash modulo numBuckets) to index directly into the array of entries. That is what gives you the O(1) behaviour (iterating would be O(n)).
When you try to store two different objects with the same hash result (a "hash collision"), you have to find some way to make space. Implementations vary in how they handle collisions: you can chain all the objects with the same hash into a linked list, or use some rehashing to store the object in a different entry of the table, as in the double-hashing sketch below.
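To connect this to the double-hashing question above: a search simply replays the same probe sequence used at insertion and compares the stored key at each slot, so no collision counter is needed. A minimal sketch, assuming double hashing with no deletions (the table layout and hash parameters are illustrative):

#include <stddef.h>
#include <string.h>

struct slot {
    int used;
    unsigned char key[10];  /* e.g. an 80-bit key */
    void *value;
};

/* h1 is the primary hash, h2 the probe step from the second hash
   (h2 must be nonzero and ideally coprime with size). */
void *lookup(struct slot *table, size_t size,
             const unsigned char *key, size_t h1, size_t h2) {
    for (size_t probe = 0; probe < size; probe++) {
        struct slot *s = &table[(h1 + probe * h2) % size];
        if (!s->used)
            return NULL;  /* empty slot: the key was never inserted */
        if (memcmp(s->key, key, sizeof s->key) == 0)
            return s->value;  /* key comparison identifies the right record */
    }
    return NULL;  /* table scanned completely without a match */
}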
I have to do a table lookup to translate from input A to output A'. I have a function that takes A and should return A'. Using databases or flat files is not possible for certain reasons, so I have to hardcode the lookup in the program itself.
Which would be more efficient (space-wise and time-wise, considered separately): a hashmap with A as the key and A' as the value, or switch-case statements in the function?
The table is a string to string lookup with a size of about 60 entries.
If speed is absolutely critical, I would consider perfect hashing. Otherwise I'd use an array/vector of string-to-string pairs, created statically in sorted order, and use binary search, as sketched below. I'd also write a small test program to check that the speed and memory constraints are met.
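A minimal sketch of the sorted-array-plus-binary-search option, using the standard bsearch; the table contents here are placeholders:

#include <stdlib.h>
#include <string.h>

struct pair { const char *in; const char *out; };

/* Must stay sorted by .in for bsearch to work. */
static const struct pair table[] = {
    { "alpha",   "A" },
    { "bravo",   "B" },
    { "charlie", "C" },
    /* ... roughly 60 entries ... */
};

static int cmp(const void *key, const void *elem) {
    return strcmp(key, ((const struct pair *)elem)->in);
}

const char *translate(const char *in) {
    const struct pair *p = bsearch(in, table,
                                   sizeof table / sizeof table[0],
                                   sizeof table[0], cmp);
    return p ? p->out : NULL;  /* NULL if the input is unknown */
}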
I believe the switch and the table lookup will be equivalent (although you should run some tests on the compiler being used). A modern C compiler will implement a big switch as a lookup table. The table lookup can be created more easily with a macro or a scripting language.
For both of those solutions, the input A must be an integer. If it is not, one fallback is to implement a huge if-else chain.
If you have strings, you can create two parallel arrays, one for inputs and one for outputs (the arrays must be the same size so that the indices correspond). Then iterate over the input array looking for a match; based on the index where you find it, return the corresponding output string.
Make a key that is fast to calculate, and hash it.
If the table is pretty static and unlikely to change in the future, you could check whether a few selected characters (at fixed indexes) of each "key" string yield a unique value K. If so, insert the "value" strings into a hash_table using the precalculated K value for each "key" string.
Although a hash method is fast, there is still the possibility of collisions (two inputs generating the same hash value). The fastest method depends on the data type of the input.
For integral types, the fastest table lookup method is an array: use the incoming datum as an index into the array. One problem with this method is that, for maximum speed, the array must cover the entire range of possible values; otherwise execution is slowed by translating the original value into an index for the array (rather like a hashing method).
For string input types, a nested lookup may be the fastest. One example is to break up the tables by length: a first array returns a pointer to the table to search based on length, e.g. char *sub_table = First_Array[5] for a string of length 5. These can be tailored to the input data; a sketch follows.
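A minimal sketch of that length-first scheme; MAX_LEN and the table layout are assumptions:

#include <stddef.h>
#include <string.h>

#define MAX_LEN 32

struct len_bucket {
    const char **keys;    /* all input strings of this length */
    const char **values;  /* parallel array of output strings */
    size_t n;             /* number of entries of this length */
};

static struct len_bucket by_length[MAX_LEN + 1];

const char *lookup(const char *s) {
    size_t len = strlen(s);
    if (len > MAX_LEN)
        return NULL;
    /* Only strings of the matching length are ever compared. */
    const struct len_bucket *b = &by_length[len];
    for (size_t i = 0; i < b->n; i++)
        if (strcmp(b->keys[i], s) == 0)
            return b->values[i];
    return NULL;
}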
Another method is to use a B-tree, which is a balanced search tree of "pages". Its behavior is similar to the nested arrays.
If you let us know the input type, we can better answer your question.