getting combination of indices of on bits from 2 bitmaps - c

This problem has two levels.
Level 1.
I have a 64-bit bitmap and I know that only a few of its bits are set to 1. Is there a way to get which bits are set without using branching?
e.g.___(0)___________________________________________________________(63)
BMP = 000000001000010000000000010000000000000000000000011000000000000
f(BMP) = {9, 14, 26, 51, 52}
Level 2.
Now I have two 64-bit bitmaps and I need the combinations of set bits across both.
e.g.____(0)___________________________________________________________(63)
BMP1 = 000000001000000000000000000000000000000000000000011000000000000
BMP2 = 000000000000010000000000010000000000000000000000000000000000000
f(BMP1, BMP2) = {(9,14), (9, 26), (51, 14), (51, 26), (52, 14), (52, 26)}
I know that the bitmaps are almost always sparse.
It would be great if the suggested solution could be extended to more than 2 bitmaps at a time, but I would rather have a method that works extremely fast for up to 2 and a little slower beyond that.
Even if a solution without branching is not possible, please suggest the fastest possible method with branching.
(Sorry for bad formatting)

You could store the possible bitfields in a hash table, if there are only relatively few of them, such as if you know no more than two bits are set and there are at most a few thousand possibilities.
Failing that, there are a few tricks you can do with two's-complement arithmetic and signed numbers to get the first set bit in a vector. v & -v isolates the lowest-order bit that is set in v. You can mask that bit off and repeat to get them all.
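A minimal C sketch of both levels, assuming GCC/Clang builtins (__builtin_ctzll) and the convention that bit 0 is the least significant bit (adjust if your numbering runs from the other end). The loop count equals the number of set bits, so the only data-dependent branch is the loop condition itself:

#include <stdint.h>
#include <stdio.h>

/* Level 1: write the indices of the set bits of bmp into out[] and
   return how many there were. */
static int set_bit_indices(uint64_t bmp, int out[64])
{
    int n = 0;
    while (bmp) {
        out[n++] = __builtin_ctzll(bmp); /* index of lowest set bit   */
        bmp &= bmp - 1;                  /* clear that bit and repeat */
    }
    return n;
}

/* Level 2: emit every pair (i, j) with bit i set in a and bit j set in b. */
static void print_combinations(uint64_t a, uint64_t b)
{
    int ia[64], ib[64];
    int na = set_bit_indices(a, ia);
    int nb = set_bit_indices(b, ib);
    for (int x = 0; x < na; x++)
        for (int y = 0; y < nb; y++)
            printf("(%d, %d)\n", ia[x], ib[y]);
}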

Related

Compressing a sparse bit array

I have arrays of 1024 bytes (8192 bits) which are mostly zero.
Between 0.01% and 10% of bits will be set (random, no pattern).
How could these be compressed, given the lack of structure and the relatively small size?
(My first thought was to store the distances between set bits. I need 13 bits for each distance, but at worst case 10% occupancy this needs 13 * 816 / 8 = 1326 bytes, which is not an improvement.)
This is for ultra-low bandwidth comms, so every byte matters.
I've dealt deeply with a similar problem, but my sets are much bigger (30 million possible values with between 1 and 30 million elements in each set), so they both gain much more from compression, and the compression metadata is insignificant compared to the size of the data. I have never gone down to squeezing things into units smaller than uint16_t, so what I write below might not apply if you start chopping 13-bit values into pieces. It feels like it should work, but caveat emptor.
What I've found works is to employ several strategies that depend on the particular data we have. The good news is that the count of elements in each set is a very good indicator of which compression strategy will work best for a particular set, so the only metadata you need is a count of elements in the set. In my data format the first and only metadata value (I'll be unspecific and just call it a "value"; you can squeeze things into bytes, 16-bit values or 13-bit values however you like) is the count of elements in the set; the rest is just the encoding of the set elements.
The strategies are:
If very few elements are in the set, you can't do better than an array that says "1, 4711, 8140", so in this case the data is encoded as: [3, 1, 4711, 8140]
If almost all elements are in the set, you can just keep track of elements that aren't. For example [8190, 17, 42].
If around half of the elements are in the set, you pretty much can't do better than a bitmap, so you get [4000, {bitmap}]; this is the only case where your data ends up longer than the strictly uncompressed form.
If more than "a few" but many fewer than "around half" of the elements are set, I found another strategy. Divide the bits of the possible values in half. Let's say we have 2^16 possible values (it's easier to describe; it should probably work for 2^13 too). The values are divided into 256 ranges, each range with 256 possible values. We then have an array of 256 bytes, where each byte describes how many values fall in its range (so byte 0 tells us how many elements are in [0,255], byte 1 covers [256,511], etc.). Immediately after follow arrays with the values in each range mod 256. The trick here is that while every element in the set encoded as an array (strategy 1) would be 2 bytes, in this scheme each element is only 1 byte plus 256 static bytes for the per-range counts. This means that as soon as we have more than 256 elements in the set, switching from strategy 1 to 4 saves us space.
Strategy 4 can be refined (probably meaningless if your data is random as you mention, but my data sometimes had more patterns, so it worked for me). Since we still need 8 bits for each element in the previous encoding, as soon as a sub-array goes over 32 elements (256 bits = 32 bytes, the size of a bitmap over that range) we can store it as a bitmap instead. This is also a good breakpoint for switching from strategy 4/5 to 3: if all the arrays in this strategy are just bitmaps, then we should just use strategy 3 (it's more complicated than that, but the breakpoints between strategies can be precomputed accurately enough that you'll end up picking the most efficient strategy nearly every time).
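A rough C sketch of the strategy 4 layout under the 2^16 assumption above, just to make the encoding concrete; the function name and buffer handling are illustrative, not part of the original scheme:

#include <stdint.h>
#include <stddef.h>

/* Strategy 4 sketch: out[0..255] holds the element count of each
   256-value range (assumed to fit in a byte), followed by the low byte
   of every element, grouped by range. `elems` must be sorted ascending.
   Returns the number of bytes written: 256 + n. */
static size_t encode_strategy4(const uint16_t *elems, size_t n, uint8_t *out)
{
    for (int i = 0; i < 256; i++)
        out[i] = 0;
    for (size_t i = 0; i < n; i++)
        out[elems[i] >> 8]++;           /* count per high-byte bucket */

    size_t pos = 256;
    for (size_t i = 0; i < n; i++)
        out[pos++] = elems[i] & 0xFF;   /* low byte; sorted input keeps buckets contiguous */
    return pos;
}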
I have only vaguely tried saving deltas between numbers in the set. Quick experiments showed that they weren't really much more efficient than the strategies above, they had unpredictable degenerate cases, and, most importantly, the application I work with really likes not having to deserialise its data, just using it raw straight from disk (mmap).

fast poker hand ranking

I am working on a simulation of poker and now I have to rank hands effectively:
Every hand is a combination of 5 cards and is represented as a uint64_t.
Every bit from 0 (Ace of Spades), 1 (Ace of Hearts) to 51 (Two of Clubs) indicates if the corresponding card is part (bit == 1) or isn't part (bit == 0) of the hand. The bits from 52 to 63 are always set to zero and don't hold any information.
I already know how I could theoretically generate a table, so that every valid hand can be mapped to a rank (represented as a uint16_t) between 1 (2, 3, 4, 5, 7 - not all of the same suit) and 7462 (Royal Flush), and all the others to rank zero.
So a naive lookup table (with the integer value of the hand as index) would have an enormous size of
2 bytes * 2^52 >= 9.007 PB.
Most of this memory would be filled with zeros, because almost all uint64_t values from 0 to 2^52-1 are invalid hands and therefore have a rank of zero.
The valuable data occupies only
2 bytes * 52!/(47!*5!) = 5.198 MB.
What method can I use for the mapping so that I only have to store the ranks of the valid hands plus some overhead (max. 100 MB) and still don't have to do an expensive search...
It should be as fast as possible!
If you have any other ideas, you're welcome! ;)
You need only a table of size 13^5 * 2, with the extra bit of information saying whether all the cards are of the same suit. If for some reason 'hearts' outranks 'diamonds', you still need at most a table of size 13^6, as the last piece of information encodes '0 = no pattern, 1 = all spades, 2 = all hearts, etc.'.
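As a sketch of that indexing in C (assuming the five ranks 0-12 are kept in a canonical sorted order so that equivalent hands produce the same index; rank_of is a hypothetical table built offline):

#include <stdint.h>

extern uint16_t rank_of[];   /* hypothetical table, 13^5 * 2 = 742,586 entries */

/* Index a hand by its five sorted ranks (0-12) plus a flush flag. */
static uint32_t hand_index(const uint8_t r[5], int is_flush)
{
    uint32_t idx = 0;
    for (int i = 0; i < 5; i++)
        idx = idx * 13 + r[i];        /* base-13 digits */
    return idx * 2 + (is_flush ? 1 : 0);
}

A lookup is then rank_of[hand_index(ranks, is_flush)].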
A hash table is probably also a good and fast approach -- Creating a table from nCk(52,5) combinations doesn't take much time (compared to all possible hands). One would, however, need to store 65 bits of information for each entry to store both the key (52 bits) and the rank (13 bits).
To speed up evaluation of a hand, one first rules out illegal combinations via the mask:
if (popcount(mask) != 5) ...; afterwards one can use enough bits from e.g. crc32(mask), which has instruction-level support on the i7 architecture (SSE4.2) at least.
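A hedged sketch of that check and hash in C, assuming GCC/Clang builtins and SSE4.2 (compile with -msse4.2); rank_table and TABLE_BITS are hypothetical placeholders and collision handling is omitted:

#include <stdint.h>
#include <nmmintrin.h>                   /* _mm_crc32_u64 */

#define TABLE_BITS 23                    /* illustrative table size: 2^23 entries */
#define TABLE_SIZE (1u << TABLE_BITS)

extern uint16_t rank_table[];            /* hypothetical precomputed rank table */

/* Returns 0 for invalid masks, otherwise the precomputed rank of the hand. */
static uint16_t hand_rank(uint64_t mask)
{
    if (__builtin_popcountll(mask) != 5) /* a legal hand has exactly 5 bits set */
        return 0;
    uint32_t h = (uint32_t)_mm_crc32_u64(0, mask) & (TABLE_SIZE - 1);
    /* A real table must resolve collisions, e.g. by probing or by building
       a perfect hash over the 2,598,960 valid hands. */
    return rank_table[h];
}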
If I understand your scheme correctly, you only need to know that the hamming weight of a particular hand is exactly 5 for it to be a valid hand. See Calculating Hamming Weight in O(1) for information on how to calculate the hamming weight.
From there, it seems you could probably work out the rest on your own. Personally, I'd want to store the result in some persistent memory (if it's available on your platform of choice) so that subsequent runs are quicker since they don't need to generate the index table.
This is a good source: Cactus Kev's poker hand evaluator.
For a hand you can take advantage of there being at most 4 suits: 4 bits for the rank (0-12) and 2 bits for the suit. 6 bits * 5 cards is just 30 bits; call it 4 bytes. There are only 2,598,960 hands, so the total size is a little under 10 MB.
A simple implementation that comes to mind would be to change your scheme to a 5-digit number in base 52. The resulting table to hold all of these values would still be larger than necessary, but very simple to implement and it would easily fit into RAM on modern computers.
Edit: You could also cut down even more by only storing the rank of each card plus an additional flag (e.g., the lowest bit) to specify whether all cards are of the same suit (i.e., a flush is possible). The ranking representation would then be base 13 plus one bit. You would presumably then need to store the specific suits of the cards separately to reconstruct the exact hand for display and such.
I would represent your hand in a different way:
There are only 4 suits (2 bits) and only 13 ranks (4 bits), for a total of 6 bits * 5 = 30, so we fit into a 32-bit int. We can also force this to always be sorted as per your ordering:
[suit 0][suit 1][suit 2][suit 3][suit 4][value 0][value 1][value 2][value 3][value 4]
Then I would use a separate hash for:
consecutive values (very small) [mask off the suits]
1 or more multiples (pair, 2 pair, full house) [mask off the suits]
suits that are all the same (very small) [mask off the values]
Then use the 3 hashes to calculate your rankings
At 5 MB you will likely have enough caching issues that a bit of math and three small lookups will be faster.
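A small C sketch of that packed representation, assuming the cards arrive as (rank, suit) pairs already sorted into the desired order; field widths follow the layout above:

#include <stdint.h>

/* Pack 5 cards into a 32-bit word: suits (2 bits each) in the high part,
   ranks (4 bits each) in the low part, matching
   [suit 0..4][value 0..4]. Cards are assumed pre-sorted. */
static uint32_t pack_hand(const uint8_t rank[5], const uint8_t suit[5])
{
    uint32_t packed = 0;
    for (int i = 0; i < 5; i++) {
        packed |= (uint32_t)(suit[i] & 0x3) << (28 - 2 * i); /* 10 suit bits */
        packed |= (uint32_t)(rank[i] & 0xF) << (16 - 4 * i); /* 20 rank bits */
    }
    return packed;
}

Masking off the suits for the first two hashes is then packed & 0x000FFFFF, and masking off the values for the flush hash is packed >> 20.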

Fastest possible lookup for known static set of integers?

I am implementing a VM compiler, and naturally, I've come to the point of implementing switches. Also naturally, for short switches a sequential lookup array would be optimal, but what about bigger switches?
So far I've come up with a data structure that gives me a pretty good lookup time. I don't know the name of that structure, but it is similar to a binary tree, only monolithic, with the difference that it only applies to a static set of integers; you cannot add or remove. It looks like a table where values increase upward and to the right; here is an example:
Integers -89, -82, -72, -68, -65, -48, -5, 0, 1, 3, 7, 18, 27, 29, 32, 37, 38, 42, 45, 54, 76, 78, 87, 89, 92
and the table:
-65 3 32 54 92
-68 1 29 45 89
-82 -5 18 38 78
-89 -48 7 37 76
This gives me at worst width + height iterations. Let's say the case is 37: -65 is less than 37, so move to the right; the same for 3, move to the right; the same for 32, move to the right; 54 is bigger, so move down (step by the width, since it is a sequential array anyway); 45 is bigger, so move down; 38 is bigger, so move down; and there we have 37 in 7 hops.
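In code, the lookup looks roughly like this (a sketch, assuming the table is stored row-major as one flat array with the top row first):

/* Start at the top-left corner; move right while the current value is
   smaller than the key, move down when it is larger. Worst case w + h
   steps. Returns the flat index of the key, or -1 if it is absent. */
static int staircase_find(const int *tbl, int w, int h, int key)
{
    int row = 0, col = 0;
    while (row < h && col < w) {
        int v = tbl[row * w + col];
        if (v == key)
            return row * w + col;
        if (v < key)
            col++;                     /* move right */
        else
            row++;                     /* move down (step one width) */
    }
    return -1;
}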
Is there any possible faster lookup algorithm?
Also, is there a name for this kind of arrangement? I came up with it on my own, but most likely someone else did that before me, so it is most probably named already.
EDIT: OK, as far as I got it, a "perfect hash" will offer me better THEORETICAL performance. But how will this play out in real life? If I understand correctly, a two-level "perfect hash" will be rather spread out instead of being a contiguous block of memory, so while the theoretical complexity is lower, there is a potential penalty of tens if not hundreds of cycles before that memory is fetched. In contrast, a slower theoretical worst case may actually perform better just because it is more cache-friendly than a perfect hash... Or not?
When implementing switches among a diverse set of alternatives, you have several options:
Make several groups of flat lookup arrays. For example, if you see the numbers 1, 2, 3, 20000, 20001, 20002, you could do a single if to send you to the 1s or to the 20,000s, and then employ two flat lookup arrays (see the sketch after this list).
Discover a pattern. For example, if you see numbers 100, 200, 300, 400, 500, 600, divide the number by 100, and then go for a flat lookup array.
Make a hash table. Since all the numbers that you are hashing are known to you, you can play with the load factor of the table to make sure that the lookup is not going to take a lot of probing.
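A minimal C sketch of the first option for exactly those example values; the jump targets are illustrative placeholders (a real VM would dispatch to code addresses or bytecode offsets):

static const int low_targets[]  = { 1, 2, 3 };   /* cases 1..3         */
static const int high_targets[] = { 4, 5, 6 };   /* cases 20000..20002 */

/* Returns the jump target for v, or 0 for the default case. */
static int switch_target(int v)
{
    if (v >= 1 && v <= 3)
        return low_targets[v - 1];          /* flat array, offset by 1     */
    if (v >= 20000 && v <= 20002)
        return high_targets[v - 20000];     /* flat array, offset by 20000 */
    return 0;
}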
Your algorithm is similar to binary search, in the sense that it's from the "divide and conquer" family. Such algorithms have logarithmic time complexity, which may not be acceptable for switches, because they are expected to be O(1).
Is there any possible faster lookup algorithm?
Binary search is faster.
Binary search completes in log2(w*h) = log2(w) + log2(h).
Your algorithm completes in w+h.
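For comparison, a minimal binary search over the sorted case values, returning the index of the matching case (which can then index a table of jump targets) or -1:

/* Classic binary search: about log2(n) comparisons in the worst case,
   versus w + h for the table walk in the question. `cases` is sorted. */
static int case_index(const int *cases, int n, int key)
{
    int lo = 0, hi = n - 1;
    while (lo <= hi) {
        int mid = lo + (hi - lo) / 2;
        if (cases[mid] == key)
            return mid;
        if (cases[mid] < key)
            lo = mid + 1;
        else
            hi = mid - 1;
    }
    return -1;
}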

generate random RGB color using rand() function

I need a function which will generate three numbers so I can use them as an RGB pattern for my SVG.
While this is simple, I also need to make sure I'm not using the same color twice.
How exactly do I do that? Generate one number at a time with a simple rand() (seeded with the time), and then what? I don't want to exclude a single number, but maybe the whole pattern?
I'm kind of lost here.
To be precise, on the first call of this function I will get, for example, 218 199 154, and on the second call I'll get 47 212 236, which are definitely two different colors. Any suggestions?
Also I think a struct with int r, int g, int b would be suitable for this?
Edit: Colors should be different to the human eye. Sorry for not mentioning this earlier.
You could use a set to store the generated colors.
First instantiate a new set.
Then, every time you generate a color, look whether the value is already present in your set.
If it exists, skip it and retry with a new colour. If not, you can use it, but don't forget to add it to the set afterwards.
This may not perform well if you need to generate a large number of colours.
The cheapest way to do this would be to use a Bloom filter, which is very small memory-wise but leads to occasional false positives (i.e., you will think you have used a colour when you haven't). Basically, create three random numbers between 0 and 255, save them however you like, hash them as a triplet and place the hash in the filter.
Also, you might want to throw away the low bits of each channel, since it's probably not easy to tell #FFFFF0 from #FFFFF2.
Here is a simple way:
1. Generate a random integer.
2. Shift it right by 8 bits to keep 24 meaningful bits, and store this integer value.
3. Use the first 8 bits for R, the second group of 8 bits for G, and the remaining 8 bits for B.
For every new random number, shift it the same way and compare it against all the integer values you stored before; if none of them matches the new one, use it for the new color (step 3).
Differentiation by the human eye is an interesting topic, because perceptual thresholds vary from person to person. To approximate it, shift the integer right by 14 bits, take the first 6 bits for R (pad with two 0s to get 8 bits again), the second 6 bits for G, and the last 6 bits for B. If you think 6 bits is not good enough, decrease it to 5, 4, ...
Simple Run with 4 significant bits for each channel:
My random integer is:
0101-1111-0000-1111-0000-1100-1101-0000
I shift it right by 20 bits (you can also do this arithmetically, e.g. with integer division):
0000-0000-0000-0000-0000-0101-1111-0000
store this value.
Then take the first 4 bits for R, the second 4 bits for G, and the last 4 bits for B:
R: 0101
G: 1111
B: 0000
Pad them to make each of them 8 bits.
R: 0101-0000
G: 1111-0000
B: 0000-0000
Use those for your color components.
For each new random number, after shifting it, compare it with the integer values you have stored so far. If it is different from all of them, store it and use it for the color.
One idea would be to use a bit vector to represent the set of colors generated. For 24-bit precision, the bit vector would be 2^24 bits long, which is 16,777,216 bits, or 2 MB. Certainly not a lot these days, and it would be very fast to look up and insert colors.
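A hedged C sketch of that bit-vector approach, with one bit per 24-bit color; rand() is used here only for brevity and should be seeded (and ideally replaced by a better generator):

#include <stdint.h>
#include <stdlib.h>

/* One bit per possible 24-bit color: 2^24 bits = 2 MB. */
static uint8_t seen[(1u << 24) / 8];

/* Returns 1 the first time this (r,g,b) triple is claimed, 0 if already used. */
static int claim_color(uint8_t r, uint8_t g, uint8_t b)
{
    uint32_t c = ((uint32_t)r << 16) | ((uint32_t)g << 8) | b;
    uint8_t bit = (uint8_t)(1u << (c & 7));
    if (seen[c >> 3] & bit)
        return 0;                      /* already generated */
    seen[c >> 3] |= bit;
    return 1;
}

/* Usage sketch: keep drawing random channels until we get a fresh color. */
static void random_unique_color(uint8_t *r, uint8_t *g, uint8_t *b)
{
    do {
        *r = (uint8_t)(rand() & 0xFF);
        *g = (uint8_t)(rand() & 0xFF);
        *b = (uint8_t)(rand() & 0xFF);
    } while (!claim_color(*r, *g, *b));
}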

App Engine - Precomputing bounding boxes for proximity search

I'm trying to do a location-based search on App Engine, but since the data store doesn't support multiple inequality operators, I can't search "where lat between a and b and lon between c and d".
One of the solutions is to pre-compute bounding boxes to search on, as explained here:
http://code.google.com/appengine/articles/geosearch.html
http://mutiny.googlecode.com
However, I'm a little confused about "slices". I'm trying to figure out:
Why have slices? Why not just increase the resolution? Don't they do the same thing?
Why does the sample have 5 configs - won't one do?
GEOBOX_CONFIGS = (
(4, 5, True),
(3, 2, True),
(3, 8, False),
(3, 16, False),
(2, 5, False),
)
I'm trying to figure out what to set the config to for my own app, but there are so many variables it's not clear what to do. Do I increase the resolution (first number), increase the number of slices (second number), or add/remove configs?
Ultimately, I'm interested in points within 10-15 miles (the code already sorts them by distance), but I don't understand why it can't be done with 1 config and the resolution set high enough.
I found another example which seems to wrap everything up nicely, and I don't need to worry about all those crazy config values!
http://code.google.com/p/geomodel/wiki/Usage
