I am implementing a VM compiler and, naturally, I've come to the point of implementing switches. Also naturally, for short switches a sequential lookup array would be optimal, but what about bigger switches?
So far I've come up with a data structure that gives me a pretty good lookup time. I don't know the name of that structure, but it is similar to a binary tree, only monolithic: it applies to a static set of integers, with no adding or removing. It looks like a table where values increase toward the top and the right. Here is an example:
Integers -89, -82, -72, -68, -65, -48, -5, 0, 1, 3, 7, 18, 27, 29, 32, 37, 38, 42, 45, 54, 76, 78, 87, 89, 92
and the table:
-65   3  32  54  92
-68   1  29  45  89
-82  -5  18  38  78
-89 -48   7  37  76
This gives me a worst case of width + height iterations. Let's say the case is 37: -65 is less than 37, so move right; same for 3, move right; same for 32, move right; 54 is bigger, so move down (a step of the width, since it is all one sequential array anyway); 45 is bigger, so move down; 38 is bigger, so move down; and there we have 37, in 7 hops.
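For reference, here is a minimal Python sketch of the walk described above. This kind of search over a row- and column-sorted grid is often called saddleback search, and the arrangement itself resembles a Young tableau:

def grid_lookup(grid, target):
    # grid is row-major; values increase to the right and decrease downward,
    # matching the table above with its top row stored first
    rows, cols = len(grid), len(grid[0])
    r, c = 0, 0                          # start at the top-left corner (-65)
    while r < rows and c < cols:
        v = grid[r][c]
        if v == target:
            return r, c                  # found: return the cell position
        if v < target:
            c += 1                       # too small: move right
        else:
            r += 1                       # too big: move down
    return None                          # not a case in this switch

grid = [[-65,   3, 32, 54, 92],
        [-68,   1, 29, 45, 89],
        [-82,  -5, 18, 38, 78],
        [-89, -48,  7, 37, 76]]
print(grid_lookup(grid, 37))             # (3, 3), after the 7 probes described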
Is there any possible faster lookup algorithm?
Also, is there a name for this kind of arrangement? I came up with it on my own, but most likely someone else invented it before me, so it probably has a name already.
EDIT: OK, as far as I understand, a "perfect hash" will offer me better THEORETICAL performance. But how will this play out in real life? If I understand correctly, a two-level "perfect hash" will be rather spread out in memory instead of forming a continuous block, so while the theoretical complexity is lower, there is a potential penalty of tens if not hundreds of cycles before that memory is fetched. In contrast, an algorithm with a slower theoretical worst case could actually perform better just because it is more cache friendly than a perfect hash... Or not?
When implementing switches among a diverse set of alternatives, you have several options:
Make several groups of flat lookup arrays. For example, if you see the numbers 1, 2, 3, 20000, 20001, 20002, you could do a single if to send you to the 1s or to the 20,000s, and then employ two flat lookup arrays (see the sketch after this list).
Discover a pattern. For example, if you see numbers 100, 200, 300, 400, 500, 600, divide the number by 100, and then go for a flat lookup array.
Make a hash table. Since all the numbers that you are hashing are known to you, you can play with the load factor of the table to make sure that the lookup is not going to take a lot of probing.
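As a rough Python sketch of the first and third options (the handler functions here are placeholders; a real VM would dispatch to jump targets rather than callables):

def make_handler(tag):
    # stand-in for a real jump target / case body
    return lambda: print("case", tag)

low  = [make_handler(n) for n in (1, 2, 3)]                # cases 1..3
high = [make_handler(n) for n in (20000, 20001, 20002)]    # cases 20000..20002

def dispatch(x):
    # a single comparison picks the group, then a flat array lookup finishes
    if x < 20000:
        low[x - 1]()
    else:
        high[x - 20000]()

dispatch(2)          # prints: case 2
dispatch(20001)      # prints: case 20001

# the third option, sketched with Python's built-in hash table:
table = {n: make_handler(n) for n in (1, 2, 3, 20000, 20001, 20002)}
table[20002]()       # prints: case 20002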
Your algorithm is similar to binary search, in the sense that it's from the "divide and conquer" family. Such algorithms have logarithmic time complexity, which may not be acceptable for switches, because they are expected to be O(1).
Is there any possible faster lookup algorithm?
Binary search is faster.
Binary search completes in log2(w*h) = log2(w) + log2(h) steps.
Your algorithm completes in w + h steps.
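To illustrate, a minimal Python sketch of binary-search dispatch over the sorted case values from the question (the handler table is hypothetical):

import bisect

cases = [-89, -82, -72, -68, -65, -48, -5, 0, 1, 3, 7, 18, 27,
         29, 32, 37, 38, 42, 45, 54, 76, 78, 87, 89, 92]
handlers = [lambda v=v: print("case", v) for v in cases]   # placeholder targets

def dispatch(x):
    i = bisect.bisect_left(cases, x)     # O(log n) probe into the sorted keys
    if i < len(cases) and cases[i] == x:
        handlers[i]()
    else:
        print("default case")

dispatch(37)    # prints: case 37, in at most ceil(log2(25)) = 5 probes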
Related
I am new to coding and a bit confused about the syntax for comparing elements of two numpy arrays to each other.
I am looking at two arrays of physiological signals (heart rate and blood pressure) that are time and sample matched. My goal is to create a counting variable that counts each time that heart rate and blood pressure go up together, three consecutive times.
For example if at times 60s, 61s, and 62s HR and BP behaved like this:
HR: 123, 124, 127
BP: 137, 139, 141
I would want the counting variable to count one time. Eventually the function should return the counter after it cycles through both arrays.
This is what I have so far, but I know I am off-track.
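Since the snippet itself did not survive here, below is a minimal numpy sketch of the counting logic as I read it: hr and bp are assumed to be equal-length 1-D arrays, and the 60s/61s/62s example is read as a window of three samples, i.e. two successive increases, counted without overlap:

import numpy as np

def count_joint_rises(hr, bp, run=3):
    # True at step i when both signals increased from sample i to sample i+1
    up = (np.diff(hr) > 0) & (np.diff(bp) > 0)
    need = run - 1                  # a run of 3 samples spans 2 increases
    count, streak = 0, 0
    for step in up:
        streak = streak + 1 if step else 0
        if streak == need:          # a full 3-sample joint rise just completed
            count += 1
            streak = 0              # assumption: overlapping runs count once
    return count

hr = np.array([123, 124, 127, 120, 121, 122, 125])
bp = np.array([137, 139, 141, 130, 131, 133, 136])
print(count_joint_rises(hr, bp))    # 2: one run at the start, one near the end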
This problem has 2 levels.
Level 1.
I have a 64-bit bitmap and I know only a few of the bits are on, i.e. set to 1. Is there a way to get which bits are set without using branching?
e.g. (bit 0 on the left, bit 63 on the right):
BMP = 000000001000010000000000010000000000000000000000011000000000000
f(BMP) = {9, 14, 26, 51, 52}
Level 2.
Now I have two 64-bit bitmaps and I need the combinations of the set bits of the two.
e.g. (bit 0 on the left again):
BMP1 = 000000001000000000000000000000000000000000000000011000000000000
BMP2 = 000000000000010000000000010000000000000000000000000000000000000
f(BMP1, BMP2) = {(9,14), (9, 26), (51, 14), (51, 26), (52, 14), (52, 26)}
I know that the bitmaps will almost always be sparse.
It would be great if the suggested solution could be expanded to more than 2 bitmaps at a time, but I would rather have a method which works extremely fast for up to 2 and then a little slower for more than that.
Even if a solution without branching is not possible, please suggest the fastest possible method with branching.
(Sorry for bad formatting)
You could store the possible bitfields in a hash table, if there are only relatively few of them, such as if you know no more than two bits are set and there are at most a few thousand possibilities.
Failing that, there are a few tricks you can do with two's-complement arithmetic on signed numbers to get the lowest bit set in a word: v & -v gives you a value with only the lowest-order bit that's set in v. You can then clear that bit with v & (v - 1) and repeat to get them all.
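Here is a Python sketch of both levels built on those identities (the same v & -v and v & (v - 1) tricks work on unsigned 64-bit integers in C). Note that positions are counted from the least significant bit here; if you index from the left as in the diagrams above, mirror them with 63 - pos:

def set_bits(v):
    # collect positions of the set bits, lowest first; the loop runs once
    # per set bit, which is cheap for sparse bitmaps
    bits = []
    while v:
        low = v & -v                         # isolate the lowest set bit
        bits.append(low.bit_length() - 1)    # its 0-based position from the LSB
        v &= v - 1                           # clear that bit and continue
    return bits

def combinations(bmp1, bmp2):
    # level 2: the cross product of the two sets of positions
    return [(a, b) for a in set_bits(bmp1) for b in set_bits(bmp2)]

bmp1 = (1 << 9) | (1 << 51) | (1 << 52)
bmp2 = (1 << 14) | (1 << 26)
print(set_bits(bmp1))              # [9, 51, 52]
print(combinations(bmp1, bmp2))    # [(9, 14), (9, 26), (51, 14), ...]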
I'm trying to write a fast 3D LUT lookup function and noticed that most LUTs are either 33x33x33 or 17x17x17.
Why 33 or 17? Wouldn't the math be quicker with 32 or 16 instead? So you could do some shifts instead of divides? Or maybe I'm not understanding why.
Anyone?
This paper will provide a synopsis: https://www.hpl.hp.com/techreports/98/HPL-98-95.pdf
Basically what you need is to divide the color space into a certain number of pieces and do linear interpolation between those pieces. It's a way of building the lookup table such that you can find the color positions without much error, but with a sparser table than you would otherwise need.
And here's the reason: if you cut a line 2 times, you have 3 pieces.
The reason you have 17 or 33 rather than 16 or 32 is that you need the piece you are in, not the nearest position. If you bit-shift an 8-bit (2^8) value right by 4, you get 16 possible piece indices. But since you need to linearly interpolate the position within that piece, you also need the entry at the far end of the last piece.
In short, the reason you have 17 and not 16 is that with 17 entries you can evenly divide the input range into 16 pieces (division by 16 is a fast shift), look at the entry that comes after your floored integer division, and interpolate between those two values. And that takes N+1 values in the lookup table.
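To make that concrete, here is a 1-D Python sketch of the indexing arithmetic, assuming an 8-bit input and a 17-entry table (a real 3D LUT applies the same idea on each axis and interpolates trilinearly):

def lut_lookup(lut17, v):
    # v is 0..255; with 17 entries, lut17[i + 1] always exists
    i = v >> 4                         # fast shift instead of divide: the piece
    f = v & 15                         # remainder: position within the piece
    a, b = lut17[i], lut17[i + 1]      # both ends of the piece: hence N+1 entries
    return a + (b - a) * f / 16.0      # linear interpolation within the piece

identity = [16 * k for k in range(17)]   # entry k corresponds to input 16*k
print(lut_lookup(identity, 37))          # 37.0, between entries 2 and 3
print(lut_lookup(identity, 255))         # 255.0, reaching the 17th entry (index 16)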
All the references to this error I could find online were completely inapplicable to my situation; they were dealing with variables involving dots, like a.b (structures, in other words), whereas I am strictly using arrays. Nothing involves a dot anywhere in my code.
Ok, I have this GINORMOUS array called tier2comparatorconnectionpoints. It is a 4-D array of size 400×10×20×10. Consider tier2comparatorconnectionpoints(counter,counter2,counter3,counter4).
counter is a number 1 to 400,
counter2 is a number 1 to numchromosomes(counter), and numchromosomes(counter) is bounded to 10,
counter3 is a number 1 to tier2numcomparators(counter,counter2), which is in turn bounded to 20.
counter4 is a number 1 to tier2inputspercomparator(counter,counter2,counter3), which is bounded to 10.
Now, so that I don't run out of RAM, I have tier2comparatorconnectionpoints as type int8, and UNFORTUNATELY, at some point in my horrendous amount of code, I forgot to cast it to a double when doing math with it. A rounding error involved in multiplying it with a rand ends up with tier2comparatorconnectionpoints, for some combinations of its 4 indices, exceeding what it's allowed to be.
The values it's allowed to have are: 1 through tier1numcomparators(counter,counter2), which is bounded to 40; 41 through 40+tier2numcomparators(counter,counter2), with tier2numcomparators(counter,counter2) bounded to 20; and 61 through 60+tier2numcomparators(counter,counter2). So it's not allowed to be more than 80, and it's also not allowed to fall in the gaps: more than tier1numcomparators(counter,counter2) but less than 40, or more than 40+tier2numcomparators(counter,counter2) but less than 60. I became aware of the problem because it was being set to 81 somewhere.
This is an evolutionary simulation by the way, it's natural selection on simulated organisms. I need to hunt down the part of the code that is allowing the values of tier2comparatorconnectionpoints to exceed what it's allowed to be. But that is a separate problem.
A temporary fix of my data, just so that it at least conforms to its allowed values, is to set anything that is greater than tier1numcomparators(counter,counter2) but less than 40 to tier1numcomparators(counter,counter2), to set anything that is greater than 40+tier2numcomparators(counter,counter2) but less than 60 to 40+tier2numcomparators(counter,counter2), and to set anything that is greater than 60+tier2numcomparators(counter,counter2) to 60+tier2numcomparators(counter,counter2). I first found this problem because a value was being set to 81, so it didn't just exceed 60+tier2numcomparators(counter,counter2), it exceeded 60+20, with tier2numcomparators bounded to 20.
I hope this isn't all too much information, but I felt it might be necessary to get you to understand just what sort of variables these are.
So in my attempts to at least turn the data into valid data, I did the following:
for counter=1:size(tier2comparatorconnectionpoints,1)
    for counter2=1:size(tier2comparatorconnectionpoints,2)
        for counter3=1:size(tier2comparatorconnectionpoints,3)
            for counter4=1:size(tier2comparatorconnectionpoints,4)
                if tier2comparatorconnectionpoints(counter,counter2,counter3,counter4)>60+tier2numcomparators(counter,counter2)
                    tier2comparatorconnectionpoints(counter,counter2,counter3,counter4)=60+tier2numcomparators(counter,counter2);
                end
            end
        end
    end
end
And that worked just fine. And then:
for counter=1:size(tier2comparatorconnectionpoints,1)
    for counter2=1:size(tier2comparatorconnectionpoints,2)
        for counter3=1:size(tier2comparatorconnectionpoints,3)
            for counter4=1:size(tier2comparatorconnectionpoints,4)
                if tier2comparatorconnectionpoints(counter,counter2,counter3,counter4)>40+tier2numcomparators(counter,counter2)
                    if tier2comparatorconnectionpoints(counter,counter2,counter3,counter4)<60
                        tier2comparatorconnectionpoints(counter,counter2,counter3,counter4)=40+tier2numcomparators(counter,counter2);
                    end
                end
            end
        end
    end
end
And that's where it said "Attempt to reference field of non-structure array".
TBH it sounds like maybe you've made a typo and put a . in somewhere? Otherwise please post the entire error, as maybe it's happening in a different function or something.
Either way, you don't need all those for loops; it's simpler and usually quicker to do this (and it should bypass your error):
First replicate your tier2numcomparators matrix so that it has the same dimension sizes as tier2comparatorconnectionpoints:
T = repmat(tier2numcomparators + 40, 1, 1, size(tier2comparatorconnectionpoints, 3), size(tier2comparatorconnectionpoints, 4));
Now in one shot you can create a logical matrix of which elements meet your criteria:
ind = tier2comparatorconnectionpoints > T & tier2comparatorconnectionpoints < 60;
Finally employ logical indexing to set your desired elements:
tier2comparatorconnectionpoints(ind) = T(ind);
You can play around with bsxfun instead of repmat if this is slow or takes too much memory.
I'm playing around a bit with image processing and decided to read up on how color quantization worked and after a bit of reading I found the Modified Median Cut Quantization algorithm.
I've been reading the code of the C implementation in the Leptonica library and came across something I thought was a bit odd.
Now, I want to stress that I am far from an expert in this area, nor am I a math-head, so I am predicting that this all comes down to me not understanding all of it, and not that the implementation of the algorithm is wrong.
The algorithm states that the vbox should be split along the largest axis, using the following logic:
The largest axis is divided by locating the bin with the median pixel
(by population), selecting the longer side, and dividing in the center
of that side. We could have simply put the bin with the median pixel
in the shorter side, but in the early stages of subdivision, this
tends to put low density clusters (that are not considered in the
subdivision) in the same vbox as part of a high density cluster that
will outvote it in median vbox color, even with future median-based
subdivisions. The algorithm used here is particularly important in
early subdivisions, and is useful for giving visible but low
population color clusters their own vbox. This has little effect on
the subdivision of high density clusters, which ultimately will have
roughly equal population in their vboxes.
For the sake of the argument, let's assume that we have a vbox that we are in the process of splitting and that the red axis is the largest. In the Leptonica algorithm, on line 01297, the code appears to do the following
Iterate over all the possible green and blue variations of the red color
For each iteration it adds to the total number of pixels (population) it's found along the red axis
For each red color it sums up the population of the current red and the previous ones, thus storing an accumulated value for each red
note: when I say 'red' I mean each point along the axis that is covered by the iteration, the actual color may not be red but contains a certain amount of red
So for the sake of illustration, assume we have 9 "bins" along the red axis and that they have the following populations
4 8 20 16 1 9 12 8 8
After the iteration of all red bins, the partialsum array will contain the following count for the bins mentioned above
4 12 32 48 49 58 70 78 86
And total would have a value of 86
Once that's done it's time to perform the actual median cut, and for the red axis this is performed on line 01346.
It iterates over the bins and checks their accumulated sums. And here's the part that throws me off from the description of the algorithm: it looks for the first bin that has a value greater than total/2.
Wouldn't total/2 mean that it is looking for a bin whose value is greater than the average and not the median? The median for the above bins would be 49.
The use of 43 or 49 could potentially have a huge impact on how the boxes are split, even though the algorithm then proceeds by moving to the center of the larger side of where the matched value was.
Another thing that puzzles me a bit is that the paper specifies that the bin with the median value should be located, but does not mention how to proceed if there is an even number of bins. The median would then be the result of (a+b)/2, and it's not guaranteed that any of the bins contains that population count. So this is what makes me think that there are some approximations going on that are negligible because of how the split actually takes place at the center of the larger side of the selected bin.
Sorry if it got a bit long-winded, but I wanted to be as thorough as I could, because it's been driving me nuts for a couple of days now ;)
In the 9-bin example, 49 is the number of pixels in the first 5 bins. 49 is the median number in the set of 9 partial sums, but we want the median pixel in the set of 86 pixels, which is pixel 43 (or 44), and it resides in the 4th bin.
Inspection of the modified median cut algorithm in colorquant2.c of leptonica shows that the actual cut location for the 3d box does not necessarily occur adjacent to the bin containing the median pixel. The reasons for this are explained in the function medianCutApply(). This is one of the "modifications" to Paul Heckbert's original method. The other significant modification is to make the decision of which 3d box to cut next based on a combination of both population and the product (population * volume), thus permitting splitting of large but sparsely populated regions of color space.
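To make the distinction concrete, here is a small Python sketch over the 9-bin example, locating the bin that holds the median pixel via the partial sums:

bins = [4, 8, 20, 16, 1, 9, 12, 8, 8]

partial, total = [], 0
for n in bins:                # accumulate the partial sums as the code does
    total += n
    partial.append(total)
print(partial, total)         # [4, 12, 32, 48, 49, 58, 70, 78, 86] 86

# the median pixel is pixel number total // 2 = 43 counted left to right;
# the first bin whose partial sum exceeds total / 2 is the bin containing it
median_bin = next(i for i, s in enumerate(partial) if s > total / 2)
print(median_bin + 1)         # 4: the 4th bin (partial sum 48) holds pixel 43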
I do not know the algorithm, but I would assume your array contains the population of each red; let's explain this with an example:
Assume you have four gradations of red: A,B,C and D
And you have the following sequence of red values:
AABDCADBBBAAA
To find the median, you would have to sort them according to red value and take the middle:
  median
      v
AAAAAABBBBCDD
Now let's use their approach (count => running total):
A:6 => 6
B:4 => 10
C:1 => 11
D:2 => 13
13/2 = 6.5 => B
I think the mismatch happened because you are counting the population; the average color would be:
(6*A+4*B+1*C+2*D)/13