Optimal Selection in Ruby - arrays

Given an array of values,
arr = [8,10,4,5,3,7,6,0,1,9,13,2]
X is an array of values that can be chosen at a time, where X.length != 0 and X.length < arr.length.
The chosen values are then fed into a function, score(), which returns a score based on the array of selected values.
Example 1:
X = [8]
score(X) = 71
Example 2:
X = [4]
score(X) = 36
Example 3:
X = [8,10,7]
score(X) = 51
Example 4:
X = [5,9,0]
score(X) = 4
The function score() here is a black box and we can't modify how it works; we just provide an input and the function returns the score output.
My problem: How to get the lowest score for each set of numbers?
Meaning, if X is an array that has only 1 value and I feed in each of the different values in arr, every value will return a different score, and I find which arr value gives the lowest score.
If X is an array of 3 values, I feed in every possible combination of 3 values from arr, each set of 3 values returns a different score, and I find the combination with the lowest score.
This is simple enough to do if my arr is small. However, if I have an array of 50 or even 100 values, how can I create an algorithm that finds the lowest score for a given number of input values?
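For small inputs, the baseline the question describes is just an exhaustive sweep: try every k-element combination and keep the cheapest. Here is a minimal sketch (in C++, with a dummy stand-in for the black-box score(); the real function would be dropped in instead):
#include <algorithm>
#include <iostream>
#include <numeric>
#include <string>
#include <vector>

// Dummy stand-in for the black-box score() described above (assumption:
// any callable taking the chosen values and returning a number will do).
int score(const std::vector<int>& x) {
    return std::accumulate(x.begin(), x.end(), 0);
}

// Exhaustively try every k-element subset of arr and keep the lowest score.
std::vector<int> lowest_scoring_subset(const std::vector<int>& arr, std::size_t k) {
    std::string mask(k, '1');            // '1' marks a chosen position
    mask.resize(arr.size(), '0');
    std::vector<int> best;
    int best_score = 0;
    do {
        std::vector<int> x;
        for (std::size_t i = 0; i < arr.size(); ++i)
            if (mask[i] == '1') x.push_back(arr[i]);
        int s = score(x);
        if (best.empty() || s < best_score) { best = x; best_score = s; }
    } while (std::prev_permutation(mask.begin(), mask.end()));  // next combination
    return best;
}

int main() {
    std::vector<int> arr = {8, 10, 4, 5, 3, 7, 6, 0, 1, 9, 13, 2};
    for (int v : lowest_scoring_subset(arr, 3)) std::cout << v << ' ';
    std::cout << '\n';
}
This is exactly the brute force the answer below talks about: it evaluates score C(n, k) times, which is why it stops being feasible long before arr reaches 50 or 100 values.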

tl;dr: If you don't know anything about score, then you can't speed it up.
In order to optimize score itself, you would have to know how it works. After all, "optimizing" simply means "does the same thing more efficiently", but how can you know whether it really does "the same thing" if you don't know what "the same thing" is? Plus, speeding up score will not help you with the combinatorial explosion anyway. The number of combinations grows so fast that any speedup to score will be quickly eaten up by slightly larger inputs.
In order to optimize how you apply score, you would again need to know something about it. If you knew something about score, you could, for example, only generate combinations that you know will yield different values, or combinations that you know will only yield larger values. In other words, you could exploit some structure in the output of score in order to reduce the input size. However, we don't know the structure of the output of score; in fact, we don't even know whether there is any structure at all! So we can't exploit it. Plus, there would have to be some extreme redundancy and regularity in that structure to allow a significant reduction in input size.
In his comment, @ndn suggested applying some form of machine learning to discover structure in the output. How well this works depends on what kind of structure the output has. And of course, this again assumes that there even is some structure to discover, which we don't know. And even if there were some structure, it would have to be very redundant and regular to make up for the combinatorial explosion of the input space.
Really, brute force is the only way. Our last resort is going to be parallelization. Maybe, if we distribute the problem across enough CPU cores, we can tackle it? Unfortunately, the combinatorial explosion in the input space is still really going to hurt you:
If we assume that we have a 10THz CPU (i.e. a thousand times faster than the fastest currently available CPU), and we assume that we can compute score in a single clock cycle, and we assume that we have a computer with 10 million cores (again, that's a thousand times larger than the largest supercomputers), it's still going to take over 400 years to find the optimal selection for an input array as small as 100 numbers. And even if we make our CPU a billion times faster and the computer a billion times bigger, simply doubling the size of the array to 200 items will increase the runtime to 500 trillion years.
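To see where that 400-year figure comes from (a rough back-of-the-envelope check under the assumptions above): an array of 100 numbers has about 2^100 ≈ 1.3 × 10^30 non-empty proper subsets; 10 million cores evaluating score once per cycle at 10 THz deliver 10^13 × 10^7 = 10^20 evaluations per second; and 1.3 × 10^30 / 10^20 = 1.3 × 10^10 seconds, which is a little over 400 years.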
There is a reason why we call combinatorial explosion "combinatorial explosion", after all.

Related

Data Structure for a recursive function with two parameters one of which is Large the other small

Mathematician here looking for a bit of help. (If you ever need math help I'll try to reciprocate on math.stackexchange!) Sorry if this is a dup. Couldn't find it myself.
Here's the thing. I write a lot of code (mostly in C) that is extremely slow and I know it could be sped up considerably but I'm not sure what data structure to use. I went to school 20 years ago and unfortunately never got to take a computer science course. I have watched a lot of open-course videos on data structures but I'm still a bit fuddled never taking an actual class.
Mostly my functions just take integers to integers. I almost always use 64-bit numbers and I have three use cases that I'm interested in. I use the word small to mean no more than a million or two in quantity.
Case 1: Small numbers as input. Outputs are arbitrary.
Case 2: Any 64-bit values as input, but only a small number of them. Outputs are arbitrary.
Case 3: Two parameter functions with one parameter that's small in value (say less than two million), and the other parameter is Large but with only a small number of possible inputs. Outputs are arbitrary.
For Case 1, I just make an array to cache the values. Easy and fast.
For Case 2, I think I should be using a hash. I haven't yet done this but I think I could figure it out if I took the time.
Case 3 is the one I'd like help with and I'm not even sure what I need.
For a specific example, take a function F(n,p) that takes large inputs n for the first parameter and a prime p for the second. The prime is at most the square root of n, so even if n is about 10^12, the primes only go up to about a million. Suppose this function is recursive or otherwise difficult to calculate (expensive) and will be called over and over with the same inputs. What might be a good data structure to use to easily create and retrieve the possible values of F(n,p) so that I don't have to recalculate it every time? The total number of possible inputs should be 10 or 20 million at most.
Help please! and Thank you in advance!
You are talking about memoization, I presume. Trying to answer without a concrete example...
If you have to retrieve values from a small range (the 2nd parameter), say from 0 to 10^6, and that needs to be super fast, and... you have enough memory, you could simply declare an array of int (long...), which basically stores the output values for all inputs.
To make things simple, let's say the value 0 means no value has been set yet.
long *small = calloc(MAX, sizeof(*small)); // calloc initializes everything to 0
then in a function that gives the value for a small range
if (small[input]) return small[input];
/* ... calculate value ... */
small[input] = value;
return value;
+/-
+ Very fast
- Memory consumption spans the whole range, [0, MAX-1].
If you need to store arbitrary inputs, use one of the many libraries available (there are so many). Use a set (map) structure that tells whether an item exists or not.
if (set.exists( input )) return set.get( input );
....calculate
set.set( input, value );
+/-
+ less memory usage
+ still fast (said to be O(1))
- but, not as fast as a mere array
Add to this the hashed set (...), which is faster because, in terms of probabilities, the values (hashes) are better distributed.
+/-
+ less memory usage than array
+ faster than a simple Set
- but, not as fast as a mere array
- use more memory than a simple Set
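For the asker's Case 3 specifically, here is a minimal memoization sketch (C++ for brevity; F below is a hypothetical stand-in for the expensive function, and P_MAX is an assumed bound): a dense array over the small parameter, each slot holding a hash map over the large one.
#include <cstdint>
#include <unordered_map>
#include <vector>

// Hypothetical stand-in for the expensive recursive function F(n, p):
// n may be large (~10^12), p is small (a prime below about a million).
std::int64_t F(std::int64_t n, std::int64_t p) { return n % p; }  // dummy body

constexpr std::int64_t P_MAX = 2'000'000;  // assumed upper bound on the small parameter

// Dense in the small parameter, sparse (hashed) in the large one.
// A couple of million empty maps cost on the order of 100 MB up front.
std::vector<std::unordered_map<std::int64_t, std::int64_t>> cache(P_MAX + 1);

std::int64_t F_memo(std::int64_t n, std::int64_t p) {
    auto& row = cache[p];                    // O(1) array lookup on the small parameter
    auto it = row.find(n);                   // hash lookup on the large parameter
    if (it != row.end()) return it->second;  // cache hit: no recomputation
    std::int64_t value = F(n, p);            // expensive call happens only once per (n, p)
    row.emplace(n, value);
    return value;
}
The array lookup on p costs essentially nothing, and only the (n, p) pairs that actually occur ever take up hash-table space, which fits the "10 or 20 million inputs at most" estimate.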

Certainty of data distribution profile when performing a sort operation

Let's assume we have some data structure like an array of n entries, and for argument's sake let's assume that the data has bounded numerical values.
Is there a way to determine the profile of the data, say monotonically ascending, descending, etc., to a reasonable degree, perhaps with a certainty value of z having checked k entries within the data structure?
Assuming we have an array of size N, there are N-1 comparisons between adjacent elements in the array. Let M = N-1; M represents the number of relations. The probability of the array not being in the correct order is
1/M
If you select a subset of K relations to determine monotonically ascending or descending order, the theoretical probability of certainty is
K / M
Since these are two linear equations, it is easy to see that if you want to be .9 sure, then you will need to check about 90% of the entries.
This only takes into account the assumptions in your question. If you are aware of the probability distribution, then using statistics, you could randomly check a small percentage of the array.
If you only care about the array being in relative order (for example, on an interval from [0,10], most 1s would be close to the beginning), this is another question altogether. An algorithm that does this, as opposed to just sorting, would have to have a high cost for swapping elements and a cheap cost for comparisons. Otherwise, there would be no performance payoff from writing a complex algorithm to handle the check.
It is important to note that this is theoretically speaking. I am assuming no distribution in the array.
The easier problem is to check the probability of encountering such orderly behavior from random data.
E.g. if numbers are arranged randomly, there is p = 0.5 that the first number is lower than the second number (we will come to the case of repetitions later). Now, if you sample k pairs and in every case the first number is lower than the second number, the probability of observing that is 2^(-k).
Coming back to repetitions: keep track of observed repetitions and factor them in. E.g. if the probability of repetition is q, the probability of not observing a repetition is (1-q), and the probability of observing increasing-or-equal is q + (1-q)/2, so exponentiate with k to get the probability.
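As a concrete illustration of that sampling argument, here is a rough C++ sketch (using the same simplification of ignoring repetitions): sample k random adjacent pairs and, if no descent is seen, report 1 - 2^(-k) as the confidence that the data is ascending.
#include <cmath>
#include <cstddef>
#include <random>
#include <vector>

// Rough sketch: sample k random adjacent pairs; if every sampled pair is
// non-decreasing, return 1 - 2^(-k) as a confidence estimate, otherwise 0.
double ascending_confidence(const std::vector<double>& a, std::size_t k) {
    if (a.size() < 2) return 1.0;
    std::mt19937 rng(12345);                                  // fixed seed for the sketch
    std::uniform_int_distribution<std::size_t> pick(0, a.size() - 2);
    for (std::size_t i = 0; i < k; ++i) {
        std::size_t j = pick(rng);
        if (a[j] > a[j + 1]) return 0.0;                      // found a descent
    }
    return 1.0 - std::pow(0.5, static_cast<double>(k));
}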

Algorithm - How to select one number from each column in an array so that their sum is as close as possible to a particular value

I have an m x n matrix of real numbers. I want to choose a single value from each column such that the sum of my selected values is as close as possible to a pre-specified total.
I am not an experienced programmer (although I have an experienced friend who will help). I would like to achieve this using Matlab, Mathematica or c++ (MySQL if necessary).
The code only needs to run a few times, once every few days - it does not necessarily need to be optimised. I will have 16 columns and about 12 rows.
Normally I would suggest dynamic programming, but there are a few features of this situation suggesting an alternative approach. First, the performance demands are light; this program will be run only a couple times, and it doesn't sound as though a running time on the order of hours would be a problem. Second, the matrix is fairly small. Third, the matrix contains real numbers, so it would be necessary to round and then do a somewhat sophisticated search to ensure that the optimal possibility was not missed.
Instead, I'm going to suggest the following semi-brute-force approach. 12**16 ~ 1.8e17, the total number of possible choices, is too many, but 12**9 ~ 5.2e9 is doable with brute force, and 12**7 ~ 3.6e7 fits comfortably in memory. Compute all possible choices for the first seven columns. Sort these possibilities by total. For each possible choice for the last nine columns, use an efficient search algorithm to find the best mate among the first seven. (If you have a lot of memory, you could try eight and eight.)
I would attempt a first implementation in C++, using std::sort and std::lower_bound from the <algorithm> standard header. Measure it; if it's too slow, then try an in-memory B+-tree (does Boost have one?).
I spent some more time thinking about how to implement what I wrote above in the simplest way possible. Here's an approach that will work well for a 12 by 16 matrix on a 64-bit machine with roughly 4 GB of memory.
The number of choices for the first eight columns is 12**8. Each choice is represented by a 4-byte integer between 0 and 12**8 - 1. To decode a choice index i, the row for the first column is given by i % 12. Update i /= 12;. The row for the second column now is given by i % 12, et cetera.
A vector holding all choices requires roughly 12**8 * 4 bytes, or about 1.6 GB. Two such vectors require 3.2 GB. Prepare one for the first eight columns and one for the last eight. Sort them by the sum of the entries that they indicate. Use saddleback search to find the best combination. (Initialize an iterator into the first vector and a reverse iterator into the second. While neither iterator is at its end, compare the current combination against the current best and update the current best if necessary. If the current combination sums to less than the target, increment the first iterator. If the sum is greater than the target, increment the second iterator.)
I would estimate that this requires less than 50 lines of C++.
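For illustration, here is a compact C++ sketch of that scheme (not the answer's exact code: it stores 4-byte float sums instead of choice indices, fills the matrix with zeros, and skips decoding which rows were picked):
#include <algorithm>
#include <cmath>
#include <cstddef>
#include <iostream>
#include <vector>

int main() {
    const int ROWS = 12, COLS = 16, HALF = COLS / 2;
    std::vector<std::vector<float>> m(ROWS, std::vector<float>(COLS, 0.0f));
    // ... fill m with the real matrix; left as zeros in this sketch ...
    const double target = 100.0;

    // Enumerate every choice for HALF consecutive columns as a base-12 index
    // and record the resulting sum (4 bytes each, roughly 1.7 GB per half).
    auto half_sums = [&](int first_col) {
        std::size_t count = 1;
        for (int c = 0; c < HALF; ++c) count *= ROWS;
        std::vector<float> sums(count);
        for (std::size_t idx = 0; idx < count; ++idx) {
            std::size_t i = idx;
            double s = 0.0;
            for (int c = 0; c < HALF; ++c) { s += m[i % ROWS][first_col + c]; i /= ROWS; }
            sums[idx] = static_cast<float>(s);
        }
        std::sort(sums.begin(), sums.end());
        return sums;
    };

    std::vector<float> left = half_sums(0), right = half_sums(HALF);

    // Saddleback scan: one index walks left upward, the other walks right downward.
    double best = left.front() + right.back();
    std::size_t i = 0, j = right.size();
    while (i < left.size() && j > 0) {
        double s = static_cast<double>(left[i]) + right[j - 1];
        if (std::fabs(s - target) < std::fabs(best - target)) best = s;
        if (s < target) ++i; else --j;
    }
    std::cout << "closest achievable sum: " << best << '\n';
}
Storing plain sums keeps the sketch short; the answer's plan of storing 4-byte choice indices additionally lets you recover which row was picked in each column.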
Without knowing the range of values that might fill the arrays, how about something generic like this:
1. Divide the remaining target by the number of remaining columns.
2. Pick the number from the next column closest to that value and subtract it from the remaining target.
3. Repeat from 1 until a number has been picked from every column.
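A minimal sketch of that heuristic, assuming the matrix is handed over column by column (each inner vector is one column) and the remaining target is updated after each pick:
#include <cmath>
#include <cstddef>
#include <vector>

// Greedy heuristic: at each column, aim for an equal share of whatever
// target remains. Not optimal, but trivially fast.
std::vector<double> greedy_pick(const std::vector<std::vector<double>>& columns,
                                double target) {
    std::vector<double> chosen;
    double remaining = target;
    for (std::size_t c = 0; c < columns.size(); ++c) {
        double want = remaining / static_cast<double>(columns.size() - c);
        double best = columns[c].front();
        for (double v : columns[c])
            if (std::fabs(v - want) < std::fabs(best - want)) best = v;
        chosen.push_back(best);
        remaining -= best;
    }
    return chosen;
}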

How does the HyperLogLog algorithm work?

I've been learning about different algorithms in my spare time recently, and one that I came across which appears to be very interesting is called the HyperLogLog algorithm - which estimates how many unique items are in a list.
This was particularly interesting to me because it brought me back to my MySQL days, when I saw that "Cardinality" value (which, until recently, I always assumed was calculated, not estimated).
So I know how to write an algorithm in O(n) that will calculate how many unique items are in an array. I wrote this in JavaScript:
function countUniqueAlgo1(arr) {
    var Table = {};
    var numUnique = 0;
    var numDataPoints = arr.length;
    for (var j = 0; j < numDataPoints; j++) {
        var val = arr[j];
        if (Table[val] != null) {
            continue;
        }
        Table[val] = 1;
        numUnique++;
    }
    return numUnique;
}
But the problem is that my algorithm, while O(n), uses a lot of memory (storing values in Table).
I've been reading this paper about how to count duplicates in a list in O(n) time and using minimal memory.
It explains that by hashing and counting bits or something one can estimate within a certain probability (assuming the list is evenly distributed) the number of unique items in a list.
I've read the paper, but I can't seem to understand it. Can someone give a more layperson's explanation? I know what hashes are, but I don't understand how they are used in this HyperLogLog algorithm.
The main trick behind this algorithm is that if you, observing a stream of random integers, see an integer whose binary representation starts with some known prefix, there is a higher chance that the cardinality of the stream is 2^(size of the prefix).
That is, in a random stream of integers, ~50% of the numbers (in binary) start with "1", 25% start with "01", 12.5% start with "001". This means that if you observe a random stream and see a "001", there is a higher chance that this stream has a cardinality of 8.
(The prefix "00..1" has no special meaning. It's there just because it's easy to find the most significant bit in a binary number in most processors)
Of course, if you observe just one integer, the chance this value is wrong is high. That's why the algorithm divides the stream into "m" independent substreams and keeps the maximum length of a seen "00...1" prefix for each substream. Then it estimates the final value by taking the mean of the substream values.
That's the main idea of this algorithm. There are some missing details (the correction for low estimate values, for example), but it's all well written in the paper. Sorry for the terrible English.
A HyperLogLog is a probabilistic data structure. It counts the number of distinct elements in a list. But in comparison to a straightforward way of doing it (having a set and adding elements to the set) it does this in an approximate way.
Before looking at how the HyperLogLog algorithm does this, one has to understand why you need it. The problem with the straightforward way is that it consumes O(distinct elements) of space. Why is there big O notation here instead of just the number of distinct elements? Because elements can be of different sizes: one element could be the number 1, another the string "is this big string". So if you have a huge list (or a huge stream of elements), it will take a lot of memory.
Probabilistic Counting
How can one get a reasonable estimate of a number of unique elements? Assume that you have a string of length m which consists of {0, 1} with equal probability. What is the probability that it will start with 0, with 2 zeros, with k zeros? It is 1/2, 1/4 and 1/2^k. This means that if you have encountered a string starting with k zeros, you have approximately looked through 2^k elements. So this is a good starting point. Having a list of elements that are evenly distributed between 0 and 2^k - 1 you can count the maximum number of the biggest prefix of zeros in binary representation and this will give you a reasonable estimate.
The problem is that the assumption of having numbers evenly distributed from 0 to 2^k - 1 is too hard to achieve (the data we encounter is mostly not numbers, is almost never evenly distributed, and can lie between any values). But using a good hashing function you can assume that the output bits are evenly distributed, and most hashing functions have outputs between 0 and 2^k - 1 (SHA1 gives you values between 0 and 2^160). So what we have achieved so far is that we can estimate up to 2^k unique elements by storing only one number of size log(k) bits: the longest run of leading zeros seen. The downside is that we have a huge variance in our estimate. A cool thing is that we have almost recreated the 1984 probabilistic counting paper (it is a little bit smarter with the estimate, but still we are close).
LogLog
Before moving further, we have to understand why our first estimate is not that great. The reason behind it is that one random occurrence of an element with a long 0-prefix can spoil everything. One way to improve it is to use many hash functions, count the max for each of the hash functions and in the end average them out. This is an excellent idea, which will improve the estimate, but the LogLog paper used a slightly different approach (probably because hashing is kind of expensive).
They used one hash but divided it into two parts. One is called a bucket (the total number of buckets is 2^x) and the other is basically the same as our hash. It was hard for me to get what was going on, so I will give an example. Assume you have two elements and your hash function, which gives values from 0 to 2^10, produced 2 values: 344 and 387. You decided to have 16 buckets. So you have:
0101 011000 bucket 5 will store 1
0110 000011 bucket 6 will store 4
By having more buckets you decrease the variance (you use slightly more space, but it is still tiny). Using math skills they were able to quantify the error (which is 1.3/sqrt(number of buckets)).
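A small C++ sketch of that bucket update (a LogLog-style register array rather than the full HyperLogLog estimator; 4 bucket bits as in the 16-bucket example, applied to 32-bit hashes):
#include <cstddef>
#include <cstdint>
#include <vector>

struct LogLogRegisters {
    static constexpr int BUCKET_BITS = 4;   // 2^4 = 16 buckets, as in the example
    std::vector<std::uint8_t> max_zeros =
        std::vector<std::uint8_t>(std::size_t(1) << BUCKET_BITS, 0);

    void add(std::uint32_t hash) {
        std::uint32_t bucket = hash >> (32 - BUCKET_BITS);   // top bits pick the bucket
        std::uint32_t rest   = hash << BUCKET_BITS;          // remaining bits, left-aligned
        std::uint8_t zeros = 0;                              // length of the leading 0-run
        while (zeros < 32 - BUCKET_BITS && !(rest & 0x80000000u)) { ++zeros; rest <<= 1; }
        if (zeros > max_zeros[bucket]) max_zeros[bucket] = zeros;
    }
};
Feeding add(hash(item)) for every item leaves each bucket with its largest observed run of leading zeros; LogLog then turns the registers into an estimate via 2 raised to their average (times the bucket count and a bias constant), whereas HyperLogLog replaces that averaging with a harmonic mean.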
HyperLogLog
HyperLogLog does not introduce any new ideas, but mostly uses a lot of math to improve the previous estimate. Researchers have found that if you remove 30% of the biggest numbers from the buckets you significantly improve the estimate. They also used another algorithm for averaging numbers. The paper is math-heavy.
And I want to finish with a recent paper, which shows an improved version of hyperLogLog algorithm (up until now I didn't have time to fully understand it, but maybe later I will improve this answer).
The intuition is that if your input is a large set of random numbers (e.g. hashed values), they should distribute evenly over a range. Let's say the range is up to 10 bits, representing values up to 1024. Then observe the minimum value. Let's say it is 10. Then the cardinality will be estimated to be about 100 (10 × 100 ≈ 1024).
Read the paper for the real logic of course.
Another good explanation with sample code can be found here:
Damn Cool Algorithms: Cardinality Estimation - Nick's Blog

finding a number appearing again among numbers stored in a file

Say I have 10 billion numbers stored in a file. How would I find the number that has already appeared once previously?
Well, I can't just populate billions of numbers into an array at a stretch and then keep a simple nested loop to check if a number has appeared previously.
How would you approach this problem?
Thanks in advance :)
I had this as an interview question once.
Here is an algorithm that is O(N)
Use a hash table. Sequentially store pointers to the numbers, where the hash key is computed from the number value. Once you have a collision, you have found your duplicate.
Author Edit:
Below, @Phimuemue makes the excellent point that 4-byte integers have a fixed bound before a collision is guaranteed; that is 2^32, or approx. 4 GB. When considered alongside the conversation accompanying this answer, worst-case memory consumption by this algorithm is dramatically reduced.
Furthermore, using the bit array as described below can reduce memory consumption to 1/8th of that, 512 MB. On many machines, this computation is now possible without considering either a persistent hash or the less-performant sort-first strategy.
Now, longer numbers or double-precision numbers are less-effective scenarios for the bit array strategy.
Phimuemue Edit:
Of course one needs to take a somewhat "special" hash table:
Take a hash table consisting of 2^32 bits. Since the question asks about 4-byte integers, there are at most 2^32 different values, i.e. one bit for each possible number. 2^32 bits = 512 MB.
So now one just has to determine the location of the corresponding bit and set it. If one encounters a bit which is already set, the number occurred in the sequence already.
The important question is whether you want to solve this problem efficiently, or whether you want to solve it accurately.
If you truly have 10 billion numbers and just one single duplicate, then you are in a "needle in the haystack" type of situation. Intuitively, short of a very grimy and unstable solution, there is no hope of solving this without storing a significant amount of the numbers.
Instead, turn to probabilistic solutions, which have been used in most any practical application of this problem (in network analysis, what you are trying to do is look for mice, i.e., elements which appear very infrequently in a large data set).
A possible solution, which can be made to find exact results: use a sufficiently high-resolution Bloom filter. Either use the filter to determine if an element has already been seen, or, if you want perfect accuracy, use (as kbrimington suggested you use a standard hash table) the filter to, eh, filter out elements which you can't possibly have seen and, on a second pass, determine the elements you actually see twice.
And if your problem is slightly different---for instance, you know that you have at least 0.001% elements which repeat themselves twice, and you would like to find out how many there are approximately, or you would like to get a random sample of such elements---then a whole score of probabilistic streaming algorithms, in the vein of Flajolet & Martin, Alon et al., exist and are very interesting (not to mention highly efficient).
Read the file once, creating a hash table that stores the number of times you encounter each item. But wait! Instead of using the item itself as a key, you use a hash of the item itself, for example the least significant bits, let's say 20 bits (about 1M distinct keys).
After the first pass, all items whose counter is > 1 may point to a duplicated item, or be a false positive. Rescan the file, consider only items that may lead to a duplicate (by looking up each item in the first table), and build a new hash table using the real values as keys now, storing the count again.
After the second pass, items with count > 1 in the second table are your duplicates.
This is still O(n), just twice as slow as a single pass.
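A rough C++ sketch of those two passes (the file name, the text format and the 20-bit truncation are assumptions for illustration):
#include <cstddef>
#include <cstdint>
#include <fstream>
#include <iostream>
#include <unordered_map>
#include <vector>

int main() {
    const std::uint32_t MASK = (1u << 20) - 1;             // keep the low 20 bits
    std::vector<std::uint32_t> coarse(std::size_t(1) << 20, 0);

    std::ifstream in("numbers.txt");                        // assumed input file
    unsigned x;
    while (in >> x) ++coarse[x & MASK];                     // pass 1: count by truncated key

    std::unordered_map<unsigned, std::uint64_t> exact;
    in.clear();
    in.seekg(0);
    while (in >> x)
        if (coarse[x & MASK] > 1) ++exact[x];               // pass 2: only candidate keys

    for (const auto& kv : exact)
        if (kv.second > 1)
            std::cout << kv.first << " appears " << kv.second << " times\n";
}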
How about:
Sort the input using some algorithm which allows only a portion of the input to be in RAM at a time (external merge sort, for example).
Seek duplicates in the output of the 1st step -- you'll need space for just 2 elements of the input in RAM at a time to detect repetitions (see the sketch below).
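A sketch of step 2, assuming step 1 produced a sorted text file of numbers ("numbers.sorted.txt" is an assumed name):
#include <fstream>
#include <iostream>

// After an external sort, duplicates sit next to each other, so two values
// in RAM at a time are enough to spot them.
int main() {
    std::ifstream in("numbers.sorted.txt");
    unsigned long long prev, cur;
    if (!(in >> prev)) return 0;                 // empty file
    while (in >> cur) {
        if (cur == prev) std::cout << "duplicate: " << cur << '\n';
        prev = cur;
    }
}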
Finding duplicates
Noting that it's a 32-bit integer means that you're going to have a large number of duplicates, since a 32-bit int can only represent about 4.3 billion different numbers and you have "10 billions".
If you were to use a tightly packed set you could represent all the possibilities in 512 MB, which can easily fit into current RAM sizes. This, as a start, pretty easily allows you to recognise whether a number is duplicated or not.
Counting Duplicates
If you need to know how many times a number is duplicated, you're getting into having a hash map that contains only duplicates (using the first 512 MB of RAM to tell efficiently IF a number should be in the map or not). In a worst-case scenario with a large spread, you're not going to be able to fit that into RAM.
Another approach, if the numbers have a fairly even spread of duplicates, is to use a tightly packed array with 2-8 bits per value, taking about 1-4 GB of RAM and allowing you to count up to 255 occurrences of each number.
It's going to be a hack, but it's doable.
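As an illustration of the packed-counter variant, here is a C++ sketch with 2 bits per possible 32-bit value (1 GB in total), giving a saturating count of 0, 1, 2 or "3 or more":
#include <cstddef>
#include <cstdint>
#include <vector>

struct PackedCounts {
    // 2^32 values * 2 bits = 2^30 bytes = 1 GB, zero-initialized.
    std::vector<std::uint8_t> bytes = std::vector<std::uint8_t>(std::size_t(1) << 30, 0);

    unsigned get(std::uint32_t n) const {
        return (bytes[n >> 2] >> ((n & 3u) * 2)) & 3u;
    }
    void bump(std::uint32_t n) {                 // saturating increment
        unsigned c = get(n);
        if (c < 3) {
            unsigned shift = (n & 3u) * 2;
            bytes[n >> 2] = static_cast<std::uint8_t>(
                (bytes[n >> 2] & ~(3u << shift)) | ((c + 1u) << shift));
        }
    }
};
One pass over the file calls bump on every value; anything whose get returns 2 or more appeared more than once.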
You need to implement some sort of looping construct to read the numbers one at a time since you can't have them in memory all at once.
How? Oh, what language are you using?
You have to read each number and store it into a hashmap, so that if a number occurs again, it will automatically get discarded.
If the possible range of numbers in the file is not too large, then you can use a bit array to indicate whether each number in the range has appeared.
If the range of the numbers is small enough, you can use a bit field to store if it is in there - initialize that with a single scan through the file. Takes one bit per possible number.
With large range (like int) you need to read through the file every time. File layout may allow for more efficient lookups (i.e. binary search in case of sorted array).
If time is not an issue and RAM is, you could read each number and then compare it to each subsequent number by reading from the file without storing it in RAM. It will take an incredible amount of time but you will not run out of memory.
I have to agree with kbrimington and his idea of a hash table, but first of all, I would like to know the range of the numbers that you're looking for. Basically, if you're looking for 32-bit numbers, you would need a single array of 4,294,967,296 bits. You start by setting all bits to 0 and every number in the file will set a specific bit. If the bit is already set then you've found a number that has occurred before. Do you also need to know how often they occur? Still, it would need 536,870,912 bytes at least (512 MB). It's a lot and would require some crafty programming skills. Depending on your programming language and personal experience, there would be hundreds of solutions to solve it this way.
Had to do this a long time ago.
What I did... I sorted the numbers as much as I could (had a time-constraint limit) and arranged them like this while sorting:
1 to 10, 12, 16, 20 to 50, 52 would become..
[1,10], 12, 16, [20,50], 52, ...
Since in my case I had hundreds of numbers that were very "close" ($a-$b=1), from a few million sets I had very low memory usage.
P.S. Another way to store them:
1, -9, 12, 16, 20, -30, 52,
(this works when there are no numbers lower than zero)
After that I applied various algorithms (described by other posters here) on the reduced data set.
#include <stdio.h>
#include <stdlib.h>

/* Macro is overly general but I left it 'cos it's convenient */
#define BITOP(a,b,op) \
 ((a)[(size_t)(b)/(8*sizeof *(a))] op (size_t)1<<((size_t)(b)%(8*sizeof *(a))))

int main(void)
{
    unsigned x = 0;
    int dup = 0;
    /* One bit per possible unsigned value: 2^32 bits = 512 MB, zero-initialized. */
    size_t *seen = calloc((size_t)1 << (8 * sizeof(unsigned) - 3), 1);
    if (!seen) return 1;

    while (scanf("%u", &x) > 0) {
        if (BITOP(seen, x, &)) { dup = 1; break; }  /* bit already set: duplicate */
        BITOP(seen, x, |=);                          /* mark x as seen */
    }
    if (dup) printf("duplicate is %u\n", x);
    else printf("no duplicate\n");
    free(seen);
    return 0;
}
This is a simple problem that can be solved very easily (several lines of code) and very fast (several minutes of execution) with the right tools.
My personal approach would be to use MapReduce:
MapReduce: Simplified Data Processing on Large Clusters
I'm sorry for not going into more detail, but once you get familiar with the concept of MapReduce it becomes very clear how to target the solution.
Basically we are going to implement two simple functions:
Map(key, value)
Reduce(key, values[])
so all in all:
open file and iterate through the data
for each number -> Map(number, line_index)
in the reduce we will get the number as the key and the total occurrences as the number of values (including their positions in the file)
so in Reduce(key, values[]), if the number of values > 1 then it's a duplicate number
print the duplicates: number, line_index1, line_index2, ... (see the sketch below)
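To make the data flow concrete, here is a single-machine simulation of that Map/Reduce pair in C++ (map_fn and reduce_fn are illustrative names; a real framework would shard the map output and run the reducers in parallel):
#include <cstddef>
#include <iostream>
#include <unordered_map>
#include <vector>

using Number = unsigned;
using LineIndex = std::size_t;

void map_fn(Number n, LineIndex line,
            std::unordered_map<Number, std::vector<LineIndex>>& shuffled) {
    shuffled[n].push_back(line);                     // emit (number, line_index)
}

void reduce_fn(Number n, const std::vector<LineIndex>& positions) {
    if (positions.size() > 1) {                      // more than one value: duplicate
        std::cout << n << " at lines:";
        for (LineIndex p : positions) std::cout << ' ' << p;
        std::cout << '\n';
    }
}

int main() {
    std::unordered_map<Number, std::vector<LineIndex>> shuffled;
    std::vector<Number> input = {3, 7, 3, 9, 7, 7};  // stand-in for the file contents
    for (std::size_t i = 0; i < input.size(); ++i) map_fn(input[i], i, shuffled);
    for (const auto& kv : shuffled) reduce_fn(kv.first, kv.second);
}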
Again, this approach can result in very fast execution, depending on how your MapReduce framework is set up; it is highly scalable and very reliable, and there are many different implementations of MapReduce in many languages.
There are several top companies offering ready-built cloud computing environments, like Google, Microsoft Azure, Amazon AWS, ...
Or you can build your own and set up a cluster with any provider offering virtual computing environments, paying very low costs by the hour.
good luck :)
Another, simpler approach could be to use Bloom filters.
Implement a BitArray such that the i-th entry of this array corresponds to the numbers 8*i to 8*i + 7, i.e. the first bit of entry i is 1 if we have already seen 8*i, the second bit is 1 if we have already seen 8*i + 1, and so on.
Initialize this bit array with size Integer.Max/8, and whenever you see a number k, set the (k % 8)-th bit of entry k / 8. If this bit is already 1, it means you have seen this number already.
