Number of Distinct Subarrays - arrays

I want to find an algorithm to count the number of distinct subarrays of an array.
For example, in the case of A = [1,2,1,2],
the number of distinct subarrays is 7:
{ [1] , [2] , [1,2] , [2,1] , [1,2,1] , [2,1,2], [1,2,1,2]}
and in the case of B = [1,1,1], the number of distinct subarrays is 3:
{ [1] , [1,1] , [1,1,1] }
A sub-array is a contiguous subsequence, or slice, of an array. Distinct means different contents; for example:
[1] from A[0:1] and [1] from A[2:3] are not distinct.
and similarly:
B[0:1], B[1:2], B[2:3] are not distinct.

Construct a suffix tree for this array, then add together the lengths of all edges in the tree.
The time needed to construct the suffix tree is O(n) with a proper algorithm (Ukkonen's or McCreight's). The time needed to traverse the tree and add up the edge lengths is also O(n).
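The suffix tree itself is not shown here; as a sketch of the same idea, here is the closely related suffix automaton (also O(n)), in which the number of distinct subarrays is the sum over states of len(v) - len(link(v)):

def count_distinct_subarrays(a):
    # Suffix automaton over the array; transitions are dicts keyed by element value.
    sa_len, sa_link, sa_next = [0], [-1], [{}]
    last = 0
    for c in a:
        cur = len(sa_len)
        sa_len.append(sa_len[last] + 1); sa_link.append(-1); sa_next.append({})
        p = last
        while p != -1 and c not in sa_next[p]:
            sa_next[p][c] = cur
            p = sa_link[p]
        if p == -1:
            sa_link[cur] = 0
        else:
            q = sa_next[p][c]
            if sa_len[p] + 1 == sa_len[q]:
                sa_link[cur] = q
            else:
                clone = len(sa_len)
                sa_len.append(sa_len[p] + 1); sa_link.append(sa_link[q]); sa_next.append(dict(sa_next[q]))
                while p != -1 and sa_next[p].get(c) == q:
                    sa_next[p][c] = clone
                    p = sa_link[p]
                sa_link[q] = sa_link[cur] = clone
        last = cur
    # Each state contributes len(v) - len(link(v)) distinct subarrays.
    return sum(sa_len[v] - sa_len[sa_link[v]] for v in range(1, len(sa_len)))

print(count_distinct_subarrays([1, 2, 1, 2]))  # 7
print(count_distinct_subarrays([1, 1, 1]))     # 3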

Edit: I thought about how to reduce the number of iterations/comparisons.
I found a way to do it: when you retrieve a sub-array of size n, each sub-array of size smaller than n will already have been added.
Here is the updated code.
List<Integer> A = new ArrayList<Integer>();
A.add(1);
A.add(2);
A.add(1);
A.add(2);
System.out.println("global list to study: " + A);

// global list of unique sub-arrays found so far
List<List<Integer>> listOfUniqueList = new ArrayList<List<Integer>>();

// iterate on the first position in the list, starting at 0
for (int initialPos = 0; initialPos < A.size(); initialPos++) {
    // iterate on the sub-list size, starting with the full remaining list and decreasing
    for (int currentListSize = A.size() - initialPos; currentListSize > 0; currentListSize--) {
        // initialize the current sub-list
        List<Integer> currentList = new ArrayList<Integer>();
        // copy the corresponding elements of the global list
        for (int i = 0; i < currentListSize; i++) {
            currentList.add(A.get(initialPos + i));
        }
        // ensure uniqueness
        if (!listOfUniqueList.contains(currentList)) {
            listOfUniqueList.add(currentList);
        }
    }
}
System.out.println("list retrieved: " + listOfUniqueList);
System.out.println("size of list retrieved: " + listOfUniqueList.size());
global list to study: [1, 2, 1, 2]
list retrieved: [[1, 2, 1, 2], [1, 2, 1], [1, 2], [1], [2, 1, 2], [2, 1], [2]]
size of list retrieved: 7
With a list containing the same pattern many times, the number of iterations and comparisons stays quite low.
For your example [1, 2, 1, 2], the line if (!listOfUniqueList.contains(currentList)) { is executed 10 times. It only rises to 36 for the input [1, 2, 1, 2, 1, 2, 1, 2], which contains 15 distinct sub-arrays.

You could trivially make a set of the sub-arrays and count them, but I'm not certain it is the most efficient way, as it is O(n^2).
In Python that would be something like:
subs = [tuple(A[i:j]) for i in range(0, len(A)) for j in range(i + 1, len(A) + 1)]
uniqSubs = set(subs)
which gives you:
set([(1, 2), (1, 2, 1), (1,), (1, 2, 1, 2), (2,), (2, 1), (2, 1, 2)])
The double loop in the comprehension makes the O(n²) complexity clear.
Edit
Apparently there is some discussion about the complexity. Creating subs is O(n^2), as there are O(n^2) items.
Creating a set from a list is O(m), where m is the size of the list, m being O(n^2) in this case, since adding to a set is amortized O(1).
The overall complexity is therefore O(n^2).

Right, my first answer was a bit of a blonde moment.
I guess the answer would be to generate them all and then remove duplicates. Or, if you are using a language like Java with a set object, make all the arrays and add them to a set (use List<Integer> rather than int[], since Java arrays compare by identity). Sets only contain one instance of each element and automatically remove duplicates, so you can just take the size of the set at the end.

I can think of 2 ways...
The first is to compute some sort of hash for each sub-array and then add it to a set.
If, on adding, the hash matches that of an existing array, do an element-by-element comparison... and log it, so that you know your hash algorithm isn't good enough...
The second is to use some sort of probable match and then drill down from there...
If the number of elements is the same and the total of the elements added together is the same, then check element by element.
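A rough Python sketch of the first idea, using Python's built-in tuple hash as a stand-in for "some sort of hash" (names are illustrative):

def count_distinct_subarrays_hashed(a):
    buckets = {}              # hash -> list of distinct sub-arrays seen with that hash
    count = 0
    for i in range(len(a)):
        for j in range(i + 1, len(a) + 1):
            sub = tuple(a[i:j])
            h = hash(sub)
            bucket = buckets.setdefault(h, [])
            # element-by-element comparison only happens against same-hash candidates
            if sub not in bucket:
                bucket.append(sub)
                count += 1
    return count

print(count_distinct_subarrays_hashed([1, 2, 1, 2]))  # 7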

Create an array of pairs, where each pair stores the value of an array element and its index.
pair[i] = (A[i],i);
Sort the pairs in increasing order of A[i], breaking ties by decreasing order of i.
Consider example A = [1,3,6,3,6,3,1,3];
pair array after sorting will be pair = [(1,6),(1,0),(3,7),(3,5),(3,3),(3,1),(6,4),(6,2)]
pair[0] has the element at index 6. From index 6 we can form two sub-arrays, [1] and [1,3]. So ANS = 2.
Now take each consecutive pair of entries one by one.
Taking pair[0] and pair[1]:
pair[1] has index 0. There are 8 sub-arrays beginning at index 0, but two of them, [1] and [1,3], are already counted. To remove them, we compute the longest common prefix of the suffixes for pair[0] and pair[1]. The longest common prefix of the suffixes beginning at indices 0 and 6 has length 2, i.e. [1,3].
So the new distinct sub-arrays are [1,3,6] up to [1,3,6,3,6,3,1,3], i.e. 6 sub-arrays.
The new value of ANS is 2 + 6 = 8.
So for pair[i] and pair[i+1]:
ANS = ANS + (number of sub-arrays beginning at pair[i+1]'s index) - (length of the longest common prefix).
The sorting part takes O(n log n).
Iterating over each consecutive pair is O(n), and for each iteration finding the longest common prefix takes O(n), making the whole iteration part O(n^2). It's the best I could get.
You can see that we don't actually need the pairs for this: the first value of each pair, the element's value, is not required once the sorting is done. I used it for better understanding; you can always skip it.
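A Python sketch of this counting scheme; note that it sorts the suffixes fully lexicographically (rather than only by the (value, index) pairs) so that subtracting the common prefix of consecutive entries is sufficient:

def count_distinct_subarrays_lcp(a):
    n = len(a)
    # Sort suffix start positions by the suffix contents (done naively here).
    order = sorted(range(n), key=lambda i: a[i:])

    def lcp(i, j):
        k = 0
        while i + k < n and j + k < n and a[i + k] == a[j + k]:
            k += 1
        return k

    ans = n - order[0]                        # all sub-arrays starting at the first suffix
    for prev, cur in zip(order, order[1:]):
        ans += (n - cur) - lcp(prev, cur)     # new sub-arrays minus the shared prefix
    return ans

print(count_distinct_subarrays_lcp([1, 2, 1, 2]))  # 7
print(count_distinct_subarrays_lcp([1, 3, 6, 3, 6, 3, 1, 3]))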

Related

How to find pairs in an array that all sum to the same number?

Given an array, for example: [1, 2, 3, 2], how can I write an algorithm that pairs the numbers together (given that the input array is even) so that all pairs have the same sum?
For the example, the pairs would be: [[1,3],[2,2]] because the sum of each pair is the same (4).
For this, you can sort the array and pair the elements, taking one from the beginning and one from the end. I used the sorted() function, but writing a custom merge sort for this would be more efficient, since I had to iterate over the list again to pair the elements. Complexity is O(n*log(n)).
def equal_pair(arr):
    sorted_array = sorted(arr)
    equal_pairs = []
    pair_sum = (2 * sum(arr)) / len(arr)
    for i in range(len(arr) // 2):
        if sorted_array[i] + sorted_array[-(i + 1)] == pair_sum:
            equal_pairs.append([sorted_array[i], sorted_array[-(i + 1)]])
        else:
            return "Equal Pairs not possible "
    return equal_pairs

print(equal_pair([1, 2, 3, 2]))

For each element, count how many given subarrays contain it

While I was solving a problem in a coding contest, I found that I needed to do this: Given the pairs (i, j) indicating the indexes of a subarray; for each element, count how many subarrays contain it. For example:
We have the array [7, -2, -7, 0, 6], and the pairs are (0, 2), (1, 4), (2, 3), (0, 3), then the result array will be [2, 3, 4, 3, 1], since the first element is in the subarrays (0, 2), (0, 3), the second one is in (0, 2), (1, 4), (0, 3), etc.
One way of doing it would of course be counting manually, using an array and a counter, but that will likely give me TLE because the array size and the number of pairs are too big. I've also read somewhere else that you can use an extra array storing "stuff" inside it (add/subtract something for every subarray), and then traverse through that array to find the answer, but I don't remember where I read it.
Here's the original problem if you also have any improvements on my algorithm:
Given an array of size n, for each i-th element in the array (0 <= i <= n-1), count how many balanced subarrays contain that element. A balanced subarray is a subarray whose sum is equal to 0.
I know a way to find subarrays whose sum is equal to 0: https://www.geeksforgeeks.org/print-all-subarrays-with-0-sum/ . But the second task, which I stated above, I haven't figured out yet. If you have some ideas about this, please let me know, and thank you very much.
You could take each pair and mark a +1 at the index where it starts and a -1 just after the index where it stops (in a separate array). These represent changes in the number of ranges that overlap a certain index. Then iterate over that list to accumulate those changes into absolute numbers (the number of overlapping ranges).
Here is an implementation in JavaScript for the ranges you have given as example -- the content of the given list really is not relevant for this, only its size (n=5 in the example):
let pairs = [[0, 2], [1, 4], [2, 3], [0, 3]];
let n = 5;

// Translate pairs to +1/-1 changes in the result array
let result = Array(n + 1).fill(0); // add one more dummy entry
for (let pair of pairs) {
    result[pair[0]]++;
    result[pair[1] + 1]--; // after(!) the range
}
result.pop(); // ignore the last dummy entry

// Accumulate changes from left to right
for (let i = 1; i < n; i++) {
    result[i] += result[i - 1];
}
console.log(result);
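For the original balanced-subarray problem, the two pieces can be combined; this is a sketch, not part of the answer: zero-sum subarrays are collected with a prefix-sum hash map (as in the article linked in the question), and the difference-array trick above then turns the (start, end) pairs into per-element counts. Note the pair list itself can still be O(n²) in the worst case.

from collections import defaultdict

def count_balanced_containing(arr):
    n = len(arr)
    # 1. Find all zero-sum (balanced) subarrays via prefix sums.
    positions = defaultdict(list)
    positions[0].append(-1)          # empty prefix
    pairs, s = [], 0
    for j, x in enumerate(arr):
        s += x
        for i in positions[s]:       # arr[i+1 .. j] sums to 0
            pairs.append((i + 1, j))
        positions[s].append(j)
    # 2. Difference-array trick: +1 at the start, -1 just after the end.
    delta = [0] * (n + 1)
    for start, end in pairs:
        delta[start] += 1
        delta[end + 1] -= 1
    # 3. Prefix-sum the changes to get the count for each index.
    result, running = [], 0
    for i in range(n):
        running += delta[i]
        result.append(running)
    return result

print(count_balanced_containing([7, -2, -7, 0, 6]))  # [0, 0, 0, 1, 0]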

Count elements in 1st array less than or equal than elements in 2nd array python

I have an array A of 21,381,120 elements ranging over [0, 1]. I need to construct a new array B in which element i contains the number of elements in A less than or equal to A[i].
My attempt:
A = np.random.random(10000) # for reproducibility
g = np.sort(A)
B = [np.sum(g<=element) for element in A]
I am still using a for loop, which takes too much time. Since I have to do this several times, I was wondering if there is a better way to do it.
EDIT
I gave an example of the array A for reproducibility. This does what is expected. But I need it to be faster (for arrays having 2e9 elements).
For instance if:
A = [0.1,0.01,0.3,0.5,1]
I expect the output to be
B = [2, 1, 3, 4, 5]
You could use binary search to speed up searching in a sorted array; numpy provides this as np.searchsorted.
import numpy as np

A = np.random.rand(10000) # for reproducibility
g = np.sort(A)
B = [np.searchsorted(g, element, side='right') for element in A]  # side='right' counts elements <= element
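Since np.searchsorted also accepts an array of query values, the Python-level loop can be dropped entirely; a minimal sketch:

import numpy as np

A = np.random.rand(10000)
g = np.sort(A)
B = np.searchsorted(g, A, side='right')  # side='right' counts elements <= A[i]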
Looks like sorting is the way to go, because in a sorted array A the number of elements less than or equal to A[i] is simply i + 1 when there are no duplicates.
However, if an element is repeated, you'll also have to count its repetitions to the right of A[i]:
A = [1,2,3,4,4,4,5,6]
^^^^^ A[3] == A[4] == A[5]
Here, the number of elements <= A[3] is 3 + <number of repeated 4's>. Maybe you could roll your own sorting algorithm that keeps track of such repetitions, or count the repetitions before sorting the array.
Then the final formula would be (with k the index of the first occurrence of the value):
N(<= A[k]) = k + <number of elements equal to A[k]>
So the speed of your code would mainly depend on the speed of the sorting algorithm.

Ruby - Efficient method of checking if sum of two numbers in array equal a value

Here's my problem: I have a list of 28,123 numbers to iterate through and an array of 6,965 other numbers, and I need to check, for each of the 28,123 numbers, whether it can be written as the sum of two numbers (possibly the same number twice) from the array. I want to put the matches in a new array or mark them as true/false. Any solutions I've come up with so far are extremely inefficient.
So, a dumbed-down version of what I want: given array = [1, 2, 5] and the numbers 1 to 5, the result would be result = [2, 3, 4], or alternatively result = [false, true, true, true, false].
I read this SE question: Check if the sum of two different numbers in an array equal a variable number? but I need something more efficient in my case, it seems, or maybe a different approach to the problem. It also doesn't handle adding a number to itself.
Any help is much appreciated!
non_abundant(n) is a function that returns the first n non_abundant numbers. It executes almost instantaneously.
My Code:
def contains_pair?(array, n)
  !!array.combination(2).detect { |a, b| a + b == n }
end

result = []
array = non_abundant(6965)

(1..28123).each do |n|
  if array.index(n) == nil
    index = array.length - 1
  else
    index = array.index(n)
  end
  puts n
  if contains_pair?(array.take(index), n)
    result << n
  end
end
numbers = [1, 2, 5]
results = (1..10).to_a

numbers_set = numbers.each_with_object({}) { |i, h| h[i] = true }

results.select do |item|
  numbers.detect do |num|
    numbers_set[item - num]
  end
end
#=> [2, 3, 4, 6, 7, 10]
You can add some optimizations by sorting your numbers and checking whether num is bigger than item/2.
The complexity is O(n*m), where n and m are the lengths of the two lists.
Another optimization: if the numbers list is much shorter than the results list (n << m), you can achieve O(n*n) complexity by calculating all possible sums in the numbers list first.
The most inefficient part of your algorithm is the fact that you are re-calculating many possible sums of combinations, 28123 times. You only need to do this once.
Here is a very simple improvement to your code:
array = non_abundant(6965)
# repeated_combination also allows a number to be added to itself
combination_sums = array.repeated_combination(2).map { |comb| comb.inject(:+) }.uniq

result = (1..28123).select do |n|
  combination_sums.include? n
end
The rest of your algorithm seems to be an attempt to compensate for that original performance mistake of re-calculating the sums - which is no longer needed.
There are further optimisations you could potentially make, such as using a binary search. But I'm guessing this improvement will already be sufficient for your needs.

Weighted random selection from array

I would like to randomly select one element from an array, but each element has a known probability of selection.
All the chances together (within the array) sum to 1.
What algorithm would you suggest as the fastest and most suitable for huge calculations?
Example:
id => chance
array[
0 => 0.8
1 => 0.2
]
For this pseudocode, the algorithm in question should, over many calls, statistically return four results with id 0 for every one result with id 1.
Compute the discrete cumulative distribution function (CDF) of your list -- or in simple terms the array of cumulative sums of the weights. Then generate a random number in the range between 0 and the sum of all weights (which might be 1 in your case), do a binary search to find this random number in your discrete CDF array, and get the value corresponding to that entry -- this is your weighted random number.
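A minimal Python sketch of this, assuming non-negative weights (the names are illustrative):

import bisect
import random

def weighted_choice(values, weights):
    # Build the discrete CDF: cumulative sums of the weights.
    cdf, running = [], 0.0
    for w in weights:
        running += w
        cdf.append(running)
    # Draw a number in [0, total weight) and binary-search it in the CDF.
    u = random.uniform(0.0, running)
    i = bisect.bisect_left(cdf, u)          # first index with cdf[i] >= u
    return values[min(i, len(values) - 1)]  # clamp in case u equals the total due to rounding

print(weighted_choice([0, 1], [0.8, 0.2]))  # 0 about 80% of the time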
The algorithm is straightforward:
rand_num = rand(0, 1)
for each element in array
    if rand_num < element.probability
        select and break
    rand_num = rand_num - element.probability
I have found this article to be the most useful at understanding this problem fully.
This stackoverflow question may also be what you're looking for.
I believe the optimal solution is to use the Alias Method (wikipedia).
It requires O(n) time to initialize, O(1) time to make a selection, and O(n) memory.
Here is the algorithm for generating the result of rolling a weighted n-sided die (from here it is trivial to select an element from a length-n array), as taken from this article.
The author assumes you have functions for rolling a fair die (floor(random() * n)) and flipping a biased coin (random() < p).
Algorithm: Vose's Alias Method
Initialization:
Create arrays Alias and Prob, each of size n.
Create two worklists, Small and Large.
Multiply each probability by n.
For each scaled probability pi:
If pi < 1, add i to Small.
Otherwise (pi ≥ 1), add i to Large.
While Small and Large are not empty: (Large might be emptied first)
Remove the first element from Small; call it l.
Remove the first element from Large; call it g.
Set Prob[l]=pl.
Set Alias[l]=g.
Set pg := (pg+pl)−1. (This is a more numerically stable option.)
If pg<1, add g to Small.
Otherwise (pg ≥ 1), add g to Large.
While Large is not empty:
Remove the first element from Large; call it g.
Set Prob[g] = 1.
While Small is not empty: This is only possible due to numerical instability.
Remove the first element from Small; call it l.
Set Prob[l] = 1.
Generation:
Generate a fair die roll from an n-sided die; call the side i.
Flip a biased coin that comes up heads with probability Prob[i].
If the coin comes up "heads," return i.
Otherwise, return Alias[i].
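For concreteness, here is a small Python sketch of the initialization and generation steps above (not from the article; list pops stand in for the worklists):

import random

def build_alias(probs):
    n = len(probs)
    prob, alias = [0.0] * n, [0] * n
    scaled = [p * n for p in probs]                 # multiply each probability by n
    small = [i for i, p in enumerate(scaled) if p < 1.0]
    large = [i for i, p in enumerate(scaled) if p >= 1.0]
    while small and large:
        l, g = small.pop(), large.pop()
        prob[l], alias[l] = scaled[l], g
        scaled[g] = (scaled[g] + scaled[l]) - 1.0   # the numerically stable update
        (small if scaled[g] < 1.0 else large).append(g)
    for g in large:                                 # leftovers are 1 up to rounding error
        prob[g] = 1.0
    for l in small:
        prob[l] = 1.0
    return prob, alias

def alias_draw(prob, alias):
    i = random.randrange(len(prob))                          # fair die roll
    return i if random.random() < prob[i] else alias[i]      # biased coin flip

prob, alias = build_alias([0.8, 0.2])
print(alias_draw(prob, alias))                               # 0 about 80% of the time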
Here is an implementation in Ruby:
def weighted_rand(weights = {})
  raise 'Probabilities must sum up to 1' unless weights.values.inject(&:+) == 1.0
  raise 'Probabilities must not be negative' unless weights.values.all? { |p| p >= 0 }
  # Do more sanity checks depending on the amount of trust in the software component using this method,
  # e.g. don't allow duplicates, don't allow non-numeric values, etc.

  # Ignore elements with probability 0
  weights = weights.reject { |k, v| v == 0.0 } # e.g. => {"a"=>0.4, "b"=>0.4, "c"=>0.2}

  # Accumulate probabilities and map them to a value
  u = 0.0
  ranges = weights.map { |v, p| [u += p, v] } # e.g. => [[0.4, "a"], [0.8, "b"], [1.0, "c"]]

  # Generate a (pseudo-)random floating point number between 0.0 (included) and 1.0 (excluded)
  u = rand # e.g. => 0.4651073966724186

  # Find the first value that has an accumulated probability greater than the random number u
  ranges.find { |p, v| p > u }.last # e.g. => "b"
end
How to use:
weights = {'a' => 0.4, 'b' => 0.4, 'c' => 0.2, 'd' => 0.0}
weighted_rand weights
What to expect roughly:
sample = 1000.times.map { weighted_rand weights }
sample.count('a') # 396
sample.count('b') # 406
sample.count('c') # 198
sample.count('d') # 0
An example in Ruby:
# each element is associated with its probability
a = {1 => 0.25, 2 => 0.5, 3 => 0.2, 4 => 0.05}

# at some point, convert to cumulative probability
acc = 0
a.each { |e, w| a[e] = acc += w }

# to select an element, pick a random number between 0 and 1 and find the first
# cumulative probability that's greater than the random number
r = rand
selected = a.find { |e, w| w > r }

p selected[0]
This can be done in O(1) expected time per sample as follows.
Compute the CDF F(i) for each element i to be the sum of probabilities less than or equal to i.
Define the range r(i) of an element i to be the interval [F(i - 1), F(i)].
For each interval [(i - 1)/n, i/n], create a bucket consisting of the list of the elements whose range overlaps the interval. This takes O(n) time in total for the full array as long as you are reasonably careful.
When you randomly sample the array, you simply compute which bucket the random number is in, and compare with each element of the list until you find the interval that contains it.
The cost of a sample is O(the expected length of a randomly chosen list) <= 2.
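A rough Python sketch of this bucketing scheme, assuming the weights are normalized (names are illustrative):

import random

def build_buckets(weights):
    n = len(weights)
    total = float(sum(weights))
    # F(i): cumulative probability up to and including element i
    cdf, running = [], 0.0
    for w in weights:
        running += w / total
        cdf.append(running)
    # bucket b covers [b/n, (b+1)/n); list every element whose range overlaps it
    buckets = [[] for _ in range(n)]
    lo = 0.0
    for i, hi in enumerate(cdf):
        first = min(int(lo * n), n - 1)
        last = min(int(hi * n), n - 1)
        for b in range(first, last + 1):
            buckets[b].append(i)
        lo = hi
    return cdf, buckets

def sample(cdf, buckets):
    u = random.random()
    for i in buckets[int(u * len(buckets))]:  # expected list length is at most about 2
        if u < cdf[i]:
            return i
    return len(cdf) - 1                       # guard against float rounding at the top end

cdf, buckets = build_buckets([0.8, 0.2])
print(sample(cdf, buckets))                   # 0 about 80% of the time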
This is PHP code I used in production:
/**
 * @return \App\Models\CdnServer
 */
protected function selectWeightedServer(Collection $servers)
{
    if ($servers->count() == 1) {
        return $servers->first();
    }

    $totalWeight = 0;
    foreach ($servers as $server) {
        $totalWeight += $server->getWeight();
    }

    // Select a random server using weighted choice
    $randWeight = mt_rand(1, $totalWeight);
    $accWeight = 0;
    foreach ($servers as $server) {
        $accWeight += $server->getWeight();
        if ($accWeight >= $randWeight) {
            return $server;
        }
    }
}
Ruby solution using the pickup gem:
require 'pickup'
chances = {0=>80, 1=>20}
picker = Pickup.new(chances)
Example:
5.times.collect {
picker.pick(5)
}
gave output:
[[0, 0, 0, 0, 0],
[0, 0, 0, 0, 0],
[0, 0, 0, 1, 1],
[0, 0, 0, 0, 0],
[0, 0, 0, 0, 1]]
If the array is small, I would give the array a length of, in this case, five and assign the values as appropriate:
array[
0 => 0
1 => 0
2 => 0
3 => 0
4 => 1
]
"Wheel of Fortune" O(n), use for small arrays only:
function pickRandomWeighted(array, weights) {
    var sum = 0;
    for (var i = 0; i < weights.length; i++) sum += weights[i];
    // subtract the weight just tested before moving on to the next index
    for (var i = 0, pick = Math.random() * sum; i < weights.length; pick -= weights[i], i++)
        if (pick - weights[i] < 0) return array[i];
}
The trick could be to sample an auxiliary array with element repetitions that reflect the probabilities.
Given the elements associated with their probability, as percentage:
h = {1 => 0.5, 2 => 0.3, 3 => 0.05, 4 => 0.05 }
auxiliary_array = h.inject([]){|memo,(k,v)| memo += Array.new((100*v).to_i,k) }
ruby-1.9.3-p194 > auxiliary_array
=> [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 3, 3, 3, 3, 3, 4, 4, 4, 4, 4]
auxiliary_array.sample
if you want to be as generic as possible, you need to calculate the multiplier based on the max number of fractional digits, and use it in the place of 100:
m = 10**h.values.collect{|e| e.to_s.split(".").last.size }.max
Another possibility is to associate, with each element of the array, a random number drawn from an exponential distribution with parameter given by the weight for that element. Then pick the element with the lowest such ‘ordering number’. In this case, the probability that a particular element has the lowest ordering number of the array is proportional to the array element's weight.
This is O(n), doesn't involve any reordering or extra storage, and the selection can be done in the course of a single pass through the array. The weights must be greater than zero, but don't have to sum to any particular value.
This has the further advantage that, if you store the ordering number with each array element, you have the option to sort the array by increasing ordering number, to get a random ordering of the array in which elements with higher weights have a higher probability of coming early (I've found this useful when deciding which DNS SRV record to pick, to decide which machine to query).
Repeated random sampling with replacement requires a new pass through the array each time; for random selection without replacement, the array can be sorted in order of increasing ordering number, and k elements can be read out in that order.
See the Wikipedia page about the exponential distribution (in particular the remarks about the distribution of the minima of an ensemble of such variates) for the proof that the above is true, and also for the pointer towards the technique of generating such variates: if T has a uniform random distribution in [0,1), then Z=-log(1-T)/w (where w is the parameter of the distribution; here the weight of the associated element) has an exponential distribution.
That is:
For each element i in the array, calculate zi = -log(T)/wi (or zi = -log(1-T)/wi), where T is drawn from a uniform distribution in [0,1), and wi is the weight of the i-th element.
Select the element which has the lowest zi.
The element i will be selected with probability wi/(w1+w2+...+wn).
See below for an illustration of this in Python, which takes a single pass through the array of weights, for each of 10000 trials.
import math, random

random.seed()

weights = [10, 20, 50, 20]
nw = len(weights)
results = [0 for i in range(nw)]

n = 10000
while n > 0:  # do n trials
    smallest_i = 0
    smallest_z = -math.log(1 - random.random()) / weights[0]
    for i in range(1, nw):
        z = -math.log(1 - random.random()) / weights[i]
        if z < smallest_z:
            smallest_i = i
            smallest_z = z
    results[smallest_i] += 1  # accumulate our choices
    n -= 1

for i in range(nw):
    print("{} -> {}".format(weights[i], results[i]))
Edit (for history): after posting this, I felt sure I couldn't be the first to have thought of it, and another search with this solution in mind shows that this is indeed the case.
In an answer to a similar question, Joe K suggested this algorithm (and also noted that someone else must have thought of it before).
Another answer to that question, meanwhile, pointed to Efraimidis and Spirakis (preprint), which describes a similar method.
I'm pretty sure, looking at it, that the Efraimidis and Spirakis is in fact the same exponential-distribution algorithm in disguise, and this is corroborated by a passing remark in the Wikipedia page about Reservoir sampling that ‘[e]quivalently, a more numerically stable formulation of this algorithm’ is the exponential-distribution algorithm above. The reference there is to a sequence of lecture notes by Richard Arratia; the relevant property of the exponential distribution is mentioned in Sect.1.3 (which mentions that something similar to this is a ‘familiar fact’ in some circles), but not its relationship to the Efraimidis and Spirakis algorithm.
I would imagine that numbers greater than or equal to 0.8 but less than 1.0 select the third element.
In other terms:
x is a random number between 0 and 1
if 0.0 <= x < 0.2 : Item 1
if 0.2 <= x < 0.8 : Item 2
if 0.8 <= x < 1.0 : Item 3
I am going to improve on https://stackoverflow.com/users/626341/masciugo answer.
Basically you make one big array where the number of times an element shows up is proportional to the weight.
It has some drawbacks.
The weights might not be integers. Imagine element 1 has a probability of pi and element 2 has a probability of 1-pi. How do you divide that? Or imagine if there are hundreds of such elements.
The array created can be very big. Imagine if the least common multiple is 1 million; then we would need an array of 1 million entries for the array we want to pick from.
To counter that, this is what you do.
Create such an array, but only insert each element randomly. The probability that an element is inserted is proportional to its weight.
Then select a random element from it as usual.
So if there are 3 elements with various weights, you simply pick an element from an array of 1 to 3 elements.
Problems may arise if the constructed array is empty; that is, it just happens that no elements show up in the array because their dice rolled differently.
In that case, I propose that the probability an element is inserted is p(inserted) = wi/wmax.
That way, one element, namely the one that has the highest probability, is always inserted. The other elements are inserted with their relative probability.
Say we have 2 objects.
Element 1 shows up 20% of the time.
Element 2 shows up 40% of the time and has the highest probability.
In the array, element 2 will show up all the time, and element 1 will show up half the time.
So element 2 will be called twice as often as element 1. For generality, all other elements will be called in proportion to their weight. Also, the selection probabilities sum to 1, because the array will always have at least 1 element.
I wrote an implementation in C#:
https://github.com/cdanek/KaimiraWeightedList
O(1) gets (fast!), O(n) recalculates, O(n) memory use.
