Adding values of a multi-dimensional array - arrays

I'm working with chart data. I have three sets of data from three sources, and I'm trying to add them together by their year into a new array. In my example, the years are 0, 1, 2.
visual:
data = [[[year, value], [year, value], [year, value]],
[[year, value], [year, value], [year, value]],
[[year, value], [year, value], [year, value]]]
Here is an example with actual data:
data = [[[0, 1], [1, 2], [2, 3]],
[[0, 4], [1, 5], [2, 6]],
[[0, 7], [1, 8], [2, 9]]]
I'm trying to get the following result:
data = [[0, 12], [1, 15], [2, 18]]
To add to the complexity, it won't always be three sets of data; it may be one set or twelve sets, any number.
Any help is greatly appreciated.

Solution:
data.map(&:to_h).reduce({}) {|memo, h| memo.merge(h) {|_,v1,v2| v1 + v2} }.to_a
Explanation:
Step 1: Convert the data array into an array of hashes
data_hash = data.map(&:to_h)
#=> [{0=>1, 1=>2, 2=>3}, {0=>4, 1=>5, 2=>6}, {0=>7, 1=>8, 2=>9}]
Step 2: Reduce the array of hashes by merging them into one another, while ensuring that values for a given key are added together.
reduced_hash = data_hash.reduce({}) {|memo, h| memo.merge(h) {|_,v1,v2| v1 + v2} }
#=> {0=>12, 1=>15, 2=>18}
We use an empty hash {} as the initial value of memo and merge each hash in the data_hash array into it. The block passed to merge ensures that whenever a key collides, its values are added up, so we eventually end up with the sum of all values for that key.
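To see how the merge block behaves on its own, here is a minimal standalone example (my own illustration, not part of the original answer):
# The block runs only for keys present in both hashes,
# so colliding years have their values summed.
{ 0 => 1, 1 => 2 }.merge(0 => 4, 2 => 6) { |_year, v1, v2| v1 + v2 }
#=> {0=>5, 1=>2, 2=>6}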
Step 3: Use to_a on the hash to get the array result
reduced_hash.to_a
#=> [[0, 12], [1, 15], [2, 18]]
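Since the question mentions a variable number of data sets, here is the same expression wrapped in a helper (the name sum_by_year is hypothetical) to show it works for any count, including a single set:
# Hypothetical helper name; just wraps the one-liner above.
def sum_by_year(data)
  data.map(&:to_h)
      .reduce({}) { |memo, h| memo.merge(h) { |_year, v1, v2| v1 + v2 } }
      .to_a
end

sum_by_year([[[0, 1], [1, 2], [2, 3]]])
#=> [[0, 1], [1, 2], [2, 3]]   # a single set passes through unchanged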

Related

get array index from sort in Ruby

I have an array
array_a1 = [9,43,3,6,7,0]
which I'm trying to get the sort indices out of, i.e. the answer should be
array_ordered = [6, 3, 4, 5, 1, 2]
I want to do this as a function, so that
def order (array)
will return array_ordered
I have tried implementing advice from Find the index by current sort order of an array in ruby but I don't see how I can do what they did for an array :(
if there are identical values in the array, e.g.
array_a1 = [9,43,3,6,7,7]
then the result should look like:
array_ordered = [3, 4, 5, 6, 1, 2]
(all indices should be 0-based, but these are 1-based)
You can do it this way:
[9,43,3,6,7,0].
each_with_index.to_a. # [[9, 0], [43, 1], [3, 2], [6, 3], [7, 4], [0, 5]]
sort_by(&:first). # [[0, 5], [3, 2], [6, 3], [7, 4], [9, 0], [43, 1]]
map(&:last)
#=> [5, 2, 3, 4, 0, 1]
First you pair each element with its index, then you sort by the element, and finally you pick just the indices.
Note that arrays are zero-indexed in Ruby, so each result is one less than in your spec.
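If you do want the 1-based indices shown in the question, one small variation (my own sketch) is to add 1 in the final map:
[9,43,3,6,7,0].
  each_with_index.to_a.
  sort_by(&:first).
  map { |_value, index| index + 1 }  # shift to 1-based
#=> [6, 3, 4, 5, 1, 2]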
You should be able to just map over the sorted array and look up the index of that number in the original array.
arr = [9,43,3,6,7,0]
arr.sort.map { |n| arr.index(n) } #=> [5, 2, 3, 4, 0, 1]
Or, if you really want it 1-indexed instead of zero-indexed for some reason:
arr.sort.map { |n| arr.index(n) + 1 } #=> [6, 3, 4, 5, 1, 2]
array_a1 = [9,43,3,6,7,0]
array_a1.each_index.sort_by { |i| array_a1[i] }
#=> [5, 2, 3, 4, 0, 1]
If array_a1 may contain duplicates and ties are to be broken by the indices of the elements (the element with the smaller index first), you may modify the calculation as follows.
array_a1 = [9,43,3,6,7,7]
array_a1.each_index.sort_by { |i| [array_a1[i], i] }
#=> [2, 3, 4, 5, 0, 1]
Enumerable#sort_by compares two elements with the spaceship operator, <=>. Here, as pairs of arrays are being compared, it is the method Array#<=> that is used. See especially the third paragraph of that doc.
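A quick illustration of how that tie-breaking plays out (my own example): with the duplicate array, the values at indices 4 and 5 are both 7, so the comparison falls through to the second element of the sort key, the index itself.
[7, 4] <=> [7, 5] #=> -1  (tie on the value, decided by the index: 4 sorts before 5)
[7, 4] <=> [9, 0] #=> -1  (decided by the first element alone)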

Array of tuples, sum the values when the first element is the same

I am trying to sum the elements of an array by grouping by the first element.
ex:
[[1, 8], [3, 16], [1, 0], [1, 1], [1, 1]]
should give
[ {1 => 10}, {3 => 16} ]
That is, it sums the values in the original array where the first element was 1 and where it was 3. The data structure of the end result doesn't matter; an array of arrays, an array of hashes, or just a hash is fine.
Some tries:
k = [[1, 8], [3, 16], [1, 0], [1, 1], [1, 1]]
h = {}
k.inject({}) { |(a,b)| h[a] += b}
#=> undefined method `+' for nil:NilClass
data = [[1, 8], [3, 16], [1, 0], [1, 1], [1, 1]]
data.each_with_object({}) { |(k, v), res| res[k] ||= 0; res[k] += v }
gives
{1=>10, 3=>16}
There is also an inject version, although it's not as laconic:
data.inject({}) { |res, (k, v)| res[k] ||= 0; res[k] += v; res }
inject vs each_with_object
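As a side note (my own variation, not from either answer above): a hash with a default value of 0 removes the need for the ||= 0 step in both versions.
data = [[1, 8], [3, 16], [1, 0], [1, 1], [1, 1]]

# Hash.new(0) returns 0 for missing keys, so += works straight away.
data.each_with_object(Hash.new(0)) { |(k, v), res| res[k] += v }
#=> {1=>10, 3=>16}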
You're pretty close; some changes are needed in your code:
k.inject({}) do |hash, (a, b)|
  if hash[a].nil?
    hash[a] = b
  else
    hash[a] += b
  end
  hash
end
First of all, you don't need the h variable. #inject accepts an argument, often called the accumulator, which you can update for each array element and which is then returned. Since you're already passing an empty hash to inject, you don't need the variable.
Next, you have to handle the case where the key doesn't yet exist on the hash, hence the if hash[a].nil?. In that case, we assign the value of b to the hash where the key is a. When the key exists in the hash, we can safely sum the value.
Another thing to notice is that you are using the wrong arguments of the block. When calling #inject, you first receive the accumulator (in this case, the hash), then the iteration element.
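For completeness, here is the corrected block run against the data from the question:
k = [[1, 8], [3, 16], [1, 0], [1, 1], [1, 1]]

result = k.inject({}) do |hash, (a, b)|
  if hash[a].nil?
    hash[a] = b
  else
    hash[a] += b
  end
  hash
end

result #=> {1=>10, 3=>16}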
Documentation for #inject
k.group_by(&:first).transform_values {|v| v.map(&:last).sum }
You actually used the words "group by" in your question, but you never grouped the array in your code. Here, I first group the inner arrays by their first elements, ending up with:
{ 1 => [[1, 8], [1, 0], [1, 1], [1, 1]], 3 => [[3, 16]] }
Next, I only want the last element of all of the inner arrays, since I already know that the first is always going to be the key, so I use Hash#transform_values to map the two-element arrays to their last element. Lastly, I Enumerable#sum those numbers.
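Broken into intermediate steps (values shown are what I'd expect; the variable names are only for illustration):
k = [[1, 8], [3, 16], [1, 0], [1, 1], [1, 1]]

grouped = k.group_by(&:first)
#=> {1=>[[1, 8], [1, 0], [1, 1], [1, 1]], 3=>[[3, 16]]}

grouped.transform_values { |pairs| pairs.map(&:last) }
#=> {1=>[8, 0, 1, 1], 3=>[16]}

grouped.transform_values { |pairs| pairs.map(&:last).sum }
#=> {1=>10, 3=>16}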

Prevent identical pairs when shuffling and slicing Ruby array

I'd like to prevent producing pairs with the same items when producing a random set of pairs in a Ruby array.
For example:
[1,1,2,2,3,4].shuffle.each_slice(2).to_a
might produce:
[[1, 1], [3, 4], [2, 2]]
I'd like to be able to ensure that it produces a result such as:
[[4, 1], [1, 2], [3, 2]]
Thanks in advance for the help!
arr = [1,1,2,2,3,4]
loop do
  sliced = arr.shuffle.each_slice(2).to_a
  break sliced if sliced.none? { |a| a.reduce(:==) }
end
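Because break hands its argument back as the value of loop, the result can be captured directly; a small usage sketch (the pairs will of course vary from run to run):
arr = [1,1,2,2,3,4]

pairs = loop do
  sliced = arr.shuffle.each_slice(2).to_a
  break sliced if sliced.none? { |a| a.reduce(:==) }
end

pairs #=> e.g. [[4, 1], [1, 2], [3, 2]]  (random, but no pair contains the same value twice)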
Here are three ways to produce the desired result (not including the approach of sampling repeatedly until a valid sample is found). The following array will be used for illustration.
arr = [1,4,1,2,3,2,1]
Use Array#combination and Array#sample
If pairs sampled were permitted to have the same number twice, the sample space would be
arr.combination(2).to_a
#=> [[1, 4], [1, 1], [1, 2], [1, 3], [1, 2], [1, 1], [4, 1], [4, 2],
# [4, 3], [4, 2], [4, 1], [1, 2], [1, 3], [1, 2], [1, 1], [2, 3],
# [2, 2], [2, 1], [3, 2], [3, 1], [2, 1]]
The pairs containing the same value twice (here [1, 1] and [2, 2]) are not wanted, so they are simply removed from the above array.
sample_space = arr.combination(2).reject { |x,y| x==y }
#=> [[1, 4], [1, 2], [1, 3], [1, 2], [4, 1], [4, 2], [4, 3],
# [4, 2], [4, 1], [1, 2], [1, 3], [1, 2], [2, 3], [2, 1],
# [3, 2], [3, 1], [2, 1]]
We evidently are to sample arr.size/2 elements from sample_space. Depending on whether this is to be done with or without replacement, we would write
sample_space.sample(arr.size/2)
#=> [[4, 3], [1, 2], [1, 3]]
for sampling without replacement and
Array.new(arr.size/2) { sample_space.sample }
#=> [[1, 3], [4, 1], [2, 1]]
for sampling with replacement.
Sample elements of each pair sequentially, Method 1
This method, like the next, can only be used to sample with replacement.
Let's first consider sampling a single pair. We could do that by selecting the first element of the pair randomly from arr, removing all instances of that element from arr, and then sampling the second element from what's left of arr.
def sample_one_pair(arr)
  first = arr.sample
  [first, (arr - [first]).sample]
end
To draw a sample of arr.size/2 pairs we then execute the following.
Array.new(arr.size/2) { sample_one_pair(arr) }
#=> [[1, 2], [4, 3], [1, 2]]
Sample elements of each pair sequentially, Method 2
This method is a very fast way of sampling large numbers of pairs with replacement. Like the previous method, it cannot be used to sample without replacement.
First, compute the cdf (cumulative distribution function) for drawing an element of arr at random.
counts = arr.group_by(&:itself).transform_values { |v| v.size }
#=> {1=>3, 4=>1, 2=>2, 3=>1}
def cdf(sz, counts)
  frac = 1.0/sz
  counts.each_with_object([]) { |(k,v),a|
    a << [k, frac * v + (a.empty? ? 0 : a.last.last)] }
end
cdf_first = cdf(arr.size, counts)
#=> [[1, 0.429], [4, 0.571], [2, 0.857], [3, 1.0]]
This means that there is a probability of 0.429 (rounded) of randomly drawing a 1, 0.571 of drawing a 1 or a 4, 0.857 of drawing a 1, 4 or 2 and 1.0 of drawing one of the four numbers. We can therefore randomly sample a number from arr by obtaining a (pseudo-)random number between zero and one (p = rand) and then determining the first element [n, q] of cdf_first for which p <= q:
def draw_random(cdf)
  p = rand
  cdf.find { |n,q| p <= q }.first
end
draw_random(cdf_first) #=> 1
draw_random(cdf_first) #=> 4
draw_random(cdf_first) #=> 1
draw_random(cdf_first) #=> 1
draw_random(cdf_first) #=> 2
draw_random(cdf_first) #=> 3
In simulation models, incidentally, this is the standard way of generating pseudo-random variates from discrete probability distributions.
Before drawing the second random number of the pair we need to modify cdf_first to reflect the fact that the first number cannot be drawn again. Assuming there will be many pairs to generate randomly, it is most efficient to construct a hash cdf_second whose keys are the first values drawn randomly for the pair and whose values are the corresponding cdf's.
cdf_second = counts.keys.each_with_object({}) { |n, h|
  h[n] = cdf(arr.size - counts[n], counts.reject { |k,_| k==n }) }
#=> {1=>[[4, 0.25], [2, 0.75], [3, 1.0]],
# 4=>[[1, 0.5], [2, 0.833], [3, 1.0]],
# 2=>[[1, 0.6], [4, 0.8], [3, 1.0]],
# 3=>[[1, 0.5], [4, 0.667], [2, 1.0]]}
If, for example, a 2 is drawn for the first element of the pair, the probability is 0.6 of drawing a 1 for the second element, 0.8 of drawing a 1 or 4 and 1.0 of drawing a 1, 4, or 3.
We can then sample one pair as follows.
def sample_one_pair(cdf_first, cdf_second)
  first = draw_random(cdf_first)
  [first, draw_random(cdf_second[first])]
end
As before, to sample arr.size/2 values with replacement, we execute
Array.new(arr.size/2) { sample_one_pair(cdf_first, cdf_second) }
#=> [[2, 1], [3, 2], [1, 2]]
With replacement, you may get results like:
unique_pairs([1, 1, 2, 2, 3, 4]) # => [[4, 1], [1, 2], [1, 3]]
Note that 1 gets chosen three times, even though it's only in the original array twice. This is because the 1 is "replaced" each time it's chosen. In other words, it's put back into the collection to potentially be chosen again.
Here's a version of Cary's excellent sample_one_pair solution without replacement:
def unique_pairs(arr)
  dup = arr.dup
  Array.new(dup.size / 2) do
    dup.shuffle!
    first = dup.pop
    second_index = dup.rindex { |e| e != first }
    raise StopIteration unless second_index
    second = dup.delete_at(second_index)
    [first, second]
  end
rescue StopIteration
  retry
end
unique_pairs([1, 1, 2, 2, 3, 4]) # => [[4, 3], [1, 2], [2, 1]]
This works by creating a copy of the original array and deleting elements out of it as they're chosen (so they can't be chosen again). The rescue/retry is in there in case it becomes impossible to produce the correct number of pairs. For example, if [1, 3] is chosen first, and [1, 4] is chosen second, it becomes impossible to make three unique pairs because [2, 2] is all that's left; the sample space is exhausted.
This should be slower than Cary's solution (with replacement) but faster (on average) than the posted solutions (without replacement) that require looping and retrying. Welp, chalk up another point for "always benchmark!" I was wrong about most of my assumptions. Here are the results on my machine with an array of 16 numbers ([1, 1, 2, 2, 3, 4, 5, 5, 5, 6, 7, 7, 8, 9, 9, 10]):
cary_with_replacement            93.737k (± 2.9%) i/s - 470.690k in 5.025734s
mwp_without_replacement         187.739k (± 3.3%) i/s - 943.415k in 5.030774s
mudasobwa_without_replacement   129.490k (± 9.4%) i/s - 653.150k in 5.096761s
EDIT: I've updated the above solution to address Stefan's numerous concerns. In hindsight, the errors are obvious and embarrassing! On the plus side, the revised solution is now faster than mudasobwa's solution, and I've confirmed that the two solutions have the same biases.
You can check whether there are any matches and shuffle again:
a = [1,1,2,2,3,4]
# first time shuffle
sliced = a.shuffle.each_slice(2).to_a
# checking if there are matches and shuffle if there are
while sliced.combination(2).any? { |a, b| a.sort == b.sort } do
  sliced = a.shuffle.each_slice(2).to_a
end
It is unlikely, but be aware of the possibility of an infinite loop.

How do I slice an array in ruby into sub arrays of a specified length?

I'd like to split an array into sub arrays of a specified length.
I know that .each_slice will chunk an array into equal-length subarrays with the remainder left over, like so:
a = [1,2,3,4,5,6,7,8,9,10]
a.each_slice(3).to_a #=> [[1,2,3],[4,5,6],[7,8,9],[10]]
However, say I want the output like this:
=> [[1],[2,3],[4,5,6],[7,8,9,10]]
Is there a method in ruby for slicing an array into different specified lengths depending on the arguments you give it?
Try this
a = [1,2,3,4,5,6,7,8,9,10]
slices = [1,2,3,4].map { |n| a.shift(n) }
This slices the array into pieces of the requested lengths.
NB: this mutates the original array.
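If mutating the input is a concern, a non-destructive variant of the same idea (my own sketch) is to shift from a copy:
a = [1,2,3,4,5,6,7,8,9,10]

copy = a.dup
slices = [1,2,3,4].map { |n| copy.shift(n) }
#=> [[1], [2, 3], [4, 5, 6], [7, 8, 9, 10]]
a
#=> [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]   # original left untouched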
I cannot see how to improve on #akuhn's answer, but here are a couple of other methods that could be used.
a = [1,2,3,4,5,6,7,8,9,10,11]
slice_sizes = [1,2,3,4]
#1 Stab out slices
def variable_slice(a, slice_sizes)
  last = 0
  slice_sizes.each_with_object([]) do |n,arr|
    arr << a[last,n]
    last += n
  end
end
variable_slice(a, slice_sizes)
#=> [[1], [2, 3], [4, 5, 6], [7, 8, 9, 10]]
#2 Use recursion
def variable_slice(a, slice_sizes)
  return [] if slice_sizes.empty?
  i, *rest = slice_sizes
  [a.first(i)].concat variable_slice(a[i..-1], rest)
end
variable_slice(a, slice_sizes)
#=> [[1], [2, 3], [4, 5, 6], [7, 8, 9, 10]]

Determining if a collection has more than one max value

Right now I'm doing this, and it works:
groups = [[1, 1, 1], [2, 2]]
groups.select { |g| g.size == groups.max.size }.size
# => 1 # a clear majority
groups = [[1, 1], [2, 2]]
groups.select { |g| g.size == groups.max.size }.size
# => 2 # needs to be passed to another filter
but I have a suspicion there's a cleaner way.
You can use this snippet:
groups.group_by(&:size)[groups.max.size].size
Let me quickly explain what this does. I apologise in advance for the bad wording as "group" is a rather overloaded term here...
What it does first is group the arrays by size. This returns a hash:
groups = [[1, 1, 1], [2, 2]]
grouped = groups.group_by(&:size)
# => {3=>[[1, 1, 1]], 2=>[[2, 2]]}
Then, you take the array of grouped arrays containing exactly as many elements as the largest group:
largest_list = grouped[groups.max.size]
# => [[2, 2]]
Now, you can simply get the size of this array to get the number of groups which have this length:
largest_list.size
# => 1
The reason your approach is rather slow is that you recalculate groups.max.size on every iteration of the inner loop.
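A variant that avoids the repeated work (my own sketch, reading "max" as the largest group size and counting how many groups reach it):
groups = [[1, 1], [2, 2]]

# Compute the maximum size once, then count the groups that reach it.
max_size = groups.map(&:size).max
groups.count { |g| g.size == max_size }
#=> 2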
