Say I have the following two arrays:
a = [1, 0, 2, 1, 6]
b = [0, 5, 5, 6, 1]
I want to create a new array (or modify a or b) in which the values at each corresponding index are added together, like:
[1, 5, 7, 7, 7]
Is there an elegant (and fast) way to do this without looping over every index in the first array and adding from the second? I have a feeling that map/reduce/inject might be the way to go here, but these methods have always seemed a bit "magical" to me and I have never really understood them.
You could zip and then use reduce:
p a.zip(b).map{|v| v.reduce(:+) }
#=> [1, 5, 7, 7, 7]
Or, if you're sure that array a and b will always be of equal length:
p a.map.with_index { |v, i| v + b[i] }
#=> [1, 5, 7, 7, 7]
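A caveat worth illustrating (my own example, not from the answer): if b is shorter than a, b[i] is nil for the trailing indices and the addition raises:

a = [1, 0, 2]
b = [0, 5]
a.map.with_index { |v, i| v + b[i] }
# raises TypeError (nil can't be coerced into Integer), because b[2] is nil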
This is one of the things you could do:
[[4,5,6], [10,25,16]].transpose.map {|x| x.reduce(:+)}
Using inject would do the same:
[[10,20,30],[24,52,62]].transpose.map {|a| a.inject(:+)}
For more understanding, please have a look at ruby: sum corresponding members of two or more arrays.
a.each_index.map { |i| a[i]+b[i] }
# => [1, 5, 7, 7, 7]
All bases seem to be covered by other answers. These are submitted for interest only:
c = b.cycle
#=> #<Enumerator: [0, 5, 5, 6, 1]:cycle>
a.map { |e| e + c.next }
#=> [1, 5, 7, 7, 7]
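One caveat worth noting (my own illustration, not from the answer): because cycle never runs out, a longer a silently wraps around to the start of b instead of failing:

a = [1, 0, 2, 1, 6, 10]
c = [0, 5, 5, 6, 1].cycle
a.map { |e| e + c.next }
#=> [1, 5, 7, 7, 7, 10]   (the extra element is paired with b[0] again)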
And another way:
a.map { |e| e + b.rotate!.last }
#=> [1, 5, 7, 7, 7]
BENCHMARKS
Benchmarks (which I am not doing myself any favours by showing) for larger arrays.
require 'fruity'
a = (1..10_000).to_a.shuffle
b = a.shuffle
compare do
  sagar { ar = a.dup; br = b.dup; c = br.cycle; ar.map { |e| e + c.next } }
  sagar_2 { ar = a.dup; br = b.dup; ar.map { |e| e + br.rotate!.last } }
  cary { ar = a.dup; br = b.dup; ar.each_index.map { |i| ar[i] + br[i] } }
  surya { ar = a.dup; br = b.dup; ar.zip(br).map { |v| v.reduce(:+) } }
  surya_2 { ar = a.dup; br = b.dup; ar.map.with_index { |v, i| v + br[i] } }
  sinsuren { ar = a.dup; br = b.dup; [ar, br].transpose.map { |x| x.reduce(:+) } }
  flamine { ar = a.dup; br = b.dup; ar.map { |i| br.shift.to_i + i } }
  stefan { ar = a.dup; br = b.dup; ar.zip(br).map { |i, j| i + j } }
end
#Running each test once. Test will take about 3 seconds.
#flamine is similar to surya_2
#surya_2 is similar to stefan
#stefan is similar to cary
#cary is faster than sinsuren by 3x ± 1.0
#sinsuren is similar to surya
#surya is faster than sagar by 2x ± 0.1
#sagar is faster than sagar_2 by 15x ± 1.0
I ran this a few times; sometimes flamine is outright fastest: flamine is faster than surya_2 by 10.000000000000009% ± 10.0%. The caveat is that flamine's technique empties b as a side effect.
I like this variant, but it works correctly only if a.size >= b.size:
a.map{|i| b.shift.to_i + i }
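To illustrate the caveat about mutation (my own example): b is consumed by the repeated shift calls:

a = [1, 0, 2, 1, 6]
b = [0, 5, 5, 6, 1]
a.map { |i| b.shift.to_i + i }  #=> [1, 5, 7, 7, 7]
b                               #=> [] (emptied as a side effect)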
Related
I'm working to update the SVG::Graph gem, and have made many improvements to my version, but have found a bottleneck with multiple array sorting.
There is a "sort_multiple" function built in, which keeps an array of arrays (all of equal size) sorted by the first array in the group.
The issue I have is that this sort works well on truly random data, and really badly on sorted, or almost sorted data:
def sort_multiple( arrys, lo=0, hi=arrys[0].length-1 )
  if lo < hi
    p = partition(arrys, lo, hi)
    sort_multiple(arrys, lo, p-1)
    sort_multiple(arrys, p+1, hi)
  end
  arrys
end
def partition( arrys, lo, hi )
  p = arrys[0][lo]
  l = lo
  z = lo+1
  while z <= hi
    if arrys[0][z] < p
      l += 1
      arrys.each { |arry| arry[z], arry[l] = arry[l], arry[z] }
    end
    z += 1
  end
  arrys.each { |arry| arry[lo], arry[l] = arry[l], arry[lo] }
  l
end
This routine appears to use a variant of the Lomuto partition scheme from Wikipedia: https://en.wikipedia.org/wiki/Quicksort#Lomuto_partition_scheme
I have an array of 5000+ numbers, which is previously sorted, and this function adds about 1/2 second per chart.
I have modified the "sort_multiple" routine with the following:
def sort_multiple( arrys, lo=0, hi=arrys[0].length-1 )
  first = arrys.first
  return arrys if first == first.sort
  if lo < hi
    ...
which has "fixed" the problem with sorted data, but I was wondering if there is any way to utilise the better sort functions built into ruby to get this sort to work much quicker. e.g. do you think I could utilise a Tsort to speed this up? https://ruby-doc.org/stdlib-2.6.1/libdoc/tsort/rdoc/TSort.html
looking at my benchmarking, the completely random first group appears to be very fast.
Current benchmarking:
def sort_multiple( arrys, lo=0, hi=arrys[0].length-1 )
  if lo < hi
    p = partition(arrys, lo, hi)
    sort_multiple(arrys, lo, p-1)
    sort_multiple(arrys, p+1, hi)
  end
  arrys
end

def partition( arrys, lo, hi )
  p = arrys[0][lo]
  l = lo
  z = lo+1
  while z <= hi
    if arrys[0][z] < p
      l += 1
      arrys.each { |arry| arry[z], arry[l] = arry[l], arry[z] }
    end
    z += 1
  end
  arrys.each { |arry| arry[lo], arry[l] = arry[l], arry[lo] }
  l
end
first = (1..5400).map { rand }
second = (1..5400).map { rand }
unsorted_arrys = [first.dup, second.dup, Array.new(5400), Array.new(5400), Array.new(5400)]
sorted_arrys = [first.sort, second.dup, Array.new(5400), Array.new(5400), Array.new(5400)]
require 'benchmark'
Benchmark.bmbm do |x|
  x.report("unsorted") { sort_multiple( unsorted_arrys.map(&:dup) ) }
  x.report("sorted") { sort_multiple( sorted_arrys.map(&:dup) ) }
end
results:
Rehearsal --------------------------------------------
unsorted 0.070699 0.000008 0.070707 ( 0.070710)
sorted 0.731734 0.000000 0.731734 ( 0.731742)
----------------------------------- total: 0.802441sec
user system total real
unsorted 0.051636 0.000000 0.051636 ( 0.051636)
sorted 0.715730 0.000000 0.715730 ( 0.715733)
#EDIT#
Final accepted solution:
def sort( *arrys )
  new_arrys = arrys.transpose.sort_by(&:first).transpose
  new_arrys.each_index { |k| arrys[k].replace(new_arrys[k]) }
end
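A quick usage sketch (my own, not part of the original edit) showing that this mutates the passed arrays in place via replace:

first  = [3, 1, 2]
second = [:c, :a, :b]
sort(first, second)
first   #=> [1, 2, 3]
second  #=> [:a, :b, :c]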
I have an array of 5000+ numbers, which is previously sorted, and this function adds about 1/2 second per chart.
Unfortunately, algorithms implemented in Ruby can become quite slow. It's often much faster to delegate the work to the built-in methods that are implemented in C, even if it comes with an overhead.
To sort a nested array, you could transpose it, then sort_by its first element, and transpose again afterwards:
arrays.transpose.sort_by(&:first).transpose
It works like this:
arrays #=> [[3, 1, 2], [:c, :a, :b]]
.transpose #=> [[3, :c], [1, :a], [2, :b]]
.sort_by(&:first) #=> [[1, :a], [2, :b], [3, :c]]
.transpose #=> [[1, 2, 3], [:a, :b, :c]]
And although it creates several temporary arrays along the way, the result seems to be an order of magnitude faster than the "unsorted" variant:
unsorted 0.035297 0.000106 0.035403 ( 0.035458)
sorted 0.474134 0.003065 0.477199 ( 0.480667)
transpose 0.001572 0.000082 0.001654 ( 0.001655)
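The exact harness behind the "transpose" row isn't shown, but one plausible way to reproduce it, reusing the sorted setup from the question, would be (my own sketch, assuming the same 5400-element shape):

require 'benchmark'

first = (1..5400).map { rand }
sorted_arrys = [first.sort, (1..5400).map { rand }, Array.new(5400), Array.new(5400), Array.new(5400)]

Benchmark.bmbm do |x|
  # transpose, sort by the first column, transpose back
  x.report("transpose") { sorted_arrys.map(&:dup).transpose.sort_by(&:first).transpose }
end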
In the long run, you could try to implement your algorithm as a C extension.
I confess I don't fully understand the question and don't have the time to study the code at the link, but it seems that you have one sorted array that you repeatedly mutate only slightly, and with each change you may mutate several other arrays, each a little or a lot. After each set of mutations you re-sort the first array and then rearrange each of the other arrays to be consistent with the changes in the indices of the elements in the first array.
If, for example, the first array were
arr = [2,4,6,8,10]
and the change to arr were to replace the element at index 1 (4) with 9 and the element at index 3 (8) with 3, arr would become [2,9,6,3,10], which, after re-sorting, would be [2,3,6,9,10]. We could do that as follows:
new_arr, indices = [2,9,6,3,10].each_with_index.sort.transpose
#=> [[2, 3, 6, 9, 10], [0, 3, 2, 1, 4]]
Therefore,
new_arr
#=> [2, 3, 6, 9, 10]
indices
#=> [0, 3, 2, 1, 4]
the intermediate calculation being
[2,9,6,3,10].each_with_index.sort
#=> [[2, 0], [3, 3], [6, 2], [9, 1], [10, 4]]
Considering that
new_arr == [2,9,6,3,10].values_at(*indices)
#=> true
we see that each of the other arrays, after having been mutated, can be sorted to conform with the sorting of indices in the first array with the following method, which is quite fast.
def sort_like_first(a, indices)
a.values_at(*indices)
end
For example,
a = [5,4,3,1,2]
a.replace(sort_like_first(a, indices))
a #=> [5, 1, 3, 4, 2]
a = %w|dog cat cow pig owl|
a.replace(sort_like_first(a, indices))
a #=> ["dog", "pig", "cow", "cat", "owl"]
In fact, it's not necessary to sort each of the other arrays until they are required in the calculations.
I would now like to consider a special case, namely, when only a single element in the first array is to be changed.
Suppose (as before)
arr = [2,4,6,8,10]
and the element at index 3 (8) is to be replaced with 5, resulting in [2,4,6,5,10]. A fast sort can be done with the following method, which employs a binary search.
def new_indices(arr, replace_idx, replace_val)
  new_loc = arr.bsearch_index { |n| n >= replace_val } || arr.size
  indices = (0..arr.size-1).to_a
  index_removed = indices.delete_at(replace_idx)
  new_loc -= 1 if new_loc > replace_idx
  indices.insert(new_loc, index_removed)
end
arr.bsearch_index { |n| n >= replace_val } returns nil if n >= replace_val is false for all n. It is for that reason that I have tacked on || arr.size.
See Array#bsearch_index, Array#delete_at and Array#insert.
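For example (my own illustration of the find-minimum mode used here):

arr = [2, 4, 6, 8, 10]
arr.bsearch_index { |n| n >= 5 }   #=> 2   (6 is the first element >= 5)
arr.bsearch_index { |n| n >= 11 }  #=> nil (no such element, hence the || arr.size)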
Let's try it. If
arr = [2,4,6,8,10]
replace_idx = 3
replace_val = 5
then
indices = new_indices(arr, replace_idx, replace_val)
#=> [0, 1, 3, 2, 4]
Only now can we replace the element of arr at index replace_idx.
arr[replace_idx] = replace_val
arr
#=> [2, 4, 6, 5, 10]
We see that the re-sorted array is as follows.
arr.values_at(*indices)
#=> [2, 4, 5, 6, 10]
The other arrays are sorted as before, using sort_like_first:
a = [5,4,3,1,2]
a.replace(sort_like_first(a, indices))
#=> [5, 4, 1, 3, 2]
a = %w|dog cat cow pig owl|
a.replace(sort_like_first(a, indices))
#=> ["dog", "cat", "pig", "cow", "owl"]
Here's a second example.
arr = [2,4,6,8,10]
replace_idx = 3
replace_val = 12
indices = new_indices(arr, replace_idx, replace_val)
#=> [0, 1, 2, 4, 3]
arr[replace_idx] = replace_val
arr
#=> [2, 4, 6, 12, 10]
The first array sorted is therefore
arr.values_at(*indices)
#=> [2, 4, 6, 10, 12]
The other arrays are sorted as follows.
a = [5,4,3,1,2]
a.replace(sort_like_first(a, indices))
a #=> [5, 4, 3, 2, 1]
a = %w|dog cat cow pig owl|
a.replace(sort_like_first(a, indices))
a #=> ["dog", "cat", "cow", "owl", "pig"]
In Ruby, how can one multiply every element in one array by every element in another array, such that:
a = [1,2,3]
b = [4,5,6]
c = a*b = [4,5,6,8,10,12,12,15,18]
For a nice abstraction, you can get the Cartesian product using product:
a.product(b).map { |aa, bb| aa * bb }
This solution makes use of Matrix methods to compute (and then flatten) the outer product of two vectors.
require 'matrix'
(Matrix.column_vector(a) * Matrix.row_vector(b)).to_a.flatten
#=> [4, 5, 6, 8, 10, 12, 12, 15, 18]
Like the other two answers to date, this produces a temporary array which, when flattened (if not already flat), contains a.size * b.size elements. If the arrays are so large that this results in a storage problem, you could use a pair of enumerators instead:
a.each_with_object([]) { |aa,arr| b.each { |bb| arr << aa*bb } }
#=> [4, 5, 6, 8, 10, 12, 12, 15, 18]
The enumerators are as follows.
enum_a = a.each_with_object([])
#=> #<Enumerator: [1, 2, 3]:each_with_object([])>
aa, arr = enum_a.next
#=> [1, []]
aa, arr = enum_a.next
#=> [2, []]
...
enum_b = b.each
#=> #<Enumerator: [4, 5, 6]:each>
bb = enum_b.next
#=> 4
bb = enum_b.next
#=> 5
...
See Enumerator#next. This is how enumerators pass elements to their blocks.
The method Enumerable#each_with_object is very convenient and not as complex as it may initially seem. For the most part it just saves two lines of code compared with the following.
arr = []
a.each { |aa| b.each { |bb| arr << aa*bb } }
arr
Tried with the following:
a.product(b).map { |x| x.inject(&:*) }
Amazingly, the following also solves it:
a.map { |x| b.map(&x.method(:*)) }.flatten
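In case the &x.method(:*) trick looks opaque, here is a small illustration (my own, not from the answer) of what it expands to:

x = 2
times_x = x.method(:*)     # a Method object bound to the receiver 2
[4, 5, 6].map(&times_x)    #=> [8, 10, 12], i.e. the same as [4, 5, 6].map { |bb| x * bb }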
This is not beautiful but returns what you want.
a.map{|aa| b.map{|bb| bb * aa}}.flatten
Using Ruby 2.4, I have an array of unique, ordered numbers, for example
[1, 7, 8, 12, 14, 15]
How do I find the first two elements whose difference is 1? For the above array, for example, the answer is 7 and 8.
You could use each_cons and find to get the first element from the array of pairs where the second element less the first one is equal to 1:
p [1, 7, 8, 12, 14, 15].each_cons(2).find { |a, b| b - a == 1 }
# => [7, 8]
Here are three more ways.
#1
def first_adjacent_pair(arr)
  # check every adjacent pair, including the last one
  (arr.size-1).times { |i| return arr[i, 2] if arr[i+1] == arr[i].next }
  nil
end
first_adjacent_pair [1, 7, 8, 12, 14, 15] #=> [7,8]
first_adjacent_pair [1, 7, 5, 12, 14, 16] #=> nil
#2
def first_adjacent_pair(arr)
  enum = arr.to_enum # or arr.each
  loop do
    curr = enum.next
    nxt = enum.peek
    return [curr, nxt] if nxt == curr.next
  end
  nil
end
enum.peek raises a StopIteration exception when the enumerator enum has generated its last element with the preceding enum.next. The exception is handled by Kernel#loop by breaking out of the loop, after which nil is returned. See also Object#to_enum, Enumerator#next and Enumerator#peek.
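A minimal illustration (my own) of loop swallowing StopIteration:

enum = [1, 2].to_enum
loop do
  p enum.next   # prints 1, then 2; the third next raises StopIteration
end             # Kernel#loop rescues StopIteration and exits, returning nil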
#3
def first_adjacent_pair(arr)
  a = [nil, arr.first]
  arr.each do |n|
    a.rotate!
    a[1] = n
    return a if a[1] == a[0] + 1
  end
  nil
end
See Array#rotate!.
Simple example
X = [1, 7, 8, 12, 14, 15]
X.each_with_index do |item, index|
  if index < X.count - 1
    if (X[index+1] - X[index] == 1)
      puts item
    end
  end
end
Here's an alternate method provided for educational purposes:
arr = [1, 7, 8, 12, 14, 15]
arr.each_cons(2).map { |v| v.reduce(:-) }.index(-1)
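Note that this returns the index of the pair rather than the pair itself; the pair can be recovered from that index (my own follow-up):

arr = [1, 7, 8, 12, 14, 15]
idx = arr.each_cons(2).map { |v| v.reduce(:-) }.index(-1)  #=> 1
arr[idx, 2]                                                #=> [7, 8]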
One way to do this:
a.each_with_index { |e, i| break [ e, a[i.next] ] if a[i.next] == e.next }
#=> [7, 8]
Unlike chunk or each_cons, this doesn't create an array of arrays. It also breaks as soon as a pair is found.
Benchmarks
require 'fruity'
arr = (1...1000).to_a.reverse + [1, 2]

def first_adjacent_pair(arr)
  idx = arr.each_index.drop(1).find { |i| (arr[i-1]-arr[i]).abs == 1 }
  idx ? arr[idx-1, 2] : nil
end

def first_adjacent_pair2(arr)
  enum = arr.to_enum
  loop do
    curr = enum.next
    nxt = enum.peek
    return [curr, nxt] if (curr-nxt).abs == 1
  end
  nil
end

compare do
  iceツ { ar = arr.dup; ar.each_with_index { |e, i| break [ e, ar[i.next] ] if ar[i.next] == e.next } }
  cary { ar = arr.dup; first_adjacent_pair(ar) }
  cary2 { ar = arr.dup; first_adjacent_pair2(ar) }
  seb { ar = arr.dup; ar.each_cons(2).find { |a, b| b - a == 1 } }
end
#Running each test 64 times. Test will take about 1 second.
#cary2 is faster than cary by 3x ± 0.1
#cary is faster than iceツ by 3x ± 0.1 (results differ: [999, 998] vs [1, 2])
#iceツ is faster than seb by 30.000000000000004% ± 10.0%
I have an array ary = [1, 3, 4, 2, 7, 8, 9] and I want to know how many combinations of its elements add up to 9.
There should be four combinations that add up to 9: [1, 8], [2, 3, 4], [9], and [2, 7]. However, my code can only find two of the possibilities, and it can only show one of them for this problem.
def sums(num, target)
  random1 = num.sample
  random2 = num.sample
  if random1 + random2 == target
    ary1 = [random1, random2]
  end
end
If you're interested in the combinations themselves as opposed to just the count:
(1..a.size).flat_map { |n| a.combination(n).to_a }
.keep_if { |c| c.inject(:+) == 9 }
#=> [[9], [1, 8], [2, 7], [3, 4, 2]]
You can use Array#combination:
(1..ary.size).inject(0) do |a, e|
  a + ary.combination(e).count { |c| c.sum == 9 }
end
#=> 4
You can use inject(:+) instead of sum if your Ruby version is lower than 2.4.
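For example, a pre-2.4 version of the counting answer above could look like this (my own sketch using inject(:+)):

ary = [1, 3, 4, 2, 7, 8, 9]
(1..ary.size).inject(0) do |total, n|
  total + ary.combination(n).count { |c| c.inject(:+) == 9 }
end
#=> 4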
There are two arrays:
A = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
B = [3, 4, 1, 5, 2, 6]
I want to sort B so that all of the elements of B that exist in A appear in the same order as they do in A.
The desired sorted resulted would be
B #=> [1, 2, 3, 4, 5, 6]
I have tried to do
B = B.sort_by { |x| A.index }
but it does not work.
This question differs from the possible duplicates because it deals with the presence of elements in the corresponding array, and no hashes are involved here.
It works perfectly:
▶ A = [1,3,2,6,4,5,7,8,9,10]
▶ B = [3,4,1,5,2,6]
▶ B.sort_by &A.method(:index)
#⇒ [1, 3, 2, 6, 4, 5]
If there could be elements in B that are not present in A, use this:
▶ B.sort_by { |e| A.index(e) || Float::INFINITY }
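To illustrate the fallback (my own example), an element of B that is missing from A sorts to the end:

A = [1, 3, 2, 6, 4, 5, 7, 8, 9, 10]
B = [3, 4, 1, 99, 5, 2, 6]
B.sort_by { |e| A.index(e) || Float::INFINITY }
#=> [1, 3, 2, 6, 4, 5, 99]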
I would start by checking which elements from B exist in A:
B & A
and then sort it:
(B & A).sort_by { |e| A.index(e) }
First consider the case where every element of B is in A, as with the question's example:
A = [1,2,3,4,5,6,7,8,9,10]
B = [3,6,1,5,1,2,1,6]
One could write the following, which requires only a single pass through A (to construct g [1]) and a single pass through B.
g = A.each_with_object({}) { |n,h| h[n] = 1 }
#=> {1=>1, 2=>1, 3=>1, 4=>1, 5=>1, 6=>1, 7=>1, 8=>1, 9=>1, 10=>1}
B.each_with_object(g) { |n,h| h[n] += 1 }.flat_map { |k,v| [k]*(v-1) }
#=> [1, 1, 1, 2, 3, 5, 6, 6]
If there is no guarantee that all elements of B are in A, and any that are not are to be placed at the end of the sorted array, one could change the calculation of g slightly.
g = (A + (B-A)).each_with_object({}) { |n,h| h[n] = 1 }
This requires one more pass through A and through B.
Suppose, for example,
A = [2,3,4,6,7,8,9]
and B is unchanged. Then,
g = (A + (B-A)).each_with_object({}) { |n,h| h[n] = 1 }
#=> {2=>1, 3=>1, 4=>1, 6=>1, 7=>1, 8=>1, 9=>1, 1=>1, 5=>1}
B.each_with_object(g) { |n,h| h[n] += 1 }.flat_map { |k,v| [k]*(v-1) }
#=> [2, 3, 6, 6, 1, 1, 1, 5]
This solution demonstrates the value of a controversial change to hash properties that was made in Ruby v1.9: hashes would thereafter be guaranteed to maintain key-insertion order.
[1] I expect one could write g = A.product([1]).to_h, but the docs for Array#to_h do not guarantee that the keys in the returned hash will be in the same order as they are in A.
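In practice the product form does preserve the order (though, as the footnote says, the docs don't promise it); a quick check (my own):

A = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
A.product([1]).to_h
#=> {1=>1, 2=>1, 3=>1, 4=>1, 5=>1, 6=>1, 7=>1, 8=>1, 9=>1, 10=>1}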
You just missed x in A.index, so the query should be:
B = B.sort_by { |x| A.index(x) }