Algorithm for a diversified sort - arrays

I'm looking for a way to implement a diversified sort of an array. Each cell contains a weight value along with an enum type. I would like to sort it so that the effective weight of each element becomes dynamic according to the types of the elements already chosen, giving priority to the types 'less chosen' so far. I would also like to control a diversity factor: with a high value it should produce a fully diverse results array, and with a low value an almost 'regular' sorted array.
This doesn't sound like a very unusual use case, so references to known algorithms would also be great.
Update:
Based on Ophir's suggestion, this might be a basic wrapper:
// these will be the three arrays, one per type
$contentTypeA, $contentTypeB, $contentTypeC;

// sort each by value
sort($contentTypeA);
sort($contentTypeB);
sort($contentTypeC);

// while I haven't chosen the amount I want and there are still options to choose from
while ($amountChosen < 100 && (count($contentTypeA) + count($contentTypeB) + count($contentTypeC) > 0)) {
    // $bestA, $bestB, $bestC hold the current best (first) element of each sorted array
    $diversifiedContent[] = selectBest($bestA, $bestB, $bestC, $contentTypeA, $contentTypeB, $contentTypeC);
    $amountChosen++;
}
$diversifiedContent = array_slice($diversifiedContent, 0, 520);
return $diversifiedContent;
}

function selectBest($bestA, $bestB, $bestC, &$contentTypeA, &$contentTypeB, &$contentTypeC) {
    static $typeSelected;
    $diversifyFactor = 0.5;
    if (?) {
        $typeSelected['A']++;
        array_shift($contentTypeA);
        return $bestA;
    }
    else if (?) {
        $typeSelected['B']++;
        array_shift($contentTypeB);
        return $bestB;
    }
    else if (?) {
        $typeSelected['C']++;
        array_shift($contentTypeC);
        return $bestC;
    }
}

Your definition is in very general terms rather than mathematical ones, so I doubt you will find a ready-made solution that matches exactly what you want.
I can suggest this simple approach:
Sort each type separately. Then merge the lists by iteratively taking the maximum value in the list of highest priority, where priority is the product of the value and a "starvation" factor for that type. The starvation factor is a combination of how many steps have ignored that type and the diversity factor. The exact shape of this function depends on your application.
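To make that concrete, here is a rough sketch (in Python for brevity, and not Ophir's actual code): the helper name, the dict-of-lists input, and the starvation function 1 + diversity * steps_ignored are all illustrative choices, and it assumes nonnegative weights with each per-type list sorted in descending order.

def diversified_merge(lists_by_type, diversity, limit):
    # lists_by_type: dict mapping type -> list of weights, sorted descending
    ignored = {t: 0 for t in lists_by_type}  # steps since each type was last picked
    result = []
    while len(result) < limit and any(lists_by_type.values()):
        nonempty = [t for t in lists_by_type if lists_by_type[t]]
        # priority = head value * starvation factor for that type
        best = max(nonempty,
                   key=lambda t: lists_by_type[t][0] * (1 + diversity * ignored[t]))
        result.append((best, lists_by_type[best].pop(0)))
        for t in ignored:
            ignored[t] = 0 if t == best else ignored[t] + 1
    return result

# diversity=0 reproduces a plain descending sort; a large value rotates the types
print(diversified_merge({'A': [9, 8, 7], 'B': [6, 5], 'C': [4]}, 0, 6))
print(diversified_merge({'A': [9, 8, 7], 'B': [6, 5], 'C': [4]}, 10, 6))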

Here's an idea:
class item(object):
    def __init__(self, enum_type, weight):
        self.enum_type = enum_type
        self.weight = weight
        self.dyn_weight = weight

    def __repr__(self):
        return str((self.enum_type, self.weight, self.dyn_weight))

def sort_diverse(lst, factor):
    # first sort: group by type, weight ascending within each type
    by_type = sorted(lst, key=lambda obj: (obj.enum_type, obj.weight))
    cnt = 1
    for i in range(1, len(lst)):
        current = by_type[i]
        previous = by_type[i - 1]
        if current.enum_type == previous.enum_type:
            # penalize each additional element of an already-seen type
            current.dyn_weight += factor * cnt
            cnt += 1
        else:
            cnt = 1
    return sorted(by_type, key=lambda obj: (obj.dyn_weight, obj.enum_type))
Try this example:
lst = [item('a', 0) for x in range(10)] + [item('b', 1) for x in range(10)] + [item('c', 2) for x in range(10)]
print(sort_diverse(lst, 0))    # regular sort
print(sort_diverse(lst, 1))    # partially diversified
print(sort_diverse(lst, 100))  # completely diversified
Depending on your needs, you might want to use a more sophisticated weight update function.
This algorithm is basically O(n log n) in time and O(n) in space, as it requires two sorts and two copies of the list.

Related

random index of CuArray with condition in Julia

Suppose I have a CuArray with random zeros and ones, and I want to get a random index of the CuArray whose value is one. For instance,
m = 100;
A = CuArray(rand([0, 1], m));
i = rand(1:m);
while A[i] != 1
    i = rand(1:m);
end
Is there a function that lets me avoid the while loop?
Your construction of A has the following equivalent representation:
using Distributions, Random
n_ones = rand(Binomial(m, 0.5))
one_inds = shuffle(1:m)[1:n_ones]
A = zeros(Int, m)
A[one_inds] .= 1
That is, you first choose the number of ones you are going to set (from a binomial distribution, since you make m independent choices), and then select that many indices without repetition (by taking a prefix of a shuffled list of all indices).
Written this way, choosing a random index of a one is just
rand(one_inds)

How to find contiguous subarray of integers in an array from n arrays such that the sum of elements of such contiguous subarrays is minimum

Input: n arrays of integers of length p.
Output: An array of p integers built by copying contiguous subarrays of the input arrays into matching indices of the output, satisfying the following conditions.
At most one subarray is used from each input array.
Every index of the output array is filled from exactly one subarray.
The output array has the minimum possible sum.
Suppose I have 2 arrays:
[1,7,2]
[2,1,8]
Say I choose the subarray [1,7] from array 1 and the subarray [8] from array 2. These two subarrays do not overlap at any index, each one is contiguous, and no array contributes more than one subarray.
The number of elements covered by the collection is 2 + 1 = 3, which equals the length of an individual array (i.e. len(array 1), which is 3). So this collection is valid.
The sum here for [1,7] and [8] is 1 + 7 + 8 = 16.
We have to find a collection of such subarrays such that the total sum of the elements of subarrays is minimum.
A solution for the above 2 arrays would be the collection [2,1] from array 2 and [2] from array 1.
This is a valid collection and the sum is 2 + 1 + 2 = 5 which is the minimum sum for any such collection in this case.
I cannot think of any optimal or correct approach, so I need help.
Some Ideas:
I tried a greedy approach: at each index, choose the minimum element across all arrays. Since the index always increases after a valid choice (subarrays cannot overlap), I don't have to store minimum-value indices for every array. But this approach is clearly not correct, since it can end up taking elements from the same array more than once.
Another method I thought of was to start at index 0 for all arrays and keep a running sum of the first k elements of each array (the number of arrays is finite, so these sums fit in an array). Take the minimum across these sums; the subarray giving this minimum (the first k elements of that array) is a candidate valid subarray of size k. If we take it, we can add the (k+1)-th element of every array to its running sum, and as long as the original array still gives the minimum we keep repeating this step. When the minimum fails, the subarray up to the last index for which the minimum held is a valid starting subarray. However, this approach also clearly fails, because there could exist another subarray of size smaller than k that gives the minimum together with the remaining elements of our size-k subarray.
Sorting is not possible either, since sorting breaks the contiguity condition.
Of course, there is a brute force method too.
I suspect that some refinement of the greedy approach might make progress here.
I have searched on other Stackoverflow posts, but couldn't find anything which could help my problem.
To get you started, here's a recursive branch-&-bound backtracking - and potentially exhaustive - search. Ordering heuristics can have a huge effect on how efficient these are, but without mounds of "real life" data to test against there's scant basis for picking one over another. This incorporates what may be the single most obvious ordering rule.
Because it's a work in progress, it prints stuff as it goes along: all solutions found, whenever they meet or beat the current best; and the index at which a search is cut off early, when that happens (because it becomes obvious that the partial solution at that point can't be extended to meet or beat the best full solution known so far).
For example,
>>> crunch([[5, 6, 7], [8, 0, 3], [2, 8, 7], [8, 2, 3]])
displays
new best
L2[0:1] = [2] 2
L1[1:2] = [0] 2
L3[2:3] = [3] 5
sum 5
cut at 2
L2[0:1] = [2] 2
L1[1:3] = [0, 3] 5
sum 5
cut at 2
cut at 2
cut at 2
cut at 1
cut at 1
cut at 2
cut at 2
cut at 2
cut at 1
cut at 1
cut at 1
cut at 0
cut at 0
So it found two ways to get a minimal sum 5, and the simple ordering heuristic was effective enough that all other paths to full solutions were cut off early.
def disp(lists, ixs):
    from itertools import groupby
    total = 0
    i = 0
    for k, g in groupby(ixs):
        j = i + len(list(g))
        chunk = lists[k][i:j]
        total += sum(chunk)
        print(f"L{k}[{i}:{j}] = {chunk} {total}")
        i = j

def crunch(lists):
    n = len(lists[0])
    assert all(len(L) == n for L in lists)
    # Start with a sum we know can be beat.
    smallest_sum = sum(lists[0]) + 1
    smallest_ixs = [None] * n
    ixsofar = [None] * n

    def inner(i, sumsofar, freelists):
        nonlocal smallest_sum
        assert sumsofar <= smallest_sum
        if i == n:
            print()
            if sumsofar < smallest_sum:
                smallest_sum = sumsofar
                smallest_ixs[:] = ixsofar
                print("new best")
            disp(lists, ixsofar)
            print("sum", sumsofar)
            return
        # Simple greedy heuristic: try available lists in the order
        # of smallest-to-largest at index i.
        for lix in sorted(freelists, key=lambda lix: lists[lix][i]):
            L = lists[lix]
            newsum = sumsofar
            freelists.remove(lix)
            # Try all slices in L starting at i.
            for j in range(i, n):
                newsum += L[j]
                # ">" to find all smallest answers;
                # ">=" to find just one (potentially faster)
                if newsum > smallest_sum:
                    print("cut at", j)
                    break
                ixsofar[j] = lix
                inner(j + 1, newsum, freelists)
            freelists.add(lix)

    inner(0, 0, set(range(len(lists))))
How bad is brute force?
Bad. A brute force way to compute it: say there are n lists each with p elements. The code's ixsofar vector contains p integers each in range(n). The only constraint is that all occurrences of any integer that appears in it must be consecutive. So a brute force way to compute the total number of such vectors is to generate all p-tuples and count the number that meet the constraints. This is woefully inefficient, taking O(n**p) time, but is really easy, so hard to get wrong:
def countb(n, p):
    from itertools import product, groupby
    result = 0
    seen = set()
    for t in product(range(n), repeat=p):
        seen.clear()
        for k, g in groupby(t):
            if k in seen:
                break
            seen.add(k)
        else:
            #print(t)
            result += 1
    return result
For small arguments, we can use that as a sanity check on the next function, which is efficient. This builds on common "stars and bars" combinatorial arguments to deduce the result:
def count(n, p):
    # n lists of length p
    # for r regions, r from 1 through min(p, n)
    # number of ways to split up: comb((p - r) + r - 1, r - 1)
    # for each, ff(n, r) ways to spray in list indices = comb(n, r) * r!
    from math import comb, prod
    total = 0
    for r in range(1, min(n, p) + 1):
        total += comb(p-1, r-1) * prod(range(n, n-r, -1))
    return total
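A quick way to run that sanity check:

for n in range(1, 5):
    for p in range(1, 6):
        assert countb(n, p) == count(n, p), (n, p)
print(count(3, 3))   # 21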
Faster
Following is the best code I have for this so far. It builds more "smarts" into the code I posted before. In one sense, it's very effective. For example, for randomized p = n = 20 inputs it usually finishes within a second. That's nothing to sneeze at, since:
>>> count(20, 20)
1399496554158060983080
>>> _.bit_length()
71
That is, trying every possible way would effectively take forever. The number of cases to try doesn't even fit in a 64-bit int.
On the other hand, boost n (the number of lists) to 30, and it can take an hour. At 50, I haven't seen a non-contrived case finish yet, even if left to run overnight. The combinatorial explosion eventually becomes overwhelming.
OTOH, I'm looking for the smallest sum, period. If you needed to solve problems like this in real life, you'd either need a much smarter approach, or settle for iterative approximation algorithms.
Note: this is still a work in progress, so isn't polished, and prints some stuff as it goes along. Mostly that's been reduced to running a "watchdog" thread that wakes up every 10 minutes to show the current state of the ixsofar vector.
def crunch(lists):
    import datetime
    now = datetime.datetime.now
    start = now()
    n = len(lists[0])
    assert all(len(L) == n for L in lists)
    # Start with a sum we know can be beat.
    smallest_sum = min(map(sum, lists)) + 1
    smallest_ixs = [None] * n
    ixsofar = [None] * n

    import threading
    def watcher(stop):
        if stop.wait(60):
            return
        lix = ixsofar[:]
        while not stop.wait(timeout=600):
            print("watch", now() - start, smallest_sum)
            nlix = ixsofar[:]
            for i, (a, b) in enumerate(zip(lix, nlix)):
                if a != b:
                    nlix.insert(i, "--- " + str(i) + " -->")
                    print(nlix)
                    del nlix[i]
                    break
            lix = nlix

    stop = threading.Event()
    w = threading.Thread(target=watcher, args=[stop])
    w.start()

    def inner(i, sumsofar, freelists):
        nonlocal smallest_sum
        assert sumsofar <= smallest_sum
        if i == n:
            print()
            if sumsofar < smallest_sum:
                smallest_sum = sumsofar
                smallest_ixs[:] = ixsofar
                print("new best")
            disp(lists, ixsofar)
            print("sum", sumsofar, now() - start)
            return
        # If only one input list is still free, we have to take all
        # of its tail. This code block isn't necessary, but gives a
        # minor speedup (skips layers of do-nothing calls),
        # especially when the length of the lists is greater than
        # the number of lists.
        if len(freelists) == 1:
            lix = freelists.pop()
            L = lists[lix]
            for j in range(i, n):
                ixsofar[j] = lix
                sumsofar += L[j]
                if sumsofar >= smallest_sum:
                    break
            else:
                inner(n, sumsofar, freelists)
            freelists.add(lix)
            return
        # Peek ahead. The smallest completion we could possibly get
        # would come from picking the smallest element in each
        # remaining column (restricted to the lists - rows - still
        # available). This probably isn't achievable, but is an
        # absolute lower bound on what's possible, so can be used to
        # cut off searches early.
        newsum = sumsofar
        for j in range(i, n):  # pick smallest from column j
            newsum += min(lists[lix][j] for lix in freelists)
        if newsum >= smallest_sum:
            return
        # Simple greedy heuristic: try available lists in the order
        # of smallest-to-largest at index i.
        sortedlix = sorted(freelists, key=lambda lix: lists[lix][i])
        # What's the next int in the previous slice? As soon as we
        # hit an int at least that large, we can do at least as well
        # by just returning, to let the caller extend the previous
        # slice instead.
        if i:
            prev = lists[ixsofar[i-1]][i]
        else:
            prev = lists[sortedlix[-1]][i] + 1
        for lix in sortedlix:
            L = lists[lix]
            if prev <= L[i]:
                return
            freelists.remove(lix)
            newsum = sumsofar
            # Try all non-empty slices in L starting at i.
            for j in range(i, n):
                newsum += L[j]
                if newsum >= smallest_sum:
                    break
                ixsofar[j] = lix
                inner(j + 1, newsum, freelists)
            freelists.add(lix)

    inner(0, 0, set(range(len(lists))))
    stop.set()
    w.join()
Bounded by DP
I've had a lot of fun with this :-) Here's the approach they were probably looking for, using dynamic programming (DP). I have several programs that run faster in "smallish" cases, but none that can really compete on a non-contrived 20x50 case. The runtime is O(2**n * n**2 * p). Yes, that's more than exponential in n! But it's still a minuscule fraction of what brute force can require (see above), and is a hard upper bound.
Note: this is just a loop nest slinging machine-size integers, and using no "fancy" Python features. It would be easy to recode in C, where it would run much faster. As is, this code runs over 10x faster under PyPy (as opposed to the standard CPython interpreter).
Key insight: suppose we're going left to right, have reached column j, the last list we picked from was D, and before that we picked columns from lists A, B, and C. How can we proceed? Well, we can pick the next column from D too, and the "used" set {A, B, C} doesn't change. Or we can pick some other list E, the "used" set changes to {A, B, C, D}, and E becomes the last list we picked from.
Now in all these cases, the details of how we reached state "used set {A, B, C} with last list D at column j" make no difference to the collection of possible completions. It doesn't matter how many columns we picked from each, or the order in which A, B, C were used: all that matters to future choices is that A, B, and C can't be used again, and D can be but - if so - must be used immediately.
Since all ways of reaching this state have the same possible completions, the cheapest full solution must have the cheapest way of reaching this state.
So we just go left to right, one column at a time, and remember for each state in the column the smallest sum reaching that state.
This isn't cheap, but it's finite ;-) Since states are subsets of row indices, combined with (the index of) the last list used, there are 2**n * n possible states to keep track of. In fact, there are only half that, since the way sketched above never includes the index of the last-used list in the used set, but catering to that would probably cost more than it saves.
As is, states here are not represented explicitly. Instead there's just a large list of sums-so-far, of length 2**n * n. The state is implied by the list index: index i represents the state where:
i >> n is the index of the last-used list.
The last n bits of i are a bitset, where bit 2**j is set if and only if list index j is in the set of used list indices.
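In code, the encoding is just a shift and a mask; a tiny illustrative helper (the DP code below inlines this rather than calling functions):

def encode_state(last_list, used_bits, n):
    # high bits: index of the last-used list; low n bits: bitmask of fully-used lists
    return (last_list << n) | used_bits

def decode_state(key, n):
    return key >> n, key & ((1 << n) - 1)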
You could, e.g., represent these by dicts mapping (frozenset, index) pairs to sums instead, but then memory use explodes, runtime zooms, and PyPy becomes much less effective at speeding it.
Sad but true: like most DP algorithms, this finds "the best" answer but retains scant memory of how it was reached. Adding code to allow for that is harder than what's here, and typically explodes memory requirements. Probably easiest here: write new to disk at the end of each outer-loop iteration, one file per column. Then memory use isn't affected. When it's done, those files can be read back in again, in reverse order, and mildly tedious code can reconstruct the path it must have taken to reach the winning state, working backwards one column at a time from the end.
def dumbdp(lists):
    import datetime
    _min = min
    now = datetime.datetime.now
    start = now()
    n = len(lists)
    p = len(lists[0])
    assert all(len(L) == p for L in lists)
    rangen = range(n)
    USEDMASK = (1 << n) - 1
    HUGE = sum(sum(L) for L in lists) + 1
    new = [HUGE] * (2**n * n)
    for i in rangen:
        new[i << n] = lists[i][0]
    for j in range(1, p):
        print("working on", j, now() - start)
        old = new
        new = [HUGE] * (2**n * n)
        for key, g in enumerate(old):
            if g == HUGE:
                continue
            i = key >> n
            new[key] = _min(new[key], g + lists[i][j])
            newused = (key & USEDMASK) | (1 << i)
            for i in rangen:
                mask = 1 << i
                if newused & mask == 0:
                    newkey = newused | (i << n)
                    new[newkey] = _min(new[newkey], g + lists[i][j])
    result = min(new)
    print("DONE", result, now() - start)
    return result
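To make the path-reconstruction idea above concrete without touching the disk, here is a hypothetical variant (the name and structure are mine, not the author's): it keeps every column's state table in memory, which multiplies memory use by p, and then walks backwards from the cheapest final state to recover which list each output column came from. It reuses the same state encoding as dumbdp.

def dumbdp_with_path(lists):
    n = len(lists)
    p = len(lists[0])
    HUGE = sum(sum(L) for L in lists) + 1
    USEDMASK = (1 << n) - 1
    size = (1 << n) * n
    # cols[j][key] = cheapest sum that fills columns 0..j and ends in state `key`
    cur = [HUGE] * size
    for i in range(n):
        cur[i << n] = lists[i][0]
    cols = [cur]
    for j in range(1, p):
        old = cols[-1]
        cur = [HUGE] * size
        for key, g in enumerate(old):
            if g == HUGE:
                continue
            i = key >> n
            cur[key] = min(cur[key], g + lists[i][j])        # extend the current slice
            newused = (key & USEDMASK) | (1 << i)
            for i2 in range(n):
                if not (newused >> i2) & 1:                  # start a new slice from list i2
                    newkey = newused | (i2 << n)
                    cur[newkey] = min(cur[newkey], g + lists[i2][j])
        cols.append(cur)
    # Cheapest final state, then walk backwards one column at a time.
    key = min(range(size), key=lambda k: cols[-1][k])
    best_sum = cols[-1][key]
    path = [None] * p          # path[j] = index of the list column j is taken from
    for j in range(p - 1, 0, -1):
        i = key >> n
        path[j] = i
        want = cols[j][key] - lists[i][j]
        if cols[j - 1][key] == want:       # the same list also covered column j-1
            continue
        used = key & USEDMASK              # otherwise a new slice began at column j
        for i_prev in range(n):
            if (used >> i_prev) & 1:
                prev_key = (i_prev << n) | (used & ~(1 << i_prev))
                if cols[j - 1][prev_key] == want:
                    key = prev_key
                    break
    path[0] = key >> n
    return best_sum, path

# e.g. dumbdp_with_path([[5, 6, 7], [8, 0, 3], [2, 8, 7], [8, 2, 3]])
# returns (5, [2, 1, 1]): column 0 from list 2, columns 1 and 2 from list 1.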

How to calculate efficiently the index of an array, where the cumulative sum exceeded a threshold in Scala - Spark?

Suppose we have an array of integers, of 100 elements.
val a = Array(312, 102, 95, 255, ...)
I want to find the index (say k) of the array where the cumulative sum of the first k+1 elements is greater than a certain threshold, but the cumulative sum of the first k elements is not.
Because of the high number of elements in the array, I estimated a lower and an upper index between which the k index should be:
k_lower <= k <= k_upper
My question is: what is the best way to find this k index?
I tried it with a while loop, with k_lower = 30, k_upper = 47 and the threshold = 20000:
var sum = 0
var k = 30
while (k <= 47 && sum <= 20000) {
  sum = test.take(k).sum
  k += 1
}
print(k-2)
I obtained the right answer, but I'm pretty sure there is a more efficient or more "Scala-ish" solution for this, and I'm really new to Scala. I also have to implement this in Spark.
Another question:
To optimize the method of finding the k index, I thought of using binary search, where the min and max values are k_lower, respective k_upper. But my attempt to implement this was unsuccessful. How should I do this?
I am using Scala 2.10.6 and Spark 1.6.0
Update!
I thought this approach would be a good solution for my problem, but now I think I approached it wrongly. My actual problem is the following:
I have a bunch of JSON-s, which are loaded into Spark as an RDD with
val eachJson = sc.textFile("JSON_Folder/*.json")
I want to split the data into several partitions based on their size: the size of the concatenated JSON-s in each partition should be under a threshold. My idea was to go through the RDD one element at a time, calculate the size of each JSON and add it to an accumulator. When the accumulator exceeds the threshold, I remove the last JSON, take all the JSON-s up to that point as a new RDD, and repeat with the remaining JSON-s. I read about tail recursion, which could be a solution for this, but I wasn't able to implement it, so I tried to solve it differently. I mapped the sizes of the JSON-s and obtained an RDD[Int], and I managed to get all the indexes of this array where the cumulative sum exceeds the threshold:
def calcRDDSize(rdd: RDD[String]): Long = {
  rdd.map(_.getBytes("UTF-8").length.toLong)
    .reduce(_ + _) // add the sizes together
}

val jsonSize = eachJson.map(s => s.getBytes("UTF-8").length)
val threshold = 20000
val totalSize = calcRDDSize(eachJson)
val numberOfPartitions = totalSize / threshold
val splitIndexes = scala.collection.mutable.ArrayBuffer.empty[Int]
var i = 0
while (i < numberOfPartitions) {
  splitIndexes += jsonSize.collect().toStream.scanLeft(0){ _ + _ }.takeWhile(_ < (i + 1) * threshold).length - 1
  i = i + 1
}
However, I don't like this solution, because in the while loop I traverse the Stream several times, which is not really efficient. And now that I have the indexes where I have to split the RDD, I don't know how to split it.
I would do this with scanLeft, and further optimize it by using a lazy collection:
a
.toStream
.scanLeft(0){_ + _}
.tail
.zipWithIndex
.find{case(cumsum,i) => cumsum > limit}
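For the binary-search idea mentioned in the question, here is a sketch in Python for brevity (the function name is made up): since the element values are nonnegative sizes, the cumulative sums are nondecreasing, so after a single pass to build the prefix sums over [k_lower, k_upper] a bisection finds the crossing point.

import bisect
from itertools import accumulate

def find_k(values, threshold, k_lower, k_upper):
    base = sum(values[:k_lower])                  # sum of the first k_lower elements
    # prefix[i] = sum of elements k_lower .. k_lower+i, so base + prefix[i] is the
    # cumulative sum of the first k_lower + i + 1 elements (nondecreasing for sizes >= 0)
    prefix = list(accumulate(values[k_lower:k_upper + 1]))
    pos = bisect.bisect_right(prefix, threshold - base)
    return k_lower + pos if pos < len(prefix) else None   # None: threshold never exceeded in range

# e.g. find_k(a, 20000, 30, 47) for the numbers in the question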

How to sample from a Scala array efficiently

I want to sample (with replacement) from a Scala array; the sample size can be much larger than the length of the array. How can I do this efficiently? With the following code the running time is linear in the sample size, so when the sample size is very big it is slow if we need to do the sampling many times:
def getSample(dataArray: Array[Double], sampleSize: Int, seed: Int): Array[Double] = {
  val arrLength = dataArray.length
  val r = new scala.util.Random(seed)
  Array.fill(sampleSize)(dataArray(r.nextInt(arrLength)))
}
val myArr= Array(1.0,5.0,9.0,4.0,7.0)
getSample(myArr, 100000, 28)
The probability that any given element of an array of length $n$ appears at least once in a sample of size $k$ is $1-(1-1/n)^k$. If this value is close to 1, which occurs when $k$ is large compared to $n$, then the following algorithm might be a good choice depending on your needs:
import org.apache.commons.math3.random.MersenneTwister
import org.apache.commons.math3.distribution.BinomialDistribution

def getSampleCounts[T](data: Array[T], k: Int, seed: Long): Array[Int] = {
  val rng = new MersenneTwister(seed)
  val n = data.length
  val counts = new Array[Int](n)
  var remaining = k
  var i = 0
  // Element i appears Binomial(remaining, 1/(n - i)) times in the sample;
  // the leftover draws are spread over the elements that come after it.
  while (i < n - 1 && remaining > 0) {
    val j = new BinomialDistribution(rng, remaining, 1.0 / (n - i)).sample()
    counts(i) = j
    remaining -= j
    i += 1
  }
  counts(n - 1) = remaining
  counts
}
Note that this algorithm does not return a sample. Instead it returns an Array[Int] whose $i$-th entry is equal to the number of times data(i) appears in the random sample. This may not be suitable for all applications, but for some use cases having the sample as an Iterable over (value, count) pairs (which can be obtained by data.view.zip(getSampleCounts(data, k, seed)), for example) is actually very convenient, since it often lets us do a computation once per group of equal samples. For example, suppose I had an expensive function f: T => Double and I wanted to compute the sample mean of f applied to a random sample of size $k$ drawn from data. Then we could do the following:
data.view.zip(getSampleCounts(data, k, seed)).map({case (x, count) => f(x)*count}).sum/k
This computation of the sample mean evaluates f $n$ times instead of $k$ times (recall that we are assuming $k$ is large compared to $n$).
Note that getSampleCounts will loop at most $n$ times where $n$ is data.length. Also, sampling from the binomial distribution in each iteration, assuming this is done in a reasonable fashion in the apache.commons.math3 library, should have complexity no worse than $O(\log k)$ (inverse CDF method and binary search.) So the complexity of the above algorithm is $O(n \log k)$ where $n$ is data.length and $k$ is the number of samples you want to draw.
There is no way around it. If you need to take N elements with constant-time element access, the complexity will be O(N) (linear in the sample size) no matter what.
You can defer/amortize the cost by making it lazy. For instance you can return a Stream or Iterator that evaluates each element as you access it. This will help you save on memory if you can fold that stream as you consume it. In other words, you can skip the copy step and work directly with the initial array - not always possible; it depends on the task.
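For illustration, the same idea sketched in Python for brevity (the names are made up): a generator draws each sampled element only when it is consumed, so nothing of size k is materialized unless you ask for it.

import random

def lazy_sample(data, k, seed=None):
    rng = random.Random(seed)
    n = len(data)
    for _ in range(k):
        # sampling with replacement, like the getSample in the question
        yield data[rng.randrange(n)]

# e.g. a fold over the sample without ever building a length-k collection:
# total = sum(f(x) for x in lazy_sample(my_arr, 100000, seed=28))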
To make this sampling program run faster, use the Akka actor framework to run the sampling jobs in parallel.
Create a master actor that distributes the sampling work to worker actors and concatenates the elements from the different workers. Each worker actor prepares/collects a fixed number of sample elements and gives the resulting collection back to the master as an immutable array. Upon receiving the user-defined 'WorkDone' message from a worker, the master actor concatenates the elements into the final collection.
It is easy with a list. Use the following implicit class:
import scala.util.Random

object ListImplicits {
  implicit class SampledArray[T](in: List[T]) {
    def sample(n: Int, seed: Option[Long] = None): List[T] = {
      seed match {
        case Some(s) => Random.setSeed(s)
        case _ => // nothing
      }
      Random.shuffle(in).take(n)
    }
  }
}
And then import the object and use collection conversions to switch from Array to list (slight overhead):
import ListImplicits.SampledArray
val n = 100000
val list = (0 to n).toList.map(i => Random.nextInt())
val array = list.toArray
val t0 = System.currentTimeMillis()
array.toList.sample(5).toArray
val t1 = System.currentTimeMillis()
list.sample(5)
val t2 = System.currentTimeMillis()
println( "Array (conversion) => delta = " + (t1-t0) + " ms") // 10 ms
println( "List => delta = " + (t2-t1) + " ms") // 8 ms

Find pairs with distance in ruby array

I have a big array with a sequence of values.
To check whether the value at position x has an influence on the value at position x+distance,
I want to find all the pairs
pair = [values[x], values[x+distance]]
The following code works
pairs_with_distance = []
values.each_cons(1+distance) do |sequence|
  pairs_with_distance << [sequence[0], sequence[-1]]
end
but it looks complicated, and I wonder if I can make it shorter and clearer.
You can make the code shorter by using map directly:
pairs_with_distance = values.each_cons(1 + distance).map { |seq|
  [seq.first, seq.last]
}
I prefer something like the example below, because it has short, readable lines of code, and because it separates the steps -- an approach that allows you to give meaningful names to intermediate calculations (groups in this case). You can probably come up with better names based on the real domain of the application.
values = [11,22,33,44,55,66,77]
distance = 2
groups = values.each_cons(1 + distance)
pairs = groups.map { |seq| [seq.first, seq.last] }
p pairs
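For the example above this prints [[11, 33], [22, 44], [33, 55], [44, 66], [55, 77]]: each value paired with the value distance positions later.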
