Minimize function in adjacent items of an array - arrays

I have an array (arr) of elements, and a function (f) that takes 2 elements and returns a number.
I need a permutation of the array such that f(arr[i], arr[i+1]) is as small as possible for each i in arr. (It should also wrap around, i.e. it should also minimize f(arr[arr.length - 1], arr[0]).)
Also, f works sort of like a distance, so f(a,b) == f(b,a)
I don't need the optimal solution if it's too inefficient, just one that works reasonably well and is fast, since I need to calculate these pretty much in real time. (I don't know exactly what the length of arr is, but I think it could be something around 30.)

What does "such that f(arr[i], arr[i+1]) is as little as possible for each i in arr" mean? Do you want minimize the sum? Do you want to minimize the largest of those? Do you want to minimize f(arr[0],arr[1]) first, then among all solutions that minimize this, pick the one that minimizes f(arr[1],arr[2]), etc., and so on?
If you want to minimize the sum, this is exactly the Traveling Salesman Problem in its full generality (well, "metric TSP", maybe, if your f's indeed form a metric). There are clever optimizations to the naive solution that will give you the exact optimum and run in reasonable time for about n=30; you could use one of those, or one of the heuristics that give you approximations.
If you want to minimize the maximum, it is a simpler problem although still NP-hard: you can do binary search on the answer; for a particular value d, draw edges for pairs which have f(x,y) <= d and check whether the resulting graph has a Hamiltonian cycle.
If you want to minimize it lexicographically, it's trivial: pick the pair with the shortest distance and put it as arr[0],arr[1], then pick arr[2] that is closest to arr[1], and so on.
Depending on where your f(,)s are coming from, this might be a much easier problem than TSP; it would be useful for you to mention that as well.
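As a concrete example of the heuristic route mentioned above, here is a minimal sketch of a 2-opt local-search pass, one common TSP improvement heuristic. The names `order` (the current cyclic ordering as a Python list) and `f` (the symmetric cost function from the question) are just placeholders.

```python
def two_opt(order, f):
    """One 2-opt local search (a sketch, not an exact TSP solver): repeatedly
    replace two edges of the cycle with two cheaper ones by reversing the
    segment between them, until no such improvement exists."""
    n = len(order)
    improved = True
    while improved:
        improved = False
        for i in range(n - 1):
            for j in range(i + 2, n):
                if i == 0 and j == n - 1:
                    continue  # these two edges share a node; reversing does nothing
                a, b = order[i], order[i + 1]        # first edge (a, b)
                c, d = order[j], order[(j + 1) % n]  # second edge (c, d)
                # reversing order[i+1 .. j] replaces edges (a,b) and (c,d)
                # with (a,c) and (b,d)
                if f(a, c) + f(b, d) < f(a, b) + f(c, d):
                    order[i + 1:j + 1] = order[i + 1:j + 1][::-1]
                    improved = True
    return order
```

In practice you would seed it with a cheap initial ordering (for instance the greedy construction described in the next answer) and let 2-opt clean it up; for n around 30 this is easily fast enough for real time.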

You're not entirely clear what you're optimizing - the sum of the f(a[i],a[i+1]) values, the max of them, or something else?
In any event, with your speed limitations, greedy is probably your best bet - pick an element to make a[0] (it doesn't matter which due to the wraparound), then choose each successive element a[i+1] to be the one that minimizes f(a[i],a[i+1]).
That's going to be O(n^2), but with 30 items that will be fine unless this is in an inner loop or something. If your f() really is associative and commutative, then you might be able to do it in O(n log n); clearly no faster, by reduction to sorting.
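A minimal sketch of that greedy construction, assuming `arr` is a Python list and `f` is the cost function from the question:

```python
def greedy_cycle(arr, f):
    """Greedy nearest-neighbour heuristic: start from an arbitrary element and
    repeatedly append the cheapest-to-reach unused element. O(n^2) calls to f."""
    remaining = list(arr)
    order = [remaining.pop(0)]  # starting element doesn't matter because of the wraparound
    while remaining:
        last = order[-1]
        nxt = min(remaining, key=lambda x: f(last, x))
        remaining.remove(nxt)
        order.append(nxt)
    return order
```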

I don't think the problem is well-defined in this form:
Let's instead define n functions g_i : Perms -> Reals by
g_i(p) = f(a^p[i], a^p[i+1]), wrapping around when i+1 > n.
To say you want to minimize f over all permutations really implies you can pick a value of i and minimize g_i over all permutations, but for any p which minimizes g_i, a related but different permutation minimizes g_j (just conjugate the permutation). So it makes no sense to speak of minimizing f over permutations for each i.

Unless we know something more about the structure of f(x,y), this is an NP-hard problem. Given a graph G and any vertices x,y, let f(x,y) be 1 if there is no edge and 0 if there is an edge. What the problem asks for is an ordering of the vertices so that the maximum f(arr[i],arr[i+1]) value is minimized. Since this function can only be 0 or 1, a result of 0 is equivalent to finding a Hamiltonian path in G, and 1 says that no such path exists.
The function would have to have some sort of structure that disallows this example for it to be tractable.

Related

a minimal partition of a rectangular array into rectangular subarrays of constant value

let A be an MxN matrix with entries a_ij in {0, ..., n-1}.
we can think of the entries in A as a rectangular grid that has been n-colored.
I am interested in partitioning each colored region into rectangles, in such a way that the number of rectangles is minimized. That is, I want to produce n sets of quadruples
L_k = {(i, j, w, h) | a_xy = k for all i <= x < i + w, j <= y < j + h}
satisfying the condition that every a_ij belongs to exactly one rectangle and all of the rectangles are disjoint. Furthermore, the sum
|L_0| + ... + |L_(n-1)| is minimized.
Obviously, minimizing each of the |L_k| can be done independently, but there is also a requirement that this happen extremely fast. Assume this is a real-time application. It may be the case that since the sets are disjoint, sharing information between the L_ks speeds things up more than doing everything in parallel. n can be small (say, <100) and M and N can be large.
I assume there is a dynamic programming approach to this, or maybe there is a way to rephrase it as a graph problem, but it is not immediately obvious to me how to approach this.
EDIT:
There seems to be some confusion about what I mean. Here is a picture to help illustrate.
Imagine this is a 10x10 matrix with red = 0, green = 1, and blue = 2. Draw the black boxes as in the picture, minimizing the number of boxes. The output here would be
L_0 = {(0,8,2,2),(1,7,2,1),(2,8,1,1),(4,5,4,2),(6,7,2,2)}
L_1 = {(0,0,4,4),(4,0,6,2),(6,2,2,3),(8,8,2,2)}
L_2 = {(0,4,4,3),(0,7,1,1),(2,9,6,1),(3,7,3,2),(4,2,2,4),(8,2,2,6)}
One thing to do immediately is to note that you can separate the problem into individual instances, one per connected region of a single color. From there, the linked post explains how you can use a maximum matching to construct the optimal solution.
But it's probably quite hard to implement that (and judging by your tag of C, even harder). So I recommend one of two strategies: backtracking, or greedy.
To do backtracking, you will recurse on the set of tiles which are not yet covered (I assume this makes sense, since you have listed all integer coordinates; if not, the approach changes, but not massively). Take the topmost, leftmost uncovered tile and loop over all possible rectangles which contain it (there are only ~n^2 of them, and hopefully fewer). Then recurse.
To optimize this, you will want to prune. An easy way to prune this is to stop recursing if you have already seen a solution with a better answer. This pruning is known as branch and bound.
You can also just quit early in the backtracking, if you only need an approximate answer. Since you mentioned "real-time application," it might be okay if you're only off by a little.
Continuing with the idea of approximation, you could also do something similar by just greedily picking the largest rectangle that you can at the moment. This is annoying to implement, but doable. You can also combine this with recursion and backing out early.
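For illustration, here is a minimal sketch of a greedy cover in the spirit of that last suggestion. It is a simplified variant (anchored at the topmost-leftmost uncovered cell, grown widest-first rather than truly largest-first), and the function name and output format are my own assumptions, not part of the question:

```python
def greedy_rectangles(grid):
    """Greedily cover each colour region with axis-aligned rectangles:
    take the topmost-leftmost uncovered cell, grow a same-colour rectangle
    as wide as possible and then as tall as possible, mark it covered, repeat.
    `grid` is a list of rows; returns a dict colour -> list of (row, col, h, w)."""
    rows, cols = len(grid), len(grid[0])
    covered = [[False] * cols for _ in range(rows)]
    result = {}
    for r in range(rows):
        for c in range(cols):
            if covered[r][c]:
                continue
            colour = grid[r][c]
            # grow to the right while the colour matches and cells are uncovered
            w = 1
            while c + w < cols and not covered[r][c + w] and grid[r][c + w] == colour:
                w += 1
            # grow downwards while the whole next row of width w still matches
            h = 1
            while r + h < rows and all(
                not covered[r + h][c + k] and grid[r + h][c + k] == colour
                for k in range(w)
            ):
                h += 1
            for dr in range(h):
                for dc in range(w):
                    covered[r + dr][c + dc] = True
            result.setdefault(colour, []).append((r, c, h, w))
    return result
```

This is not optimal, but it never overlaps rectangles and covers every cell exactly once, so it is a usable starting point (or an initial bound for branch and bound).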

Optimal combination of “baskets” to best approximate target basket?

Say I have some array of length n where arr[k] represents how much of object k I want. I also have some arbitrary number of arrays which I can sum integer multiples of in any combination - my goal being to minimise the sum of the absolute differences across each element.
So as a dumb example if my target was [2,1] and my options were A = [2,3] and B = [0,1], then I could take A - 2B and have a cost of 0
I'm wondering if there is an efficient algorithm for approximating something like this? It has a weird knapsack-y flavour to it; is it maybe just intractable for large n? It doesn't seem very amenable to DP methods.
This is the (NP-hard) closest vector problem. There's an algorithm due to Fincke and Pohst ("Improved methods for calculating vectors of short length in a lattice, including a complexity analysis"), but I haven't personally worked with it.
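As a crude baseline (not the Fincke–Pohst algorithm from that paper, just a rounding heuristic one might try first), you can solve the real-valued least-squares relaxation and round the coefficients; the function and variable names below are made up for the example:

```python
import numpy as np

def rounded_combination(target, baskets):
    """Rounding heuristic for the basket problem: find real coefficients c
    minimising ||c @ B - target||_2, then round them to integers and report
    the resulting L1 cost. Can be far from optimal, but is a cheap baseline."""
    B = np.asarray(baskets, dtype=float)   # one row per available basket
    t = np.asarray(target, dtype=float)
    c, *_ = np.linalg.lstsq(B.T, t, rcond=None)   # solve B.T @ c ~= t
    c_int = np.round(c).astype(int)
    cost = np.abs(c_int @ B - t).sum()
    return c_int, cost

# The toy example from the question: target [2,1], options A=[2,3], B=[0,1]
# gives coefficients [1, -2] (i.e. A - 2B) with cost 0.
print(rounded_combination([2, 1], [[2, 3], [0, 1]]))
```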

computing function of neighbors efficiently on lattice

I'm studying the Ising model, and I'm trying to efficiently compute a function H(σ) where σ is the current state of an LxL lattice (that is, σ_ij ∈ {+1, -1} for i,j ∈ {1,2,...,L}). To compute H for a particular σ, I need to perform the following calculation:
H(σ) = -J Σ_{⟨i j⟩} σ_i σ_j
where ⟨i j⟩ indicates that sites σ_i and σ_j are nearest neighbors and (suppose) J is a constant.
A couple of questions:
Should I store my state σ as an L×L matrix or as a list of length L²? Is one better than the other for memory access in RAM (which I guess depends on the way I'm accessing elements...)?
In either case, how can I best compute H?
Really I think this boils down to how can I access (and manipulate) the neighbors of every state most efficiently.
Some thoughts:
I see that if I loop through each element in the list or matrix that I'll be double counting, so is there a "best" way to return the unique neighbors?
Is there a better data structure that I'm not thinking of?
Your question is a bit broad and a bit confusing for me, so excuse me if my answer is not the one you are looking for, but I hope it will help (a bit).
An array is faster than a list when it comes to indexing. A matrix is a 2D array, for example a[N][M] (where N and M are both L for you). That means that you first access a[i] and then a[i][j].
However, you can avoid this double access by emulating the 2D array with a 1D array. In that case, to access element a[i][j] of your matrix, you would now do a[i * L + j].
That way you only load once, at the cost of a multiply and an add, which may still be faster in some cases.
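A tiny illustration of that emulation (in Python just to show the index arithmetic; the names are illustrative, and in C you would do the same with a plain 1D array):

```python
L = 4
flat = [0] * (L * L)        # 1D storage emulating an L x L grid

def idx(i, j):
    return i * L + j        # element (i, j) of the matrix lives at flat[i*L + j]

flat[idx(2, 3)] = -1        # equivalent to a[2][3] = -1 in the 2D version
```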
Now as for the Nearest Neighbor question, it seems that you are using a square-lattice Ising model, which means that you are working in 2 dimensions.
A very efficient data structure for Nearest Neighbor Search in low dimensions is the kd-tree. The construction of that tree takes O(n log n), where n is the size of your dataset.
Now you should think if it's worth it to build such a data structure.
PS: There is a plethora of libraries implementing the kd-tree, such as CGAL.
I encountered this problem during one of my school assignments and I think the solution depends on which programming language you are using.
In terms of efficiency, there is no better way than to write a for loop to sum the neighbours (which are the 4 points {(i±1, j), (i, j±1)} for a given (i, j)). However, when SIMD (SSE etc.) functions are available, you can re-express this as a convolution with the 2D kernel {0 1 0; 1 0 1; 0 1 0}, so if you use a numerical library which exploits SIMD functions you can obtain a significant performance increase. You can see an example implementation of this here: https://github.com/zawlin/cs5340/blob/master/a1_code/denoiseIsingGibbs.py.
Note that in this case the performance improvement is huge, because evaluating it in Python otherwise requires an expensive for loop.
In terms of work there is in fact some waste, namely the unnecessary multiplications and additions with the zeros at the corners and center of the kernel. So whether you see a performance improvement depends quite a bit on your programming environment (if you are already in C/C++, it can be difficult, and you may need to use MKL etc. to obtain a good improvement).
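If you are in Python/NumPy anyway, a vectorised sketch of the energy computation that also answers the double-counting question is below. It pairs each site only with its right and lower neighbours, so every bond is counted exactly once, and it assumes periodic boundary conditions (drop the wraparound terms if your lattice is open):

```python
import numpy as np

def ising_energy(sigma, J=1.0):
    """H(sigma) = -J * sum over nearest-neighbour bonds of sigma_i * sigma_j
    on an L x L lattice with periodic boundaries. Each bond is counted once
    by pairing every site with its right and lower neighbour only."""
    right = np.roll(sigma, -1, axis=1)   # right neighbour of each site
    down = np.roll(sigma, -1, axis=0)    # lower neighbour of each site
    return -J * np.sum(sigma * (right + down))
```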

Solving MaxDouble Slice Kadane's Algorithm Variation

I was trying to sharpen my skills by solving the Codality problems. I reached this one: https://codility.com/programmers/lessons/9-maximum_slice_problem/max_double_slice_sum/
I actually theoretically understand the solution:
Use Kadane's Algorithm on the array and store the sum at every index.
Reverse the array and do the same.
Find a point where the sum of both is max by looping over both result sets one at a time.
The max is the max double slice.
My question is not so much about how to solve the problem. It is about how one would figure out that this is the way this problem can be solved. There are at least 3 different concepts that need to be used:
The understanding that if all elements in the array are positive, or negative it is a different case than when there are some positive and negative elements in the array.
Kadane's Algorithm
Going over the array forward and reversed.
Despite all of this, Codality has tagged this problem as "Painless".
My question is: am I missing something? It seems unlikely that I would be able to solve this problem without knowing some of these concepts.
Is there a technique where I can start from scratch and very basic concepts and work my way up to the concepts required to solve this problem. Or is it that I am expected to know these concepts before even starting the problem?
How can I prepare my self to solve such problems where I don't know the required concepts in the future?
I think you are overthinking the problem; that's why you find it more difficult than it is:
The understanding that if all elements in the array are positive, or negative it is a different case than when there are some positive and negative elements in the array.
It doesn't have to be a different case. You might be able to come up with an algorithm that doesn't care about this distinction and works anyway.
You don't need to start by understanding this distinction, so don't think about it until or even if you have to.
Kadane's Algorithm
Don't think of an algorithm, think of what the problem requires. Usually that 10+ paragraph problem statement can be expressed in much less.
So let's see how we can simplify the problem statement.
It first defines a slice as a triplet (x, y, z). It's defined as the sum of elements starting at x+1, ending at z-1, and not containing y.
Then it asks for the maximum sum slice. If we need the maximum slice, do we need x and z in the definition? We might as well let it start and end anywhere as long as it gets us the maximum sum, no?
So redefine a slice as a subset of the array that starts anywhere, goes up to some y-1, continues from y+1 and ends anywhere. Much simpler, isn't it?
Now you need the maximum such slice.
Now you might be thinking that you need, for each y, the maximum sum subarray that starts at y+1 and the maximum sum subarray that ends at y-1. If you can find these, you can update a global max for each y.
So how do you do this? This should now point you towards Kadane's algorithm, which does half of what you want: it computes the maximum sum subarray ending at some x. So if you compute it from both sides, for each y, you just have to find:
kadane(y - 1) + kadane_reverse(y + 1)
And compare with a global max.
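Putting that together, a minimal Python sketch of the resulting O(n) solution (the function name is arbitrary; it uses the convention that the ends A[0] and A[n-1] can never be inside a double slice):

```python
def max_double_slice_sum(A):
    n = len(A)
    # ending[i]: best sum of a (possibly empty) slice ending exactly at index i
    ending = [0] * n
    for i in range(1, n - 1):
        ending[i] = max(0, ending[i - 1] + A[i])
    # starting[i]: best sum of a (possibly empty) slice starting exactly at index i
    starting = [0] * n
    for i in range(n - 2, 0, -1):
        starting[i] = max(0, starting[i + 1] + A[i])
    # for each middle index y, combine the best slice ending at y-1 with the
    # best slice starting at y+1 (this is kadane(y - 1) + kadane_reverse(y + 1))
    return max(ending[y - 1] + starting[y + 1] for y in range(1, n - 1))
```

For example, max_double_slice_sum([3, 2, 6, -1, 4, 5, -1, 2]) returns 17.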
No special cases for negatives and positives. No thinking "Kadane's!" as soon as you see the problem.
The idea is to simplify the requirement as much as possible without changing its meaning. Then you use your algorithmic and deductive skills to reach a solution. These skills are honed with time and experience.

Most mutually distant k elements (clustering?)

I have a simple machine learning question:
I have n (~110) elements, and a matrix of all the pairwise distances. I would like to choose the 10 elements that are farthest apart. That is, I want to
Maximize:
Choose 10 different elements.
Return min distance over (all pairings within the 10).
My distance metric is symmetric and respects the triangle inequality.
What kind of algorithm can I use? My first instinct is to do the following:
Cluster the n elements into 20 clusters.
Replace each cluster with just the element of that cluster that is furthest from the mean element of the original n.
Use brute force to solve the problem on the remaining 20 candidates. Luckily, 20 choose 10 is only 184,756.
Edit: thanks to etarion's insightful comment, changed "Return sum of (distances)" to "Return min distance" in the optimization problem statement.
Here's how you might approach this combinatorial optimization problem by taking the convex relaxation.
Let D be an upper triangular matrix with your distances on the upper triangle. I.e. where i < j, D_i,j is the distance between elements i and j. (Presumably, you'll have zeros on the diagonal, as well.)
Then your objective is to maximize x'*D*x, where x is binary valued with 10 elements set to 1 and the rest to 0. (Setting the ith entry in x to 1 is analogous to selecting the ith element as one of your 10 elements.)
The "standard" convex optimization thing to do with a combinatorial problem like this is to relax the constraints such that x need not be discrete valued. Doing so gives us the following problem:
maximize y'*D*y
subject to: 0 <= y_i <= 1 for all i, 1'*y = 10
This is (morally) a quadratic program. (If we replace D with D + D', it'll become a bona fide quadratic program and the y you get out should be no different.) You can use an off-the-shelf QP solver, or just plug it in to the convex optimization solver of your choice (e.g. cvx).
The y you get out need not be (and probably won't be) a binary vector, but you can convert the scalar values to discrete ones in a bunch of ways. (The simplest is probably to let x be 1 in the 10 entries where y_i is highest, but you might need to do something a little more complicated.) In any case, y'*D*y with the y you get out does give you an upper bound for the optimal value of x'*D*x, so if the x you construct from y has x'*D*x very close to y'*D*y, you can be pretty happy with your approximation.
Let me know if any of this is unclear, notation or otherwise.
Nice question.
I'm not sure if it can be solved exactly in an efficient manner, and your clustering-based solution seems reasonable. Another direction to look at would be local search methods such as simulated annealing and hill climbing.
Here's an obvious baseline I would compare any other solution against:
Repeat 100 times:
Greedily select the datapoint whose removal decreases the objective function the least, and remove it.
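A sketch of that baseline, assuming the distance matrix is a NumPy array and the objective is the minimum pairwise distance among the points kept (the names are illustrative):

```python
import numpy as np

def greedy_drop(dist, k=10):
    """Baseline for the max-min-distance subset: start with all points and
    repeatedly drop the point whose removal leaves the largest minimum
    pairwise distance among the survivors, until only k points remain."""
    remaining = list(range(dist.shape[0]))
    while len(remaining) > k:
        best_point, best_obj = None, -np.inf
        for p in remaining:
            rest = [q for q in remaining if q != p]
            sub = dist[np.ix_(rest, rest)]
            obj = sub[np.triu_indices(len(rest), k=1)].min()  # min pairwise distance
            if obj > best_obj:
                best_obj, best_point = obj, p
        remaining.remove(best_point)
    return remaining
```

With n ≈ 110 and k = 10 this runs in well under a second and gives a value to compare the QP relaxation or any clustering-based solution against.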
