Assume we have an array arr with all initial values 0. Now we are given n operations; an operation consists of two numbers a b. It means that we add +1 to the value of arr[a] and -1 to the value of arr[b].
Moreover, we can swap the numbers in some operations, which means that we instead add -1 to arr[a] and +1 to arr[b].
We want to achieve a situation in which all values of arr are equal to 0 after all these operations. We are wondering whether that is possible and, if so, which operations we should swap to achieve it.
Any thoughts on that?
Some example input:
3
1 2
3 2
3 1
should result in
YES
R
N
R
where R means to reverse that operation, and N means to leave it as it is.
input:
3
1 2
2 3
3 2
results in the answer NO.
Let each array element be a vertex in a graph and the operation (a,b) an edge between vertices a and b (there may be multiple edges between the same vertices). Traveling from vertex a means decreasing array element a, and traveling to vertex a means increasing array element a. Now, if each vertex has an even number of edges and you find a cyclic path that visits each edge exactly once, you will have a zero total sum in the array.
Such a path is called an Eulerian cycle (Wikipedia). From Wikipedia: an undirected graph has an Eulerian cycle if and only if every vertex has even degree and all of its vertices with nonzero degree belong to a single connected component. Since in your case it is enough for each connected component to have an Eulerian cycle of its own, it suffices to count how many times each array index appears: if the count is even for every index, there is always a way to obtain a zero total in the array.
If you want to find out which operations to reverse, you need to find one such path per component and check in which direction you travel each edge.
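To make that concrete, here is a minimal Python sketch (solve is a hypothetical name; 1-based indices as in your example). It checks the even-degree condition and, if it holds, walks an Eulerian cycle in each component with Hierholzer's algorithm, recording for every operation whether the traversal direction matches the stored (a, b) orientation. Recall that a kept operation (a, b) adds +1 to arr[a] and -1 to arr[b], i.e. it travels from b to a.

from collections import defaultdict

def solve(ops):
    # ops: list of (a, b) pairs; returns None for NO, else a list of
    # 'N'/'R' flags (keep / reverse), one per operation.
    adj = defaultdict(list)                    # vertex -> [(neighbor, edge id)]
    for i, (a, b) in enumerate(ops):
        adj[a].append((b, i))
        adj[b].append((a, i))

    # YES iff every index occurs an even number of times (even degree).
    if any(len(edges) % 2 for edges in adj.values()):
        return None

    flags = [None] * len(ops)
    used = [False] * len(ops)
    ptr = {v: 0 for v in adj}                  # next edge to try per vertex
    for start in list(adj):                    # one walk per component
        stack = [start]
        while stack:
            v = stack[-1]
            while ptr[v] < len(adj[v]) and used[adj[v][ptr[v]][1]]:
                ptr[v] += 1
            if ptr[v] == len(adj[v]):
                stack.pop()                    # vertex exhausted
                continue
            w, i = adj[v][ptr[v]]
            used[i] = True
            # Leaving v means arr[v] gets -1. The stored (a, b) does -1 at b,
            # so keep it if v is b, reverse it if v is a.
            flags[i] = 'R' if ops[i][0] == v else 'N'
            stack.append(w)
    return flags

On your first example, solve([(1, 2), (3, 2), (3, 1)]) happens to return ['R', 'N', 'R']; in general the answer need not be unique, since any Eulerian cycle works.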
Just count the number of times each index appears. If every index appears an even number of times, the answer is YES.
You can prove it by construction. Build a new pair list from the original pair list, with the goal that every index appearing on the left can be matched with an occurrence of the same index on the right.
Go from the first pair to the last. For each pair, try to match an index that has appeared an odd number of times so far.
For example, in your first example each index appears twice, so the answer is YES. To build the list, you start with (1,2). Then you look at the pair (3,2) and note that 2 has appeared once, on the right, so you swap the pair to put 2 on the left: (2,3). For the last pair you have (3,1), which matches 1 and 3, each of which has appeared only once so far.
Note that at the end you can always find a matching pair, because each number appears an even number of times, so every occurrence has a match.
In the second example, 2 appears three times, so the answer is NO.
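The YES/NO part really is just that parity count; a tiny sketch of it (possible is a hypothetical name):

from collections import Counter

def possible(ops):
    # YES iff every index occurs an even number of times across all pairs
    counts = Counter(x for pair in ops for x in pair)
    return all(c % 2 == 0 for c in counts.values())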
Given an array of n integers, all numbers are unique except one of them.
The repeating number repeats n/2 times if n is even
The repeating number repeats (n-1)/2 or (n+1)/2 times if n is odd
The repeating number is not adjacent to itself in the array
Write a program to find the repeating number without using extra space.
This is how I tried to solve the problem.
If n is even, then there are n/2 occurrences of the repeating element. Also, the repeated occurrences must not be adjacent. So if, say, there are 6 elements, the repeating element occurs 3 times. The occurrences can be either at indices 0, 2 and 4 or at 1, 3 and 5. So if I just check whether any element repeats at indices 0 and 2, and then at indices 1 and 3, I can get the repeating element.
If n is odd, then there are 2 choices.
If (n+1)/2 elements are repeating, then we can just check indices 0 and 2. For example, say there are 7 elements and 4 of them are the repeated number; then its occurrences have to be at indices 0, 2, 4 and 6.
However, I cannot find a way to handle the case of (n-1)/2 occurrences when n is odd. I have thought of using XOR and sums but can't find a way.
Let us call the element that repeats the "majority".
The Boyer–Moore majority vote algorithm can help here. The algorithm finds an element that occurs in more than half of the positions of the input, if any such element exists.
But in your case the situation is interesting: the majority may not occur more than half of the time. All elements are unique except the repeating one, and the repeated occurrences are not adjacent. Also, the majority element is guaranteed to exist.
So,
Run the majority vote algorithm on the numbers at even indices in the array. Make a second pass through the input array to verify that the element reported by the algorithm really is the majority.
If the above step doesn't yield the majority element, repeat the procedure for the numbers at odd indices. You can do this second step a bit more smartly, because we know for sure that the majority element exists: any number that repeats is the result.
In an implementation of the above, there is good scope for small optimizations.
I don't think I need to explain the majority vote algorithm here; if you want me to, let me know. Without knowing this algorithm you could probably get there with some counting logic (which would most probably end up the same as the majority algorithm), but since it is a standard algorithm, we can simply leverage it.
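For what it's worth, here is a minimal Python sketch of the two passes (find_repeating is a hypothetical name; the verification pass simply checks that the candidate really repeats, relying on the guarantee that a repeating element exists):

def find_repeating(arr):
    # Boyer-Moore majority vote restricted to every second element.
    def vote(start):
        candidate, count = None, 0
        for i in range(start, len(arr), 2):
            if count == 0:
                candidate, count = arr[i], 1
            elif arr[i] == candidate:
                count += 1
            else:
                count -= 1
        return candidate

    for start in (0, 1):                 # even positions first, then odd
        candidate = vote(start)
        # verification pass: O(n) time, no extra space
        if candidate is not None and arr.count(candidate) > 1:
            return candidate
    return None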
I would like to make pairs out of X database objects.
Order of pairs does not matter.
Pairs will be made over multiple rounds.
I do not want pairs to be repeated in a subsequent round.
If I have:
A
B
C
D
The first round might be:
AB
CD
Second round might be:
AD
CB
Third round might be:
AC
DB
And there would be no other possibilities.
So for 4 elements, I could do 3 rounds before I have to repeat a pair.
What is the formula to help me with this for any number of elements?
You can use the round-robin tournament algorithm to generate all possible pairs. Your example shows the round-robin algorithm in action: arrange the elements in two rows, fix the first one (A), and rotate the other elements cyclically.
Note that N elements form N*(N-1)/2 pairs, and (N-1) rounds are needed to generate all of them.
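A minimal Python sketch of that rotation (round_robin_rounds is a hypothetical name; it assumes an even number of elements, as in your example — for an odd count you'd add a dummy "bye" element first):

def round_robin_rounds(items):
    n = len(items)
    rest = list(items[1:])
    rounds = []
    for _ in range(n - 1):
        lineup = [items[0]] + rest
        # pair the i-th from the front with the i-th from the back
        rounds.append([(lineup[i], lineup[n - 1 - i]) for i in range(n // 2)])
        rest = rest[-1:] + rest[:-1]     # rotate the non-fixed elements
    return rounds

round_robin_rounds(['A', 'B', 'C', 'D']) yields three rounds covering all 4*3/2 = 6 pairs.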
I have an m x m two-dimensional array and I want to randomly select a sequence of n elements. The elements have to be adjacent (not diagonally). What is a good approach here? I thought about a depth-first search from a random starting point, but that seemed a little bit of overkill for such a simple problem.
If I understand correctly, you are looking for a sequence of consecutive numbers?
Let me simplify with this:
9 4 3
0 7 2
5 6 1
So when 1 is selected, you'd like to have a path from 1 to 4, right? I personally think depth-first search would be the best choice. It's not that hard; it's actually pretty simple. Imagine you select the number 2. You remember the position of 2 and then look for lower numbers for as long as there are any. When you are done with that part, you do the same for higher numbers.
You have two stacks: one for possible ways and another for the final path.
While going through the array, you pop from the possibilities and push the right ones onto the final stack.
The best approach would be to first find the lowest reachable number without saving anything, and then look for higher numbers and store them, so at the end you get a stack from the highest number down to the lowest.
If I got that wrong and you mean selecting elements that are merely "touching", like 9 0 7 6 from my table, so that the content doesn't matter, then you can do it simply by picking one element, storing all possibilities (every element around it), and then picking a random index from 0 to the size of those stored values. When you select one, you remove it from the stored values but keep the rest. Then you run this on the new element and add its neighbors to the stored elements, so the random pick always draws from the surroundings of the cells selected so far.
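A sketch of that second interpretation in Python (random_connected_cells is a hypothetical name). Note it grows a connected set exactly as described, which is simple but does not sample uniformly over all connected shapes:

import random

def random_connected_cells(m, n):
    # pick a random connected (edge-adjacent) set of n cells in an m x m grid
    start = (random.randrange(m), random.randrange(m))
    chosen = {start}
    frontier = set()                     # unchosen neighbors of the chosen set

    def add_neighbors(cell):
        r, c = cell
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < m and 0 <= nc < m and (nr, nc) not in chosen:
                frontier.add((nr, nc))

    add_neighbors(start)
    while len(chosen) < n and frontier:
        cell = random.choice(tuple(frontier))
        frontier.remove(cell)
        chosen.add(cell)
        add_neighbors(cell)              # keep the frontier up to date
    return chosen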
The premise of this problem is a sorted array of strings that's interspersed (in no particular order) with empty strings, like so:
["at", "", "", "", "ball", ""]
My algorithm came down to finding the midpoint and arbitrarily running my pointer left (or right) until it landed on a non-empty string, so that I could then execute a binary search.
The suggested solution recommends checking both the left and the right neighbors until we land on a non-empty string. Once we have a non-empty string, we can proceed with binary search.
The suggested way will, on average, land on a non-empty string much sooner, but it requires more probes to get there. Consequently, I'm struggling to compare and contrast the time costs of the two approaches.
Which is the more optimal approach?
I guess the question is: when you land on an empty string, which algorithm visits fewer elements?
Suppose you have a run of N empty strings. With the suggested approach, if you land on the (N/2)th, you will visit N elements before finding a non-empty string.
If you consider landing in the other positions, then for each position further from the center you end up visiting two fewer elements (one on the left and one on the right). So the number of visited elements as a function of the landing position is:
{2, ..., N-4, N-2, N, N-2, N-4, ..., 2}.
If you only visit elements in one direction, the number of visited elements as a function of the landing position is {N, N-1, N-2, ..., 1}.
Assuming that the probability of landing at any position within a run of empty strings is the same, and knowing that the sum of the first K numbers is

sum(1,K) = K*(K+1)/2

the average of the first K numbers is

avg(1,K) = K*(K+1)/(2*K) = (K+1)/2

The average in the first case is 2 * avg(1, N/2) = N/2 + 1, since the two-sided counts {2, 4, ..., N} are twice the first N/2 numbers. The average in the second case is avg(1, N) = (N+1)/2.
So, it seems to me that in terms of complexity the two approaches are the same.
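A quick simulation backs this up (hypothetical helper names; the counts include the final, successful probe):

def probes_alternating(n, i):
    # land on cell i of a run of n empty cells (0..n-1), then step
    # right, left, right, ... until a probe falls outside the run
    probes, step = 0, 1
    while True:
        for pos in (i + step, i - step):
            probes += 1
            if pos < 0 or pos >= n:
                return probes
        step += 1

def probes_one_sided(n, i):
    return n - i                          # walk right until leaving the run

n = 1000
print(sum(probes_alternating(n, i) for i in range(n)) / n)   # ~ n/2 + 1
print(sum(probes_one_sided(n, i) for i in range(n)) / n)     # (n+1)/2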
At a glance I would say that the strength of binary search is that it always cuts the array to search in half by querying the center element. With gaps, that is no longer possible, as the center element can be a gap. So what we want to do is find the non-gap element closest to the center, and to do so we look alternately left and right.
position 1 2 3 4 5 6 7 8 9
value A B _ U _ _ _ T Z
Say we are looking for the value B. We hit position 5 ( = (1+9)/2 ), which is a gap. If the algorithm were to always go right, we'd walk on till position 8 and thus have the range to search restricted to 1-8.
If, on the other hand, we had looked right, then left (etc.), we'd have found position 4, which is much closer to the center, and the range to search would be more restricted (1-4 in this example). Of course we can always make up an example where the always-look-right algorithm works better (e.g. when looking for T in the above example :-), but in general it is best to get as close to the center as possible, which is what the alternating right-left solution does.
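In code, the alternating probe might look like this (a minimal sketch, 0-indexed, using "" for gaps; sparse_search is a hypothetical name):

def sparse_search(arr, target):
    lo, hi = 0, len(arr) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if arr[mid] == "":
            # probe outward from mid, alternating right and left,
            # for the nearest non-gap element within [lo, hi]
            left, right = mid - 1, mid + 1
            while True:
                if left < lo and right > hi:
                    return -1                  # nothing but gaps in range
                if right <= hi and arr[right] != "":
                    mid = right
                    break
                if left >= lo and arr[left] != "":
                    mid = left
                    break
                left, right = left - 1, right + 1
        if arr[mid] == target:
            return mid
        if arr[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1

For instance, sparse_search(["at", "", "", "", "ball", ""], "ball") returns 4.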
It was also suggested in a comment to remove the gaps, and you answered that you'd have to read the whole array for that. This is true, but if you want to search the same range multiple times, that may still be the fastest approach, because you'd build that gapless array just once.
We'd build a new array containing the values and their original positions, and this array can be searched with pure binary search. Search this array several times and at some point it will pay to have built it.
position 1 2 3 4 5
orig. pos. 1 2 4 8 9
value A B U T Z
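A sketch of that one-time preprocessing (hypothetical names; bisect does the plain binary search):

import bisect

def build_gapless(arr):
    # one pass: keep only the non-empty values and their original positions
    values, positions = [], []
    for i, s in enumerate(arr):
        if s != "":
            values.append(s)
            positions.append(i)
    return values, positions

def search_gapless(values, positions, target):
    i = bisect.bisect_left(values, target)
    return positions[i] if i < len(values) and values[i] == target else -1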
Even with empty strings, it is possible to perform something similar to a binary search. When you visit an empty string, continue the binary search on one side chosen arbitrarily, and save that information on a stack, i.e. store whether this was a random direction or a forced one. If at a certain point the algorithm understands that the random direction was wrong, it tests the other direction with binary search and updates the stack of choices. If it was the right direction, it just updates the stack and continues as a normal binary search. In the worst case this may lead to linear time; however, depending on the distribution of the empty strings, it may average O(log n).
Suppose you have an array of integers (e.g. [1 5 3 4 6]). The elements are rearranged according to the following rule: every element can hop forward (towards the left), sliding over the elements in the indices it hops across. The process starts with the element at the second index (i.e. 5). It has a choice to hop over element 1 or to stay in its own position. If it does choose to hop, element 1 slides down to index 2. Let us assume it does choose to hop; our resulting array will then be [5 1 3 4 6]. Element 3 can now hop over one or two positions, and the process repeats. If 3 hops over one position the array will be [5 3 1 4 6], and if it hops over two positions it will be [3 5 1 4 6].
It is very easy to show that every permutation of the elements can be reached in this way. Also, any final configuration is reached by a unique set of hops.
The question is: given a final array and a source array, find the total number of hops required to arrive at the final array from the source. An O(N^2) implementation is easy to find; however, I believe this can be done in O(N) or O(N log N). If it is not possible to do better than O(N^2), it would also be great to know that.
For example if the final array is [3,5,1,4,6] and the source array [1,5,3,4,6], the answer will be 3.
My O(N^2) algorithm is like this: loop over the positions of the source array from the end, since we know that is the last element to move. Here it will be 6, and we check its position in the final array. We calculate the number of hops necessary and rearrange the final array to put that element back in its original position in the source array. The rearranging step goes over all the elements in the array, and the process loops over all the elements, hence O(N^2). Using a hash map can help with the searching, but the map needs to be updated after every step, which keeps it at O(N^2).
P.S. I am trying to model the correlation between two permutations in a Bayesian way, and this is a sub-problem of that. Any ideas on modifying the generative process to make the problem simpler are also welcome.
Edit: I have found my answer. This is exactly what the Kendall tau distance measures. There is an easy merge-sort-based algorithm to compute it in O(N log N).
Consider the target array as defining an ordering. A target array [2 5 4 1 3] can be seen as [1 2 3 4 5], just by relabeling; you only have to know the mapping to be able to compare elements in constant time. On this instance, to compare 4 and 5 you check: index[4] = 2 > index[5] = 1 (in the target array), and so 4 > 5 (meaning: 4 must end up to the right of 5).
So what you really have is just a vanilla sorting problem. The ordering is just different from the usual numerical ordering; the only thing that changes is your comparison function. The rest is basically the same. Sorting can be achieved in O(n lg n), or even O(n) (radix sort). That said, you have some additional constraints: you must sort in place, and you can only swap two adjacent elements.
A strong and simple candidate would be selection sort, which will do just that in O(n^2) time. On each iteration, you identify the leftmost remaining element of the "unplaced" portion and swap it leftwards until it lands at the end of the "placed" portion. This can be improved to O(n lg n) with the use of an appropriate data structure (a priority queue for identifying the leftmost remaining element in O(lg n) time). Since n lg n is a lower bound for comparison-based sorting, I really don't think you can do better than that.
Edit: So you're not interested in the sequence of swaps at all, only in the minimum number of swaps required. That is exactly the number of inversions in the array (adapted to your particular needs: the "non-natural ordering" comparison function, but that doesn't change the math). See this answer for a proof of that assertion.
One way to find the number of inversions is to adapt the merge sort algorithm. Since you have to actually sort the array to compute it, it turns out to be still O(n lg n) time. For an implementation, see this answer or this one (again, remember that you'll have to adapt it).
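A sketch of that adaptation in Python (count_inversions is a hypothetical name): relabel the source by each element's index in the final array, then count inversions during a merge sort.

def count_inversions(source, final):
    # number of adjacent swaps (hops) to turn source into final; assumes
    # both arrays are permutations of the same distinct elements
    rank = {v: i for i, v in enumerate(final)}   # the "non-natural" ordering
    a = [rank[v] for v in source]

    def sort_count(xs):
        if len(xs) <= 1:
            return xs, 0
        mid = len(xs) // 2
        left, inv_l = sort_count(xs[:mid])
        right, inv_r = sort_count(xs[mid:])
        merged, inv = [], inv_l + inv_r
        i = j = 0
        while i < len(left) and j < len(right):
            if left[i] <= right[j]:
                merged.append(left[i]); i += 1
            else:
                merged.append(right[j]); j += 1
                inv += len(left) - i      # right[j] jumps over remaining left
        merged.extend(left[i:]); merged.extend(right[j:])
        return merged, inv

    return sort_count(a)[1]

On your example, count_inversions([1, 5, 3, 4, 6], [3, 5, 1, 4, 6]) returns 3.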
From your answer I assume the number of hops is the total number of swaps of adjacent elements needed to transform the original array into the final array.
I suggest using something like insertion sort, but without the insertion part: the data in the arrays will not be altered.
You can keep the queue t of stalled hoppers as a balanced binary search tree with counters (the number of elements in each subtree).
You can add an element to t, remove an element from t, rebalance t, and find an element's position in t in O(log C) time, where C is the number of elements in t.
A few words on finding the position of an element: it consists of a binary search that accumulates the counts of the skipped left subtrees (plus 1 for each middle element, if you keep elements in internal nodes).
A few words on balancing/addition/removal: you have to traverse upward from the removed/added element/subtree and update the counters. The overall number of operations still holds at O(log C) for insert+balance and remove+balance.
Let t be that (balanced search tree) queue, p the current index into the original array a, and q the index into the final array f.
Now we have one loop starting from the left side (say, p = 0, q = 0):
If a[p] == f[q], then the original array element hops over the whole queue. Add t.count to the answer; increment p and increment q.
If a[p] != f[q] and f[q] is not in t, then insert a[p] into t and increment p.
If a[p] != f[q] and f[q] is in t, then add f[q]'s position in the queue (the number of elements ahead of it) to the answer, remove f[q] from t, and increment q.
I like the magic that ensures this process moves p and q to the ends of the arrays at the same time, provided the arrays really are permutations of one another. Nevertheless, you should probably check p and q for overruns to detect incorrect data, as we have no really faster way to prove the data is correct.
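Here is a sketch in Python with one substitution: since elements enter the queue in increasing order of their source position, an element's position in the queue equals the number of still-queued elements with a smaller source position, so a Fenwick (binary indexed) tree over source positions can stand in for the balanced BST with counters. Names are hypothetical, and both arrays are assumed to be permutations of the same distinct elements.

def count_hops(a, f):
    n = len(a)
    tree = [0] * (n + 1)                     # Fenwick tree, 1-based internally

    def update(i, delta):
        i += 1
        while i <= n:
            tree[i] += delta
            i += i & -i

    def prefix(i):                           # queued elements at positions <= i
        i += 1
        s = 0
        while i > 0:
            s += tree[i]
            i -= i & -i
        return s

    where = {}                               # value -> source position, while queued
    answer = queued = 0
    p = q = 0
    while q < n:
        if p < n and a[p] == f[q]:           # hops over the whole queue
            answer += queued
            p += 1; q += 1
        elif f[q] not in where:              # stall a[p] in the queue
            where[a[p]] = p                  # (valid input guarantees p < n here)
            update(p, 1)
            queued += 1
            p += 1
        else:                                # f[q] leaves from inside the queue
            pos = where.pop(f[q])
            answer += prefix(pos - 1)        # elements ahead of it in the queue
            update(pos, -1)
            queued -= 1
            q += 1
    return answer

count_hops([1, 5, 3, 4, 6], [3, 5, 1, 4, 6]) returns 3, matching the example. Every step inside the loop is O(log n), so the whole computation is O(n log n).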