Minimum number of adjacent swaps - arrays

This is one of the questions from an online written test.
Books numbered 1...N have arrived at a warehouse.
The books are said to be best arranged if book “i” is present to the left of book “i+1” (for all i, 1 <= i <= N-1) and book N is present to the left of book 1. [Yes! Any cyclically sorted sequence, i.e. any rotation of 1..N, is a best arrangement.]
The books are received in a random order. Your task is to find the minimal number of moves required to achieve the best arrangement described above.
Note that the only valid move is to choose a pair of adjacent books and have them switch places.
For example, if the books were initially in the order 3 5 4 2 1, one solution is:
a. First swap the second pair of books (5 and 4): { result : 3 4 5 2 1 }
b. Swap the rightmost pair (2 and 1): { result : 3 4 5 1 2 }
So, in 2 moves we achieve the best arrangement.
I tried but was not able to find a solution for this. First I thought I would divide the array into two arrays and then apply insertion sort on both of them, but that is not working either.
Please help me find an algorithm for this question.

The wrap-around (book N followed by book 1) can be anywhere in the sequence; e.g. for 1..5 a best arrangement could be 3,4,5,1,2. So the first book could be any of 1..5, i.e. 5x as complicated as the previous question, and you'll have to do it 5 times, once per rotation. Use a sort algorithm that has a replaceable compare function.
So for the 3rd rotation (the arrangement starting with book 3) the compare would be:
// returns <0, 0 or >0 (qsort convention: negative means a comes before b)
// N is the number of books; 3 is the book that should come first in this rotation
int compare(int a, int b) {
    return ((a + N - 3) % N) - ((b + N - 3) % N);
}
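As a sketch of that idea (a minimal Python version; the function name is mine, and the O(N^2) inversion count per rotation is just the simplest way to count adjacent swaps, not the fastest): for each of the N cyclic targets, the number of adjacent swaps needed equals the number of inversions under that rotation's ordering, and we take the minimum over all rotations.

def min_adjacent_swaps(books):
    # books is a permutation of 1..N; try every cyclic target order
    n = len(books)
    best = None
    for start in range(1, n + 1):
        # under the target starting at `start`, book b should end up at position (b - start) % n
        key = [(b - start) % n for b in books]
        # adjacent swaps needed to sort == number of inversions of `key`
        inversions = sum(1 for i in range(n) for j in range(i + 1, n) if key[i] > key[j])
        best = inversions if best is None else min(best, inversions)
    return best

print(min_adjacent_swaps([3, 5, 4, 2, 1]))   # 2, matching the example above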

Related

Daily Coding Problem 260 : Reconstruct a jumbled array - Intuition?

I'm going through the question below.
The sequence [0, 1, ..., N] has been jumbled, and the only clue you have for its order is an array representing whether each number is larger or smaller than the last. Given this information, reconstruct an array that is consistent with it.
For example, given [None, +, +, -, +], you could return [1, 2, 3, 0, 4].
I went through the solution on this post but I'm still unable to understand why it works. I don't think I would be able to come up with it if I had this in front of me during an interview. Can anyone explain the intuition behind it? Thanks in advance!
This answer tries to give a general strategy for finding an algorithm to tackle this type of problem. It does not try to prove why the given solution is correct, but lays out a route towards such a solution.
A tried and tested way to tackle this kind of problem (actually a wide range of problems) is to start with small examples and work your way up. This works for puzzles, but just as well for problems encountered in practice.
First, note that the question is formulated deliberately so as not to point you in the right direction too easily. It makes you think there is some magic involved. How can you reconstruct a list of N numbers given only the list of plusses and minuses?
Well, you can't. For 10 numbers, there are 10! = 3628800 possible permutations, and only 2⁹ = 512 possible lists of signs. That's a huge difference: most original lists will be completely different after reconstruction.
Here's an overview of how to approach the problem:
Start with very simple examples
Try to work your way up, adding a bit of complexity
If you see something that seems a dead end, try increasing complexity in another way; don't spend too much time with situations where you don't see progress
While exploring alternatives, revisit old dead ends, as you might have gained new insights
Try whether recursion could work:
given a solution for N, can we easily construct a solution for N+1?
or even better: given a solution for N, can we easily construct a solution for 2N?
Given a recursive solution, can it be converted to an iterative solution?
Does the algorithm do some repetitive work that can be postponed to the end?
....
So, let's start simple (writing 0 for the None at the start):
very short lists are easy to guess:
'0++' → 0 1 2 → clearly only one solution
'0--' → 2 1 0 → only one solution
'0-+' → 1 0 2 or 2 0 1 → hey, there is no unique outcome, though the question only asks for one of the possible outcomes
lists with only plusses:
'0++++++' → 0 1 2 3 4 5 6 → only possibility
lists with only minuses:
'0-------'→ 7 6 5 4 3 2 1 0 → only possibility
lists with one minus, the rest plusses:
'0-++++' → 1 0 2 3 4 5 or 5 0 1 2 3 4 or ...
'0+-+++' → 0 2 1 3 4 5 or 0 5 1 2 3 4 or ...
→ no very obvious pattern seems to emerge
maybe some recursion could help?
given a solution for N, appending one sign more?
appending a plus is easy: keep the existing solution and append the largest value plus 1
appending a minus, after some thought: increase all the existing numbers by 1 and append a zero
→ hey, we have a working solution, but maybe not the most efficient one
the algorithm just appends to an existing list, no need to really write it recursively (although the idea is expressed recursively)
appending a plus can be improved, by storing the largest number in a variable so it doesn't need to be searched at every step; no further improvements seem necessary
appending a minus is more troublesome: the list needs to be traversed with each append
what if instead of appending a zero, we append -1, and do the adding at the end?
this clearly works when there is only one minus
when two minus signs are encountered, the first time append -1, the second time -2
→ hey, this works for any number of minuses encountered: just keep a counter of the minuses in a variable and add it to every element at the end of the algorithm
This is, in a bird's-eye view, one possible route towards coming up with a solution. Many routes lead to Rome. Introducing negative numbers might seem tricky, but it is a logical conclusion after contemplating the recursive algorithm for a while.
It works because all changes are sequential, either adding one or subtracting one, with both the increasing and the decreasing sequences starting from the same place. That guarantees the values form one contiguous range overall. For example, given the arbitrary
[None, +, -, +, +, -]
turned vertically for convenience, we can see
None 0
+ 1
- -1
+ 2
+ 3
- -2
Now just shift them all up by two (to account for the -2):
None  +  -  +  +  -
  2   3  1  4  5  0
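A minimal sketch of that append-and-shift idea (Python; the function name and the sign encoding are my own assumptions):

def reconstruct(signs):
    # signs looks like [None, '+', '-', '+', '+', '-']
    result = [0]
    highest = 0        # largest value appended so far
    minus_count = 0    # how many '-' signs we have seen
    for s in signs[1:]:
        if s == '+':
            highest += 1
            result.append(highest)
        else:                        # '-': append the next negative placeholder
            minus_count += 1
            result.append(-minus_count)
    # shift everything up so the smallest value becomes 0
    return [x + minus_count for x in result]

print(reconstruct([None, '+', '-', '+', '+', '-']))   # [2, 3, 1, 4, 5, 0]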
Let's look first at a solution which (I think) is easier to understand, formalize and prove correct (though I will only explain it here, not prove it formally):
We name A[0..N] our input array (where A[k] is None if k = 0 and is + or - otherwise) and B[0..N] our output array (where B[k] is in the range [0, N] and all values are unique).
First we observe that our problem (find B such that B[k] > B[k-1] if A[k] == + and B[k] < B[k-1] if A[k] == -) is only a special case of another problem:
Find B such that B[k] == max(B[0..k]) if A[k] == + and B[k] == min(B[0..k]) if A[k] == -.
This generalizes from "a value must be larger or smaller than the last one" to "a value must be larger or smaller than everything before it".
So a solution to this problem is a solution to the original one as well.
Now how do we approach this problem?
A greedy solution will be sufficient: indeed it is easy to show that the value associated with the last + must be the biggest number overall (which is N), the one associated with the second-to-last + must be the second biggest (N-1), etc.
At the same time, the value associated with the last - must be the smallest number overall (which is 0), the one associated with the second-to-last - must be the second smallest (1), etc.
So we can fill B from right to left, remembering how many + we have seen (call this value X) and how many - we have seen (call this value Y), and looking at the current symbol: if it is a + we put N-X into B and increase X by 1, and if it is a - we put Y into B and increase Y by 1.
At the end we fill B[0] with the only remaining value, which is equal to Y and to N-X (by then X is the total number of + and Y the total number of -, and X + Y = N).
An interesting property of this solution is that the values associated with the - symbols are exactly 0 to Y-1, in reverse sorted order; the values associated with the + symbols are exactly N-X+1 to N, in sorted order; and B[0] is always Y = N-X.
So the - positions hold all the values strictly smaller than B[0], reverse sorted, and the + positions hold all the values strictly bigger than B[0], sorted.
This property is the key to understanding why the solution proposed here works:
it sets B[0] equal to 0 and then fills B following the property. That by itself isn't a valid answer, because the values are not in the range [0, N], but a simple translation shifts them into [0, N].
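A short sketch of that right-to-left greedy fill (Python; the names are mine):

def fill_greedy(A):
    # A[0] is None, A[1..N] are '+' or '-'
    n = len(A) - 1
    B = [0] * (n + 1)
    x = 0   # number of '+' seen so far (scanning from the right)
    y = 0   # number of '-' seen so far (scanning from the right)
    for k in range(n, 0, -1):
        if A[k] == '+':
            B[k] = n - x
            x += 1
        else:
            B[k] = y
            y += 1
    B[0] = y            # the only value left over; it also equals n - x
    return B

print(fill_greedy([None, '+', '+', '-', '+']))   # [1, 2, 3, 0, 4]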
The idea is to produce a permutation of [0,1,...,N] which follows the pattern of [+,-,...]. There are many applicable permutations, not a single one. For instance, look at the example provided:
[None, +, +, -, +], you could return [1, 2, 3, 0, 4].
But you could also have returned other, equally valid solutions: [2,3,4,0,1] and [0,3,4,1,2] work too. The only constraints are that the first number has at least two numbers above it (for positions [1] and [2]) and that one number near the end is lower than both the one before it and the one after it.
So the question isn't to find the one and only pattern that was scrambled, but to produce any permutation which works with these rules.
This algorithm answers two questions for each next member of the list: get a number that is higher/lower than the previous one, and get a number that hasn't been used yet. It takes a starting number and essentially creates two lists: an ascending list for the '+' and a descending list for the '-'. This way we guarantee that the next member is higher/lower than the previous one (because it is in fact higher/lower than all previous members, a stricter condition than the one required) and for the same reason we know this number wasn't used before.
So the intuition of the referenced algorithm is to start with a reference number and work your way through. Let's assume we start from 0. In the first place we put 0+1, which is 1. We keep 0 as our lowest and 1 as our highest.
l[0] h[1] list[1]
The next symbol is '+', so we take the highest number, raise it by one to 2, and update both the list (with the new member) and the highest number.
l[0] h[2] list [1,2]
The next symbol is '+' again, and so:
l[0] h[3] list [1,2,3]
The next symbol is '-', so we have to put in our 0. Note that if the next symbol were another '-' we would have to stop, since we have no lower number left to produce.
l[0] h[3] list [1,2,3,0]
Luckily for us, we've chosen well and the last symbol is '+', so we can put our 4 and call it a day.
l[0] h[4] list [1,2,3,0,4]
This is not necessarily the smartest solution, as it can never know in advance whether the chosen starting number will solve the sequence, and it always progresses by 1. That means that for some patterns [+,-,...] and starting points it will not be able to find a solution. But for the pattern provided it works well with 0 as the starting point. Choosing 1 would also work and produce [2,3,4,0,1], but 2 and above will fail, and it will never produce the solution [0,3,4,1,2].
I hope this helps understanding the approach.
This is not an explanation of the solution the OP asked about.
I just want to share a possible approach.
Given: N = 7
Index:   0 1 2 3 4 5 6 7
Pattern: X + - + - + - +   // X = None
Go from 0 to N.
[1] Fill all the '-' positions with 0, 1, 2, ... starting from the right and going left.
Index:   0 1 2 3 4 5 6 7
Pattern: X + - + - + - +   // X = None
Answer:  . . 2 . 1 . 0 .
[2] Fill all the vacant places, i.e. [X & +], with the remaining values starting from the left and going right.
Index:   0 1 2 3 4 5 6 7
Pattern: X + - + - + - +   // X = None
Answer:  3 4 . 5 . 6 . 7
Final:
Pattern: X + - + - + - +   // X = None
Answer:  3 4 2 5 1 6 0 7
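A possible sketch of this two-pass fill (Python; the function name is mine):

def two_pass_fill(pattern):
    # pattern like ['X', '+', '-', '+', '-', '+', '-', '+']   (X stands for None)
    n = len(pattern)
    answer = [None] * n
    value = 0
    # [1] fill the '-' positions with 0, 1, 2, ... going from right to left
    for i in range(n - 1, -1, -1):
        if pattern[i] == '-':
            answer[i] = value
            value += 1
    # [2] fill the vacant places (X and '+') with the remaining values, left to right
    for i in range(n):
        if answer[i] is None:
            answer[i] = value
            value += 1
    return answer

print(two_pass_fill(['X', '+', '-', '+', '-', '+', '-', '+']))   # [3, 4, 2, 5, 1, 6, 0, 7]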
My answer is definitely too late for your problem, but if you want a simple proof, you might like to read this:
min_so_far (or min_last) is a decreasing value starting from 0.
max_so_far (or max_last) is an increasing value starting from 0.
In the input, every value after the first (which is None) is either "+" or "-"; each "+" increases max_so_far by one and each "-" decreases min_so_far by one. So max_so_far - min_so_far is exactly N. At the end, max_so_far equals the number of "+"s and -min_so_far equals the number of "-"s, so the values produced so far lie in the range [min_so_far, max_so_far] rather than [0, N], with only [0, max_so_far] overlapping the target range. What you need to do for the final solution is shift everything by min_so_far: since min_so_far <= 0, subtract min_so_far from (i.e. add abs(min_so_far) to) every value of the current answer.

Binary search modification

I have been attempting to solve the following problem. I have a sequence of positive
integer numbers which can be very long (several millions of elements). This
sequence can contain "jumps" between elements. The aforementioned jump
means that two consecutive elements differ by more than 1.
Example 01:
1 2 3 4 5 6 7 0
In the above mentioned example the jump occurs between 7 and 0.
I have been looking for an efficient algorithm (in terms of time) for
finding the position where this jump occurs. The issue is complicated by the
fact that there can be a situation where two jumps are present: one of them
is the jump which I am looking for and the other one is a wrap-around which I
am not looking for.
Example 02:
9 1 2 3 4 6 7 8
Here the first jump between 9 and 1 is a wrap-around. The second jump between
4 and 6 is the jump which I am looking for.
My idea is to somehow modify the binary search algorithm, but I am not sure whether that is possible due to the presence of the wrap-around. It is worth saying that at most two jumps can occur and that between these jumps the elements are sorted. Does anybody have any idea? Thanks in advance for any suggestions.
You cannot find a solution more efficient than O(n) (meaning one that does not look at all the numbers), since you cannot conclude anything about your numbers by looking at less than all of them. For example, if you only looked at every second number (still O(n), but with a better constant factor) you would miss double jumps like these: 1 5 3. You can and must look at every single number and compare it to its neighbours. You could split your workload and use a multicore approach, but that's about it.
Update
If you have the special case that there is only 1 jump in your list and the rest is sorted (e.g. 1 2 3 7 8 9), you can find this jump rather efficiently. You cannot use vanilla binary search since the list is not fully sorted and you don't know which number you are searching for, but you can use a variation of exponential search, which bears some resemblance.
We need the following assumptions for this algorithm to work:
There is only 1 jump (I ignore the "wrap around jump" since it is not technically between any following elements)
The list is otherwise sorted and it is strictly monotonically increasing
With these assumptions we are now basically searching for an interruption of the monotonicity. That means we are looking for the case where two elements a and b have n elements between them but do not fulfil b = a + n; this equation must hold if there is no jump between the two elements. Now you only need to find elements which do not fulfil this, in a non-linear manner, hence the exponential approach. This pseudocode could be such an algorithm:
let numbers be an array of length n fulfilling our assumptions
start = 0
stepsize = 1
while (start < n - 1)
    while (start + stepsize > n - 1)   // keep stop inside the array
        stepsize -= 1
    stop = start + stepsize
    while (numbers[stop] != numbers[start] + stepsize)
        // the jump must be between start and stop
        if (stepsize == 1)
            // congratulations, the jump is between start and start + 1
            return start
        else
            stepsize /= 2              // integer division
            stop = start + stepsize    // shrink the window and re-check
    start += stepsize
    stepsize *= 2
// no jump found
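A runnable transcription of the pseudocode above (a Python sketch, 0-based indexing; it returns the index just before the jump, or None if there is none):

def find_jump(numbers):
    # assumes: strictly increasing except for at most one jump between neighbours
    n = len(numbers)
    start, stepsize = 0, 1
    while start < n - 1:
        while start + stepsize > n - 1:        # keep stop inside the array
            stepsize -= 1
        stop = start + stepsize
        while numbers[stop] != numbers[start] + stepsize:
            # the jump lies somewhere between start and stop
            if stepsize == 1:
                return start                   # jump is between start and start + 1
            stepsize //= 2                     # shrink the window and re-check
            stop = start + stepsize
        start += stepsize
        stepsize *= 2
    return None                                # no jump found

print(find_jump([1, 2, 3, 7, 8, 9]))   # 2  (the jump is between 3 and 7)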

No. of paths in integer array

There is an integer array, e.g.
{3,1,2,7,5,6}
One can move forward through the array either one element at a time or by jumping ahead by the value at the current index. For example, from 3 one can go to 1 or to 7; from 1 one can only go to 2 (the value is 1, so the jump is the same as a single step); from 2 one can go to 7 or to 5; from 7 one can only go to 5, because 7 is at index 3 and 3 + 7 = 10, and there is no tenth element.
I have to count the number of possible paths to reach the end of the array from the start index.
I could only do it recursively and naively, which runs in exponential time.
Somebody please help.
My recommendation: use dynamic programming.
If this keyword is sufficient and you want the challenge of finding a possible solution on your own, don't read any further!
Here is a possible DP algorithm on the example input {3,1,2,7,5,6}. It will be your job to adapt it to the general problem.
create array sol of length 6 filled with zeros; sol[i] will hold the number of ways to reach the end from index i
sol[5] = 1;
for (i = 4; i >= 0; i--) {
    sol[i] = sol[i+1];                       // move one element forward
    if (i + input[i] < 6 && input[i] != 1)   // jump by input[i]; skip jumps of 1 to avoid double counting
        sol[i] += sol[i + input[i]];
}
return sol[0];
runtime O(n)
As for the directed graph solution hinted at in the comments:
Each cell in the array represents a node. Make a directed edge from each node to every node reachable from it. You can then count the number of ways by processing the nodes in reverse order and summing over the outgoing edges (there is no directed cycle, so this is just path counting in a DAG); however, it is a lot of boilerplate to actually program it.
Adjusting the recursive solution
Another option is pruning repeated work (memoization). This is basically equivalent to the DP algorithm. The exponential time comes from the fact that you calculate the same values several times. E.g. the function is recFunc(index); the initial call recFunc(0) calls recFunc(1) and recFunc(3), and so on. However, recFunc(3) is bound to be called again at some point, which leads to repeated recursive calculation. To prune this, add a map that holds all the already calculated values. When you make a call recFunc(x), look up in the map whether x was already calculated. If yes, return the stored value; if not, calculate it, store it and return it. This way you get O(n) too.
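A small sketch of that memoized recursion (Python; count_paths is my own name, and the cache plays the role of the map described above):

from functools import lru_cache

def count_paths(arr):
    n = len(arr)

    @lru_cache(maxsize=None)
    def rec(i):
        if i == n - 1:
            return 1                        # reached the end: one complete path
        ways = rec(i + 1)                   # move one element forward
        if i + arr[i] < n and arr[i] != 1:  # jump by the value; skip jumps of 1 to avoid double counting
            ways += rec(i + arr[i])
        return ways

    return rec(0)

print(count_paths([3, 1, 2, 7, 5, 6]))      # 3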

Dynamic Programming Problem.. Array Partitioning..

The question says,
that, given an array of size n, we have to output/partition the array into subsets which sum to N.
For example:
I/p: arr = {2,4,5,7}, n = 4, N (sum) = 7 (given)
O/p: {2,5}, {7}
I saw a similar kind of problem/explanation at the URL Dynamic Programming3
and I have the following queries about the PDF:
How could we find the subsets which sum to N, given that the logic only tells us whether such a subset exists or not?
Also, if we change the question a bit, can we find two subsets which have equal averages using the same idea?
Can anybody throw some light on this Dynamic Programming problem? :)
Thanks in Advance..
You can try to process it incrementally, by subset size:
Given a SORTED array X = {x1 ... xn} with xi != 0 and an integer N.
First find all the solutions made of just one element:
if some xp = N, record {xp}; you can also eliminate every xi > N, and (since the values are positive) xp itself cannot be combined with anything else.
Second, find all the candidates made of 2 elements:
{ (x1,x2) .... } over the remaining elements.
Sort them by sum, record the pairs summing exactly to N, and eliminate all the pairs whose sum is >= N from further consideration.
You also get the rule: xi cannot go with xj when xi + xj >= N.
Third, with 3 elements:
create all the triples that respect the above rule,
and repeat step 2.
etc...
Example:
X = {1,2,4,7,9,10}, N = 9
Step 1:
{9} is a solution
X' = {1,2,4,7,9}
Step 2 (9 and 10 cannot be chosen):
pairs = {(1,2) (1,4) (2,4) (1,7) (2,7) (4,7)}
{2,7} is a solution
remaining pairs = {(1,2) (1,4) (2,4) (1,7)}
Step 3 (2 and 4 cannot go with 7):
triples = {(1,2,4)}
no solution
{9} and {2,7} are the only solutions.
This reduces the total number of comparisons (checking all subsets would be 2^n = 2^6 = 64 cases); here you only did about 12 comparisons.
Hope it helps.
Unfortunately, this is a very difficult problem. Even determining if there exists a single subset summing to your target value is NP-Complete.
If the problem is more restricted, you might be able to find a good algorithm. For example:
Do the subsets have to be contiguous?
Can you ignore subsets with more than K values?
Are the array values guaranteed to be positive?
Are the array values guaranteed to be distinct? What about differing from the other values by at least some constant factor?
Is there some bound on the difference between the smallest and largest value?
The proposed algorithm stores only a single bit of information in the temporary array T[N], namely whether that sum is reachable at all. Obviously, you can store more information at each index, such as the values C[i] used to get there. (It's a variation of the "Dealing with Unlimited Copies" chapter in the PDF.)
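To illustrate that remark, here is a sketch (Python; the names are mine) that stores, for each reachable sum, one set of indices used to reach it, so a subset summing to N can be read off directly rather than just a yes/no answer:

def subset_with_sum(values, target):
    # reach[s] = a tuple of indices whose values sum to s (one witness per sum)
    reach = {0: ()}
    for i, v in enumerate(values):
        for s, used in list(reach.items()):
            if s + v <= target and s + v not in reach:
                reach[s + v] = used + (i,)
    if target not in reach:
        return None
    return [values[i] for i in reach[target]]

print(subset_with_sum([2, 4, 5, 7], 7))   # [2, 5]  ({7} is another valid subset)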

Given a string of red and blue balls, find min number of swaps to club the colors together

We are given a string of the form: RBBR, where R - red and B - blue.
We need to find the minimum number of swaps required in order to club the colors together. In the above case the answer would be 1, giving RRBB or BBRR.
I feel like an algorithm to sort a partially sorted array would be useful here since a simple sort would give us the number of swaps, but we want the minimum number of swaps.
Any ideas?
This is allegedly a Microsoft interview question according to this.
Take one pass over the string and count the number of reds (#R) and the number of blues (#B). Then take a second pass, counting the number of reds in the first #R positions (#r) and the number of blues in the first #B positions (#b). The lesser of (#R - #r) and (#B - #b) is the minimum number of swaps needed.
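A quick sketch of that counting argument (Python; min_swaps is my own name, and a swap is assumed to be allowed between any two positions):

def min_swaps(s):
    # s is a string of 'R' and 'B'
    reds = s.count('R')
    blues = len(s) - reds
    r_in_prefix = s[:reds].count('R')     # reds already in place for the R...B target
    b_in_prefix = s[:blues].count('B')    # blues already in place for the B...R target
    return min(reds - r_in_prefix, blues - b_in_prefix)

print(min_swaps("RBBR"))       # 1
print(min_swaps("RBRRBRBR"))   # 2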
We are given the string S, which we have to convert into a final string F = R^a B^b or B^b R^a. The number of differences between S and F must be even, because for every misplaced R there is a complementary misplaced B. So why not find the number of differences between S and each possible F, take the minimum, and divide it by 2?
For example, you're given S = RBRRBRBR which should convert to
RRRRRBBB
or
BBBRRRRR
Comparing S against each possible final string character by character, there are 4 differences in each case, so either way the minimum is 4 / 2 = 2 swaps.
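The same count expressed as differences against the two candidate final strings (a short Python sketch under the same assumptions):

def min_swaps_by_diff(s):
    reds = s.count('R')
    blues = len(s) - reds
    targets = ['R' * reds + 'B' * blues, 'B' * blues + 'R' * reds]
    # each swap fixes one misplaced R and one misplaced B, hence the division by 2
    return min(sum(a != b for a, b in zip(s, t)) for t in targets) // 2

print(min_swaps_by_diff("RBRRBRBR"))   # 2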
Let's look at your example. You know that the end state will be RRBB or BBRR. In other words, the end state is always nR mB or mB nR, where n is the number of R's and m is the number of B's in your string.
Since the end state is defined, maybe some sort of path-finding algorithm would be a good approach for this? How about considering each swap as a state change and thinking of a heuristic function to approximate the number of leftover swaps needed.
I'm just throwing an idea in the air, but I hope this helps.
Start two indices simultaneously from the left and right ends of the string. Advance the left index until you find an R. Move the right index backwards until you find a B. Swap them. Repeat until the left index meets the right index, and count the swaps. Then do the same, but look for a B on the left and an R on the right. The minimum is the lower of the two swap counts.
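One way to sketch that two-pointer pass (Python; it counts the swaps for both target orders and returns the lower count; the names are mine):

def min_swaps_two_pointer(s):
    def count(balls, left_char):
        # make every position on the left hold left_char by swapping with misplaced balls from the right
        balls = list(balls)
        i, j, swaps = 0, len(balls) - 1, 0
        while i < j:
            if balls[i] != left_char:
                while i < j and balls[j] != left_char:
                    j -= 1
                if i < j:
                    balls[i], balls[j] = balls[j], balls[i]
                    swaps += 1
            i += 1
        return swaps
    return min(count(s, 'B'), count(s, 'R'))

print(min_swaps_two_pointer("RBBR"))   # 1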
I think the number of swaps can be derived from the number of inversions required to sort the vector. Here is an example of doing the same with a permutation vector.
This isn't a technical answer, but I looked at this more intuitively.
RRBBBBR can be reduced to RBR, since a group of R's can be moved as a single block. This means that the array is really just N sets of RB.
The only thing that matters is the number of RB blocks (counting an incomplete block at the end as well).
RBR -> 1 swap to get to RRB (2 sets of RB block, RB and R)
RBRB-> 1 swap to get to RRBB (2 full sets of RB blocks)
RBRBRB-> 2 swaps to get to RRRBBB (3 full sets of RB blocks)
RBRBRBRB -> 4 sets of RB = 3 swaps
So to generalize this, the number of swaps needed is the number of RB blocks (including an incomplete final block) minus 1.

Resources