I was at an interview, and the interviewer asked me to write a function that finds a shortest list of operations to sort an array.
The allowed operations were swapping any two numbers in the array.
For example, given the array [0,3,4,1], two possible answers would be
[swap(1,2), swap(2,3)]
[swap(2,3), swap(1,2)]
I had already seen a question like that here, so I solved it with a modified version of the solution there.
However, after that, the interviewer changed the question a bit.
The goal was still to find the shortest list of operations to sort the array, but now the operations were rotating the array to left, rotating the array to the right, and swapping the numbers at index 0 and 1.
I tried to solve it with backtracking and a hash table, but my implementation did not work. I also suspect there is a general solution to this problem for any set of k allowed operations. For example, what if we were allowed to swap the two numbers in the middle, or to rotate the array by two elements to the left but only one element to the right?
How would you solve this?
I have been searching for a visual representation of merge sort in terms of the call stack. Since the function is called recursively, I cannot picture how the sorted sub-arrays are merged back together. As far as I understand, we first divide the problem into sub-problems by calling MergeSort recursively with the start and end indices of the divided sub-arrays. Then we call the Merge function, which sorts the elements. What I cannot understand is what happens after the recursive calls, when we reach the point of actually sorting the elements. In that part, we create two arrays whose sizes come from the parameters passed down by the function that divides the array into smaller pieces.
Well, after sorting the smallest arrays, how do we combine each newly sorted array with the one we sorted before it?
If there is a problem with my question, I am really sorry; I am confused and lost about this part and have many questions in my mind, so I may not be clear.
Here is a visual representation of merge sort; take a look and the logic should become clear. Link: https://www.hackerearth.com/practice/algorithms/sorting/merge-sort/visualize/
I've come across a problem that at first looks like the well-known maximum sum subarray problem, but there's a twist that makes things much more complex.
Suppose you have two arrays, each containing the same number of 1s and -1s. Additionally, suppose each 1 in the first array has a corresponding, or sibling, 1 in the second array, and likewise for each -1. The task is to find the optimal pair of subarrays, one in the first array and one in the second, such that their combined sum is maximal, with the added constraint that an element in one subarray only counts towards the sum if the other subarray contains its sibling.
Does anyone know what kind of problem this is? It looks like it could be a graph problem in disguise, but I'm not sure which one. If there is optimal substructure here, I don't see it either. I know it can be solved by complete search, but surely there's a faster way.
Below is an example of the setup to the problem.
Here the optimal solution is subarray [2..9] in the first array with subarray [4..9] in the second array for a sum of 8.
I've found answers to similar problems, but none of them exactly described my problem.
So, at the risk of being downvoted to hell, I was wondering if there is a standard method for solving my problem. There is also a chance that I'm asking the wrong question; maybe the problem can be solved more efficiently another way.
So here's some background:
I'm looping through a list of particles. Each particle has a list of its neighboring particles, and I need to build a list of unique pairs of mutual neighbours.
Each particle can be identified by an integer number.
Should I just build a list of all pairs, including duplicates, and eliminate the duplicates with some kind of sort-and-compare pass, or should I try to avoid adding duplicates to my list in the first place?
Performance is really important to me. I guess most of the loops can be vectorized and threaded. On average each particle has around 15 neighbours, and I expect there to be at most 1e6 particles.
I do have some ideas, but I'm not an experienced coder, and I don't want to waste a week benchmarking every single method in different situations just to find out that there is already a standard method for my problem.
Any suggestions?
BTW: I'm using C.
Some pseudo-code:
for i in nparticles
    particle = particles[i];   // just an array containing the "index" of each particle
    // each particle has a neighbour list
    for k in neighlist[i]      // looping through all the neighbours
        // k is the index of a neighbour of particle "i"
        if the pair (i,k) or (k,i) is not already in the pair list, add it; otherwise don't
Sorting the elements on each iteration is not a good idea, since a comparison sort costs O(n log n).
The next best thing would be to store the items in a search tree: a binary search tree, or better yet a self-balancing binary search tree; you can find implementations on GitHub.
An even better solution would give O(1) access time. You can achieve this in two different ways. One is a simple identity array, where each slot holds, say, a pointer to the item with that id, or a flag marking that id as empty. This is very fast but wasteful: you'll need O(N) memory.
The best solution, in my opinion, is to use a set or a hash map, which are basically the same thing, since a set can be implemented with a hash map.
Here is a GitHub project with a C hash-map implementation.
And a Stack Overflow answer to a similar question.
I'm struggling with one question on my assignment regarding bottom up merge sort.
Bottom-up merge sort divides the array into sub-arrays of size two and sorts each of them, then merges every two consecutive sub-arrays into sorted sub-arrays of size four, and so on, until the two sorted halves of size n/2 are merged into a completely sorted array.
I completely understand the algorithm but I'm having trouble with proving it formally using induction.
I'm supposed to prove its correctness under the assumption that n is a power of 2.
Then I'm asked to calculate its running time and prove that too, also by induction.
My current progress consists of trying to prove that after iteration i, the array consists of n/2^i sorted sub-arrays, each of size 2^i, but I'm not getting anywhere with that; maybe I'm looking at it the wrong way.
Any guidance on how to prove this using induction?
I have been given an array, and I'm asked to find the number of swaps required to sort the array using bubble sort.
Now, we know that we can find the number of comparisons by n(n-1)/2, but what I need is the number of actual swaps.
My first instinct was to run bubble sort and increment a swap counter on every swap(), but that takes O(n^2) time, and I'd like your help finding an optimized way to solve my dilemma.
P.S.: I also need to compare whether it is faster to sort the array in ascending or in descending order, and sorting it twice doubles the time.
Edit:
Sorry if I wasn't clear enough. I want to find the number of swaps without using bubble sort at all.
Regard each swap() applied to a[i] and a[i+1] as a bubble-up of a[i].
Now, asking how many swaps are going to happen is the same as asking how many bubble-up operations are going to happen. Well, and how many do we have of those?
Each a[i] will bubble up once for every position j > i where a[j] < a[i]. In words: a[i] bubbles up once for each position to its right whose value is smaller than a[i] itself. A pair of elements satisfying this condition is what is known as an inversion of a[].
So reformulating your question we could ask: What is the total number of inversions in a[]? (a.k.a. What is the inversion number of a[]?)
This is a well-known problem, and besides some obvious approaches running in O(n^2), the typical approach is to tweak merge sort a bit in order to compute this number. And since merge sort runs in O(n*log(n)), you get the same running time for finding the inversion number of a[].
Now that you know that you can tweak merge-sort, I suggest you give it a try on your own on how to do it exactly.
Hint: the main question you have to answer is: when positioning a single element during a merge step of two arrays, how many inversions did I fix? Then simply add up all of those.
In case you are still stuck after giving it some thought, you can have a look at some full-blown solutions here:
http://www.geeksforgeeks.org/counting-inversions/
Counting inversions in an array