So, I was asked this question in an interview:
There are two friends playing a game in which they select numbers from an array containing n positive numbers. The friends select one number at a time, and both players play the game optimally. You have to find the maximum sum (of the selected numbers) that you could obtain after the game ends. After I answered the unconstrained version of the question, the interviewer added these constraints:
On the first move of both players, they can select any number.
Apart from the first move, a player can only select a number which is adjacent to the previously selected number in the given array and which hasn't been selected by either player up to that moment in the game. (Clarification in the edit below.)
If a player is not able to make a move, he/she stops playing, and the game ends when neither player can make a move.
Now, the solution that I gave was:
Make a structure containing the value as well as the index of the value in the input array.
Make an array of the previous structure and store the values of first step in this array.
Sort this array in non-decreasing order on the basis of values.
Start selecting a value in a greedy manner and print the maximum value.
They were looking more for pseudo-code, though I can code it too. But the interviewer said this will fail for some cases. I thought a lot about which cases it would fail on but couldn't find any. Therefore, I need help with this question.
Also, if possible, please include a pseudo-code of what I can do to improve this.
Edit: I guess I wasn't clear enough in my question, specifically in 2nd point. What the interviewer meant was:
If it is not the player's first move, he has to choose a number which is adjacent to one of the numbers he already selected in previous moves.
Also, yes, both the players play the game optimally and they choose numbers turn by turn.
Edit 2: The same question was asked of my friend, but modified a little. Instead of an array, he was given an undirected graph (adjacency list as input). So where in my case I can select only the indices adjacent to my previously selected indices, he could select, in a particular move, only those vertices which are directly connected to any of his previously selected vertices.
For eg:
Let's say the number of positive integers is 3, with values 4, 2, 4. Also, if I name the positive integers A, B and C, then:
A - B
B - C
The above was the example that my friend was given, and the answer to it would be 6. Can you just point me in the right direction as to how I can begin with this? Thanks!
Notice that if you make the first move at index x, if your opponent plays optimally, their first move will have to be at index x-1 or x+1. Otherwise, they will have elements that they could have picked but didn't. To see this, consider two non-adjacent starting points:
-------y-------------x-------
Eventually they will both take elements from the array and end up with something like:
yyyyyyyyyyyyyyxxxxxxxxxxxxxxx
So you can relocate the starting points to the middle yx, obtaining the same solution.
So, assume you move first at x. Let:
s_left_x = a[0] + ... + a[x]
s_right_x = a[x] + ... + a[n - 1]
s_left_y = a[0] + ... + a[x - 1]
s_right_y = a[x + 1] + ... + a[n - 1]
Let's say you want to win the game: have a larger sum than your opponent at the end. If your opponent picks x + 1, you want s_left_x > s_right_y, and if your opponent picks x - 1, you want s_right_x > s_left_y. This is ideally, in order to win. It's not always possible to win though, and your question doesn't ask how to win, but rather how to get the largest sum.
Since your opponent will play optimally, he will force you into the worst case. So for each x as your first move, the best you can do is min(s_left_x, s_right_x). Pick the maximum of this expression for each index x, which you can find in O(1) for each index after some precomputations.
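The precomputation described above can be sketched like this (a rough Python sketch; the function name and return convention are mine, not from the interview):

```python
def best_first_move(a):
    """For each first move x, an optimal opponent forces player 1 into
    min(s_left_x, s_right_x); pick the x that maximizes this guaranteed sum."""
    total = sum(a)
    best_sum, best_x = -1, -1
    prefix = 0                      # a[0] + ... + a[x-1]
    for x, v in enumerate(a):
        s_left_x = prefix + v       # a[0] + ... + a[x]
        s_right_x = total - prefix  # a[x] + ... + a[n-1]
        guaranteed = min(s_left_x, s_right_x)
        if guaranteed > best_sum:
            best_sum, best_x = guaranteed, x
        prefix += v
    return best_x, best_sum
```

With `[4, 2, 4]` this picks index 1: whichever side the opponent takes, player 1 is guaranteed a sum of 6.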
OK, I think this is the solution, formulated more briefly:
The first player must pick the item that bisects the array such that the difference between the two resulting subarrays' sums is less than the picked item's value.
If it's achievable, p1 wins, if not, p2 wins.
Obviously, on his first move p2 must choose the item next to p1's, as that is the only way for him to get the maximum sum. He chooses the item on the side where the sum of the remaining items is bigger; that sum will also be the maximum p2 can get.
p1's maximum sum will be the sum of the remaining items (the items on the side p2 did not choose, plus the item p1 picked on his first move).
As the OP mentioned that both players play the game optimally, I am going to present an algorithm under this assumption.
If both players play optimally, then the sum they obtain at the end is necessarily the maximum; otherwise they are not playing optimally.
Suppose I make the first move and pick the element at position x. Two different cases arise from here, depending on which side my opponent takes.
Now, because we have to obey the condition that only adjacent elements may be picked, let me define two arrays here:
left[x]: the sum of the elements to the left of x, i.e. array[0] + array[1] + ... + array[x-1].
right[x]: the sum of the elements to the right of x, i.e. array[x+1] + array[x+2] + ... + array[n-1].
Now, because the other player also plays optimally, he will check what I can possibly achieve, and he finds that I could achieve either:
array[x] + left[x] = S1
OR
array[x] + right[x] = S2
So what the other player does is leave us with the minimum of S1 and S2.
If S1 < S2, the other player picks the element at x+1; he has just taken the better part of the array from us, because we are now left with the smaller sum S1.
If S1 > S2, the other player picks the element at x-1; he has just taken the better part of the array from us, because we are now left with the smaller sum S2.
Because I am also playing optimally, on my very first move I pick the x that minimizes the absolute value of (right[x] - left[x]), so that even when my opponent takes the better part of the array from us, he takes away as little as possible.
Therefore, if both players play optimally, the maximum sums obtained are:
array[x] + left[x] and right[x] // when the second player, on his first move, picks x+1
Therefore,in this case the moves made are:
Player 1: picks the elements at positions x, x-1, x-2, ..., 0.
Player 2: picks the elements at positions x+1, x+2, ..., n-1.
This is because each player has to pick an element adjacent to one he previously picked.
OR
array[x] + right[x] and left[x] // when the second player, on his first move, picks x-1
Therefore,in this case the moves made are:
Player 1: picks the elements at positions x, x+1, x+2, ..., n-1.
Player 2: picks the elements at positions x-1, x-2, ..., 0.
This is because each player has to pick an element adjacent to one he previously picked.
where x is such that we obtain the minimum absolute value of (right[x] - left[x]).
Since the OP insisted on pseudocode, here is one:
Computing the left and right arrays:
for(i = 0 to n-1)
{
    if(i == 0)
        left[i] = 0;
    else
        left[i] = left[i-1] + array[i-1];
    j = n-1-i;
    if(j == n-1)
        right[j] = 0;
    else
        right[j] = right[j+1] + array[j+1];
}
(The recurrences fill left from the left end and right from the right end in the same pass.)
Computing max_sums.
Find_the_max_sums()
{
min = absolute_value_of(right[0] - left[0]);
x = 0;
for(i = 1 to n-1)
{
if( absolute_value_of(right[i]-left[i]) < min)
{
min = absolute_value_of(right[i]-left[i]);
x=i;
}
}
}
Clearly, both the space and time complexity of this algorithm are linear.
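The pseudocode above, under my reading of this answer, can be transcribed into Python as follows (the function name and return convention are mine):

```python
def max_sums(array):
    """First player picks the x minimizing |right[x] - left[x]|;
    the optimal opponent then takes the bigger remaining side."""
    n = len(array)
    left = [0] * n   # left[x]  = array[0] + ... + array[x-1]
    right = [0] * n  # right[x] = array[x+1] + ... + array[n-1]
    for i in range(1, n):
        left[i] = left[i - 1] + array[i - 1]
        right[n - 1 - i] = right[n - i] + array[n - i]
    x = min(range(n), key=lambda i: abs(right[i] - left[i]))
    p1 = array[x] + min(left[x], right[x])  # opponent took the bigger side
    p2 = max(left[x], right[x])
    return x, p1, p2
```

On the example `[4, 2, 4]` from the question, the first move is index 1 and the sums are 6 and 4, matching the expected answer of 6.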
To complete @IVlad's answer, there exists a strategy with which p1 can never lose (in the worst case there may be a draw): she can always find an x such that the sum she obtains is not smaller than p2's sum.
The proof goes as follows -- sorry, this is more math than coding. I consider an array of positive numbers (up to a constant translation, there is no loss of generality here).
I denote a = [a[0],...,a[n]] the array under consideration and for any k<=l, S[k,l] = a[k]+...+a[l] the sum of all terms form rank k to rank l. By convention, S[k,l]=0 if k>l.
As explained in the previous comments, picking a[k+1] guarantees that p1 does not lose if:
a[k+1] >= max( S[1,k]-S[k+2,n] , S[k+2,n]-S[1,k] )
Let us consider D[k] = S[1,k] - S[k+1,n], the difference between the sum of the first k terms and the sum of the other terms. The sequence D is increasing, negative for k=0 and positive for k=n. We can therefore find an index i such that:
D[i] <= 0
D[i+1] >= 0
(in fact, one of the two above inequalities must be strict).
Substituting for S and noticing that S[1,k+1]=S[1,k] + a[k+1] and S[k+1,n]=S[k+2,n] + a[k+1], the two inequalities in D imply:
S[1,i] <= S[i+2,n] + a[i+1]
S[1,i] + a[i+1] >= S[i+2,n]
or equivalently:
a[i+1] >= S[1,i] - S[i+2,n]
a[i+1] >= S[i+2,n] - S[1,i]
In other words, choosing a[i+1] is a strategy with which p1 cannot lose.
In fact, this strategy offers the largest payoff for p1. Note that it is not unique even if all terms of the array are strictly positive. Consider for example a=[1,2,3], where p1 can indifferently choose 2 or 3.
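The sign-change argument translates directly into code; here is a small sketch (0-based indices, so the returned index corresponds to a[i+1] in the proof's 1-based notation; the function name is mine):

```python
def no_lose_pick(a):
    """Return an index i with D[i] <= 0 <= D[i+1],
    where D[k] = sum(a[:k]) - sum(a[k:]).
    Picking a[i] then guarantees p1 at least a draw."""
    total = sum(a)
    left = 0  # sum(a[:i])
    for i, v in enumerate(a):
        d_i = 2 * left - total           # D[i]  = left - (total - left)
        d_next = 2 * (left + v) - total  # D[i+1]
        if d_i <= 0 and d_next >= 0:
            return i
        left += v
    return None  # unreachable for a non-empty array of positive numbers
```

For `[1, 2, 3]` it returns index 1 (the value 2), one of the two no-lose picks noted above; for `[5, 1, 1]` it returns index 0.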
Given your constraints, the maximum sum in any game will always be the addition of a number of adjacent values, from a given split position S to either position 1 or position N. Player 1 chooses an initial split point S and Player 2 chooses his side of the array to sum (so Player 2 also chooses Player 1's side). One player will add from 1 to S (or S-1) and the other from S (or S+1) to N. The higher sum wins.
In order to win the game, Player 1 must find a split position S such that both additions, from 1 to S-1 and from S+1 to N, are strictly smaller than the sum of the other side plus the value at S. This way, no matter which side Player 2 chooses to add, its sum will be smaller.
Pseudocode:
1. For each position S from 1 to N, repeat:
1.1. Add values from 1 to S-1 (set to zero if S is 1), assign to S1.
1.2. Add values from S+1 to N (set to zero if S is N), assign to SN.
1.3. If S1 is smaller than S+SN and SN is smaller than S+S1, then S is a winning position for Player 1. If not, repeat.
2. If you have found no winning position, then whatever you choose Player 2 can win choosing in turn an optimal position.
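The steps above can be sketched in Python (a hypothetical helper of my own naming; it returns a 1-based winning split position, or None when no strictly winning position exists):

```python
def winning_position(values):
    """Scan each split position S, keeping the running sum S1 of values
    before it; SN is the sum after it. S wins if both sides are strictly
    smaller than the other side plus the value at S."""
    total = sum(values)
    s1 = 0  # sum of values before position S
    for idx, v in enumerate(values):
        sn = total - s1 - v  # sum of values after position S
        if s1 < v + sn and sn < v + s1:
            return idx + 1   # 1-based winning position for Player 1
        s1 += v
    return None  # no strictly winning position exists
```

For `[4, 2, 4]` position 2 wins; for `[1, 2, 3]` there is no strictly winning split (the best Player 1 can force is a draw).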
I am trying to write a sorting algorithm which takes an input array and produces the sorted array. The following are the constraints for the algorithm:
Time complexity = $O(n \log n)$
Only one comparison operation throughout the algorithm.
We have a function which can provide the median of a set of three elements.
I have tried finding a solution and following is my result.
We use the median function to find the smallest and the largest elements as a pair.
Example: Given an array A[1..n], we take the median of the first three elements; call these three a set, and we receive a median. In the next iteration we remove the received median from the set, add the next element to the set, and call the median function again. This step, repeated over the whole length, produces the pair of the largest and the smallest element of the entire array.
We use this pair with the one comparison operation and place them at positions A[0] and A[n-1].
We repeat the same operation over the array A[1..n-2] to get another pair of the largest and smallest.
We take the median with the A[0] and newly received pair.
Median Value is placed at the A[1].
We take the median with the A[n-1] and newly received pair.
Median Value is placed at the A[n-2].
Step 3~7 are repeated to get a sorted array.
This algorithm satisfies conditions 2 & 3 but not the time complexity. I hope someone can provide some guidance on how to proceed further.
Quicksort (presented in reverse order) works like this. Suppose you have an array:
[1, 4, 5, 2, 3]
Quicksort in the abstract basically works by sliding towards the middle of the array from both the left and the right side. As we slide inwards, we want to swap items such that big things get moved to the right, and small things get moved to the left. Eventually we should have an array where all the small stuff is on the left, and all the big stuff is on the right.
The upshot of this process also guarantees that one element will be placed in the correct location (because everything to the left of it will be smaller, and everything to the right will be bigger, so it must be in the right position). That value is called the pivot. The first step of quicksort is to ensure the pivot is in the right place.
The way we do this is by selecting a random element to be our pivot - the item we want to put into its correct place. For this simple example we'll just use the last number (3). The pivot is our comparison value.
Once we have selected our pivot/comparison value, we then monitor the left-most element (1) and the right-most element (3). We'll call these the left pointer and the right pointer. The left pointer's job is to slide towards the middle of the array, stopping when it finds something that is larger than the pivot. The right pointer does the same thing, but it slides inward looking for values less than the pivot. In code:
while (true) {
while (array[++left] < pivot);
while (array[--right] > pivot) if (left == right) break;
if (left >= right) break; // If the pointers meet then exit the loop
swap(array[left], array[right]); // Swap the values otherwise.
}
So in our example above, when the left pointer hits (4) it recognizes that it is higher than our pivot element and stops moving. The right pointer does the same thing from the right side, but stops when it hits (2), because that's lower than the pivot. When both sides stop, we do a swap, so we end up with:
[1, 2, 5, 4, 3]
Notice that we are getting closer to sorted. We continue to move both pointers inward until they both point to the same element, or they cross - whichever comes first. When that happens, we make one final step, which is to replace the pivot element (3) with whatever element the left/right pointers are pointing to, which in this case would be (5) because they would both stop right in the middle. Then we swap, so that we get:
[1, 2, 3, 4, 5]
(Notice that we swap the original pivot (3) with the value pointed to by both sides (5))
This whole process is called a partition. In code it looks like this:
int partition(int *array, int lBound, int rBound) {
int pivot = array[rBound]; // Use the last element as a pivot
int left = lBound - 1; // Point to one before the first element
int right = rBound; // Point to the last element;
// We'll break out of the loop explicitly
while (true) {
while (array[++left] < pivot);
while (array[--right] > pivot) if (left == right) break;
if (left >= right) break; // If the pointers meet then exit the loop
swap(array[left], array[right]); // Swap the values otherwise.
}
swap(array[left], array[rBound]); // Move the pivot item into its correct place
return left; // The left element is now in the right place
}
It's important to note that although the partition step fully sorted our array in this example, that's not ordinarily the point of the partition step. The point of the partition step is to put one element into its correct place, and to ensure that everything left of that element is smaller and everything to the right is bigger. In other words, it moves the pivot value into its correct location and then guarantees the ordering on either side of it. So although in this example the array was completely sorted, in general we can only guarantee that one item is in the correct location (with everything to its left and right smaller and bigger, respectively). This is why the partition method above returns left: it tells the calling function that this one element is in the correct location (and the array has been correctly partitioned).
That is if we start with an array like:
[1, 7, 5, 4, 2, 9, 3]
Then the partition step would return something like this:
[1, 3, 2, [4], 7, 5, 9]
Where [4] is the only value guaranteed to be in the right place, but everything to the left is smaller than [4] and everything to the right is bigger (though not necessarily sorted!).
The second step is to perform this step recursively. That is, if we can put one element into its correct location, then we should be able to eventually put all items into their correct locations. That is the quicksort function. In code:
int *quicksort(int *array, int lBound, int rBound) {
if (lBound >= rBound) return array; // If the array is size 1 or less - return.
int pivot = partition(array, lBound, rBound); // Split the array in two.
quicksort(array, lBound, pivot - 1); // Sort the left side. (Recursive)
quicksort(array, pivot + 1, rBound); // Sort the right side. (Recursive)
return array;
}
Notice that the first step is to ensure that we have an array size of at least 2. It doesn't make sense to process anything smaller than that, so we return if that condition isn't met. The next step is to call our partition function, which will split the array according to the process outlined above. Once we know that the array has one element in its correct position, we simply call quicksort again, but this time on the left side of the pivot, and then again on the right side of the pivot. Notice we don't include the pivot, because the partition is guaranteed to put it into the correct location!
If we continue to call quicksort recursively, eventually we'll halve the array and partition it until we get arrays of size-one (which by definition is already sorted). So we partition, then halve, partition, halve, etc. until the entire array is sorted (in place). This gives us a sort in O(n lg(n)) time. Cool!
Here's a quick example of its use:
#include <iostream>
using std::cout;
using std::endl;

int main() {
    int array [] {1, 0, 2, 9, 3, 8, 4, 7, 5, 6};
    quicksort(array, 0, 9); // Sort from index 0 to 9.
    // Display the results
    for (int index = 0; index != 10; ++index) {
        cout << array[index] << endl;
    }
    return 0;
}
A good visual demonstration can be found here: http://www.youtube.com/watch?v=Z5nSXTnD1I4
Steps 1 and 2 are indeed the first steps of a correct solution. Once you know the smallest and largest elements, though, the median oracle is a comparison oracle; if you want to compare a[i] with a[j], then a[i] < a[j] exactly when a[i] = median(a[i], a[j], a[0]) where a[0] is the smallest element. So you can run straight quicksort or mergesort or whathaveyou.
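Here is a sketch of that reduction (the `median3` stand-in, the assumption of distinct elements, and the use of `min` as a shorthand for the smallest element found in steps 1-2 are all mine):

```python
from functools import cmp_to_key

def median3(x, y, z):
    # Stand-in for the given median oracle.
    return sorted([x, y, z])[1]

def median_sort(a):
    """Sort using median3 as the only ordering primitive
    (assumes distinct elements)."""
    smallest = min(a)  # in the real algorithm this comes from steps 1-2
    def cmp(x, y):
        if x == y:
            return 0
        # median(x, y, smallest) is the smaller of x and y.
        return -1 if median3(x, y, smallest) == x else 1
    return sorted(a, key=cmp_to_key(cmp))
```

Since `sorted` is O(n log n) comparisons and each comparison is one oracle call, the whole thing stays within the required time bound.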
A frequent question that crops up during array-manipulation exercises is to rotate a two-dimensional array by 90 degrees. There are a few SO posts that answer how to do it in a variety of programming languages. My question is to clarify one of the answers that is out there, and to explore what sort of thought process is required in order to get to the answer in an organic manner.
The solution to this problem that I found goes as follows:
public static void rotate(int[][] matrix, int n)
{
    for(int layer = 0; layer < n/2; ++layer){
        int first = layer;
        int last = n - 1 - layer;
        for(int i = first; i < last; ++i){
            int offset = i - first;
            int top = matrix[first][i];
            matrix[first][i] = matrix[last-offset][first];
            matrix[last-offset][first] = matrix[last][last-offset];
            matrix[last][last-offset] = matrix[i][last];
            matrix[i][last] = top;
        }
    }
}
I have somewhat of an idea of what the code above is trying to do: it is swapping out the extremities/corners by doing a four-way swap, and doing the same for the other cells, separated by some offset.
Stepping through this code I know it works, what I do not get is the mathematical basis for the above given algorithm. What is the rationale behind the 'layer','first','last' and the offset?
How did 'last' turn out to be n-1-layer? Why is the offset i-first? What is the offset in the first place?
If somebody could explain the genesis of this algorithm and step me through the thought process to come up with the solution, that will be great.
Thanks
The idea is to break down the big task (rotating a square matrix) into smaller tasks.
First, a square matrix can be broken into concentric square rings. The rotation of a ring is independent from the rotation of other rings, so to rotate the matrix just rotate each of the rings, one by one. In this case, we start at the outermost ring and work inward. We count the rings using layer (or first, same thing), and stop when we get to the middle, which is why it goes up to n/2. (It is worth checking to make sure this will work for odd and even n.) It is useful to keep track of the "far edge" of the ring, using last = n - 1 - layer. For instance, in a 5x5 matrix, the first ring starts at first=0 and ends at last=4, the second ring starts at first=1 and ends at last=3 and so on.
How to rotate a ring? Walk right along the top edge, up along the left edge, left along the bottom edge and down along the right edge, all at the same time. At each step swap the four values around. The coordinate that changes is i, and the number of steps is offset. For example, when walking around the second ring, i goes {1,2,3} and offset goes {0,1,2}.
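The same layer/first/last/offset walk, transcribed to Python for experimentation (this performs the clockwise rotation in place, mirroring the Java code in the question):

```python
def rotate(matrix):
    """Rotate an n x n matrix 90 degrees clockwise, in place, ring by ring."""
    n = len(matrix)
    for first in range(n // 2):          # one iteration per ring
        last = n - 1 - first             # far edge of this ring
        for i in range(first, last):
            offset = i - first
            top = matrix[first][i]       # save the top-edge value
            # left column -> top row, bottom row -> left column,
            # right column -> bottom row, saved top -> right column
            matrix[first][i] = matrix[last - offset][first]
            matrix[last - offset][first] = matrix[last][last - offset]
            matrix[last][last - offset] = matrix[i][last]
            matrix[i][last] = top
    return matrix
```

Running it on a 3x3 matrix makes the ring structure visible: the outer ring has three four-way swaps (i = 0, 1) and the center cell never moves.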
I already solved most of the questions posted here, all but the longest-path one. I've read the Wikipedia article about longest paths, and it seems an easy problem if the graph were acyclic, which mine is not.
How do I solve the problem then? Brute force, by checking all possible paths? How do I even begin to do that?
I know it's going to take A LOT on a graph with ~18,000 nodes. But I just want to develop it anyway, because it's required for the project, and I'll just test it and show it to the instructor on a smaller-scale graph where the execution time is just a second or two.
At least then I'll have done all the required tasks and have a running proof of concept that it works, given there's no better way on cyclic graphs. But I don't have a clue where to start checking all these paths...
The solution is to brute-force it. You can do some optimizations to speed it up - some are trivial, some are very complicated. I doubt you can get it to work fast enough for 18,000 nodes on a desktop computer, and even if you could, I have no idea how. Here's how the brute force works, however.
Note: Dijkstra and any of the other shortest path algorithms will NOT work for this problem if you are interested in an exact answer.
Start at a root node *root*
Let D[i] = longest path from node *root* to node i. D[*root*] = 0, and the others are also 0.
void getLongestPath(node, currSum)
{
if node is visited
return;
mark node as visited;
if D[node] < currSum
D[node] = currSum;
for each child i of node do
getLongestPath(i, currSum + EdgeWeight(i, node));
mark node as not visited;
}
Let's run it by hand on this graph: 1 - 2 (4), 1 - 3 (100), 2 - 3 (5), 3 - 5 (200), 3 - 4 (7), 4 - 5 (1000)
Let the root be 1. We call getLongestPath(1, 0);
2 is marked as visited and getLongestPath(2, 4); is called
D[2] = 0 < currSum = 4 so D[2] = 4.
3 is marked as visited and getLongestPath(3, 4 + 5); is called
D[3] = 0 < currSum = 9 so D[3] = 9.
4 is marked as visited and getLongestPath(4, 9 + 7); is called
D[4] = 0 < currSum = 16 so D[4] = 16.
5 is marked as visited and getLongestPath(5, 16 + 1000); is called
D[5] = 0 < currSum = 1016 so D[5] = 1016.
getLongestPath(3, 1016 + 200); is called, but node 3 is marked as visited, so nothing happens.
Node 5 has no more child nodes, so the function marks 5 as not visited and backtracks to 4. The backtracking will happen until node 1 is hit, which will end up setting D[3] = 100 and updating more nodes.
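The recursive pseudocode above, transcribed to Python and run on the example graph (the function name and adjacency-list layout are my choices):

```python
def longest_paths(n, edges, root):
    """Longest simple path from root to every node, by exhaustive DFS
    with backtracking (exponential in general; fine for tiny graphs)."""
    adj = [[] for _ in range(n + 1)]     # 1-based adjacency list
    for u, v, w in edges:
        adj[u].append((v, w))
        adj[v].append((u, w))
    D = [0] * (n + 1)                    # D[i] = longest path root -> i
    visited = [False] * (n + 1)

    def dfs(node, curr_sum):
        visited[node] = True
        D[node] = max(D[node], curr_sum)
        for nxt, w in adj[node]:
            if not visited[nxt]:
                dfs(nxt, curr_sum + w)
        visited[node] = False            # un-mark so other paths may reuse it

    dfs(root, 0)
    return D

# The example graph from the walkthrough:
# 1-2 (4), 1-3 (100), 2-3 (5), 3-5 (200), 3-4 (7), 4-5 (1000)
edges = [(1, 2, 4), (1, 3, 100), (2, 3, 5), (3, 5, 200), (3, 4, 7), (4, 5, 1000)]
D = longest_paths(5, edges, 1)
```

After the full backtracking completes, `D[5]` ends up at 1107 (path 1-3-4-5), improving on the 1016 found on the first descent in the hand-run above.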
Here's how it would look iteratively (not tested, just a basic idea):
Let st be a stack, the rest remains unchanged;
void getLongestPath(root)
{
st.push(pair(root, 0));
while st is not empty
{
topStack = st.top();
if topStack.node is visited
goto end;
mark topStack.node as visited;
if D[topStack.node] < topStack.sum
D[topStack.node] = topStack.sum;
if topStack.node has a remaining child (*)
st.push(pair(nextchild of topStack.node, topStack.sum + edge cost of topStack.node - nextchild))
end:
mark topStack.node as not visited
st.pop();
}
}
(*) - this is a bit of a problem: you have to keep a pointer to the next child for each node, since it can change between different iterations of the while loop and even reset itself (the pointer resets itself when you pop the topStack.node node off the stack, so make sure to reset it). This is easiest to implement on linked lists; however, you should use either int[] lists or vector<int> lists, so as to be able to store the pointers and have random access, because you will need it. You can keep, for example, next[i] = next child of node i in its adjacency list and update that accordingly. You might have some edge cases and might need two different end: situations: a normal one, and one that happens when you visit an already-visited node, in which case the pointers don't need to be reset. Maybe move the visited condition before you decide to push something on the stack, to avoid this.
See why I said you shouldn't bother? :)
Is there any way of finding the start of a loop in a linked list using no more than two pointers?
I do not want to visit every node, marking it as seen and reporting the first node already seen. Is there any other way to do this?
Step 1: Proceed in the usual way you would use to find the loop: keep two pointers, incrementing one by a single step and the other by two steps. If they meet at some point, there is a loop.
Step 2: Freeze one pointer where it is and increment the other by one step, counting the steps you make; when they meet again, the count gives you the length of the loop (this is the same as counting the number of elements in a circular linked list).
Step 3: Reset both pointers to the start of the linked list. Increment one pointer by the length of the loop, then start moving the second pointer as well, incrementing both by one step; when they meet, that node is the start of the loop (this is the same as finding the nth element from the end of a linked list).
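The three steps above, sketched in Python on a minimal singly linked list (the `Node` class and function name are my own minimal scaffolding):

```python
class Node:
    def __init__(self, v):
        self.v = v
        self.next = None

def loop_start(head):
    # Step 1: detect the loop with slow/fast pointers.
    slow = fast = head
    while fast and fast.next:
        slow, fast = slow.next, fast.next.next
        if slow is fast:
            break
    else:
        return None  # fast fell off the end: no loop
    # Step 2: count the length of the loop.
    length, probe = 1, slow.next
    while probe is not slow:
        probe, length = probe.next, length + 1
    # Step 3: advance one pointer `length` steps from the head,
    # then move both in step; they meet exactly at the loop start.
    p1 = p2 = head
    for _ in range(length):
        p2 = p2.next
    while p1 is not p2:
        p1, p2 = p1.next, p2.next
    return p1
```

On a list 1→2→...→9 with 9 linking back to 4, this returns the node holding 4.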
MATHEMATICAL PROOF + THE SOLUTION
Let 'k' be the number of steps from HEADER to BEGINLOOP.
Let 'm' be the number of steps from HEADER to MEETPOINT.
Let 'n' be the number of steps in the loop.
Also, consider two pointers 'P' and 'Q'. Q having 2x speed than P.
SIMPLE CASE: When k < n
When pointer 'P' is at BEGINLOOP (i.e. it has traveled 'k' steps), Q has traveled '2k' steps. So, effectively, Q is '2k-k = k' steps ahead of P when P enters the loop, and hence Q is now 'n-k' steps behind BEGINLOOP.
When P has moved from BEGINLOOP to MEETPOINT, it has traveled 'm-k' steps. In that time, Q has traveled '2(m-k)' steps. But, since they met, and Q started 'n-k' steps behind BEGINLOOP, effectively,
'2(m-k) - (n-k)' should be equal to '(m-k)'
So,
=> 2m - 2k - n + k = m - k
=> 2m - n = m
=> n = m
THAT MEANS, P and Q meet at a point equal to the number of steps (or a multiple, to be general; see the case mentioned below) in the loop. Now, at the MEETPOINT, both P and Q are 'n-(m-k)' steps behind BEGINLOOP, i.e. 'k' steps behind, as we saw n=m.
So, if we start P from HEADER again, and Q from the MEETPOINT but this time with the pace equal to P, P and Q will now be meeting at BEGINLOOP only.
GENERAL CASE: Say, k = nX + Y, Y < n
(Hence, k%n = Y)
When pointer 'P' is at BEGINLOOP (i.e. it has traveled 'k' steps), Q has traveled '2k' steps. So, effectively, Q is '2k-k = k' steps ahead of P when P enters the loop. But please note 'k' is greater than 'n', which means Q has made multiple rounds of the loop. So, effectively, Q is now 'n-(k%n)' steps behind BEGINLOOP.
When P has moved from BEGINLOOP to MEETPOINT, it has traveled 'm-k' steps. (Hence, effectively, MEETPOINT is '(m-k)%n' steps ahead of BEGINLOOP now.) In that time, Q has traveled '2(m-k)' steps. But, since they met, and Q started 'n-(k%n)' steps behind BEGINLOOP, the new position of Q (which is '(2(m-k) - (n-k%n))%n' from BEGINLOOP) should equal the new position of P (which is '(m-k)%n' from BEGINLOOP).
So,
=> (2(m - k) - (n - k%n))%n = (m - k)%n
=> ((m - k) - (n - k%n))%n = 0 (subtracting (m-k) from both sides, mod n)
=> (m - k + k%n)%n = 0 (as n%n = 0)
=> (m - Y + Y)%n = 0 (as k%n = Y, so k ≡ Y (mod n))
=> m%n = 0
=> 'm' must be a multiple of 'n'
First we try to find out whether there is any loop in the list at all. If a loop exists, then we try to find its starting point. For this we use two pointers, slowPtr and fastPtr. In the first phase (checking for a loop), fastPtr moves two steps at a time while slowPtr moves one step at a time.
slowPtr 1 2 3 4 5 6 7
fastPtr 1 3 5 7 9 5 7
It is clear that if there is any loop in the list, they will meet at a point (Point 7 in the trace above), because the fastPtr pointer is running twice as fast as the other one.
Now, we come to second problem of finding starting point of loop.
Suppose they meet at Point 7 (as in the trace above). Then slowPtr comes out of the loop and stands at the beginning of the list (Point 1), while fastPtr stays at the meeting point (Point 7). Now we compare both pointers' values; if they are the same, that is the starting point of the loop. Otherwise we move both one step ahead (here fastPtr is also moving one step at a time) and compare again, until we find the same point.
slowPtr 1 2 3 4
fastPtr 7 8 9 4
Now one question comes to mind: how is this possible? There is a good mathematical proof.
Suppose:
m => length from starting of list to starting of loop (i.e 1-2-3-4)
l => length of loop (i.e. 4-5-6-7-8-9)
k => length between starting of loop to meeting point (i.e. 4-5-6-7)
Total distance traveled by slowPtr = m + p(l) +k
where p => number of repetition of circle covered by slowPtr
Total distance traveled by fastPtr = m + q(l) + k
where q => number of repetition of circle covered by fastPtr
Since,
fastPtr running twice faster than slowPtr
Hence,
Total distance traveled by fastPtr = 2 X Total distance traveled by slowPtr
i.e
m + q(l) + k = 2 * ( m + p(l) + k )
or, m + k = (q - 2p) l
or, m = (q - 2p) l - k
So,
If slowPtr starts from the beginning of the list and travels a length of "m", it will reach Point 4 (i.e. 1-2-3-4),
and
if fastPtr starts from Point 7 and travels a length of "(q-2p) l - k", it will also reach Point 4 (i.e. 7-8-9-4),
because "(q-2p) l" is a whole number of complete circles.
Proceed in the usual way you would use to find the loop, i.e. keep two pointers, incrementing one by a single step (slowPointer) and the other by two steps (fastPointer). If they meet at some point, there is a loop.
As you might have already realized, the meeting point is k steps before the head of the loop,
where k is the size of the non-looped part of the list.
Now move slow to the head of the list,
and keep fast at the collision point.
Each of them is k steps from the loop start (slow from the start of the list, whereas fast is k steps before the head of the loop - draw the picture to get clarity).
Now move them at the same speed - they must meet at the loop start.
e.g.
slow = head;
while (slow != fast)
{
    slow = slow.next;
    fast = fast.next;
}
// slow (== fast) is now the start of the loop
This is the code to find the start of a loop in a linked list:
public static void findStartOfLoop(Node n) {
Node fast, slow;
fast = slow = n;
do {
fast = fast.next.next;
slow = slow.next;
} while (fast != slow);
fast = n;
while (fast != slow) {
    fast = fast.next;
    slow = slow.next;
}
System.out.println(" Start of Loop : " + fast.v);
}
There are two ways to find loops in a linked list:
1. Use two pointers, one advancing one step at a time and the other two steps. If there is a loop, at some point both pointers will land on the same node and never reach null; if there is no loop, a pointer reaches null at some point and the two pointers never coincide. With this approach alone we can tell there is a loop in the linked list, but we can't tell where exactly the loop starts. This is not the most efficient way either.
2. Use a hash set (one that throws an exception if we try to insert a duplicate). Travel through each node and push its address into the hash. If the pointer reaches null with no exception from the hash, there is no cycle in the linked list. If we get an exception from the hash, there is a cycle in the list, and the node that raised it is where the cycle starts.
Well, I tried a way using one pointer, and I tried the method on several data sets. It relies on the assumption that memory for the nodes of a linked list is allocated at increasing addresses: while traversing the linked list from the head, if the address of a node is larger than the address of the node it points to, we can conclude there is a loop, and that target node is the beginning of the loop. Note that no allocator guarantees this ordering, so this is a heuristic that can fail, not a general solution.
The best answer I have found was here:
tianrunhe: find-loop-starting-point-in-a-circular-linked-list
'm' being distance between HEAD and START_LOOP
'L' being loop length
'd' being distance between MEETING_POINT and START_LOOP
p1 moving at V, and p2 moving at 2*V
when the 2 pointers meet, the distance run by p2 is twice that of p1: m + n*L - d = 2*(m + L - d), which simplifies to m = (n-2)*L + d
=> which means m and d are equal modulo L: so if p1 starts from HEAD and p2 starts from MEETING_POINT and they move at the same pace, they will meet at START_LOOP
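A quick way to convince yourself of that relation is to simulate it. The sketch below is my own illustration (not from the linked post): it models the list as an array of successor indices, builds m = 4 head nodes followed by a loop of length L = 6, and checks that the second phase really lands on START_LOOP:

```python
def build(m, L):
    """Successor table for a list with m non-loop nodes followed by a loop
    of length L; node i's successor is nxt[i], and the loop starts at m."""
    return list(range(1, m + L)) + [m]   # last loop node points back to index m

def loop_start(nxt):
    slow = fast = 0
    while True:                  # phase 1: speeds V and 2*V until they meet
        slow = nxt[slow]
        fast = nxt[nxt[fast]]
        if slow == fast:
            break
    p1, p2 = 0, slow             # phase 2: p1 from HEAD, p2 from MEETING_POINT
    while p1 != p2:              # same pace: they meet at START_LOOP
        p1, p2 = nxt[p1], nxt[p2]
    return p1

print(loop_start(build(4, 6)))   # -> 4, the index of START_LOOP
```

The same check passes for degenerate shapes too, e.g. a list that is one whole cycle (m = 0) or a self-loop of length 1.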
Refer to the link above for the comprehensive answer.
Proceed in the usual way you would use to find the loop: have two pointers, increment one a single step at a time and the other two steps at a time. If they ever meet, there is a loop.
Keep one of the pointers fixed and advance the other around the loop to get the total number of nodes in the loop, say L.
Now, from this point in the loop (increment the second pointer to the next node in the loop), reverse the linked list and count the number of nodes traversed, say X.
Now, using the second pointer (the loop is now broken by the reversal), traverse the linked list from the same point and count the number of nodes remaining, say Y.
The loop begins after ((X+Y)-L)/2 nodes; i.e. it starts at the (((X+Y)-L)/2 + 1)-th node.
void loopstartpoint(Node head) {
    Node slow = head;
    Node fast = head;
    boolean found = false;
    while (fast != null && fast.next != null) {
        slow = slow.next;
        fast = fast.next.next;
        if (slow == fast) {
            System.out.println("Detected loop");
            found = true;
            break;
        }
    }
    if (!found) return;   // no loop: the phase below would walk off the list

    slow = head;
    while (slow != fast) {
        slow = slow.next;
        fast = fast.next;
    }
    System.out.println("Starting position of loop is " + slow.data);
}
Take two pointers, one fast and one slow. The slow pointer moves one node at a time, the fast pointer moves two nodes at a time; ultimately they'll meet and you'll find the loop.
Now comes the fun part: how do you detect where the loop starts? That's simple as well. Let me ask you another fun question first: how will you go about searching for the (n-x)-th node in the list in one pass? The simple answer is to take two pointers, one at the head and one x steps ahead of the head, and move them at the same speed; when the second pointer hits the end, the first one will be at n-x.
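That one-pass trick can be sketched as follows (a minimal illustration with a stand-in `Node` class, not code from the answer):

```python
class Node:
    def __init__(self, v, next=None):
        self.v = v
        self.next = next

def node_x_from_end(head, x):
    """Return the node x places before the last one in a single pass,
    or None if the list has fewer than x + 1 nodes."""
    lead = head
    for _ in range(x):            # put the lead pointer x steps ahead
        if lead is None:
            return None
        lead = lead.next
    if lead is None:
        return None
    trail = head
    while lead.next is not None:  # march both at the same speed to the end
        lead = lead.next
        trail = trail.next
    return trail                  # trail now sits at node n - x
```

With a 5-node list 1 -> 2 -> 3 -> 4 -> 5 and x = 2, this returns the node holding 3, i.e. the (n-x)-th node.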
As many other people have mathematically proved in this thread, if one pointer moves at twice the speed of the other, the distance from the start of the list to the point where they meet is going to be a multiple of the length of the loop.
Why is this the case?
As the fast pointer is moving at twice the speed of the slow pointer, we can agree that:

Distance travelled by fast pointer = 2 * (Distance travelled by slow pointer)
now:
If m is the length of the non-cyclic part (the distance from the head to the start of the loop),
n is the actual length of the loop,
x is the number of cycles of the loop the slow pointer made,
y is the number of cycles the fast pointer made,
and k is the distance from the start of the loop to the point of meeting,

note that this k is the only piece of length in either pointer's path that is left over beyond its whole cycles of the loop. Besides this k, the rest of what each travelled is whole cycles of the loop plus the initial distance from the head to the start of the loop. Hence, each has travelled m + n*(number of cycles it made) + k at the moment they meet. Since the fast pointer covers twice the distance of the slow one, we can say that:

(m + n*y + k) = 2*(m + n*x + k)

When you solve this mathematically you'll discover that m + k is a multiple of the length of the loop, that is n, i.e. [m + k = (y - 2x)*n].
So, a pointer that is already k steps into the loop and then travels a further m steps ends up at the start of the loop, because k + m is a multiple of n. Why must they meet there and not somewhere else? Fast is already at k and slow is at the head: if they both travel m distance, slow reaches the start of the loop (m is the distance from the head to the start of the loop, as we originally assumed), and fast, having covered k + m, which is a multiple of n, is at the start of the loop as well.

Hence, it is mathematically proved that if, after detecting the loop, you set the slow pointer to the head again and make both pointers travel at the same speed, the distance each travels before they meet at the loop start is going to be m.
public class Solution {
    public ListNode detectCycle(ListNode head) {
        if (head == null || head.next == null) return null;
        ListNode slow = head;
        ListNode fast = head;
        while (fast.next != null && fast.next.next != null) {
            slow = slow.next;
            fast = fast.next.next;
            if (fast == slow) {
                // Loop detected: reset slow to head; the next point
                // where they meet is the start of the loop.
                slow = head;
                while (slow != fast) {
                    slow = slow.next;
                    fast = fast.next;
                }
                return slow;
            }
        }
        return null;
    }
}
Pythonic code solution based on #hrishikeshmishra's solution:

def check_loop_index(head):
    if head == None or head.next == None:
        return -1
    slow = head.next
    if head.next.next == None:
        return -1
    fast = head.next.next
    # searching for whether a loop exists or not
    while fast and fast.next:
        if slow == fast:
            break
        slow = slow.next
        fast = fast.next.next
    # checking if there is a loop
    if slow != fast:
        return -1
    # resetting slow to head and creating an index counter
    index = 0
    slow = head
    # incrementing slow and fast by 1 step and incrementing index; when they
    # meet, that's the index of the node where the loop starts
    while slow != fast:
        slow = slow.next
        fast = fast.next
        index += 1
    return index
To detect the loop: copy each element's address into a set as you traverse. If a duplicate is found, that node is the start of the loop.
I have heard this exact question as an interview quiz question.
The most elegant solution is:
Put both pointers at the first element (call them A and B)
Then keep looping:
Advance A to the next element
Advance A to the next element again
Advance B to the next element
Every time you update a pointer, check if A and B are identical.
If at some point pointers A and B are identical, then you have a loop.
The problem with this approach is that pointer A may end up moving around the loop twice, but no more than twice.
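The steps above can be sketched like this (my own minimal Python rendering of that loop, with a stand-in `Node` class; this detects the loop only, it does not locate its start):

```python
class Node:
    def __init__(self, v, next=None):
        self.v = v
        self.next = next

def has_loop(head):
    """Advance A twice and B once per round, comparing after every
    pointer update; the pointers coincide iff the list has a loop."""
    a = b = head
    while a is not None and a.next is not None:
        a = a.next            # advance A to the next element
        if a is b:
            return True
        a = a.next            # advance A to the next element again
        if a is b:
            return True
        b = b.next            # advance B to the next element
        if a is b:
            return True
    return False
```

Checking after every single update just means a loop is noticed as early as possible; the termination guarantee is the same as for the usual slow/fast formulation.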
If you want to actually find the element that has two pointers pointing to it, that takes a little more work, but it can in fact be done with just the two pointers: after A and B meet, reset one of them to the first element and advance both one element at a time; as other answers here prove, they will meet again exactly at that element.

Doing it with more memory is simpler still: record pointers to the elements in an array (or better, a hash set) as you traverse, and the first address you encounter twice is the element where the loop starts.