Interview question:
Given a sorted array of this form:
1,2,3,4,5,6,7,8,9
(A better example would be 10,20,35,42,51,66,71,84,99, but let's use the one above.)
Convert it to the following low-high form without using extra memory or a standard library:
1,9,2,8,3,7,4,6,5
Low-high form means the smallest element is followed by the largest, then the second smallest by the second largest, and so on.
When he first asked, I used a secondary array with a two-pointer approach: I kept one pointer at the front and the second pointer at the back, then one by one copied the left and right elements into my new array, moving the pointers inward (left++ and --right) until they crossed or became the same.
After this, he asked me to do it without the extra memory.
My approach to solving it without extra memory was along the following lines, but it was confusing and not working:
1) swap 2nd and last in **odd** (pos index 1)
1,2,3,4,5,6,7,8,9 becomes
1,9,3,4,5,6,7,8,2
then we reach even
2) swap 3rd and last in **even** (pos index 2, we are at 3)
1,9,3,4,5,6,7,8,2 becomes (swapped 3 and 2)
1,9,2,4,5,6,7,8,3
and then swap 8 and 3
1,9,2,4,5,6,7,8,3 becomes
1,9,2,4,5,6,7,3,8
3) we reach an odd (pos index 3, we are at 4)
1,9,2,4,5,6,7,3,8
becomes
1,9,2,8,5,6,7,3,4
4) swap even 5 with the last
and here it goes wrong
Let me start by pointing out that even registers are a kind of memory. Without any 'extra' memory (other than that occupied by the sorted array, that is) we don't even have counters! That said, here goes:
Let a be an array of n > 2 positive integers sorted in ascending order, with the positions indexed from 0 to n-1.
From i = 1 to n-2, bubble-sort the sub-array ranging from position i to position n-1 (inclusive), alternately in descending and ascending order. (Meaning that you bubble-sort in descending order if i is odd and in ascending order if it is even.)
Since to bubble-sort you only need to compare, and possibly swap, adjacent elements, you won't need 'extra' memory.
(Mind you, if you start at i = 0 and first sort in ascending order, you don't even need a to be pre-sorted.)
And one more thing: as there was no talk of it in your question, I will keep very silent on the performance of the above algorithm...
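For concreteness, here is a minimal Python sketch of the idea (the function name is mine; each step is a plain, full bubble sort of a[i:] in the chosen direction):

def low_high_bubble(a):
    # Alternating bubble-sort idea described above.
    n = len(a)
    for i in range(1, n - 1):
        descending = (i % 2 == 1)   # descending when i is odd, ascending when even
        # full bubble sort of a[i..n-1] in the chosen direction
        for end in range(n - 1, i, -1):
            for j in range(i, end):
                if descending:
                    if a[j] < a[j + 1]:
                        a[j], a[j + 1] = a[j + 1], a[j]
                else:
                    if a[j] > a[j + 1]:
                        a[j], a[j + 1] = a[j + 1], a[j]
    return a

print(low_high_bubble([1, 2, 3, 4, 5, 6, 7, 8, 9]))  # [1, 9, 2, 8, 3, 7, 4, 6, 5]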
We will make n/2 passes, and during pass k we will swap each element from left to right, starting with the element at position 2k-1 (0-indexed), with the last element. Example:
pass 1
  V
1,2,3,4,5,6,7,8,9
1,9,3,4,5,6,7,8,2
1,9,2,4,5,6,7,8,3
1,9,2,3,5,6,7,8,4
1,9,2,3,4,6,7,8,5
1,9,2,3,4,5,7,8,6
1,9,2,3,4,5,6,8,7
1,9,2,3,4,5,6,7,8
pass 2
      V
1,9,2,3,4,5,6,7,8
1,9,2,8,4,5,6,7,3
1,9,2,8,3,5,6,7,4
1,9,2,8,3,4,6,7,5
1,9,2,8,3,4,5,7,6
1,9,2,8,3,4,5,6,7
pass 3
          V
1,9,2,8,3,4,5,6,7
1,9,2,8,3,7,5,6,4
1,9,2,8,3,7,4,6,5
1,9,2,8,3,7,4,5,6
pass 4
              V
1,9,2,8,3,7,4,5,6
1,9,2,8,3,7,4,6,5
This should take O(n^2) swaps and uses no extra memory beyond the counters involved.
The loop invariant to prove is that the first 2k+1 positions are correct after iteration k of the loop.
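A short Python sketch of these passes (naming is mine; positions are 0-indexed, so pass k starts its swaps at position 2k-1):

def low_high_passes(a):
    # In pass k, swap each element from position 2k-1 through n-2
    # (left to right) with the last element.
    n = len(a)
    for k in range(1, n // 2 + 1):
        for j in range(2 * k - 1, n - 1):
            a[j], a[n - 1] = a[n - 1], a[j]
    return a

print(low_high_passes([1, 2, 3, 4, 5, 6, 7, 8, 9]))  # [1, 9, 2, 8, 3, 7, 4, 6, 5]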
Alright, assuming that with constant space complexity we need to lose some time complexity, the following algorithm works in O(n^2) time.
I wrote this in Python.
# s is the array passed.
def hi_low(s):
    last = len(s) - 1
    for i in range(0, last, 2):
        index_to_swap = last
        index_to_be_swapped = i + 1
        while index_to_be_swapped != index_to_swap:
            # bubble the last element one step to the left
            s[index_to_swap], s[index_to_swap - 1] = s[index_to_swap - 1], s[index_to_swap]
            index_to_swap -= 1
    return s
Quick explanation:
Suppose the initial list given to us is:
1 2 3 4 5 6 7 8 9
So in our program, initially,
index_to_swap = last
meaning that it is pointing to the 9 (the last index), and
index_to_be_swapped = i+1
is i+1, i.e. one step ahead of our current loop pointer. [Also remember we're looping with a step of 2.]
So initially,
i = 0
index_to_be_swapped = 1
index_to_swap = 8
and in the inner loop, until both of these indexes meet, we keep on swapping
s[index_to_swap], s[index_to_swap-1] = s[index_to_swap-1], s[index_to_swap]
so it'll look like:
# initially:
1 2 3 4 5 6 7 8 9
  ^             ^---index_to_swap
  ^-----index_to_be_swapped
# after 1 loop
1 2 3 4 5 6 7 9 8
  ^           ^-----index_to_swap
  ^----- index_to_be_swapped
... goes on until
1 9 2 3 4 5 6 7 8
  ^-----index_to_swap
  ^-----index_to_be_swapped
Now, the inner loop's job is done, and the main loop is run again with
1 9 2 3 4 5 6 7 8
      ^         ^---- index_to_swap
      ^------index_to_be_swapped
This runs until the 8 ends up just behind the 2, and so on for each outer iteration.
So the outer loop runs almost n/2 times, and for each outer iteration the inner loop runs almost n/2 times on average, so the time complexity is n/2 * n/2 = n^2/4, which is of the order of n^2, i.e. O(n^2).
If there are any mistakes please feel free to point it out.
Hope this helps!
This works for any sorted array of consecutive integers, since it rebuilds the values rather than moving them:
let arr = [1, 2, 3, 4, 5, 6, 7, 8, 9];
let i = arr[0];
let j = arr[arr.length - 1];
let k = 0;
while (k < arr.length) {
    arr[k] = i;
    if (arr[k+1]) arr[k+1] = j;
    i++;
    k += 2;
    j--;
}
console.log(arr);
Explanation: Because it's a sorted array of consecutive values, you need to know 3 things to produce the expected output.
Starting value: let i = arr[0]
Ending value (you can also derive it from the length of the array, by the way): let j = arr[arr.length - 1]
Length of array: arr.length
Loop through the array and set the value like this
arr[firstIndex] = firstValue, arr[thirdIndex] = firstValue + 1 and so on..
arr[secondIndex] = lastValue, arr[fourthIndex] = lastValue - 1 and so on..
Obviously you can do the same thing in different ways, but I think that's the simplest way.
Related
I want to circle through the array multiple times. When I reach the last index, the next index should be the first one.
For example, I have an array of 6 elements
array1 = [1,2,3,4,5,6]
and I have K = 4. K will be the number of elements that I will skip.
In the above example, I will start from array1[0] and skip K elements including the array1[0] element.
So if I skip 4 elements, I will reach array1[4]. If I skip K elements once more, I should skip array1[4], array1[5], array1[0] and array1[1] and reach array1[2]. This process will repeat itself N times.
I tried searching for the solution online because I cannot think of a way to move through the array in circle. I found one solution that says to use modulo operator like this
print a[3 % len(a)]
but I cannot understand this since I am just starting out with Python.
Understanding what modulo is will be helpful: https://en.wikipedia.org/wiki/Modulo
To sum up: in this exercise you don't care how many times you went through the array, only "at which position of the current iteration you are", let's say. Therefore, a modulo operation using the length of the array as the modulus gives you the remainder of that division, which is exactly what you are looking for.
Example:
arr = [1,2,3,4,5]
k = 27
arrlength = len(arr)  # 5
remainder = k % arrlength  # 27 % 5 = 2
arr[remainder]  # 3
So, the modulo operator returns the remainder from the division between two numbers.
Example:
6 % 2 = 0 # because 6/2 = 3 with no remainder
6 % 5 = 1 # because 6/5 = 1 (integer part) plus remainder 1
6 % 7 = 6 # because 7 doesn't fit in 6, so all the dividend goes into the remainder
So your problem can be solved by something like this:
arr = [1,2,3,4,5,6]
N = 5
step = 4
for i in range(N):
    print(arr[(i+1)*step % len(arr)])
where N is the number of elements you want to print
This is the same as creating an extended list such as:
b = arr * 1000
and print each element in range(step,(N+1)*step,step).
Of course this method is not optimal, since you don't know how many copies of arr you have to concatenate in order not to go out of bounds.
Hope it helped
This was an interview question.
I was given an array of n+1 integers from the range [1,n]. The property of the array is that it has k (k>=1) duplicates, and each duplicate can appear more than twice. The task was to find an element of the array that occurs more than once in the best possible time and space complexity.
After significant struggling, I proudly came up with an O(n log n) solution that takes O(1) space. My idea was to divide the range [1,n] into two halves and determine which of the two halves contains more elements of the input array (using the pigeonhole principle). The algorithm continues recursively until it reaches an interval [X,X] whose X occurs twice, and that is a duplicate.
The interviewer was satisfied, but then he told me that there exists an O(n) solution with constant space. He generously offered a few hints (something related to permutations?), but I had no idea how to come up with such a solution. Assuming that he wasn't lying, can anyone offer guidelines? I have searched SO and found a few (easier) variations of this problem, but not this specific one. Thank you.
EDIT: In order to make things even more complicated, interviewer mentioned that the input array should not be modified.
1) Take the very last element (x).
2) Save the element at position x (y).
3) If x == y you found a duplicate.
4) Overwrite position x with x.
5) Assign x = y and continue with step 2.
You are basically sorting the array, it is possible because you know where the element has to be inserted. O(1) extra space and O(n) time complexity. You just have to be careful with the indices, for simplicity I assumed first index is 1 here (not 0) so we don't have to do +1 or -1.
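A minimal Python sketch of those steps (function name is mine; positions are 1-based as in the description, so "position x" is a[x-1]):

def find_duplicate_sorting(a):
    # Follow values as positions, fixing each visited position on the
    # way (this modifies the input array).
    x = a[-1]              # 1) take the very last element
    while True:
        y = a[x - 1]       # 2) the element at position x
        if x == y:         # 3) x appears at position x and elsewhere
            return x
        a[x - 1] = x       # 4) overwrite position x with x
        x = y              # 5) continue with the saved value

print(find_duplicate_sorting([2, 3, 4, 1, 5, 4, 6, 7, 8]))  # 4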
Edit: without modifying the input array
This algorithm is based on the idea that by finding the entry point of the permutation cycle we have also found a duplicate (again a 1-based array for simplicity):
Example:
2 3 4 1 5 4 6 7 8
Entry: 8 7 6
Permutation cycle: 4 1 2 3
As we can see the duplicate (4) is the first number of the cycle.
Finding the permutation cycle
x = last element
x = element at position x
repeat step 2. n times (in total), this guarantees that we entered the cycle
Measuring the cycle length
a = last x from above, b = last x from above, counter c = 0
a = element at position a; b = element at position b, then b = element at position b again; c++ (so we make 2 steps forward with b and 1 step forward in the cycle with a)
if a == b the cycle length is c, otherwise continue with step 2.
Finding the entry point to the cycle
x = last element
x = element at position x
repeat step 2. c times (in total)
y = last element
if x == y then x is a solution (x made one full cycle and y is just about to enter the cycle)
x = element at position x, y = element at position y
repeat steps 5. and 6. until a solution was found.
The 3 major steps are all O(n) and sequential therefore the overall complexity is also O(n) and the space complexity is O(1).
Example from above:
x takes the following values: 8 7 6 4 1 2 3 4 1 2
a takes the following values: 2 3 4 1 2
b takes the following values: 2 4 2 4 2
therefore c = 4 (yes there are 5 numbers but c is only increased when making steps, not initially)
x takes the following values: 8 7 6 4 | 1 2 3 4
y takes the following values: | 8 7 6 4
x == y == 4 in the end and this is a solution!
Example 2 as requested in the comments: 3 1 4 6 1 2 5
Entering cycle: 5 1 3 4 6 2 1 3
Measuring cycle length:
a: 3 4 6 2 1 3
b: 3 6 1 4 2 3
c = 5
Finding the entry point:
x: 5 1 3 4 6 | 2 1
y: | 5 1
x == y == 1 is a solution
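Here is a hedged Python sketch of the three phases (naming is mine; the array is 0-based, so "element at position x" becomes a[x - 1], and phase 1 simply takes len(a) steps, which is enough to end up inside the cycle):

def find_duplicate_cycle(a):
    # Phase 1: enter the permutation cycle.
    x = a[-1]                     # x = last element
    for _ in range(len(a)):       # following the values long enough
        x = a[x - 1]              # guarantees x is inside the cycle
    # Phase 2: measure the cycle length c (slow pointer p, fast pointer q).
    p = q = x
    c = 0
    while True:
        p = a[p - 1]              # one step forward with p
        q = a[a[q - 1] - 1]       # two steps forward with q
        c += 1
        if p == q:
            break
    # Phase 3: find the entry point of the cycle.
    x = a[-1]
    for _ in range(c):            # give x a head start of c steps
        x = a[x - 1]
    y = a[-1]
    while x != y:                 # walk together until they meet
        x = a[x - 1]
        y = a[y - 1]
    return x                      # the meeting point is the duplicate

print(find_duplicate_cycle([2, 3, 4, 1, 5, 4, 6, 7, 8]))  # 4
print(find_duplicate_cycle([3, 1, 4, 6, 1, 2, 5]))        # 1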
Here is a possible implementation:
function checkDuplicate(arr) {
    console.log(arr.join(", "));
    let len = arr.length
        , pos = 0
        , done = 0
        , cur = arr[0]
        ;
    while (done < len) {
        if (pos === cur) {
            cur = arr[++pos];
        } else {
            pos = cur;
            if (arr[pos] === cur) {
                console.log(`> duplicate is ${cur}`);
                return cur;
            }
            cur = arr[pos];
        }
        done++;
    }
    console.log("> no duplicate");
    return -1;
}

for (const t of [
    [0, 1, 2, 3]
    , [0, 1, 2, 1]
    , [1, 0, 2, 3]
    , [1, 1, 0, 2, 4]
]) checkDuplicate(t);
It is basically the solution proposed by @maraca (I typed too slowly!). It has constant space requirements (for the local variables), but apart from that it only uses the original array for its storage. It should be O(n) in the worst case, because the process terminates as soon as a duplicate is found.
If you are allowed to non-destructively modify the input vector, then it is pretty easy. Suppose we can "flag" an element in the input by negating it (which is obviously reversible). In that case, we can proceed as follows:
Note: The following assume that the vector is indexed starting at 1. Since it is probably indexed starting at 0 (in most languages), you can implement "Flag item at index i" with "Negate the item at index i-1".
1) Set i to 0 and do the following loop:
   1.1) Increment i until item i is unflagged.
   1.2) Set j to i and do the following loop:
        1.2.1) Set j to vector[j].
        1.2.2) If the item at j is flagged, j is a duplicate. Terminate both loops.
        1.2.3) Flag the item at j.
        1.2.4) If j != i, continue the inner loop.
2) Traverse the vector setting each element to its absolute value (i.e. unflag everything to restore the vector).
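A Python sketch of this flagging scheme (naming is mine; the 1-based "index i" maps to v[i - 1], and abs() is used when following values because flagged entries are negative):

def find_duplicate_flagging(v):
    # Values are from [1, n] in an (n+1)-element vector; a negative
    # entry means "flagged". The vector is restored before returning.
    dup = None
    i = 1
    while dup is None:
        while v[i - 1] < 0:          # increment i until item i is unflagged
            i += 1
        j = i
        while True:
            j = abs(v[j - 1])        # set j to vector[j]
            if v[j - 1] < 0:         # item at j already flagged:
                dup = j              # j is a duplicate
                break
            v[j - 1] = -v[j - 1]     # flag the item at j
            if j == i:               # cycle closed, pick a new start
                break
    for k in range(len(v)):          # unflag everything (restore)
        v[k] = abs(v[k])
    return dup

print(find_duplicate_flagging([6, 3, 5, 1, 3, 7, 4, 2]))  # 3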
It depends on what tools you (your app) can use. A lot of frameworks/libraries exist nowadays. For example, with the C++ standard library you can use std::map<>, as maraca mentioned.
Or, if you have time, you can write your own implementation of a binary tree, keeping in mind that inserting elements differs from inserting into a plain array. That way you can optimize the duplicate search for your particular case.
Binary tree reference:
https://www.wikiwand.com/en/Binary_tree
Given an array of n elements, a k-partitioning of the array would be to split the array in k contiguous subarrays such that the maximums of the subarrays are non-increasing. Namely max(subarray1) >= max(subarray2) >= ... >= max(subarrayK).
In how many ways can an array be partitioned into valid partitions like the ones mentioned before?
Note: k isn't given as input or anything, I mereley used it to illustrate the general case. A partition could have any size from 1 to n, we just need to find all the valid ones.
For example, the array [3, 2, 1] can be partitioned in 4 ways, as you can see below:
The valid partitions :[3, 2, 1]; [3, [2, 1]]; [[3, 2], 1]; [[3], [2], [1]].
I've found a similar problem related to linear partitioning, but I couldn't find a way to adapt the thinking to this problem. I'm pretty sure this is dynamic programming, but I haven't been able to properly identify how to model the problem with a recurrence relation.
How would you solve this?
Call an element of the input a tail-max if it is at least as great as all elements that follow. For example, in the following input:
5 9 3 3 1 2
the following elements are tail-maxes:
5 9 3 3 1 2
  ^ ^ ^   ^
In a valid partition, every subarray must contain the next tail-max at or after the subarray's starting position; otherwise, the next tail-max will be the max of some later subarray, and the condition of non-increasing subarray maximums will be violated.
On the other hand, if every subarray contains the next tail-max at or after the subarray's starting position, then the partition must be valid, as the definition of a tail-max ensures that the maximum of a later subarray cannot be greater.
If we identify the tail-maxes of an array, for example
1 1 9 2 1 6 5 1
. . X . . X X X
where X means tail-max and . means not, then we can't place any subarray boundaries before the first tail-max, because if we do, the first subarray won't contain a tail-max. We can place at most one subarray boundary between a tail-max and the next; if we place more, we get a subarray that doesn't contain a tail-max. The last tail-max must be the last element of the input, so we can't place a subarray boundary after the last tail-max.
If there are m non-tail-max elements between a tail-max and the next, that gives us m+2 options: m+1 places to put an array boundary, or we can choose not to place a boundary between these elements. These factors are multiplicative.
We can make one pass from the end of the input to the start, identifying the lengths of the gaps between tail-maxes and multiplying together the appropriate factors to solve the problem in O(n) time:
def partitions(array):
    tailmax = None
    factor = 1
    result = 1
    for i in reversed(array):
        if tailmax is None:
            tailmax = i
            continue
        factor += 1
        if i >= tailmax:
            # i is a new tail-max.
            # Multiply the result by a factor indicating how many options we
            # have for placing a boundary between i and the old tail-max.
            tailmax = i
            result *= factor
            factor = 1
    return result
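Checking it against the examples above:

print(partitions([3, 2, 1]))                  # 4, the four partitions listed earlier
print(partitions([1, 1, 9, 2, 1, 6, 5, 1]))   # 16 = 4 * 2 * 2 from the gap factors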
Update: Sorry, I misunderstood the problem. In this case, the first sub-array must extend at least through the overall maximum: e.g. [2 4 5 9 6 8 3 1] must start with a sub-array containing [2 4 5 9], and we can then freely choose how many of the following elements it also includes. You can use an array res to record the DP results, where res[i] is the number of valid partitions of the suffix starting at i; our goal is res[0]. In the example above, res[0] = 1 + res[4] + res[5] + res[6] + res[7] (the leading 1 counts keeping the whole rest in one sub-array), and res[7] = 1.
def getnum(array):
    res = [-1 for x in range(len(array))]
    res[0] = valueAt(array, res, 0)
    return res[0]

def valueAt(array, res, i):
    m = array[i]
    idx = i
    for index in range(i, len(array), 1):
        if array[index] > m:
            idx = index
            m = array[index]
    value = 1
    for index in range(idx + 1, len(array), 1):
        if res[index] == -1:
            res[index] = valueAt(array, res, index)
        value = value + res[index]
    return value
This is worse than the answer above in running time; the DP costs a lot more.
Old answer: if no duplicate elements are allowed in the array, the following approach works.
Notice that with no duplicates, the number of partitions does not depend on the values of the elements. Write the number as N(n) when there are n elements in the array.
The largest element must be in the first sub-array; every other element may or may not be in the first sub-array. Depending on how many of them are in the first sub-array, the number of partitions of the remaining elements varies.
So,
N(n) = C(n-1, 1)N(n-1) + C(n-1, 2)N(n-2) + ... + C(n-1, n-1)N(0)
where C(n, k) is the binomial coefficient n! / (k! (n-k)!).
Then it can be solved by DP.
Hope this helps
Can anyone help me solve this in C? I think the answer has to do with big-O complexity analysis, but I have no idea about it.
The question: there is an array T of size N+1 where the numbers from 1 to N appear in random order. One number x is repeated twice (its position is also random).
What is the fastest way to find the value of this number x?
For example:
N = 7
[6 3 5 1 3 7 4 2]
x=3
The sum of numbers 1..N is N*(N+1)/2.
So, the extra number is:
extra_number = sum(all N+1 numbers) - N*(N+1)/2
Everything is O(1) except the sum. The sum can be computed in O(N) time.
The overall algorithm is O(N).
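The question asks for C, but here is a quick sketch of the arithmetic (in Python for brevity; the C version is a single summing loop plus the closed-form expression):

def find_repeated(t):
    # t has N+1 entries holding 1..N with one value repeated.
    n = len(t) - 1
    return sum(t) - n * (n + 1) // 2

print(find_repeated([6, 3, 5, 1, 3, 7, 4, 2]))  # 3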
Walk the array using each value as the next array index, marking visited entries with a special value (like 0, or the negation). O(n).
On average, only half the elements are visited.
v
6 3 5 1 3 7 4 2
            v
. 3 5 1 3 7 4 2
        v
. 3 5 1 3 7 . 2
      v
. 3 5 1 . 7 . 2
  v
. 3 5 . . 7 . 2
      v   !! already visited: the previous 3 is repeated.
. 3 5 . . 7 . 2
No overflow problem caused by adding up a sum. Of course the array needs to be modified (or a sibling bool array of flags is needed).
This method works even if more than 1 value is repeated.
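A small Python sketch of this walk (naming is mine; since T has N+1 slots and values are 1..N, a value can be used directly as the next index, and 0 serves as the "visited" mark):

def find_repeated_walk(t):
    # Walk the array, using each value as the next index and marking
    # visited slots with 0; when we land on a marked slot, the value
    # we just came from is the repeated one. Modifies t.
    idx = 0
    prev = None
    while True:
        val = t[idx]
        if val == 0:       # already visited: prev led here twice
            return prev
        t[idx] = 0         # mark this slot as visited
        prev = val
        idx = val

print(find_repeated_walk([6, 3, 5, 1, 3, 7, 4, 2]))  # 3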
The algorithm given by Klaus has O(1) memory requirements, but it has to sum all the elements of the given array, which may be a lot to iterate over.
Another approach is to iterate over the array and increment an occurrence counter per value (at the cost of O(N) extra memory); the algorithm can stop as soon as it finds the duplicate, though the worst-case scenario is still a scan through all the elements. For example:
#define N 8

int T[N] = {6, 3, 5, 1, 3, 7, 4, 2};
int occurrences[N + 1] = {0};
int duplicate = -1;

for (int i = 0; i < N; i++) {
    occurrences[T[i]]++;
    if (occurrences[T[i]] == 2) {
        duplicate = T[i];
        break;
    }
}
Note that this method is also immune to integer overflow, whereas N*(N+1)/2 might be larger than the integer data type can possibly hold.
Given an array of positive integers a, I want to output an array of integers b such that b[i] is the closest number to a[i] that is smaller than a[i] and appears in {a[0], ..., a[i-1]}. If no such number exists, then b[i] = -1.
Example:
a = 2 1 7 5 7 9
b = -1 -1 2 2 5 7
b[0] = -1 since there is no number that is smaller than 2
b[1] = -1 since there is no number that is smaller than 1 from {2}
b[2] = 2, closest number to 7 that is smaller than 7 from {2,1} is 2
b[3] = 2, closest number to 5 that is smaller than 5 from {2,1,7} is 2
b[4] = 5, closest number to 7 that is smaller than 7 from {2,1,7,5} is 5
I was thinking about implementing a balanced binary tree, but it would require a lot of work. Is there an easier way of doing this?
Here is one approach:
for i ← 1 to length(A)-1 {
    // A[i] is added to the sorted sequence A[0 .. i-1]; save A[i] to make a hole at index j
    item = A[i]
    j = i
    // keep moving the hole to the next smaller index until A[j-1] <= item
    while j > 0 and A[j-1] > item {
        A[j] = A[j-1]   // move hole to next smaller index
        j = j - 1
    }
    A[j] = item   // put item in the hole
    // if there are elements to the left of A[j] in the sorted sequence A[0 .. i-1], store the nearest one in b
    // TODO: run a loop here so that duplicate entries won't hamper the result
    if j > 0
        b[i] = A[j-1]
    else
        b[i] = -1
}
b[0] = -1
Dry run:
a = 2 1 7 5 7 9
a[1] = 2
It's straightforward: set b[1] to -1.
a[2] = 1
insert into subarray: [1, 2]
any elements before 1 in the sorted array? no.
So set b[2] to -1. b: [-1, -1]
a[3] = 7
insert into subarray: [1, 2, 7]
any elements before 7 in the sorted array? yes, it's 2.
So set b[3] to 2. b: [-1, -1, 2]
a[4] = 5
insert into subarray: [1, 2, 5, 7]
any elements before 5 in the sorted array? yes, it's 2.
So set b[4] to 2. b: [-1, -1, 2, 2]
and so on..
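A runnable Python version of this approach, including the duplicate handling flagged in the TODO (names are mine; a separate sorted list is kept so the input a is left untouched):

def closest_smaller(a):
    prefix = []                      # a[0..i-1] kept in ascending order
    b = []
    for item in a:
        # insertion-sort item into prefix
        prefix.append(item)
        j = len(prefix) - 1
        while j > 0 and prefix[j - 1] > item:
            prefix[j] = prefix[j - 1]
            j -= 1
        prefix[j] = item
        # skip equal entries so duplicates don't hamper the result
        k = j
        while k > 0 and prefix[k - 1] == item:
            k -= 1
        b.append(prefix[k - 1] if k > 0 else -1)
    return b

print(closest_smaller([2, 1, 7, 5, 7, 9]))  # [-1, -1, 2, 2, 5, 7]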
Here's a sketch of a (nearly) O(n log n) algorithm that's somewhere in between the difficulty of implementing an insertion sort and balanced binary tree: Do the problem backwards, use merge/quick sort, and use binary search.
Pseudocode:
let c be a copy of a
let b be an array sized the same as a
sort c using an O(n log n) algorithm
for i from a.length-1 to 1
    binary search over c for key a[i]   // O(log n) time
    remove the item found               // could take O(n) time
    if there exists an item to the left of that position, b[i] = that item
    otherwise, b[i] = -1
b[0] = -1
return b
There are a few implementation details that can give this poor runtime.
For instance, since you have to remove items, doing this on a regular array and shifting things around would still make the algorithm take O(n^2) time. So you could store key-value pairs instead: one element is the key, the other is the count of that key (kind of like a multiset implemented on an array). "Removing" one would just mean decrementing the count, and so on.
Eventually you will be left with a bunch of zero-count keys. This would make the "if there exists an item to the left" step take roughly O(n) time, and the entire algorithm would therefore degrade to O(n^2). So another optimization might be to batch-remove all of them periodically; for instance, when half of them are zero-count, perform a pruning.
The ideal option might be to implement another data structure that has a much more favorable remove time. Something along the lines of a modified unrolled linked list with indices could work, but it would certainly increase the implementation complexity of this approach.
I've actually implemented this. I used the first two optimizations above (storing key-value pairs for compression, and pruning when half of them are zeros). Here are some benchmarks comparing the insertion-sort derivative to this method:
a.length    This method    Insertion sort method
100         0.0262 ms      0.0204 ms
1000        0.2300 ms      0.8793 ms
10000       2.7303 ms      75.7155 ms
100000      32.6601 ms     7740.36 ms
300000      98.9956 ms     69523.6 ms
1000000     333.501 ms     (not patient enough to measure)
So, as you can see, this algorithm grows much, much slower than the insertion sort method I posted before. However, it took 73 lines of code vs 26 lines of code for the insertion sort method. So in terms of simplicity, the insertion sort method might still be the way to go if you don't have time requirements/the input is small.
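For reference, here is a minimal Python sketch of the simple (uncompressed) variant of this backwards method, using bisect on a plain sorted list; list.pop keeps removal at O(n), so this is the version before the key-value/pruning optimizations:

import bisect

def closest_smaller_backwards(a):
    c = sorted(a)                           # O(n log n) sort of a copy
    b = [0] * len(a)
    for i in range(len(a) - 1, 0, -1):
        pos = bisect.bisect_left(c, a[i])   # binary search for a[i]
        c.pop(pos)                          # remove the item found
        b[i] = c[pos - 1] if pos > 0 else -1
    b[0] = -1
    return b

print(closest_smaller_backwards([2, 1, 7, 5, 7, 9]))  # [-1, -1, 2, 2, 5, 7]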
You could treat it like an insertion sort.
Pseudocode:
let arr be one array with enough space for every item in a
let b be another array with, again, enough space for all elements in a
For each item in a:
    perform insertion sort of item into arr
    after performing the insertion, if there exists a number to the left, append that number to b
    otherwise, append -1 to b
return b
The main thing you have to worry about is making sure that you don't make the mistake of reallocating arrays (because it would reallocate n times, which would be extremely costly). This will be an implementation detail of whatever language you use (std::vector's reserve for C++ ... arr.reserve(n) for D ... ArrayList's ensureCapacity in Java...)
A potential downfall with this approach compared to using a binary tree is that it's O(n^2) time. However, the constant factors using this method vs binary tree would make this faster for smaller sizes. If your n is smaller than 1000, this would be an appropriate solution. However, O(n log n) grows much slower than O(n^2), so if you expect a's size to be significantly higher and if there's a time limit that you are likely to breach, you might consider a more complicated O(n log n) algorithm.
There are ways to slightly improve the performance (such as using a binary insertion sort: using binary search to find the position to insert into), but generally they won't improve performance enough to matter in most cases since it's still O(n^2) time to shift elements to fit.
Consider this:
a = 2 1 7 5 7 9
b = -1 -1 2 2 5 7
c  0  1  2  3  4  5  6  7  8  9
0  -  -  -  -  -  -  -  -  -  -
where c is indexed by the value of a[i] (so entries 0, 3, 4, 6 and 8 keep null values), and row 0 of c contains the highest-to-date closest value for each a[i].
So by step a[3] we have the following
c  0  1  2  3  4  5  6  7  8  9
0  -  -1 -1 -  -  2  -  2  -  -
and by step a[5] we have the following
c  0  1  2  3  4  5  6  7  8  9
0  -  -1 -1 -  -  2  -  5  -  7
This way, when we get to the second 7 at a[4], we know that 2 is the largest smaller value to date; all we need to do is loop back through a[i-1] until we encounter a 7 again, comparing each a[i] value to the one in c[7] and replacing c[7] if it is bigger. Once a[i-1] equals the 7, we put c[7] into b[i] and move on to the next a[i].
The main downfalls to this approach that I can see are:
memory footprint, depending on how large c[] needs to be dimensioned;
the fact that you have to revisit elements of a[] that you've already touched. If the distribution of the data is such that there is significant space between the two 7s, then keeping track of the highest value as you go would presumably be faster. Alternatively, it might be better to gather statistics on the a[i] up front, to know what distributions exist, and then use a hybrid method, maintaining the max until no more instances of that number are left in the statistics.