Selection sort: What is n-1? - c

#include <stdio.h>

void swap(int *xp, int *yp) {
    int temp = *xp;
    *xp = *yp;
    *yp = temp;
}

void selectionSort(int arr[], int n) {
    int i, j, min_idx;
    // One by one move boundary of unsorted subarray
    for (i = 0; i < n - 1; i++) {
        // Find the minimum element in the unsorted part
        min_idx = i;
        for (j = i + 1; j < n; j++)
            if (arr[j] < arr[min_idx])
                min_idx = j;
        // Swap the found minimum element with the first unsorted element
        swap(&arr[min_idx], &arr[i]);
    }
}

int main() {
    int arr[] = { 64, 25, 12, 22, 11 };
    int n = sizeof(arr) / sizeof(arr[0]);
    selectionSort(arr, n);
    return 0;
}
I have seen this C code implementing the sorting algorithm called Selection Sort, but my question is about the selectionSort function.
Why is the condition in the first for loop i < n - 1, whereas in the second loop it is j < n?
What will i < n - 1 do exactly, and why the different condition in the second loop? Can you please explain this code to me like I'm in sixth grade of elementary school? Thank you.

The first loop only has to iterate up to index n-2 (thus i < n-1) because the second loop checks the numbers from index i+1 up to n-1 (thus j < n). If i were allowed to take the value n - 1, the inner loop would start at j = n and terminate immediately, so that last outer iteration would perform no comparisons at all and merely swap arr[n-1] with itself.
You can think of this implementation of selection sort as moving from left to right over the array, always leaving a sorted subarray on its left. That's why the second for loop starts visiting elements from index i+1.
You can find many resources online to visualize how selection sort works, e.g., the Selection sort article on Wikipedia.

The implementation on Wikipedia is annotated and explains it.
/* advance the position through the entire array */
/* (could do i < aLength-1 because single element is also min element) */
for (i = 0; i < aLength-1; i++)
Selection sort works by finding the smallest element and swapping it in place. When there's only one unsorted element left it is the smallest unsorted element and it is at the end of the sorted array.
For example, let's say we have {3, 5, 1}.
i = 0   {3, 5, 1}   // swap 3 and 1
         ^
i = 1   {1, 5, 3}   // swap 5 and 3
            ^
i = 2   {1, 3, 5}   // swap 5 and... 5?
               ^
For three elements we only need two swaps. For n elements we only need n-1 swaps.
It's an optimization which might improve performance a bit on very small arrays, but otherwise inconsequential in an O(n^2) algorithm like selection sort.
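A quick sketch of the same idea in Python, with a swap counter to show that n-1 swaps suffice (the function name and counter are illustrative, not from the original code):

```python
def selection_sort(arr):
    """Selection sort in place; returns the number of swaps performed."""
    n = len(arr)
    swaps = 0
    for i in range(n - 1):  # the last element is already in place
        min_idx = i
        for j in range(i + 1, n):
            if arr[j] < arr[min_idx]:
                min_idx = j
        arr[min_idx], arr[i] = arr[i], arr[min_idx]
        swaps += 1
    return swaps

a = [3, 5, 1]
print(selection_sort(a), a)  # 2 swaps for 3 elements
```

Running the outer loop all the way to i = n - 1 instead would perform one extra pass that swaps the last element with itself.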

Why is the condition in the first for loop i < n-1, but in the second loop j < n?
The loop condition for the inner loop is j < n because the index of the last element to be sorted is n - 1, so that when j >= n we know that it is past the end of the data.
The loop condition for the outer loop could have been i < n, but observe that no useful work would then be done on the iteration when i took the value n - 1. The initial value of j in the inner loop is i + 1, which in that case would be n. Thus no iterations of the inner loop would be performed.
But no useful work is not the same as no work at all. On the outer-loop iteration in which i took the value n - 1, some bookkeeping would be performed, and arr[i] would be swapped with itself. Stopping the outer loop one iteration sooner avoids that guaranteed-useless extra work.
All of this is directly related to the fact that no work needs to be expended to sort a one-element array.

Here is the logic of these nested loops:
for each position i in the array:
    find the smallest element of the slice starting at this position and extending to the end of the array
    swap that smallest element with the element at position i
The smallest element of the 1 element slice at the end of the array is obviously already in place, so there is no need to run the last iteration of the outer loop. That's the reason for the outer loop to have a test i < n - 1.
Note however that there is a nasty pitfall in this test: if instead of int we use size_t for the type of the index and the count of elements (which is more correct, since arrays nowadays can have more elements than the range of type int), then i < n - 1 is true for i = 0 and n = 0, because n - 1 does not evaluate to a negative number but wraps around to the largest size_t value, which is huge. In other words, the code would crash on an empty array.
It would be safer to write:
for (i = 0; i + 1 < n; i++) { ...


Find the maximum length subarray condition 2 * min > max

This was an interview question I was recently asked at Adobe:
In an array, find the maximum length subarray with the condition 2 * min > max, where min is the minimum element of the subarray, and max is the maximum element of the subarray.
Does anyone have an approach better than O(n^2)?
Of course, we can't sort, as a subarray is required.
Below is my O(n^2) approach:
int maxLen = 0;
for (int i = 0; i < A.length - 1; i++) {
    for (int j = i + 1; j < A.length; j++) {
        int min = findMin(A, i, j);
        int max = findMax(A, i, j);
        if (2 * min > max && j - i + 1 > maxLen)
            maxLen = j - i + 1;
    }
}
Does anybody know an O(n) solution?
Let A[i…j] be the subarray consisting of A[i], A[i+1], … A[j].
Observations:
If A[i…j] doesn't satisfy the criterion, then neither does A[i…(j+1)], because 2·min(A[i…(j+1)]) ≤ 2·min(A[i…j]) ≤ max(A[i…j]) ≤ max(A[i…(j+1)]). So you can abort your inner loop as soon as you find a j for which condition is not satisfied.
If we've already found a subarray of length L that meets the criterion, then there's no need to consider any subarray with length ≤ L. So you can start your inner loop with j = i + maxLength rather than j = i + 1. (Of course, you'll need to initialize maxLength to 0 rather than Integer.MIN_VALUE.)
Combining the above, we have:
int maxLength = 0;
for (int i = 0; i < A.length; ++i) {
    for (int j = i + maxLength; j < A.length; ++j) {
        if (findMin(A,i,j) * 2 > findMax(A,i,j)) {
            // success -- now let's look for a longer subarray:
            maxLength = j - i + 1;
        } else {
            // failure -- keep looking for a subarray this length:
            break;
        }
    }
}
It may not be obvious at first glance, but the inner loop now goes through a total of only O(n) iterations, because j can only take each value at most once. (For example, if i is 3 and maxLength is 5, then j starts at 8. If A[3…8] meets the criterion, we increment maxLength until we find a subarray that doesn't meet the criterion. Once that happens, we progress from A[i…(i+maxLength)] to A[(i+1)…((i+1)+maxLength)], which means the new loop starts with a greater j than where the previous loop left off.)
We can make this more explicit by refactoring a bit to model A[i…j] as a sliding-and-potentially-expanding window: incrementing i removes an element from the left edge of the window, incrementing j adds an element to the right edge of the window, and there's never any need to increment i without also incrementing j:
int maxLength = 0;
int i = 0, j = 0;
while (j < A.length) {
    if (findMin(A,i,j) * 2 > findMax(A,i,j)) {
        // success -- now let's look for a longer subarray:
        maxLength = j - i + 1;
        ++j;
    } else {
        // failure -- keep looking for a subarray this length:
        ++i;
        ++j;
    }
}
or, if you prefer:
int maxLength = 0;
int i = 0;
for (int j = 0; j < A.length; ++j) {
    if (findMin(A,i,j) * 2 > findMax(A,i,j)) {
        // success -- now let's look for a longer subarray:
        maxLength = j - i + 1;
    } else {
        // failure -- keep looking for a subarray this length:
        ++i;
    }
}
Since in your solution the inner loop iterates a total of O(n^2) times, and you've stated that your solution runs in O(n^2) time, we could argue that, since the above has the inner loop iterate only O(n) times, the above must run in O(n) time.
The problem is, that premise is really very questionable; you haven't indicated how you would implement findMin and findMax, but the straightforward implementation would take O(j−i) time, such that your solution actually runs in O(n^3) rather than O(n^2). So if we reduce the number of inner-loop iterations from O(n^2) to O(n), that just brings the total time complexity down from O(n^3) to O(n^2).
But, as it happens, it is possible to calculate the min and max of these subarrays in amortized O(1) time and O(n) extra space, using "Method 3" at https://www.geeksforgeeks.org/sliding-window-maximum-maximum-of-all-subarrays-of-size-k/. (Hat-tip to גלעד ברקן for pointing this out.)
The way it works is, you maintain two deques, minseq for calculating min and maxseq for calculating max. (I'll only explain minseq; maxseq is analogous.) At any given time, the first element (head) of minseq is the index of the min element in A[i…j]; the second element of minseq is the index of the min element after the first element; and so on. (So, for example, if the subarray is [80,10,30,60,50] starting at index #2, then minseq will be [3,4,6], those being the indices of the subsequence [10,30,50].)
Whenever you increment i, you check whether the old value of i is the head of minseq (meaning that it's the current min); if so, you remove the head. Whenever you increment j, you repeatedly check whether the tail of minseq is the index of an element that's greater than or equal to the element at j; if so, you remove the tail and repeat. Once you've removed all such tail elements, you add j to the tail. Since each index is added to and removed from the deque at most once, this bookkeeping has a total cost of O(n).
That gives you overall O(n) time, as desired.
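Putting the deque bookkeeping together with the non-shrinking window, a minimal Python sketch (function name mine) might look like:

```python
from collections import deque

def max_len_2min_gt_max(A):
    """Length of the longest subarray with 2*min > max, in O(n).

    minq holds indices of a non-decreasing run of values (front = window min);
    maxq holds indices of a non-increasing run (front = window max).
    """
    minq, maxq = deque(), deque()
    best = 0
    i = 0  # left edge of the window
    for j, x in enumerate(A):
        while minq and A[minq[-1]] >= x:
            minq.pop()
        minq.append(j)
        while maxq and A[maxq[-1]] <= x:
            maxq.pop()
        maxq.append(j)
        if 2 * A[minq[0]] <= A[maxq[0]]:
            # window is invalid: slide the left edge (the window never shrinks)
            if minq[0] == i:
                minq.popleft()
            if maxq[0] == i:
                maxq.popleft()
            i += 1
        else:
            best = max(best, j - i + 1)
    return best
```

Each index enters and leaves each deque at most once, so the whole pass is O(n).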
There's a simple O(n log n) time and O(n) space solution, since the feasible window lengths are downward closed (every subwindow of a valid window is valid): binary search for the window size. For each chosen window size we iterate over the array once, and we make O(log n) such traversals. If the window is too large, we won't find a solution and try a window half the size; otherwise we try a window halfway between this and the last successful window size. (To update the min and max in the sliding window we can use method 3 described here.)
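A sketch of that binary search in Python, using monotonic deques for the window min/max (names are mine; valid lengths are downward closed because any subwindow of a valid window is valid):

```python
from collections import deque

def has_valid_window(A, L):
    """Check whether some window of length L satisfies 2*min > max."""
    minq, maxq = deque(), deque()  # monotonic deques of indices
    for j, x in enumerate(A):
        while minq and A[minq[-1]] >= x:
            minq.pop()
        minq.append(j)
        while maxq and A[maxq[-1]] <= x:
            maxq.pop()
        maxq.append(j)
        i = j - L + 1  # left edge of the current window
        while minq[0] < i:
            minq.popleft()
        while maxq[0] < i:
            maxq.popleft()
        if i >= 0 and 2 * A[minq[0]] > A[maxq[0]]:
            return True
    return False

def max_len_binary_search(A):
    """Binary search on the window length: O(n log n) overall."""
    lo, hi, best = 1, len(A), 0
    while lo <= hi:
        mid = (lo + hi) // 2
        if has_valid_window(A, mid):
            best, lo = mid, mid + 1
        else:
            hi = mid - 1
    return best
```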
Here's an algorithm in O(n lg k) time, where n is the length of the array and k the length of the maximum subarray having 2 * min > max.
Let A be the array, and let SA(j) denote the candidate subarray ending at index j. Let's start with the following invariant: for every j, SA(j) is either empty or satisfies 2 * min > max. It is extremely easy to initialize: take the empty subarray from indices 0 to 0. (Note that SA(j) may have to be empty because A[j] may be zero or negative: in that case 2 * min > max cannot hold, since then 2 * min <= min <= max.)
The algorithm is: for each j, we set SA(j) = SA(j-1) + A[j]. But if A[j] >= 2 * min(SA(j-1)), then the invariant is broken. To restore the invariant, we have to remove all the elements e from SA(j) that meet A[j] >= 2 * e. In the same way, the invariant is broken if 2 * A[j] <= max(SA(j-1)). To restore the invariant, we have to remove all the elements e from SA(j) that meet 2 * A[j] <= e.
On the fly, we keep track of the longest SA(j) found and return it.
Hence the algorithm:
SA(0) <- A[0..1]   # 1 excluded -> empty subarray
ret <- SA(0)
for j in 1..length(A):
    if A[j] >= 2 * min(SA(j-1)):
        i <- the last index having A[j] >= 2 * A[i]
        SA(j) <- A[i+1..j+1]
    else if 2 * A[j] <= max(SA(j-1)):
        i <- the last index having 2 * A[j] <= A[i]
        SA(j) <- A[i+1..j+1]
    else:
        SA(j) <- SA(j-1) + A[j]
    if length(SA(j)) > length(ret):
        ret <- SA(j)
return ret
The question is: how do we find the last index i having A[j] >= 2 * A[i]? If we iterate over SA(j-1), we need k steps at most, and then the time complexity will be O(n k) (we start with j-1 and look for the last value that keeps the invariant).
But there is a better solution. Imagine we have a min-heap that stores the elements of SA(j-1) along with their positions. The first element is the minimum of SA(j-1); let i0 be its index. We can remove all elements from the start of SA(j-1) up to and including i0. Now, are we sure that A[j] < 2 * A[i] for all the remaining i? No: there may be more elements that are too small. Hence we remove elements one after the other until the invariant is restored.
We'll need a max-heap too, to handle the other situation, 2 * A[j] <= max(SA(j-1)).
The easiest approach is to create an ad hoc queue that has the following operations:
add(v): add an element v to the queue
remove_until_min_gt(v): remove elements from start of the queue until the minimum is greater than v
remove_until_max_lt(v): remove elements from start of the queue until the maximum is less than v
maximum: get the maximum of the queue
minimum: get the minimum of the queue
With two heaps, maximum and minimum are O(1), but the other operations are O(lg k).
Here is a Python implementation that keeps indices of the start and the end of the queue:
import heapq

class Queue:
    def __init__(self):
        self._i = 0  # start in A
        self._j = 0  # end in A
        self._minheap = []
        self._maxheap = []

    def add(self, value):
        # store the value and the index in both heaps
        heapq.heappush(self._minheap, (value, self._j))
        heapq.heappush(self._maxheap, (-value, self._j))
        # update the index in A
        self._j += 1

    def remove_until_min_gt(self, v):
        return self._remove_until(self._minheap, lambda x: x > v)

    def remove_until_max_lt(self, v):
        return self._remove_until(self._maxheap, lambda x: -x < v)

    def _remove_until(self, heap, check):
        while heap and not check(heap[0][0]):
            j = heapq.heappop(heap)[1]
            if self._i < j + 1:
                self._i = j + 1  # update the start index
        # remove front elements before the start index;
        # there may remain elements before the start index in the heaps,
        # but the first element is after the start index.
        while self._minheap and self._minheap[0][1] < self._i:
            heapq.heappop(self._minheap)
        while self._maxheap and self._maxheap[0][1] < self._i:
            heapq.heappop(self._maxheap)

    def minimum(self):
        return self._minheap[0][0]

    def maximum(self):
        return -self._maxheap[0][0]

    def __repr__(self):
        ns = [v for v, _ in self._minheap]
        return f"Queue({ns})"

    def __len__(self):
        return self._j - self._i

    def from_to(self):
        return self._i, self._j


def find_min_twice_max_subarray(A):
    queue = Queue()
    best_len = 0
    best = (0, 0)
    for v in A:
        queue.add(v)
        if 2 * v <= queue.maximum():
            # restore the invariant: drop elements e with e >= 2 * v
            queue.remove_until_max_lt(2 * v)
        elif v >= 2 * queue.minimum():
            # restore the invariant: drop elements e with e <= v / 2
            queue.remove_until_min_gt(v / 2)
        if len(queue) > best_len:
            best_len = len(queue)
            best = queue.from_to()
    return best
You can see that every element of A enters the queue once and leaves it at most once, and each queue operation costs O(lg k); hence the O(n lg k) time complexity.
Here's a test.
import random
A = [random.randint(-10, 20) for _ in range(25)]
print(A)
# [18, -4, 14, -9, 8, -6, 12, 13, -7, 7, -2, 14, 7, 9, -9, 9, 20, 19, 14, 13, 14, 14, 2, -8, -2]
print(A[slice(*find_min_twice_max_subarray(A))])
# [20, 19, 14, 13, 14, 14]
Obviously, if there were a way to find the start index that restores the invariant in O(1), we would have a time complexity of O(n). (This reminds me of how the KMP algorithm finds the best new start in a string-matching problem, but I don't know whether it is possible to create something similar here.)

Bubble Sort Outer Loop and N-1

I've read multiple posts on Bubble Sort, but still have difficulty verbalizing why my code works, particularly with respect to the outer loop.
for (int i = 0; i < (n - 1); i++)
{
    for (int j = 0; j < (n - i - 1); j++)
    {
        if (array[j] > array[j + 1])
        {
            int temp = array[j];
            array[j] = array[j + 1];
            array[j + 1] = temp;
        }
    }
}
For any array of length n, at most n-1 pairwise comparisons are possible. That said, if we stop at i < n-1, we never see the final element. If, in the worst case, the array's elements (I'm thinking ints here) are in reverse order, we cannot assume that it is in its proper place. So, if we never examine the final array element in the outer loop, how can this possibly work?
Array indexing runs from 0 to n-1: if there are 10 elements in the array, the last index is 9. In the first iteration of the inner loop, n-1 comparisons take place, and the first pass of bubble sort bubbles the largest number up to its final position.
In the next pass, n-1-1 comparisons take place, bubbling the second largest value up to its place, and so on until the whole array is sorted.
In this line you are accessing one element ahead of the current position j:
array[j + 1];
In the first iteration of the outer loop, j runs from 0 to j < (n - 0 - 1), so the largest access is array[j + 1] with j = n - 2, i.e., array[n - 1]. So if you declare your array as array[n], this does reach the last element of the array.
n is typically the number of elements in your array, so if there are 10 elements, they are indexed from 0 to 9. You would not want to access array[10], as this would be out of bounds (and could segfault), hence the use of n - 1 in the loop condition. In C, when writing and calling a function that iterates over an array, the size of the array is also passed as a parameter.
The n means the number of elements. The loop indices start at 0 and range from 0 to n-1, so we visit n elements; all the elements are traversed.
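The same loops in Python make it easy to see that the last element is indeed read: on the first pass (i = 0), j + 1 goes up to n - 1. (A sketch, not the original C code:)

```python
def bubble_sort(arr):
    n = len(arr)
    for i in range(n - 1):             # n-1 passes
        for j in range(n - i - 1):     # arr[j + 1] reaches index n - 1 - i
            if arr[j] > arr[j + 1]:
                arr[j], arr[j + 1] = arr[j + 1], arr[j]
    return arr

print(bubble_sort([5, 1, 4, 2, 8]))  # [1, 2, 4, 5, 8]
```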

Smallest Lexicographic Subsequence of size k in an Array

Given an array of integers, find the smallest lexicographic subsequence of size k.
EX: Array: [3,1,5,3,5,9,2], k = 4
Expected Solution: 1 3 5 2
The problem can be solved in O(n) by maintaining a double-ended queue (deque). We iterate over the elements from left to right and ensure that the deque always holds the lexicographically smallest subsequence up to that point. We should only pop off an element if the current element is smaller than the element at the back of the deque and the elements in the deque plus those remaining to be processed number at least k.
vector<int> smallestLexo(vector<int> s, int k) {
    deque<int> dq;
    for (int i = 0; i < s.size(); i++) {
        while (!dq.empty() && s[i] < dq.back() && (dq.size() + (s.size() - i - 1)) >= k) {
            dq.pop_back();
        }
        dq.push_back(s[i]);
    }
    return vector<int>(dq.begin(), dq.end());
}
Here is a greedy algorithm that should work:

ChooseNextNumber(lastChosenIndex, k) {
    minNum = the smallest number from lastChosenIndex to arraySize - k
    // Now we know this number is the best possible candidate to be the next number.
    lastChosenIndex = the earliest occurrence of minNum after lastChosenIndex
    // do the same process for k-1
    ChooseNextNumber(lastChosenIndex, k - 1)
}

The algorithm above has high complexity.
But we can pre-sort all the array elements paired with their array indices and do the same process greedily using a single loop.
Since we used sorting, the complexity will still be O(n log n).
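The recursive greedy above can be sketched directly in Python (the function name is mine; Python's min returns the earliest minimal element, which gives the required earliest occurrence):

```python
def smallest_subsequence_greedy(s, k):
    # Repeatedly pick the smallest value in the window that still leaves
    # enough elements to the right to complete a subsequence of size k.
    res = []
    start = 0
    n = len(s)
    for r in range(k, 0, -1):  # r values still needed
        # candidates are s[start .. n-r]; min returns the earliest occurrence
        best_i = min(range(start, n - r + 1), key=lambda i: s[i])
        res.append(s[best_i])
        start = best_i + 1
    return res

print(smallest_subsequence_greedy([3, 1, 5, 3, 5, 9, 2], 4))  # [1, 3, 5, 2]
```

As written, each of the k steps scans the window, so this is the high-complexity version; pre-sorting (value, index) pairs removes the inner scan.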
Ankit Joshi's answer works. But I think it can be done with just a vector, not a deque, since all the operations used are available on vector too. Also, in Ankit Joshi's answer the deque may contain extra elements; we have to pop those off before returning. Add these lines before returning:

while (dq.size() > k) {
    dq.pop_back();
}
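A Python sketch of the same stack-based greedy, including the final trim to k elements (names are mine):

```python
def smallest_subsequence(s, k):
    stack = []
    n = len(s)
    for i, x in enumerate(s):
        # pop while the current element improves the subsequence and
        # enough elements remain to still reach length k
        while stack and x < stack[-1] and len(stack) + (n - i - 1) >= k:
            stack.pop()
        stack.append(x)
    return stack[:k]  # the stack may hold more than k elements

print(smallest_subsequence([3, 1, 5, 3, 5, 9, 2], 4))  # [1, 3, 5, 2]
```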
It can be done with RMQ in O(n + K log n).
Construct an RMQ structure in O(n).
Now find the sequence where every i-th element is the smallest number in positions [x(i-1)+1, n-(K-i)] (for i in [1, K], where x0 = 0 and xi is the position of the element chosen at step i in the given array).
If I've understood the question right, here's a DP algorithm that should work, but it takes O(NK) time.

//k is the given size and n is the size of the array
create an array dp[k+1][n+1]
initialize the first column with the maximum integer value (we'll need it later)
and the first row with 0's (keep element dp[0][0] = 0)
now run the loop while building the solution

for (int i = 1; i <= k; i++) {
    for (int j = 1; j <= n; j++) {
        // if the number of elements in the array is less than the size required (k),
        // initialize it with the maximum integer value
        if (j < i) {
            dp[i][j] = MAX_INT_VALUE;
        } else {
            // last minimum of size k-1 with the present element, or last minimum of size k
            dp[i][j] = minimum(dp[i-1][j-1] + arr[j-1], dp[i][j-1]);
        }
    }
}
// it contains the solution
return dp[k][n];

The last element of the array contains the solution.
I suggest trying a modified merge sort, where the modification is in the merge step: discard duplicate values.
Then select the smallest four (k = 4 in the example).
The complexity is O(n log n).
I am still thinking about whether the complexity can be O(n).

Applying a function on sorted array

Taken from the google interview question here
Suppose that you have a sorted array of integers (positive or negative). You want to apply a function of the form f(x) = a * x^2 + b * x + c to each element x of the array such that the resulting array is still sorted. Implement this in Java or C++. The input are the initial sorted array and the function parameters (a, b and c).
Do you think we can do it in place with less than O(n log n) time, where n is the array size (i.e., better than applying the function to each element and then sorting the array)?
I think this can be done in linear time. Because the function is quadratic, its graph is a parabola, i.e., the values decrease (assuming a positive value for 'a') down to some minimum point and then increase. So the algorithm should iterate over the sorted values until we reach/pass the minimum point of the function (which can be determined by simple differentiation), and then, for each value after the minimum, walk backward through the earlier values looking for the correct place to insert it. Using a linked list would allow items to be moved around in place.
The quadratic transform can cause part of the values to "fold" over the others. You will have to reverse their order, which can easily be done in-place, but then you will need to merge the two sequences.
In-place merge in linear time is possible, but this is a difficult process, normally out of the scope of an interview question (unless for a Teacher's position in Algorithmics).
Have a look at this solution: http://www.akira.ruc.dk/~keld/teaching/algoritmedesign_f04/Artikler/04/Huang88.pdf
I guess that the main idea is to reserve a part of the array where you allow swaps that scramble the data it contains. You use it to perform partial merges on the rest of the array and in the end you sort back the data. (The merging buffer must be small enough that it doesn't take more than O(N) to sort it.)
If a is > 0, then a minimum occurs at x = -b/(2a), and values will be copied to the output array in forward order from [0] to [n-1]. If a < 0, then a maximum occurs at x = -b/(2a) and values will be copied to the output array in reverse order from [n-1] to [0]. (If a == 0, then if b > 0, do a forward copy, if b < 0, do a reverse copy, If a == b == 0, nothing needs to be done). I think the sorted array can be binary searched for the closest value to -b/(2a) in O(log2(n)) (otherwise it's O(n)). Then this value is copied to the output array and the values before (decrementing index or pointer) and after (incrementing index or pointer) are merged into the output array, taking O(n) time.
static void sortArray(int arr[], int n, int A, int B, int C)
{
    // Apply the equation to all elements
    for (int i = 0; i < n; i++)
        arr[i] = A*arr[i]*arr[i] + B*arr[i] + C;

    // Find the maximum element in the resultant array
    int index = -1;
    int maximum = Integer.MIN_VALUE;
    for (int i = 0; i < n; i++)
    {
        if (maximum < arr[i])
        {
            index = i;
            maximum = arr[i];
        }
    }

    // Use the maximum element as a break point and merge both subarrays
    // using the simple merge step of merge sort.
    // (Note: this assumes the transformed array rises to a single maximum
    // and then falls, i.e. A <= 0; for A > 0, merge around the minimum
    // instead, filling the output from the back.)
    int i = 0, j = n-1;
    int[] new_arr = new int[n];
    int k = 0;
    while (i < index && j > index)
    {
        if (arr[i] < arr[j])
            new_arr[k++] = arr[i++];
        else
            new_arr[k++] = arr[j--];
    }

    // Merge remaining elements
    while (i < index)
        new_arr[k++] = arr[i++];
    while (j > index)
        new_arr[k++] = arr[j--];
    new_arr[n-1] = maximum;

    // Copy back into the original array
    for (int p = 0; p < n; p++)
        arr[p] = new_arr[p];
}
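For what it's worth, the case analysis described above (vertex at -b/(2a), forward vs. reverse merge) can be collapsed into a single two-pointer pass that handles both signs of a. A Python sketch (names mine, not the answer's Java code):

```python
def sort_after_quadratic(arr, a, b, c):
    """Apply f(x) = a*x^2 + b*x + c to sorted arr; return a sorted result in O(n)."""
    f = lambda x: a * x * x + b * x + c
    n = len(arr)
    out = [0] * n
    lo, hi = 0, n - 1
    if a >= 0:
        # Parabola opens upward (a == 0 is monotonic, which the same rule
        # handles): the largest f-values sit at the two ends, so fill the
        # output from the back, taking the larger end each time.
        for k in range(n - 1, -1, -1):
            if f(arr[lo]) > f(arr[hi]):
                out[k] = f(arr[lo]); lo += 1
            else:
                out[k] = f(arr[hi]); hi -= 1
    else:
        # Parabola opens downward: the smallest f-values sit at the two
        # ends, so fill from the front, taking the smaller end each time.
        for k in range(n):
            if f(arr[lo]) < f(arr[hi]):
                out[k] = f(arr[lo]); lo += 1
            else:
                out[k] = f(arr[hi]); hi -= 1
    return out
```

This avoids locating the vertex explicitly: whichever half of the parabola an end value lies on, the extreme f-value is always at one of the two ends.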

Suggest an Efficient Algorithm

Given an array arr of size 100000, where each element satisfies 0 <= arr[i] < 100 (not sorted, contains duplicates).
Find out how many triplets (i,j,k) are present such that arr[i] ^ arr[j] ^ arr[k] == 0
Note: ^ is the XOR operator; also 0 <= i <= j <= k < 100000.
I have a feeling I have to calculate the frequencies and do some calculation using them, but I just can't seem to get started.
Any algorithm better than the obvious O(n^3) is welcome. :)
It's not homework. :)
I think the key is you don't need to identify the i,j,k, just count how many.
Initialise an array size 100
Loop though arr, counting how many of each value there are - O(n)
Loop through the non-zero entries of the small array, working out which triples of values meet the condition - if the counts of the three distinct values involved are A, B and C, the number of index combinations in the original arr is A*B*C (pick one index from each group; the order i < j < k is then forced) - at most 100**3 operations, but that's still O(1) assuming the 100 is a fixed value.
So, O(n).
Possible O(n^2) solution, if it works: maintain a variable count and two arrays, single[100] and pair[100]. Iterate over arr, and for each element of value n:
update count: count += pair[n]
update pair: iterate over the array single, and for each index x with single[x] != 0, do pair[x ^ n] += single[x]
update single: single[n]++
In the end, count holds the result.
Possible O(100 * n) = O(n) solution.
It solves the problem with the ordering i <= j <= k.
As you know, A ^ B = 0 <=> A = B, so:
long long calcTripletsCount(const vector<int>& sourceArray)
{
    long long res = 0;
    vector<int> count(128);
    vector<int> countPairs(128);
    for (int i = 0; i < sourceArray.size(); i++)
    {
        // count[t] contains the count of value t in sourceArray[0..i]
        count[sourceArray[i]]++;
        // countPairs[t] contains the count of pairs (p1, p2), p1 <= p2, with
        // sourceArray[p1] ^ sourceArray[p2] == t
        for (int j = 0; j < count.size(); j++)
            countPairs[j ^ sourceArray[i]] += count[j];
        // a ^ b ^ c == 0 iff a ^ b == c, so we add the count of pairs (p1, p2)
        // where sourceArray[p1] ^ sourceArray[p2] == sourceArray[i]; it is easy
        // to see that this keeps the order p1 <= p2 <= i
        res += countPairs[sourceArray[i]];
    }
    return res;
}
Sorry for my bad English...
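A direct Python port of the same counting idea (assuming, as above, that values fit in 0..127):

```python
def count_xor_triplets(arr):
    """Count triples (i, j, k), i <= j <= k, with arr[i]^arr[j]^arr[k] == 0."""
    count = [0] * 128   # count[v]: occurrences of v among arr[0..i]
    pairs = [0] * 128   # pairs[t]: pairs p1 <= p2 <= i with xor equal to t
    res = 0
    for x in arr:
        count[x] += 1
        for v in range(128):
            pairs[v ^ x] += count[v]   # pairs ending at the current index
        res += pairs[x]                # a ^ b == c completes a zero-xor triple
    return res

print(count_xor_triplets([1, 1, 0]))  # 4
```

Each element costs a fixed 128-entry scan, so the whole thing is O(128 * n) = O(n).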
I have a (simple) O(n^2 log n) solution which takes into account the fact that i, j and k refer to indices, not integers.
A simple first pass allows us to build an array A of 100 entries: value -> list of indices; we keep each list sorted for later use. O(n log n)
For each pair i,j such that i <= j, we compute X = arr[i]^arr[j]. We then perform a binary search in A[X] to locate the number of indices k such that k >= j. O(n^2 log n)
I could not find any way to leverage sorting / counting algorithm because they annihilate the index requirement.
Sort the array, keeping a map of new indices to originals. O(n lg n)
Loop over i, j with i < j. O(n^2)
Calculate x = arr[i] ^ arr[j]
Since x ^ arr[k] == 0 means arr[k] = x, binary search among the indices k > j for the value x. O(lg n)
For all found k, print the mapped i, j, k
O(n^2 lg n)
Start with a frequency count of the number of occurrences of each number between 0 and 99, as Paul suggests. This produces an array freq[] of length 100.
Next, instead of looping over triples A,B,C from that array and testing the condition A^B^C=0,
loop over pairs A,B with A < B. For each A,B, calculate C=A^B (so that now A^B^C=0), and verify that A < B < C < 100. (Any triple will occur in some order, so this doesn't miss triples. But see below). The running total will look like:
Sum+=freq[A]*freq[B]*freq[C]
The work is O(n) for the frequency count, plus about 5000 for the loop over A < B.
Since every triple of three different numbers A,B,C must occur in some order, this finds each such triple exactly once. Next you'll have to look for triples in which two numbers are equal. But if two numbers are equal and the xor of three of them is 0, the third number must be zero. So this amounts to a secondary linear search for B over the frequency count array, counting occurrences of (A=0, B=C < 100). (Be very careful with this case, and especially careful with the case B=0. The count is not just freq[B] ** 2 or freq[0] ** 3. There is a little combinatorics problem hiding there.)
Hope this helps!
