You are given a list of integers nums of even length. Consider an operation where you pick any number in nums and replace it with a value in the range [1, max(nums)]. Return the minimum number of operations required so that for every i, nums[i] + nums[n - 1 - i] equals the same number. The problem can be solved greedily.
Note: n is the size of the array and max(nums) is the maximum element in nums.
For example: for nums = [1,5,4,5,9,3] the expected number of operations is 2.
Explanation: max(nums) is 9, so I can change any element of nums to any number in [1, 9], which costs one operation.
Choose 1 at index 0 and change it to 6.
Choose 9 at index 4 and change it to 4.
Now nums[0] + nums[5] = nums[1] + nums[4] = nums[2] + nums[3] = 9. We changed 2 numbers, and it cost us 2 operations, which is the minimum for this input.
The approach that I've used is to find the median of the sums and use that to find the number of operations greedily.
Let us find all the sums of the array based on the given condition.
The sums can be calculated as nums[i] + nums[n-1-i].
Let i = 0, nums[0] + nums[6-1-0] = 4.
i = 1, nums[1] + nums[6-1-1] = 14.
i = 2, nums[2] + nums[6-1-2] = 9.
Store these sums in an array and sort it.
sums = [4,9,14] after sorting. Now find the median from sums which is 9 as it is the middle element.
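In code, the pair sums and median for this example can be computed as follows (a small sketch; the variable names are my own):

```python
# Compute all pair sums nums[i] + nums[n-1-i], sort them, take the middle one.
nums = [1, 5, 4, 5, 9, 3]
n = len(nums)
sums = sorted(nums[i] + nums[n - 1 - i] for i in range(n // 2))
median = sums[len(sums) // 2]
print(sums, median)  # [4, 9, 14] 9
```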
Now I use this median to equalize the sums and count the number of operations. Below is the code I use to calculate the number of operations.
int operations = 0;
for (int i = 0; i < nums.size() / 2; i++) {
    if (nums[i] + nums[nums.size()-1-i] == mid)
        continue;
    if (nums[i] + nums[nums.size()-1-i] > mid) {
        if (nums[i] + 1 <= mid || 1 + nums[nums.size()-1-i] <= mid) {
            operations++;
        } else {
            operations += 2;
        }
    } else if (maxnums + nums[nums.size()-1-i] >= mid || nums[i] + maxnums >= mid) {
        operations++;
    } else {
        operations += 2;
    }
}
The total number of operations for this example is 2, which is correct.
The problem here is that, for some cases, choosing the median gives the wrong result. For example, nums = [10, 7, 2, 9, 4, 1, 7, 3, 10, 8] expects 5 operations, but my code gives 6 when the median (16) is chosen.
Is choosing the median not the most optimal approach? Can anyone help provide a better approach?
I think the following should work:
iterate over the pairs of numbers
for each pair, calculate the sum of that pair, as well as the min and max sum that can be achieved by changing just one of the values
update a dictionary/map with -1 where a "region" requiring one fewer change starts, and +1 where that region ends
iterate over the boundaries in that dictionary, updating the running total of changes needed, to find the sum that requires the fewest updates
Example code in Python, giving 9 as the best sum for your example, requiring 5 changes.
from collections import defaultdict

nums = [10, 7, 2, 9, 4, 1, 7, 3, 10, 8]
m = max(nums)
pairs = [(nums[i], nums[-1-i]) for i in range(len(nums)//2)]
print(pairs)
score = defaultdict(int)
for a, b in map(sorted, pairs):
    low = a + 1
    high = m + b
    score[low] -= 1
    score[a+b] -= 1
    score[a+b+1] += 1
    score[high+1] += 1
print(sorted(score.items()))
cur = best = len(nums)
num = None
for i in sorted(score):
    cur += score[i]
    print(i, cur)
    if cur < best:
        best, num = cur, i
print(best, num)
The total complexity of this is O(n log n): O(n) to create the dictionary, O(n log n) for sorting, and O(n) for iterating the sorted values in that dictionary. (Do not use an array, or the complexity could be much higher when max(nums) >> len(nums).)
(UPDATED after receiving additional information)
The optimal sum must be one of the following:
a sum of a pair -> because you can keep both numbers of that pair
the min value of a pair + 1 -> because that is the smallest possible sum for which you still only need to change one of the numbers of that pair
the max value of a pair + the max overall value -> because that is the largest possible sum for which you still only need to change one of the numbers of that pair
Hence, there are order N possible sums.
The total number of operations for this optimal sum can be calculated in various ways.
The O(N²) approach is quite trivial, and you can implement it quite easily if you want to confirm that other solutions work.
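Such a brute force over the candidate sums listed above might look like this in Python (a sketch; the names are my own, and it assumes elements may be replaced by anything in [1, max(nums)] as in the question):

```python
def min_ops_bruteforce(nums):
    n, m = len(nums), max(nums)
    pairs = [tuple(sorted((nums[i], nums[n - 1 - i]))) for i in range(n // 2)]
    # candidate sums: each pair's sum, its min + 1, and its max + max(nums)
    candidates = {s for lo, hi in pairs for s in (lo + hi, lo + 1, hi + m)}
    best = n                                  # upper bound: change everything
    for target in candidates:
        ops = 0
        for lo, hi in pairs:
            if lo + hi == target:
                continue                      # pair already matches: 0 operations
            elif lo + 1 <= target <= hi + m:
                ops += 1                      # one replacement suffices
            else:
                ops += 2                      # both elements must change
        best = min(best, ops)
    return best

print(min_ops_bruteforce([10, 7, 2, 9, 4, 1, 7, 3, 10, 8]))  # 5
```

A pair (lo, hi) can reach any sum in [lo + 1, hi + max(nums)] with a single replacement, which is what the middle branch tests.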
Making it O(N log N)
getting all possible optimal sums O(N)
for each possible sum you can calculate occ, the number of pairs having exactly that sum, which thus don't require any manipulation. O(N)
For all other pairs you just need to know whether it takes 1 or 2 operations to reach that sum. It takes 2 when a single change is impossible: either the smallest element of the pair is too big to reach the sum together with the smallest possible replacement, or the largest element of the pair is too small to reach the sum together with the largest possible replacement. Many data structures could be used for that (BIT, tree, ...); I just used a sorted list and applied binary search (not exhaustively tested though). O(N log N)
Example solution in Java:
int[] nums = new int[] {10, 7, 2, 9, 4, 1, 7, 3, 10, 8};
// preprocess pairs: O(N)
int min = 1, max = nums[0];
List<Integer> minList = new ArrayList<>();
List<Integer> maxList = new ArrayList<>();
Map<Integer, Integer> occ = new HashMap<>();
for (int i = 0; i < nums.length / 2; i++) {
    int curMin = Math.min(nums[i], nums[nums.length-1-i]);
    int curMax = Math.max(nums[i], nums[nums.length-1-i]);
    min = Math.min(min, curMin);
    max = Math.max(max, curMax);
    minList.add(curMin);
    maxList.add(curMax);
    // create all pair sums
    int pairSum = nums[i] + nums[nums.length-1-i];
    int currentOccurrences = occ.getOrDefault(pairSum, 0);
    occ.put(pairSum, currentOccurrences + 1);
}
// sorting O(N log N)
Collections.sort(minList);
Collections.sort(maxList);
// border cases
for (int a : minList) {
    occ.putIfAbsent(a + max, 0);
}
for (int a : maxList) {
    occ.putIfAbsent(a + min, 0);
}
// loop over all candidates O(N log N)
int best = (nums.length - 2);
int med = max + min;
for (Map.Entry<Integer, Integer> entry : occ.entrySet()) {
    int sum = entry.getKey();
    int count = entry.getValue();
    int requiredChanges = (nums.length / 2) - count;
    if (sum > med) {
        // border case where the max of a pair is too small to be changed to reach sum
        requiredChanges += countSmaller(maxList, sum - max);
    } else if (sum < med) {
        // border case where the min of a pair is too big to be changed to reach sum
        requiredChanges += countGreater(minList, sum - min);
    }
    System.out.println(sum + " -> " + requiredChanges);
    best = Math.min(best, requiredChanges);
}
System.out.println("Result: " + best);
}

// O(log N)
private static int countGreater(List<Integer> list, int key) {
    int low = 0, high = list.size();
    while (low < high) {
        int mid = (low + high) / 2;
        if (list.get(mid) <= key) {
            low = mid + 1;
        } else {
            high = mid;
        }
    }
    return list.size() - low;
}

// O(log N)
private static int countSmaller(List<Integer> list, int key) {
    int low = 0, high = list.size();
    while (low < high) {
        int mid = (low + high) / 2;
        if (list.get(mid) < key) {
            low = mid + 1;
        } else {
            high = mid;
        }
    }
    return low;
}
Just to offer some theory -- we can easily show that the upper bound on needed changes is n / 2, where n is the number of elements. This is because each pair can be made, in one change, into anything between 1 + C and max(nums) + C, where C is either of the two elements in the pair. For the smallest C, we can reach max(nums) + 1 at the highest; and for the largest C, we can reach 1 + max(nums) at the lowest.
Since those two bounds at the worst cases are equal, we are guaranteed there is some solution with at most N / 2 changes that leaves at least one C (array element) unchanged.
From that we conclude that an optimal solution either (1) has at least one pair where neither element is changed and the rest require only one change per pair, or (2) our optimal solution has n / 2 changes as discussed above.
We can therefore proceed to test each existing pair's single or zero change possibilities as candidates. We can iterate over a sorted list of two to three possibilities per pair, labeled with each cost and index. (Other authors on this page have offered similar ways and code.)
Given an array of N elements representing the permutation atoms, is there an algorithm like this:
function getNthPermutation( $atoms, $permutation_index, $size )
where $atoms is the array of elements, $permutation_index is the index of the permutation and $size is the size of the permutation.
For instance:
$atoms = array( 'A', 'B', 'C' );
// getting third permutation of 2 elements
$perm = getNthPermutation( $atoms, 3, 2 );
echo implode( ', ', $perm )."\n";
Would print:
B, A
Without computing every permutation up to $permutation_index?
I heard something about factoradic permutations, but every implementation I've found gives as a result a permutation of the same size as the input array, which is not my case.
Thanks.
As stated by RickyBobby, when considering the lexicographical order of permutations, you should use the factorial decomposition to your advantage.
From a practical point of view, this is how I see it:
Perform a sort of Euclidean division, except you do it with factorial numbers, starting with (n-1)!, then (n-2)!, and so on.
Keep the quotients in an array. The i-th quotient should be a number between 0 and n-i-1 inclusive, where i goes from 0 to n-1.
This array is your permutation. The problem is that each quotient does not care for previous values, so you need to adjust them. More explicitly, you need to increment every value as many times as there are previous values that are lower or equal.
The following C code should give you an idea of how this works (n is the number of entries, and i is the index of the permutation):
/**
 * @param n The number of entries
 * @param i The index of the permutation
 */
void ithPermutation(const int n, int i)
{
    int j, k = 0;
    int *fact = (int *)calloc(n, sizeof(int));
    int *perm = (int *)calloc(n, sizeof(int));

    // compute factorial numbers
    fact[k] = 1;
    while (++k < n)
        fact[k] = fact[k - 1] * k;

    // compute factorial code
    for (k = 0; k < n; ++k)
    {
        perm[k] = i / fact[n - 1 - k];
        i = i % fact[n - 1 - k];
    }

    // readjust values to obtain the permutation
    // start from the end and check if preceding values are lower
    for (k = n - 1; k > 0; --k)
        for (j = k - 1; j >= 0; --j)
            if (perm[j] <= perm[k])
                perm[k]++;

    // print permutation
    for (k = 0; k < n; ++k)
        printf("%d ", perm[k]);
    printf("\n");

    free(fact);
    free(perm);
}
For example, ithPermutation(10, 3628799) prints, as expected, the last permutation of ten elements:
9 8 7 6 5 4 3 2 1 0
Here's a solution that allows you to select the size of the permutation. For example, apart from being able to generate all permutations of 10 elements, it can generate permutations of pairs among 10 elements. It also permutes lists of arbitrary objects, not just integers.
function nth_permutation($atoms, $index, $size) {
    $result = [];
    for ($i = 0; $i < $size; $i++) {
        $item = $index % count($atoms);
        $index = floor($index / count($atoms));
        $result[] = $atoms[$item];
        array_splice($atoms, $item, 1);
    }
    return $result;
}
Usage example:
for ($i = 0; $i < 6; $i++) {
    print_r(nth_permutation(['A', 'B', 'C'], $i, 2));
}
// => AB, BA, CA, AC, BC, CB
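For reference, here is a Python transcription of the same function (a sketch with the same semantics; it assumes 0 <= $index < the number of size-length permutations):

```python
def nth_permutation(atoms, index, size):
    atoms = list(atoms)                  # work on a copy, like PHP's by-value arrays
    result = []
    for _ in range(size):
        index, item = divmod(index, len(atoms))
        result.append(atoms.pop(item))   # draw the chosen item and remove it
    return result

print([''.join(nth_permutation(['A', 'B', 'C'], i, 2)) for i in range(6)])
# ['AB', 'BA', 'CA', 'AC', 'BC', 'CB']
```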
How does it work?
There's a very interesting idea behind it. Let's take the list A, B, C, D. We can construct a permutation by drawing elements from it like from a deck of cards. Initially we can draw one of the four elements. Then one of the three remaining elements, and so on, until finally we have nothing left.
Here is one possible sequence of choices. Starting from the top we're taking the third path, then the first, then the second, and finally the first. And that's our permutation #13.
Think about how, given this sequence of choices, you would get to the number thirteen algorithmically. Then reverse your algorithm, and that's how you can reconstruct the sequence from an integer.
Let's try to find a general scheme for packing a sequence of choices into an integer without redundancy, and unpacking it back.
One interesting scheme is called decimal number system. "27" can be thought of as choosing path #2 out of 10, and then choosing path #7 out of 10.
But each digit can only encode choices from 10 alternatives. Other systems that have a fixed radix, like binary and hexadecimal, also can only encode sequences of choices from a fixed number of alternatives. We want a system with a variable radix, kind of like time units, "14:05:29" is hour 14 from 24, minute 5 from 60, second 29 from 60.
What if we take generic number-to-string and string-to-number functions, and fool them into using mixed radixes? Instead of taking a single radix, like parseInt('beef', 16) and (48879).toString(16), they will take one radix per each digit.
function pack(digits, radixes) {
    var n = 0;
    for (var i = 0; i < digits.length; i++) {
        n = n * radixes[i] + digits[i];
    }
    return n;
}

function unpack(n, radixes) {
    var digits = [];
    for (var i = radixes.length - 1; i >= 0; i--) {
        digits.unshift(n % radixes[i]);
        n = Math.floor(n / radixes[i]);
    }
    return digits;
}
Does that even work?
// Decimal system
pack([4, 2], [10, 10]); // => 42
// Binary system
pack([1, 0, 1, 0, 1, 0], [2, 2, 2, 2, 2, 2]); // => 42
// Factorial system
pack([1, 3, 0, 0, 0], [5, 4, 3, 2, 1]); // => 42
And now backwards:
unpack(42, [10, 10]); // => [4, 2]
unpack(42, [5, 4, 3, 2, 1]); // => [1, 3, 0, 0, 0]
This is so beautiful. Now let's apply this parametric number system to the problem of permutations. We'll consider length 2 permutations of A, B, C, D. What's the total number of them? Let's see: first we draw one of the 4 items, then one of the remaining 3, that's 4 * 3 = 12 ways to draw 2 items. These 12 ways can be packed into integers [0..11]. So, let's pretend we've packed them already, and try unpacking:
for (var i = 0; i < 12; i++) {
    console.log(unpack(i, [4, 3]));
}
// [0, 0], [0, 1], [0, 2],
// [1, 0], [1, 1], [1, 2],
// [2, 0], [2, 1], [2, 2],
// [3, 0], [3, 1], [3, 2]
These numbers represent choices, not indexes in the original array. [0, 0] doesn't mean taking A, A, it means taking item #0 from A, B, C, D (that's A) and then item #0 from the remaining list B, C, D (that's B). And the resulting permutation is A, B.
Another example: [3, 2] means taking item #3 from A, B, C, D (that's D) and then item #2 from the remaining list A, B, C (that's C). And the resulting permutation is D, C.
This mapping is called Lehmer code. Let's map all these Lehmer codes to permutations:
AB, AC, AD, BA, BC, BD, CA, CB, CD, DA, DB, DC
That's exactly what we need. But if you look at the unpack function you'll notice that it produces digits from right to left (to reverse the actions of pack). The choice from 3 gets unpacked before the choice from 4. That's unfortunate, because we want to choose from 4 elements before choosing from 3. Without being able to do so we have to compute the Lehmer code first, accumulate it into a temporary array, and then apply it to the array of items to compute the actual permutation.
But if we don't care about the lexicographic order, we can pretend that we want to choose from 3 elements before choosing from 4. Then the choice from 4 will come out of unpack first. In other words, we'll use unpack(n, [3, 4]) instead of unpack(n, [4, 3]). This trick allows us to compute the next digit of the Lehmer code and immediately apply it to the list. And that's exactly how nth_permutation() works.
One last thing I want to mention is that unpack(i, [4, 3]) is closely related to the factorial number system. Look at that first tree again, if we want permutations of length 2 without duplicates, we can just skip every second permutation index. That'll give us 12 permutations of length 4, which can be trimmed to length 2.
for (var i = 0; i < 12; i++) {
    var lehmer = unpack(i * 2, [4, 3, 2, 1]); // Factorial number system
    console.log(lehmer.slice(0, 2));
}
It depends on the way you "sort" your permutations (lexicographic order for example).
One way to do it is the factorial number system; it gives you a bijection between [0, n! - 1] and all the permutations.
Then for any number i in [0,n!] you can compute the ith permutation without computing the others.
This factorial writing is based on the fact that any number between 0 and n!-1 can be written as:
SUM( ai * i! for i in range [0, n-1] ) where 0 <= ai <= i
(it's pretty similar to base decomposition)
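The decomposition above can be sketched in Python like this (the helper name is my own):

```python
# Write k (0 <= k < n!) as factoradic digits a[n-1] ... a[1] a[0],
# with 0 <= a[i] <= i and k = sum(a[i] * i!).
def to_factoradic(k, n):
    digits = []
    for i in range(1, n + 1):
        k, r = divmod(k, i)
        digits.append(r)     # digit for place value (i-1)!
    return digits[::-1]      # most significant digit first

print(to_factoradic(42, 5))  # [1, 3, 0, 0, 0], i.e. 1*4! + 3*3! = 42
```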
for more information on this decomposition, have a look at this thread : https://math.stackexchange.com/questions/53262/factorial-decomposition-of-integers
hope it helps
As stated in this Wikipedia article, this approach is equivalent to computing the Lehmer code:
An obvious way to generate permutations of n is to generate values for the Lehmer code (possibly using the factorial number system representation of integers up to n!), and convert those into the corresponding permutations. However the latter step, while straightforward, is hard to implement efficiently, because it requires n operations each of selection from a sequence and deletion from it, at an arbitrary position; of the obvious representations of the sequence as an array or a linked list, both require (for different reasons) about n²/4 operations to perform the conversion. With n likely to be rather small (especially if generation of all permutations is needed) that is not too much of a problem, but it turns out that both for random and for systematic generation there are simple alternatives that do considerably better. For this reason it does not seem useful, although certainly possible, to employ a special data structure that would allow performing the conversion from Lehmer code to permutation in O(n log n) time.
So the best you can do for a set of n elements is O(n log n) with an adapted data structure.
Here's an algorithm to convert between permutations and ranks in linear time. However, the ranking it uses is not lexicographic. It's weird, but consistent. I'm going to give two functions, one that converts from a rank to a permutation, and one that does the inverse.
First, to unrank (go from rank to permutation)
Initialize:
  n = length(permutation)
  r = desired rank
  p = identity permutation of n elements [0, 1, ..., n-1]

unrank(n, r, p)
  if n > 0 then
    swap(p[n-1], p[r mod n])
    unrank(n-1, floor(r/n), p)
  fi
end
Next, to rank:
Initialize:
  p = input permutation
  q = inverse input permutation (in linear time, q[p[i]] = i for 0 <= i < n)
  n = length(p)

rank(n, p, q)
  if n = 1 then return 0 fi
  s = p[n-1]
  swap(p[n-1], p[q[n-1]])
  swap(q[s], q[n-1])
  return s + n * rank(n-1, p, q)
end
The running time of both of these is O(n).
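A direct Python transcription of the two procedures above (a sketch; the tail-recursive unrank is unrolled into a loop, which changes nothing):

```python
def unrank(n, r, p):
    # repeatedly swap the last active slot with the one chosen by r mod n
    while n > 0:
        p[n - 1], p[r % n] = p[r % n], p[n - 1]
        r //= n
        n -= 1

def rank(n, p, q):
    # inverse of unrank; mutates p and q, so pass copies if you need them later
    if n == 1:
        return 0
    s = p[n - 1]
    p[n - 1], p[q[n - 1]] = p[q[n - 1]], p[n - 1]
    q[s], q[n - 1] = q[n - 1], q[s]
    return s + n * rank(n - 1, p, q)

p = list(range(4))
unrank(4, 5, p)   # permutation of rank 5 in this (non-lexicographic) order
print(p)          # [2, 0, 3, 1]
```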
There's a nice, readable paper explaining why this works: Ranking & Unranking Permutations in Linear Time, by Myrvold & Ruskey, Information Processing Letters Volume 79, Issue 6, 30 September 2001, Pages 281–284.
http://webhome.cs.uvic.ca/~ruskey/Publications/RankPerm/MyrvoldRuskey.pdf
Here is a short and very fast (linear in the number of elements) solution in Python, working for any list of elements (the first 13 letters in the example below):
from math import factorial

def nthPerm(n, elems):  # with n from 0
    if len(elems) == 1:
        return elems[0]
    sizeGroup = factorial(len(elems) - 1)
    q, r = divmod(n, sizeGroup)
    v = elems[q]
    elems.remove(v)
    return v + ", " + nthPerm(r, elems)
Examples:
letters = ['a','b','c','d','e','f','g','h','i','j','k','l','m']
nthPerm(0, letters[:])          #--> a, b, c, d, e, f, g, h, i, j, k, l, m
nthPerm(4, letters[:])          #--> a, b, c, d, e, f, g, h, i, j, m, k, l
nthPerm(3587542868, letters[:]) #--> h, f, l, i, c, k, a, e, g, m, d, b, j
Note: I pass letters[:] (a copy of letters) and not letters, because the function modifies its parameter elems (it removes the chosen element).
The following code computes the kth permutation for a given n.
E.g. for n = 3:
The various permutations are
123
132
213
231
312
321
If k=5, return 312.
In other words, it gives the kth lexicographical permutation.
public static String getPermutation(int n, int k) {
    char temp[] = IntStream.range(1, n + 1).mapToObj(i -> "" + i).collect(Collectors.joining()).toCharArray();
    return getPermutationUTIL(temp, k, 0);
}

private static String getPermutationUTIL(char temp[], int k, int start) {
    if (k == 1)
        return new String(temp);
    int p = factorial(temp.length - start - 1);
    int q = (int) Math.floor(k / p);
    if (k % p == 0)
        q = q - 1;
    if (p <= k) {
        char a = temp[start + q];
        for (int j = start + q; j > start; j--)
            temp[j] = temp[j - 1];
        temp[start] = a;
    }
    return k - p >= 0 ? getPermutationUTIL(temp, k - (q * p), start + 1) : getPermutationUTIL(temp, k, start + 1);
}

private static void swap(char[] arr, int j, int i) {
    char temp = arr[i];
    arr[i] = arr[j];
    arr[j] = temp;
}

private static int factorial(int n) {
    return n == 0 ? 1 : (n * factorial(n - 1));
}
It is calculable. This is a C# code that does it for you.
using System;
using System.Collections.Generic;

namespace WpfPermutations
{
    public class PermutationOuelletLexico3<T>
    {
        // ************************************************************************
        private T[] _sortedValues;
        private bool[] _valueUsed;

        public readonly long MaxIndex; // long to support 20! or less

        // ************************************************************************
        public PermutationOuelletLexico3(T[] sortedValues)
        {
            if (sortedValues.Length <= 0)
            {
                throw new ArgumentException("sortedValues.Length should be greater than 0");
            }

            _sortedValues = sortedValues;
            Result = new T[_sortedValues.Length];
            _valueUsed = new bool[_sortedValues.Length];

            MaxIndex = Factorial.GetFactorial(_sortedValues.Length);
        }

        // ************************************************************************
        public T[] Result { get; private set; }

        // ************************************************************************
        /// <summary>
        /// Return the permutation relative to the index received, according to
        /// _sortedValues.
        /// Sort Index is 0 based and should be less than MaxIndex. Otherwise you get an exception.
        /// </summary>
        /// <param name="sortIndex"></param>
        /// <param name="result">Value is not used as input, only as output. Re-use buffer in order to save memory</param>
        /// <returns></returns>
        public void GetValuesForIndex(long sortIndex)
        {
            int size = _sortedValues.Length;

            if (sortIndex < 0)
            {
                throw new ArgumentException("sortIndex should be greater or equal to 0.");
            }

            if (sortIndex >= MaxIndex)
            {
                throw new ArgumentException("sortIndex should be less than factorial(the length of items)");
            }

            for (int n = 0; n < _valueUsed.Length; n++)
            {
                _valueUsed[n] = false;
            }

            long factorielLower = MaxIndex;

            for (int index = 0; index < size; index++)
            {
                long factorielBigger = factorielLower;
                factorielLower = Factorial.GetFactorial(size - index - 1); // factorielBigger / inverseIndex;

                int resultItemIndex = (int)(sortIndex % factorielBigger / factorielLower);

                int correctedResultItemIndex = 0;
                for (;;)
                {
                    if (!_valueUsed[correctedResultItemIndex])
                    {
                        resultItemIndex--;
                        if (resultItemIndex < 0)
                        {
                            break;
                        }
                    }
                    correctedResultItemIndex++;
                }

                Result[index] = _sortedValues[correctedResultItemIndex];
                _valueUsed[correctedResultItemIndex] = true;
            }
        }

        // ************************************************************************
        /// <summary>
        /// Calc the index, relative to _sortedValues, of the permutation received
        /// as argument. Returned index is 0 based.
        /// </summary>
        /// <param name="values"></param>
        /// <returns></returns>
        public long GetIndexOfValues(T[] values)
        {
            int size = _sortedValues.Length;
            long valuesIndex = 0;

            List<T> valuesLeft = new List<T>(_sortedValues);

            for (int index = 0; index < size; index++)
            {
                long indexFactorial = Factorial.GetFactorial(size - 1 - index);

                T value = values[index];
                int indexCorrected = valuesLeft.IndexOf(value);
                valuesIndex = valuesIndex + (indexCorrected * indexFactorial);
                valuesLeft.Remove(value);
            }

            return valuesIndex;
        }

        // ************************************************************************
    }
}
If you store all the permutations in memory, for example in an array, you should be able to bring them back out one at a time in O(1) time.
This does mean you have to store all the permutations, so if computing all permutations takes a prohibitively long time, or storing them takes a prohibitively large space then this may not be a solution.
My suggestion would be to try it anyway, and come back if it is too big/slow - there's no point looking for a "clever" solution if a naive one will do the job.
I'm trying to find the second smallest element in an array of n elements using only n + ceil(lg n) - 2 comparisons. The hint in CLRS says to find the smallest element first.
This takes n - 1 comparisons, so I'm left with ceil(lg n) - 1 comparisons to find the second smallest, once I know the smallest.
Any ideas?
Let's say we've got a list a1...an with n being a power of 2.
First pair the elements up, let's say a1 with a2, a3 with a4 and so on, and compare them with each other. This gives you n/2 comparisons.
Advance all the winners to the next round, which only has n/2 elements now, and repeat the same process. This requires n/4 more comparisons.
Repeat the above until you've only got 1 element left, the ultimate winner. To get there you had to do n/2 + n/4 + ... + 1 = n-1 comparisons.
That's great but which one could be the second smallest? Well, it has to be one of the elements your winner had beaten along the way to the top. There are lg n such losers, so you need to compare them amongst each other to find the smallest (requiring a further lg n - 1 comparisons).
And the smallest of the losers is the second smallest overall.
It's easy to prove why the above method always finds the second smallest: since it's smaller than every element but the ultimate winner, it would win every round apart from the one against the champion.
If n isn't a power of 2, the process is almost exactly the same, except the list of losers will be as long as it would be for the next exact power of 2 which is why you end up with ceil(lg n).
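The tournament above might be sketched in Python like this (the function name and the explicit comparison counter are my own):

```python
def second_smallest(a):
    comparisons = 0
    # each entry is (current winner, list of values that winner has beaten)
    players = [(x, []) for x in a]
    while len(players) > 1:
        nxt = []
        for i in range(0, len(players) - 1, 2):
            (x, bx), (y, by) = players[i], players[i + 1]
            comparisons += 1
            if x < y:
                nxt.append((x, bx + [y]))
            else:
                nxt.append((y, by + [x]))
        if len(players) % 2:          # odd one out advances for free
            nxt.append(players[-1])
        players = nxt
    winner, beaten = players[0]
    comparisons += len(beaten) - 1    # min() over the winner's victims
    return winner, min(beaten), comparisons

print(second_smallest([12, 1, 3, 4, 65, 7, -43, 8]))
# (-43, 1, 9): 9 comparisons = 8 + ceil(lg 8) - 2
```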
Here is a basic implementation in JavaScript; I'm not sure it fully respects your comparison-count requirements in all cases though, and the initial array will also affect the comparison count.
var elements = [ 12, 1, 3, 4, 65, 7, -43, 8, 3, 8, 45, 2 ];
var nCompare = 0;
var smallest = elements[0], smaller = Infinity;
for (var i = 1; i < elements.length; ++i) {
    ++nCompare;
    if (elements[i] < smaller) {
        ++nCompare;
        if (elements[i] < smallest) {
            smaller = smallest;
            smallest = elements[i];
        }
        else
            smaller = elements[i];
    }
}
document.body.innerHTML = '<pre>\n' +
    'Elements: [ ' + elements.join(', ') + ' ]\n' +
    '# element: ' + elements.length + '\n' +
    '\n' +
    'Smallest: ' + smallest + '\n' +
    '2nd smallest: ' + smaller + '\n' +
    '# compare: ' + nCompare +
    '</pre>';
Below is a solution with O(n) complexity in Java:
public class MainClass {
public static void main(String[] args) {
int[] a = { 4, 2, 8, -2, 56, 0 };
c(a);
}
private static void c(int[] a) {
int s = Integer.MAX_VALUE;
int ss = Integer.MAX_VALUE;
for (int i : a) {
if (i < s) {
ss = s;
s = i;
} else if (i < ss) {
ss = i;
}
}
System.out.println("smallest : " + s + " second smallest : " + ss);
}
}
Output : smallest : -2 second smallest : 0
I think there is no need for the ceil(lg n) - 1 additional comparisons, as it can be done in O(n), i.e. in one scan, with the help of an extra variable, as given below:
for (i = 0; i < no_of_elem; i++)
{
    if (min > elem[i])
    {
        second_smallest = min;
        min = elem[i];
    }
    else if (second_smallest > elem[i])
    {
        second_smallest = elem[i];
    }
}
You just store the previous minimum in a variable, as that will be your answer after scanning all the elements.
I need to distribute a large integer budget randomly among a small array with n elements, so that all elements in the array follow the same distribution, sum up to budget, and each element gets at least min.
I have an algorithm that runs in O(budget):
private int[] distribute(int budget, int n, int min) {
    int[] subBudgets = new int[n];
    for (int i = 0; i < n; i++) {
        subBudgets[i] = min;
    }
    budget -= n * min;
    while (budget > 0) {
        subBudgets[random.nextInt(n)]++;
        budget--;
    }
    return subBudgets;
}
However, when budget increases, it can be very expensive. Is there any algorithm that runs in O(n) or even better?
First generate n random numbers x[i], sum them up, and then divide budget by the sum to get k. Then assign k*x[i] to each array element. It is simple and O(n).
If you want at least min in each element, you can modify the above algorithm by filling all elements with min (or using k*x[i] + min) and subtracting n*min from budget before starting.
If you need to work with integers, you can approach the problem by using a real value k and rounding k*x[i]. Then you have to track the accumulated rounding error and add it to (or subtract it from) the calculated value when it reaches a whole unit. You also have to assign the remaining value to the last element to reach the whole budget.
P.S.: Note this algorithm can be used with ease in pure functional languages. That is why I like this whole family of algorithms: generate a random number for each member, then do some processing afterwards. Example implementation in Erlang:
-module(budget).

-export([distribute/2, distribute/3]).

distribute(Budget, N) ->
    distribute(Budget, N, 0).

distribute(Budget, N, Min) when
        is_integer(Budget), is_integer(N), N > 0,
        is_integer(Min), Min >= 0, Budget >= N*Min ->
    Xs = [random:uniform() || _ <- lists:seq(1, N)],
    Rest = Budget - N*Min,
    K = Rest / lists:sum(Xs),
    F = fun(X, {Bgt, Err, Acc}) ->
            Y = X*K + Err,
            Z = round(Y),
            {Bgt - Z, Y - Z, [Z + Min | Acc]}
        end,
    {Bgt, _, T} = lists:foldl(F, {Rest, 0.0, []}, tl(Xs)),
    [Bgt + Min | T].
The same algorithm in Java:
private int[] distribute(int budget, int n, int min) {
    int[] subBudgets = new int[n];
    double[] rands = new double[n];
    double k, err = 0, sum = 0;
    budget -= n * min;
    for (int i = 0; i < n; i++) {
        rands[i] = random.nextDouble();
        sum += rands[i];
    }
    k = (double) budget / sum;
    for (int i = 1; i < n; i++) {
        double y = k * rands[i] + err;
        int z = (int) Math.floor(y + 0.5);
        subBudgets[i] = min + z;
        budget -= z;
        err = y - z;
    }
    subBudgets[0] = min + budget;
    return subBudgets;
}
Sampling from the Multinomial Distribution
The way that you are currently distributing the dollars left over after min has been given to each subbudget involves performing a fixed number, budget, of random "trials", where on each trial you randomly select one of n categories, and you want to know how many times each category is selected. This is modeled by a multinomial distribution with the following parameters:
Number of trials (called n on the WP page): budget
Number of categories (called k on the WP page): n
Probability of category i in each trial, for 1 <= i <= n: 1/n
The way you are currently doing it is a good way if the number of trials is around the same size as the number of categories, or less. But if the budget is large, there are other more efficient ways of sampling from this distribution. The easiest way I know of is to notice that a multinomial distribution with k categories can be repeatedly decomposed into binomial distributions by grouping categories together: instead of directly asking how many selections there are for each of the k categories, we express this as a sequence of questions: "How do we split the budget between the first category and the other k-1?" We next ask "How do we split the remainder between the second category and the other k-2?", etc.
So the top level binomial has category (subbudget) 1 vs. everything else. Decide the number of dollars that go to subbudget 1 by taking 1 sample from a binomial distribution with parameters n = budget and p = 1/n (how to do this is described here); this will produce some number 0 <= x[1] <= n. To find the number of dollars that go to subbudget 2, take 1 sample from a binomial distribution on the remaining money, i.e. using parameters n = budget - x[1] and p = 1/(n-1). After getting subbudget 2's amount x[2], subbudget 3's will be found using parameters n = budget - x[1] - x[2] and p = 1/(n-2), and so on.
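This decomposition might be sketched in Python as follows (the names are my own; for brevity the binomial draw is a naive Bernoulli sum, which is still O(budget) — a real implementation would plug in an efficient binomial sampler here, e.g. numpy.random.binomial, to get the promised speedup):

```python
import random

def binomial(trials, p):
    # naive stand-in for an efficient binomial sampler
    return sum(random.random() < p for _ in range(trials))

def distribute(budget, n, min_amount):
    remaining = budget - n * min_amount
    out = []
    for i in range(n - 1):
        x = binomial(remaining, 1.0 / (n - i))  # category i vs. the other n-i-1
        out.append(min_amount + x)
        remaining -= x
    out.append(min_amount + remaining)          # last category takes the rest
    return out

print(distribute(100, 5, 10))  # five values >= 10 summing to 100
```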
Integrating @Hynek -Pichi- Vychodil's idea and my original algorithm, I came up with the following algorithm, which runs in O(n) and in which all rounding errors are uniformly distributed across the array:
private int[] distribute(int budget, int n, int min) {
    int[] subBudgets = new int[n];
    for (int i = 0; i < n; i++) {
        subBudgets[i] = min;
    }
    budget -= n * min;
    if (budget > 3 * n) {
        double[] rands = new double[n];
        double sum = 0;
        for (int i = 0; i < n; i++) {
            rands[i] = random.nextDouble();
            sum += rands[i];
        }
        for (int i = 0; i < n; i++) {
            int additionalBudget = (int) (budget / sum * rands[i]);
            subBudgets[i] += additionalBudget;
            budget -= additionalBudget;
        }
    }
    while (budget > 0) {
        subBudgets[random.nextInt(n)]++;
        budget--;
    }
    return subBudgets;
}
Let me demonstrate my algorithm using an example:
Assume budget = 100, n = 5, min = 10
Initialize the array to:
[10, 10, 10, 10, 10] => current sum = 50
Generate a random integer ranging from 0 to 50 (50 is the result of budget - current sum):
Say the random integer is 20 and update the array:
[30, 10, 10, 10, 10] => current sum = 70
Generate a random integer ranging from 0 to 30 (30 is the result of budget - current sum):
Say the random integer is 5 and update the array:
[30, 15, 10, 10, 10] => current sum = 75
Repeat the process above; the last element gets whatever is left.
Finally, shuffle the array to get the final result.
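The procedure above might be sketched in Python like this (the names are my own):

```python
import random

def distribute_sequential(budget, n, min_amount):
    out = [min_amount] * n
    for i in range(n - 1):
        spare = budget - sum(out)        # what is still unallocated
        out[i] += random.randint(0, spare)
    out[n - 1] += budget - sum(out)      # last element takes the remainder
    random.shuffle(out)                  # shuffle so no position is favored
    return out

print(distribute_sequential(100, 5, 10))  # five values >= 10 summing to 100
```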