Fast Algorithm For Making Change With 6 Denominations: Interview Practice - arrays

I came up with a solution for this problem, but it takes O(n^2). Is it possible to do better?
Problem: Suppose we want to make change for D dollars. We have an array A with N elements. The denominations exist within the array as dollar values, but we do not know the exact denominations in advance. However, we are given that 0 < A[j] < 125*N. The restrictions are: we only have 6 of each type of denomination, and we must determine whether we can give change using exactly 6 total bills (we can repeat bills, and bills can come in any value, so a $4 bill is allowed).
Ex:
If A = [3,4,6,5,20,18,10,30] and D = 50, then the algorithm returns true, since 5+5+5+5+10+20 = 50.
My attempts:
I tried sorting and then dividing, but I get stuck because I am not sure how to eliminate possible choices, since I do not know exactly what is in the array. More importantly, without explicitly going through everything in O(n^2) time, I am not sure how to say for certain that it is not possible. Is it possible to take advantage of the fact that I am restricted to exactly 6 bills?

To me it looks like a typical recursion problem. Let's write a function that checks whether we can make change for D dollars. For that we take the first bill (let's say it's $3), subtract it from D and then recursively check if we can make change for D - 3 dollars.
We can make this solution much faster if we don't re-check combinations that we have already checked. So if we already know that bills 3, 5, 10 don't fit our needs, then we don't need to check the combination 5, 10, 3 either. For that we first sort the A array and then pass the index of the last used bill (last_bill_id) to the check function. Inside the function we never consider bills with an index less than last_bill_id.
Full solution in python:
A = [3, 4, 6, 5, 20, 18, 10, 30]
D = 50
def check(counters, current_sum, depth, last_bill_id):
    global A
    if depth > 6:  # max amount of bills is 6
        return False
    if depth == 6:  # we used 6 bills, did we get the correct sum?
        return current_sum == 0
    if current_sum <= 0:  # we gave too much change
        return False
    # current_sum > 0 and depth < 6
    for i in xrange(last_bill_id, len(A)):
        if counters[i] < 6:
            # we can use the i-th bill another time
            counters[i] += 1
            if check(counters, current_sum - A[i], depth + 1, i):
                return True
            counters[i] -= 1
    return False

# init counters with zeros
counters = [0] * len(A)
# check if we can make change for `D`
A = sorted(A)  # sort A before calling the function
print 'Can make change:', check(counters, D, 0, 0)
# print bills with counters (the winning combination leaves counters incremented)
for i, c in enumerate(counters):
    if c > 0:
        print '$%d x %d' % (A[i], c)
Output:
Can make change: True
$3 x 4
$18 x 1
$20 x 1
EDIT
The previous solution has complexity O(n^6). But actually we can make it even faster with memoization (or, to put it the other way, dynamic programming). Let's sort the A array and repeat every number in it 6 times, so we get something like A = [3, 3, 3, 3, 3, 3, 4, 4, ...]. Now let's fill a 3D matrix M, where M[bills_num, i, d] is true iff we can make change for d dollars with bills_num bills taken from the i-th position of the A array onwards. The result will be in the cell M[6, 0, D]. This matrix has size 6 x (6 * n) x D, so we can fill it in O(6 * (6 * n) * D) == O(n * D) time (with a recursive approach similar to the previous solution). Code in python:
A = [3, 4, 6, 5, 20, 18, 10, 30]
D = 50
# sort A and repeat 6 times
A = sorted(A * 6)
# create matrix M, where:
# 0 == uncomputed, 1 == True, -1 == False
arr1d = lambda x: [0] * x
arr2d = lambda x, y: [arr1d(y) for i in xrange(x)]
arr3d = lambda x, y, z: [arr2d(y, z) for i in xrange(x)]
M = arr3d(6 + 1, len(A), D + 1)
def fill_m(bills_num, start_pos, d):
    global A, M
    if d == 0:  # can make change for 0 only with 0 bills
        return True if bills_num == 0 else False
    if d < 0 or bills_num <= 0 or start_pos >= len(A):
        return False
    if M[bills_num][start_pos][d] == 0:
        # need to compute the cell value
        if fill_m(bills_num, start_pos + 1, d):
            M[bills_num][start_pos][d] = 1
        elif fill_m(bills_num - 1, start_pos + 1, d - A[start_pos]):
            M[bills_num][start_pos][d] = 1
        else:
            M[bills_num][start_pos][d] = -1
    return M[bills_num][start_pos][d] == 1

print 'Can make change for $', D, fill_m(6, 0, D)

Related

Array Partition I (How to prove this in math)

This is a question from leetcode.
Given an integer array nums of 2n integers, group these integers into n pairs (a1, b1), (a2, b2), ..., (an, bn) such that the sum of min(ai, bi) for all i is maximized. Return the maximized sum.
Input: nums = [1, 4, 3, 2]
Output: 4
Explanation: All possible pairings (ignoring the ordering of elements) are:
1. (1, 4), (2, 3) -> min(1, 4) + min(2, 3) = 1 + 2 = 3
2. (1, 3), (2, 4) -> min(1, 3) + min(2, 4) = 1 + 2 = 3
3. (1, 2), (3, 4) -> min(1, 2) + min(3, 4) = 1 + 3 = 4
So the maximum possible sum is 4.
I solved this by trying multiple examples and found that if I sort and arrange the pairs like 1 2 | 3 4, the min value of each pair is what I want. Since the array is sorted, the positions of the min values are fixed; hence I can collect them by stepping the index by 2. Although it works, it's more like a guess. Does anyone know how to prove this mathematically, to make the logic more rigorous?
def arrayPairSum(nums: List[int]) -> int:
    nums_sort = sorted(nums)
    res = 0
    i = 0
    while i < len(nums):
        res += nums_sort[i]
        i += 2
    return res
You can use induction. Let's say you have a sorted array a. The smallest number, a[0], will always be the smallest of whatever pair it occurs in. The maximum sum occurs if you select its partner to be a[1], the next smallest number. You can show this by selecting some other number a[m] to be its partner: in that case, at least one of the other pairs will have minimum a[1], which is by definition no greater than the minimum that pair would otherwise have had. You can then apply the same argument to the remaining elements a[2:].
Alternatively, you can start from the other end. a[-1] is guaranteed to never figure in the sum because it is the maximum of whatever pair it will occur in. If you pair it with anything other than a[-2], the total sum will not be maximized: some smaller a[m] will represent the pair containing a[-1] in the sum, while a[-2] will be larger than any a[n] it is paired with, and therefore will not appear in the sum.
Both arguments yield the same result: the maximum sum is over the even indices of the sorted array.
As mentioned in the comments, the following two implementations will be more efficient than a raw for loop:
def arrayPairSum(nums: List[int]) -> int:
    return sum(sorted(nums)[::2])
OR
def arrayPairSum(nums: List[int]) -> int:
    nums.sort()
    return sum(nums[::2])
The latter does the sorting in-place, and is probably faster if allowed.
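If you want to convince yourself of the claim empirically before trusting the proof, here is a small brute-force check (my own sketch, not part of the original answers). It tries every ordering of a few small arrays, pairs up consecutive elements, and confirms that the best achievable sum of pair minimums matches the sorted-even-index shortcut:
from itertools import permutations
from typing import List

def arrayPairSum(nums: List[int]) -> int:
    # the greedy claim: sort, then sum every other element
    return sum(sorted(nums)[::2])

def brute_force(nums: List[int]) -> int:
    # every ordering paired up consecutively covers every possible pairing
    return max(
        sum(min(p[i], p[i + 1]) for i in range(0, len(p), 2))
        for p in permutations(nums)
    )

for case in ([1, 4, 3, 2], [6, 2, 6, 5, 1, 2], [0, -1, 7, 3]):
    assert arrayPairSum(case) == brute_force(case)
print('greedy result matches brute force on the sample arrays')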

comprehension with Nested loops

I have to find how many different permutations of exactly three coins add up to a sum of 65.
Given list:
coins = [200, 100, 50, 20, 10, 5]
what I've tried so far:
len(set([x + y + z for x in coins for y in coins for z in coins if (x+y+z)%65 == 0]))
I think maybe I should use an import? For example, with a random list:
import itertools
len(list(itertools.permutations([1,2,3])))
Your nested loops approach doesn't actually solve the problem, since it will over-count by including multiples of your target sum 65. You need to count only those sums which are exactly 65 in order to solve the problem -- see here:
coins = [200, 100, 50, 20, 10, 5]
target = 65
original = len(set([x + y + z for x in coins for y in coins for z in coins if (x + y + z) % 65 == 0]))
print('Original:', original) # prints 3
new = list(set([x + y + z for x in coins for y in coins for z in coins if (x + y + z) % target == 0])).count(target)
print('Ignoring multiples of 65:', new) # prints 1
Now, for your second point, this is where it gets a little confusing. The above actually counts the number of combinations and not the number of permutations. In contrast, your original statement was for the number of permutations and your example with itertools also uses permutations. I'm not sure what your exact intention is, but itertools provides a method for both.
However, your example len(list(itertools.permutations([1, 2, 3]))) would return a count of all permutations rather than only those which sum to your target value. So, if you do len(list(itertools.permutations(coins))), the permutations would be of length 6 by default -- you have to supply a second argument to itertools.permutations for your target length of 3.
Hence, the following should work for you:
import itertools
perms = list(itertools.permutations(coins, 3)) # permutations of length 3
total_perms = len([perm for perm in perms if sum(perm) == target]) # count only those which sum to 65
print('Total permutations:', total_perms) # prints 6
However, if you actually were interested in the number of combinations -- which is what the original nested loops method does, then you would have to use itertools.combinations instead:
combs = list(itertools.combinations(coins, 3)) # combinations of length 3
total_combs = len([comb for comb in combs if sum(comb) == target]) # count only those which sum to 65
print('Total combinations:', total_combs) # prints 1
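One more nuance worth noting (my own addition, not from the original answer): the nested loops in the question allow x, y and z to reuse the same coin, which neither itertools.permutations nor itertools.combinations above do. If repeats should be counted, itertools.product mirrors the original loops exactly; for this particular coin list the answer happens to come out the same:
import itertools

coins = [200, 100, 50, 20, 10, 5]
target = 65
# ordered triples in which the same coin may appear more than once,
# exactly like the original x/y/z loops
with_repeats = sum(1 for t in itertools.product(coins, repeat=3) if sum(t) == target)
print('Ordered triples summing to 65 (repeats allowed):', with_repeats)  # prints 6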
Good luck!

Ruby - Max Array Sum from non-adjacent integers in array

This code walks through the array and returns the largest sum of non-adjacent integers. This is from HackerRank. Can anyone explain why this works? It's a solution I found online, but I don't understand it and didn't figure it out myself.
Thanks!
https://www.hackerrank.com/challenges/max-array-sum/problem?h_l=interview&playlist_slugs%5B%5D=interview-preparation-kit&playlist_slugs%5B%5D=dynamic-programming
def maxSubsetSum(arr)
  incl = 0
  excl = 0
  temp = 0
  for i in 0...arr.size
    temp = incl
    incl = [arr[i]+excl, temp].max
    excl = temp
  end
  return [incl, excl].max
end
maxSubsetSum([1,3,5,2,4,6,8])
This is some pretty ugly (by ugly I mean unidiomatic) Ruby code so let's clean it up before we proceed:
def maxSubsetSum(arr)
  incl = 0
  excl = 0
  temp = 0
  arr.each do |value|
    temp = incl
    incl = [value + excl, temp].max
    excl = temp
  end
  [incl, excl].max
end
maxSubsetSum([1,3,5,2,4,6,8])
Now we can start to analyze this code. I've gone through and written the values of each variable at each step in the loop:
value = 1
temp = 0
incl = 1
excl = 0
value = 3
temp = 1
incl = 3
excl = 1
value = 5
temp = 3
incl = 6
excl = 3
value = 2
temp = 6
incl = 6
excl = 6
value = 4
temp = 6
incl = 10
excl = 6
value = 6
temp = 10
incl = 12
excl = 10
value = 8
temp = 12
incl = 18
excl = 12
(return 18)
At any given point, the program is deciding whether or not it should "use" a value -- the disadvantage of using a value is that you cannot use the value after it, as that is adjacent. At every step it compares adding the current value to excl (the best sum at the previous step that does not include the previous value) with incl (technically temp, but temp holds incl at that stage), the best sum at the previous iteration that does include the previous value.
temp is not remembered across iterations; after each iteration of the loop, the only values that matter are incl and excl. To reiterate: at the end of each iteration, incl holds the best sum that includes the previous number, and excl holds the best sum that does not include it. At each step, incl and excl are re-computed to reflect the inclusion or exclusion of the new value.
To show that this process does work, let's consider the above array but with an extra element at the end, 7. So now our array looks like this: [1,3,5,2,4,6,8,7]. We already have most of the work done from the previous listing. We set temp to 18, incl becomes [7 + 12, 18].max which is 19, and excl becomes 18. Now we can see that including this last number means that we get a number larger than the previous result, so we have to use it, and that means that we can't use the 8 we previously used to get our result.
This process is known as dynamic programming: to determine the answer to the overall question, you break the problem into smaller subproblems and build the answer back up. In this case, we break the array down and then slowly add back each value, keeping track of the best results for the part processed so far.
Understand the algorithm
The code is fairly straightforward once you understand the algorithm being used. A Google search turned up many hits. See, for example, this article. They all seem to use the same dynamic programming approach.
The general solution produces both the maximum sum and the array of elements from the original array that produce that sum. Here only the sum is needed so the solution is simplified slightly. I will first describe the algorithm for producing the general solution, then will simplify to compute only the maximum sum.
Compute general solution
If the array is a, the nth step computes two values: the maximum sum among the first n elements, a[0], a[1],...,a[n-1], if the nth element is included, and the maximum sum if that element is excluded. This is easy because those two results have already been computed for the previous element, a[n-2]. When the last element of the array has been processed, the maximum sum equals the larger of the maximum sum when the last element is included and the maximum sum when the last element is excluded. To construct the array that yields the maximum sum, a simple walk-back is employed.
Let b be an array of hashes that we will build. Initially,
b[0] = { included: 0, excluded: 0 }
Then for each n = 1,..., m, where m = a.size,
b[n] = { included: a[n-1] + b[n-1][:excluded],
         excluded: [b[n-1][:included], b[n-1][:excluded]].max }
After b[m] has been computed, the maximum total equals
[b[m][:included], b[m][:excluded]].max
The walk-back to construct the array that yields the largest sum is outlined in the example below.
Consider the following.
arr = [1, 3, -2, 7, 4, 6, 8, -3, 2]
b = (1..arr.size).each_with_object([{ included: 0, excluded: 0 }]) do |n,b|
  b[n] = { included: arr[n-1] + b[n-1][:excluded],
           excluded: [b[n-1][:included], b[n-1][:excluded]].max }
end
#=> [{:included=> 0, :excluded=> 0}, n arr[n-1]
# {:included=> 1, :excluded=> 0}, 1 1
# {:included=> 3, :excluded=> 1}, 2 3
# {:included=>-1, :excluded=> 3}, 3 -2
# {:included=>10, :excluded=> 3}, 4 7
# {:included=> 7, :excluded=>10}, 5 4
# {:included=>16, :excluded=>10}, 6 6
# {:included=>18, :excluded=>16}, 7 8
# {:included=>13, :excluded=>18}, 8 -3
# {:included=>20, :excluded=>18}] 9 2
I've included arr[n-1] for each value of n above for easy reference. The largest sum is seen to be [20, 18].max #=> 20, which is b[9][:included], so the last element, arr[8] #=> 2, is included. Hence arr[7] #=> -3 could not be included. Note that b[8][:excluded] + arr[8] #=> 18 + 2 => 20, which is b[9][:included].
Since arr[7] is excluded, arr[6] could be included or excluded. We see that b[7][:included] == b[8][:excluded] == 18 and b[7][:excluded] == 16 < 18 == b[8][:excluded], which tells us that arr[6] is included. The same reasoning is used to walk back to the beginning of b, showing that the elements that sum to 20 are [3, 7, 8, 2]. In general, multiple optimal solutions are of course possible.
Compute maximum sum only
If, as in this question, only the maximum sum is required, and not the array of elements that produce it, there is no need for b to be an array. We may simply write
b = { included: a[0], excluded: 0 }
Then for each n = 1,..., m-1
b[:included], b[:excluded] =
a[n] + b[:excluded], [b[:included], b[:excluded]].max
We may wrap this in a method as follows.
def max_subset_sum(arr)
  arr.size.times.with_object({ included: 0, excluded: 0 }) do |n,h|
    h[:included], h[:excluded] =
      arr[n] + h[:excluded], [h[:included], h[:excluded]].max
  end.values.max
end
max_subset_sum arr
#=> 20 (= 3+7+8+2)
If desired, we can write this with two-element arrays rather than hashes (closer to the code in the question), though I don't think it's as clear.
def optimize(arr)
  arr.size.times.with_object([0, 0]) do |n,a|
    a[0], a[1] = arr[n] + a[1], [a[0], a[1]].max
  end.max
end
optimize arr
#=> 20
Notice that I've used parallel assignment to avoid the creation of a temporary variable.

Arrays: Find minimum number of swaps to make bitonicity of array minimum?

Suppose we are given an array of integers. All adjacent elements are guaranteed to be distinct. Let us define the bitonicity of this array a as bt, using the following relation:
bt_array[i] = 0,                  if i == 0
            = bt_array[i-1] + 1,  if a[i] > a[i-1]
            = bt_array[i-1] - 1,  if a[i] < a[i-1]
            = bt_array[i-1],      if a[i] == a[i-1]
bt = last item in bt_array
We say the bitonicity of an array is minimal when it is 0 (if the array has an odd number of elements) or +1 or -1 (if it has an even number of elements).
The problem is to design an algorithm that finds the minimum number of swaps required to make the bitonicity of a given array minimal. The time complexity of this algorithm should be at worst O(n), n being the number of elements in the array.
For example, suppose a = {34,8,10,3,2,80,30,33,1}
Its initial bt is -2. Minimum would be 0. This can be achieved by just 1 swap, namely swapping 2 and 3. So the output should be 1.
Here are some test cases:
Test case 1: a = {34,8,10,3,2,80,30,33,1}, min swaps = 1 ( swap 2 and 3)
Test case 2: {1,2,3,4,5,6,7}: min swaps = 2 (swap 7 with 4 and 6 with 5)
Test case 3: {10,3,15,7,9,11}: min swaps = 0. bt = 1 already.
And a few more:
{2,5,7,9,5,7,1}: current bt = 2. Swap 5 and 7: minSwaps = 1
{1,7,8,9,10,13,11}: current bt = 4: Swap 1,8 : minSwaps = 1
{13,12,11,10,9,8,7,6,5,4,3,2,1}: current bt = -12: Swap (1,6),(2,5) and (3,4) : minSwaps = 3
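For reference, here is a small Python helper (added here as a sketch, not part of the original question) that computes bt exactly as defined above; it reproduces the values quoted in the test cases:
def bitonicity(a):
    bt = 0
    for prev, cur in zip(a, a[1:]):
        if cur > prev:
            bt += 1
        elif cur < prev:
            bt -= 1
    return bt

print(bitonicity([34, 8, 10, 3, 2, 80, 30, 33, 1]))             # -2
print(bitonicity([10, 3, 15, 7, 9, 11]))                        # 1
print(bitonicity([2, 5, 7, 9, 5, 7, 1]))                        # 2
print(bitonicity([1, 7, 8, 9, 10, 13, 11]))                     # 4
print(bitonicity([13, 12, 11, 10, 9, 8, 7, 6, 5, 4, 3, 2, 1]))  # -12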
I was asked this question in an interview, and here's what I came up with:
1. Sort the given array.
2. Reverse the array from n/2 to n-1.
3. Compare with the original array how many elements changed their position, and return half of that count.
And my bit of code that does this:
int returnMinSwaps(int[] a){
    // sort a copy of the input (int[] b = a would just alias the same array)
    int[] b = Arrays.copyOf(a, a.length);
    Arrays.sort(b);
    // reverse the second half of the sorted copy
    for(int i = b.length / 2, j = b.length - 1; i < j; i++, j--){
        int tmp = b[i];
        b[i] = b[j];
        b[j] = tmp;
    }
    // count how many positions differ from the original and return half of that
    int minSwaps = 0;
    for(int i = 0; i < b.length; i++){
        if(a[i] != b[i])
            minSwaps++;
    }
    return minSwaps / 2;
}
Unfortunately, I am not getting the correct minimum number of swaps for some test cases using this logic. Also, I am sorting the array, which makes it O(n log n), and it needs to be done in O(n).
URGENT UPDATE: T3 does not hold!
Consider α = [0, 7, 8, 3, 4, 10, 1, 6, 9, 2, 5]. There is no Sij(α) that can lower |B(α)| by more than 2.
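That claim can be checked by brute force (a sketch of my own, not part of the original answer): try every single swap of α and record the smallest |B| reachable.
def bitonicity(a):
    bt = 0
    for prev, cur in zip(a, a[1:]):
        bt += (cur > prev) - (cur < prev)
    return bt

alpha = [0, 7, 8, 3, 4, 10, 1, 6, 9, 2, 5]
best = abs(bitonicity(alpha))
for i in range(len(alpha)):
    for j in range(i + 1, len(alpha)):
        swapped = alpha[:]
        swapped[i], swapped[j] = swapped[j], swapped[i]
        best = min(best, abs(bitonicity(swapped)))
# |B(alpha)| is 4; per the update, no single swap should bring it below 2
print(abs(bitonicity(alpha)), best)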
Thinking on amendments to the method…
Warning
This solution only works when there are no array elements that are equal.
Feel free to propose generalizations by editing the answer.
Go straight to Conclusion if you want to skip the boring part.
Introduction
Let's define the swap operator Sij over the array a:
Sij(a) : [… ai, … aj, …] → [… aj, … ai, …]   ∀i, j ∈ [0; |a|) ∩ ℤ : i ≠ j
Let's also refer to the bitonicity as B(a), and define it more formally:
B(a) = ∑ sgn(ai – ai–1) over all i ∈ [1; |a|) ∩ ℤ
The obvious facts:
Swaps are symmetric:
Sij(a) = Sji(a)
Two swaps are independent if their target positions don't intersect:
Sij(Skl(a)) = Skl(Sij(a))   ∀i, j, k, l : {i, j} ∩ {k, l} = ∅
Two 2-dependent swaps undo one another:
Sij(Sij(a)) = a
Two 1-dependent swaps abide by the following:
Sjk(Sij(a)) = Sij(Sik(a)) = Sik(Sjk(a))
Bitonicity difference is always even for equally sized arrays:
(B(a) – B(a')) mod 2 = 0   ∀a, a' : |a| = |a'|
Naturally, ∀i : 0 < i < |a|,
B([ai–1, ai]) – B([a'i–1, a'i]) = sgn(ai – ai–1) – sgn(a'i – a'i–1),
which can either be 1 – 1 = 0, or 1 – (–1) = 2, or –1 – 1 = –2, or –1 – (–1) = 0, and any number of ±2's and 0's summed yields an even result.
N.B.: this is only true if all elements in a differ from one another, same with a'!
Theorems
[T1]   |B(Sij(a)) – B(a)| ≤ 4   ∀a, Sij(a)
Without loss of generality, let's assume that:
0 < i, j < |a| – 1
j – i ≥ 2
ai–1 < ai+1
aj–1 < aj+1
Depending on ai, 3 cases are possible:
ai–1 < ai < ai+1: sgn(ai – ai–1) + sgn(ai+1 – ai) = 1 + 1 = 2
ai < ai–1 < ai+1: sgn(ai – ai–1) + sgn(ai+1 – ai) = –1 + 1 = 0
ai–1 < ai+1 < ai: sgn(ai – ai–1) + sgn(ai+1 – ai) = 1 + –1 = 0
When altering ai and leaving all other elements of a intact, |B(a') – B(a)| ≤ 2 (where a' is the resulting array, for which the above 3 cases also apply), since no other terms of B(a) changed their value, except those two from the 1-neighborhood of ai.
Sij(a) implies what`s described above to happen twice, once for ai and once for aj.
Thus, |B(Sij(a)) – B(a)| ≤ 2 + 2 = 4.
Analogously, for each of the corners and j – i = 1 the max. possible delta is 2, which is ≤ 4.
Finally, this straightforwardly extrapolates to ai–1 > ai+1 and aj–1 > aj+1.
QED
[T2]   ∀a : |B(a)| ≥ 2   ∃Sij(a) : |B(Sij(a))| = |B(a)| – 2
{proof in progress, need to sleep}
[T3]   ∀a : |B(a)| ≥ 4   ∃Sij(a) : |B(Sij(a))| = |B(a)| – 4
{proof in progress, need to sleep}
Conclusion
From T1, T2 and T3, the minimal number of swaps needed to minimize |B(a)| equals:
⌊|B(a)| / 4⌋ + ß,
where ß equals 1 if |B(a)| mod 4 ≥ 2, 0 otherwise.

Modulo remainder equals to zero, only first two products taken into account in the loop

I'm trying to find all the divisors ("i" in my case) of a given number ("a" in my case) with no remainder (a % i == 0). I'm running a loop that goes through all the values of i starting from 1 up to the value of a. The problem is that only the first 2 divisors satisfying a % i == 0 are found. The rest are left out. Why is that?
Here is the code in Python 3:
a = 999
i = 1
x = 0
d = []
while (i < a):
    x = a / i
    if(x % i == 0):
        d.append(i)
    i += 1
print (d)
The output of the code is:
[1, 3]
instead of listing all the divisors.
I have checked for different values of a and can't find the error.
The behavior of the script is correct for the code you wrote: `x = a / i` followed by `x % i == 0` checks whether a / i is divisible by i (that is, whether i*i divides a), not whether i divides a. For a = 999 that only holds for 1 and 3, which is why the rest are left out. I think what you are actually trying to achieve is:
a = 999
i = 1
d = []
while (i < a):
    if(a % i == 0):
        d.append(i)
    i += 1
print (d)
Outputs:
[1, 3, 9, 27, ...]
To complement Anton's answer, a more Pythonic way to loop would be:
a, d = 999, []
for i in range(1, a):
    if a%i == 0:
        d.append(i)
You can also take advantage of the fact that objects have a Boolean value (a remainder of zero is falsy):
if not a%i:
Or you can use a list comprehension:
d = [i for i in range(1, a) if not a%i]
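For completeness, running any of these variants for a = 999 gives the full list below (a quick check of my own; note that range(1, a) excludes a itself, just like the original while loop):
a = 999
d = [i for i in range(1, a) if not a % i]
print(d)  # [1, 3, 9, 27, 37, 111, 333]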
