Finding total ways of selecting items such that no two are consecutive - permutation

There are n items in a line. We have to find the number of ways the items can be selected with the restriction that no two consecutive items can be selected.
I tried to solve it with a recurrence relation but was not able to arrive at one. Please help me solve this problem.

After searching on the net, I found the solution to the above problem.
Say there are N items. If N is even we can select at most N/2 items such that no two are consecutive, and if N is odd we can select at most (N+1)/2 items. Let K be this maximum number of selectable items.
We can select anywhere from 1 to K items.
For selecting one item, we can select any of the N items.
For selecting two items, we lay out N-2 items in a sequence. The circles below represent the items, and there are N-1 spaces in total, from the left of the first item to the right of the last item, represented by '_' underscores. If we pick any two of these spaces and replace each with an item, we end up with N items, and the two selected items cannot be consecutive, because no two spaces are adjacent.
_ o _ o _ o _ o _ o _ o _ o _ o _ o _ o _
For selecting p items, we lay out N-p items in the sequence, which gives N-p+1 spaces. We can choose any p of these N-p+1 spaces.
So the total number of ways becomes
C(N,1) + C(N-1,2) + C(N-2,3) + ... + C(N-K+1,K), which is the sum of the first N Fibonacci numbers (1, 1, 2, 3, 5, ...).
Also, the sum of the first N Fibonacci numbers is F(N+2) - 1.
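As a quick numerical check of this identity (a sketch added here, with hypothetical helper names, not part of the original answer):

```python
from math import comb

def binomial_total(n):
    # C(n,1) + C(n-1,2) + ... ; comb returns 0 once p exceeds n-p+1
    return sum(comb(n - p + 1, p) for p in range(1, n // 2 + 2))

def fib(n):
    # F(1) = F(2) = 1
    a, b = 1, 1
    for _ in range(n - 1):
        a, b = b, a + b
    return a

for n in range(1, 15):
    assert binomial_total(n) == fib(n + 2) - 1
```

For example, binomial_total(5) is 5 + 6 + 1 = 12, and F(7) - 1 = 13 - 1 = 12.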

It is hard to understand, the way you explained it, why this gives the Fibonacci series.
I have an easier way of explaining the same below.
Suppose we express the number of combinations for n items as T(n).
If we do not select the first item, then the number of combinations is the same as the number of combinations for the remaining n-1 items, which is T(n-1).
If we select the first item (we cannot select the second item, as it is consecutive to the first), then the number of combinations is the same as the number of combinations for the remaining n-2 items, which is T(n-2).
Therefore we conclude:
T(n) = T(n-1) + T(n-2)
T(1) = 2 (1. selected and 2. not selected)
T(2) = 3 (1. both not selected, 2. only the first selected, 3. only the second selected)
This is a Fibonacci series and can be computed in O(n) time complexity.
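A minimal sketch of that O(n) computation (the helper name is illustrative, not from the answer above):

```python
def count_selections(n):
    # T(n) = T(n-1) + T(n-2), with T(1) = 2, T(2) = 3
    # counts include the empty selection
    if n == 1:
        return 2
    a, b = 2, 3  # T(1), T(2)
    for _ in range(n - 2):
        a, b = b, a + b
    return b
```

For example, count_selections(3) gives 5, matching the subsets {}, {1}, {2}, {3}, {1,3}.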

I think you can do this by building an array of length n, with each place in the array representing the number of ways the items could be selected if that place were the first one selected (selecting from left to right).
Pseudocode (untested):
int[] list = new int[n];
int total = 0;
for (int position = n - 1; position >= 0; position--)
{
    list[position] = 1;
    for (int subPos = position + 2; subPos < n; subPos++)
    {
        list[position] += list[subPos];
    }
    total += list[position];
}
Explanation:
The value in list[i], when this has finished running, represents the number of ways of picking items from the line with item i being the leftmost item picked.
Obviously there is only one way of picking items such that the rightmost item is the leftmost item picked. If n = 5, the picking could be represented like this in that case: 00001
Similarly, for the second-rightmost item there is only one way to pick items such that it is the leftmost item: 00010.
For the third-rightmost item, there is one way to pick it alone, and then you must add the number of ways of picking each of the items that might be picked second (this is what the inner loop is for). So that item accounts for: 00100 and 00101.
Fourth-rightmost item: 01000, 01010, 01001.
Fifth-rightmost item (first item on the left): 10000, 10100, 10101, 10010, 10001.
So the array for n=5 would end up with these values: {5,3,2,1,1}
And the total would then be: 5 + 3 + 2 + 1 + 1 = 12
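Here is a direct Python translation of the pseudocode above (the original was untested; this version reproduces the n = 5 walkthrough):

```python
def count_pickings(n):
    # ways[i] = number of pickings whose leftmost picked item is i
    ways = [0] * n
    total = 0
    for position in range(n - 1, -1, -1):
        ways[position] = 1  # pick item `position` alone
        for sub_pos in range(position + 2, n):  # the next pick must skip the neighbour
            ways[position] += ways[sub_pos]
        total += ways[position]
    return total, ways
```

count_pickings(5) returns (12, [5, 3, 2, 1, 1]), matching the walkthrough.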

It's a simple solution.
Say you need to select 3 numbers out of the first 100 natural numbers such that no two are consecutive.
Consider the first 98 natural numbers, and choose any 3 of them, a < b < c, in 98C3 ways.
We know 0 < a < b < c < 99 and that a, b, c are all different.
Let A = a+0, B = b+1, C = c+2.
So we now know the difference between any two of A, B and C is at least 2 (i.e. no two of A, B and C can be consecutive numbers).
And 0 < A, B, C < 101, so A, B and C satisfy all conditions of the required question.
So the solution is 98 C 3.
Generalizing → selecting p items from N, such that no two are consecutive, can be done in (N-p+1) C p ways.
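A brute-force check of the closed form C(N-p+1, p), for small N where enumeration is feasible (a sketch, not part of the original answer):

```python
from itertools import combinations
from math import comb

def count_brute(n, p):
    # enumerate p-subsets of {1..n} and keep those with no two consecutive
    return sum(1 for c in combinations(range(1, n + 1), p)
               if all(b - a >= 2 for a, b in zip(c, c[1:])))

for n in range(1, 12):
    for p in range(1, n):
        assert count_brute(n, p) == comb(n - p + 1, p)
```

For instance, count_brute(7, 3) and C(5, 3) are both 10.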

Ans) (n+1-r)C r
Suppose we have n items and we want to choose r of them, making sure that no two chosen items are consecutive. We represent the items with '0'.
So we start with a sequence 0000...0 (n terms). Whenever we choose the item at position i (i = 1, ..., n), we replace its 0 with a 1; for example, if we choose the item at position 2, the sequence becomes 0100...0 (n terms). If we choose r items, there will be r 1s in the binary sequence we are designing. The interesting point is that between two successive 1s there can be any positive number of 0s. So first place the n-r 0s together (the r 1s will be added afterwards); they create a total of n-r+1 gaps (to the left and right of each 0, including both ends). We then choose positions for the r 1s from among these n-r+1 gaps. This can be done in (n+1-r)C r ways.
Hope you understand.

Related

Maximum adjacent product sum (Interview question)

We have an array of integers, where the integer at each position is its value. Each time a position is selected, you earn the value at that position multiplied by the values at its adjacent positions (left and right). After a position has been selected it is removed from the array, and its left and right neighbours become adjacent to each other.
If an adjacent position does not exist, assume a value of 1 for it. For example, if there is only a single position left and you select it, its value is multiplied by 1 for both the left and right adjacent positions.
Find the maximum amount that can be earned after selecting all positions.
I have implemented a dynamic programming approach using the following recurrence relation. First we observe that if, at some step of the process described in the question, we multiply arr[position_p] and arr[position_q], then all positions between position_p and position_q (if any) must already have been chosen.
For simplicity, assume array indices start from 1, and that positions 0 and n+1 contain the value 1 in accordance with the question, where n is the number of elements in the array.
So we need to select positions p+1 to q-1 in an order that maximizes the amount.
Using this, we obtain the recurrence relation:
If f(p,q) is the maximum amount obtainable by choosing only from positions p+1 to q-1, then:
f(p, q) = max ( f(p,k) + f(k,q) + arr[p] * arr[k] * arr[q] ) for k between p and q (excluding p and q),
where k is the last position chosen from positions p+1 to q-1 before choosing either p or q.
And here is the Python implementation:
import numpy as np

n = int(input("Enter the no. of inputs : "))
arr = [1]
arr = arr + list(map(int, input("Enter the list : ").split()))
arr.append(1)
# matrix created to memoize values instead of recomputing
mat = np.zeros((n + 2, n + 2), dtype="i8")
# Bottom-up dynamic programming approach
for row in range(n + 1, -1, -1):
    for column in range(row + 2, n + 2):
        # This initialization to zero may not work when there are negative integers in the list.
        max_sum = 0
        # Recurrence relation
        # mat[row][column] should hold the maximum product sum from indices row+1 to column-1
        # arr[row] and arr[column] are the boundary values for the sub-array
        # By this notation, if column <= row + 1 there are no elements between them, so mat[row][column] stays zero
        for k in range(row + 1, column):
            max_sum = max(max_sum, mat[row][k] + mat[k][column] + arr[row] * arr[k] * arr[column])
        mat[row][column] = max_sum
print(mat[0][n + 1])
I was asked this question in a programming interview some time back. Though my solution seems to work, it has O(n^3) time complexity and O(n^2) space complexity.
Can I do better, in particular for the case when all values in the array are positive (the original question assumes this)? Any help on reducing the space complexity is also appreciated.
Thank you.
Edit:
Though this is no proof, as suggested by @risingStark I have seen the same question on LeetCode too, where all correct algorithms seem to use O(n^2) space and run in O(n^3) time for the general-case solution.

Maximize number of inversion count in array

We are given an unsorted array A of integers (duplicates allowed) of size N, possibly large. We can count the number of pairs with indices i < j for which A[i] < A[j]; let's call this X.
We can change at most one element of the array, with a cost equal to the absolute difference (for instance, if we replace the element at index k with the new number K, the cost Y is |A[k] - K|).
We can only replace an element with another element found in the array.
We want to find the minimum possible value of X + Y.
Some examples:
[1,2,2] should return 1 (change the 1 to 2 such that the array becomes [2,2,2])
[2,2,3] should return 1 (change the 3 to 2)
[2,1,1] should return 0 (because no changes are necessary)
[1,2,3,4] should return 6 (this is already the minimum possible value)
[4,4,5,5] should return 3 (this can be accomplished by changing the first 4 into a 5 or the last 5 into a 4)
The number of pairs can be found with a naive O(n²) solution, here in Python:
def calc_x(arr):
    n = len(arr)
    cnt = 0
    for i in range(n):
        for j in range(i + 1, n):
            if arr[j] > arr[i]:
                cnt += 1
    return cnt
A brute-force solution is easily written, for example:
def f(arr):
    best_val = calc_x(arr)
    used = set(arr)
    for i, v in enumerate(arr):
        for replacement in used:
            if replacement == v:
                continue
            # the replacement must be wrapped in a list and must
            # replace arr[i], not be inserted next to it
            arr2 = arr[0:i] + [replacement] + arr[i + 1:]
            y = abs(replacement - v)
            x = calc_x(arr2)
            best_val = min(best_val, x + y)
    return best_val
We can count, for each element, the number of items to its right larger than it in O(n*log(n)) using, for instance, an AVL tree or a variation on merge sort.
However, we still have to search for which element to change and what improvement that can achieve.
This was given as an interview question, and I would like some hints or insights on how to solve this problem efficiently (data structures or algorithms).
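The O(n log n) pair counting mentioned above can be sketched with a Fenwick (binary indexed) tree; the helper below is an illustration under that approach, not part of the original question:

```python
def count_x(arr):
    # X = number of pairs i < j with arr[i] < arr[j]
    ranks = {v: r + 1 for r, v in enumerate(sorted(set(arr)))}
    tree = [0] * (len(ranks) + 1)

    def add(i):
        while i < len(tree):
            tree[i] += 1
            i += i & -i

    def prefix(i):  # how many inserted values have rank <= i
        s = 0
        while i > 0:
            s += tree[i]
            i -= i & -i
        return s

    total = 0
    for v in arr:  # left to right: count earlier elements strictly smaller
        total += prefix(ranks[v] - 1)
        add(ranks[v])
    return total
```

count_x([1, 2, 3, 4]) gives 6 and count_x([4, 4, 5, 5]) gives 4, matching the examples above.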
Definitely go for O(n log n) complexity when counting inversions.
We can see that when you change the value at index k, you can either:
1) increase it, and then possibly reduce the number of inversions with elements bigger than it, but increase the number of inversions with elements smaller than it
2) decrease it (in which case the opposite happens)
Let's try not to recount x every time you change a value. What do you need to know?
In case 1):
You have to know how many elements on the left are smaller than your new value v and how many elements on the right are bigger than it. You can check that fairly easily in O(n). So what is your new x? You can compute it with the following formula, where:
prev_val - your previous value
prev_x - x that you've counted at the beginning of your program
prev_l - number of elements on the left smaller than prev_val
prev_r - number of elements on the right bigger than prev_val
v - new value
l - number of elements on the left smaller than v
r - number of elements on the right bigger than v
new_x = prev_x + r + l - prev_l - prev_r
In the second case you do pretty much the opposite.
Right now you get something like O(n^3) instead of O(n^3 log n), which is probably still bad. Unfortunately that's all I have come up with for now. I'll definitely tell you if I come up with something better.
EDIT: What about the memory limit? Is there one? If not, you can, for each element of the array, make two sets with the elements before and after the current one. Then you can find the number of smaller/bigger elements in O(log n), making your time complexity O(n^2 log n).
EDIT 2: We can also try to check which element would be best to change to a value v, for every possible value v. We can maintain two sets and add/erase elements from them while checking each element, making the time complexity O(n^2 log n) without using too much space. So the algorithm would be:
1) determine every value v that an element can be changed to, and calculate x
2) for each possible value v:
make two sets, and push all elements into the second one
for each element e in the array:
add the previous element (if there is one) to the first set and erase element e from the second set, then count the number of bigger/smaller elements in sets 1 and 2 and calculate the new x
EDIT 3: Instead of making two sets, you could use a prefix sum for each value. That's O(n^2) already, but I think we can do even better than this.

element address in 3 dimensional array

I am looking for the formulas to find the memory location of an element in a 3-D array, for row-major and column-major order. Using my own logic I ended up with the following formulas.
Say the array is A[L][M][N].
row-major: Loc(A[i][j][k]) = base + w*(M*N*(i-x) + N*(j-y) + (k-z))
column-major: Loc(A[i][j][k]) = base + w*(M*N*(i-x) + M*(k-z) + (j-y))
where x, y, z are the lower bounds of the 1st (L), 2nd (M) and 3rd (N) indices.
I tried these formulas and got the correct result, but when I applied them to a question in the book, the answer did not match. Can anyone please help me out with this?
Formula for 3D Array
Row Major Order:
Address of
A[I, J, K] = B + W * [(K - Do)*R*C + (I - Ro)*C + (J - Co)]
Column Major Order:
Address of
A[I, J, K] = B + W * [(K - Do)*R*C + (I - Ro) + (J - Co)*R]
Where:
B = Base Address (start address)
W = Weight (storage size of one element stored in the array)
R = Row (total number of rows)
C = Column (total number of columns)
D = Width (total number of cells depth-wise)
Ro = Lower Bound of Row
Co = Lower Bound of Column
Do = Lower Bound of Width
The right ones are:
row-major: Loc(A[i][j][k]) = base + w*(M*N*(i-x) + N*(j-y) + (k-z))
column-major: Loc(A[i][j][k]) = base + w*((i-x) + L*(j-y) + L*M*(k-z))
Thanks @Vinay Yadav for your comment. As suggested by Vinay, please visit this link to understand the topic in detail: https://eli.thegreenplace.net/2015/memory-layout-of-multi-dimensional-arrays.
Keep this in mind and you will never get it wrong:
Row major: lexicographical order
Column major: co-lexicographical order
If you don't know what lexicographical and co-lexicographical orders are, check out this Wikipedia page for more. Let me highlight the important part for you; do give it a read:
The words in a lexicon (the set of words used in some language) have a
conventional ordering, used in dictionaries and encyclopedias, that
depends on the underlying ordering of the alphabet of symbols used to
build the words. The lexicographical order is one way of formalizing
word order given the order of the underlying symbols.
The formal notion starts with a finite set A, often called the
alphabet, which is totally ordered. That is, for any two symbols a and
b in A that are not the same symbol, either a < b or b < a.
The words of A are the finite sequences of symbols from A, including
words of length 1 containing a single symbol, words of length 2 with 2
symbols, and so on, even including the empty sequence ε with no symbols
at all. The lexicographical
order on the set of all these finite words orders the words as
follows:
Given two different words of the same length, say a = a1a2...ak and b
= b1b2...bk, the order of the two words depends on the alphabetic order of the symbols in the first place i where the two words differ
(counting from the beginning of the words): a < b if and only if ai <
bi in the underlying order of the alphabet A. If two words have
different lengths, the usual lexicographical order pads the shorter
one with "blanks" (a special symbol that is treated as smaller than
every element of A) at the end until the words are the same length,
and then the words are compared as in the previous case.
After this you can learn about co-lexicographical order from the same Wikipedia page. The quoted part above is taken directly from the Motivation and Definition sections of that page. Visit it once and you will have a better understanding of both.
You just need to find the Lexicographical and Co-Lexicographical position of (i, j, k) among all possible (foo1, foo2, foo3) in the array A of yours:
foo1 -> L possibilities: [Lower Bound x, Upper Bound x + L - 1]
foo2 -> M possibilities: [Lower Bound y, Upper Bound y + M - 1]
foo3 -> N possibilities: [Lower Bound z, Upper Bound z + N - 1]
Based on this knowledge, you get that:
1). The number of elements A[foo1][foo2][foo3] that come before element A[i][j][k] in row-major (lexicographical) order is:
(i - x)*M*N + (j - y)*N + (k - z)
2). The number of elements A[foo1][foo2][foo3] that come before element A[i][j][k] in column-major (co-lexicographical) order is:
(i - x) + (j - y)*L + (k - z)*L*M
Now you can do the rest of the calculation, bringing in your base address B and weight W, to get the final answer you need.
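The two offset formulas can be checked by enumerating the indices in lexicographic and co-lexicographic order (the dimensions below are arbitrary example values, with lower bounds taken as 0):

```python
from itertools import product

L, M, N = 2, 3, 4      # sizes of the three axes (arbitrary example values)
x = y = z = 0          # lower bounds, taken as 0 here

def row_major_offset(i, j, k):
    return (i - x) * M * N + (j - y) * N + (k - z)

def col_major_offset(i, j, k):
    return (i - x) + (j - y) * L + (k - z) * L * M

# row major = lexicographic order of (i, j, k): the last index varies fastest
for pos, (i, j, k) in enumerate(product(range(L), range(M), range(N))):
    assert row_major_offset(i, j, k) == pos

# column major = co-lexicographic order: the first index varies fastest
for pos, (k, j, i) in enumerate(product(range(N), range(M), range(L))):
    assert col_major_offset(i, j, k) == pos
```

This is the same check NumPy performs internally for order='C' versus order='F' layouts.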

Finding all possible groups of two numbers in an array

Given an input array and an integer k, I have to find all possible pairs of two numbers (a, b) that satisfy the condition a % b = k, where a is to the left of b in the array. I am doing it in O(n^2), simply with two nested loops and checking each pair. Can I do better?
For example:
7 3 1 and `k = 1`
Then, (7, 3) forms one such pair, and I have to find all such pairs.
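For reference, the O(n^2) two-loop approach described in the question looks like this (a sketch with a hypothetical helper name; it guards against b = 0, since a % 0 is undefined):

```python
def find_groups(arr, k):
    # every pair (a, b) with a to the left of b and a % b == k
    return [(a, b) for i, a in enumerate(arr)
                   for b in arr[i + 1:]
            if b != 0 and a % b == k]
```

find_groups([7, 3, 1], 1) returns [(7, 3)].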

Find the element occurring b times in an array of size n*k+b

Description
Given an array of size n*k+b, where n elements occur k times each and one element occurs b times; in other words, there are n+1 distinct elements. Given that 0 < b < k, find the element occurring b times.
My Attempted solutions
The obvious solution is hashing, but it will not work if the numbers are very large. Its complexity is O(n).
Using a map to store the frequency of each element, and then traversing the map to find the element occurring b times. As maps are implemented as height-balanced trees, the complexity will be O(n log n).
Both of my solutions were accepted, but the interviewer wanted a linear-time solution without hashing. The hint he gave was to make the height of the tree in which the frequencies are stored constant, but I have not been able to figure out the correct solution yet.
I want to know how to solve this problem in linear time without hashing?
EDIT:
Sample:
Input: n=2 b=2 k=3
Array: 2 2 2 3 3 3 1 1
Output: 1
I assume:
The elements of the array are comparable.
We know the values of n and k beforehand.
An O(n*k+b) solution is good enough.
Let the number occurring only b times be S. We are trying to find S in an array of size n*k+b.
Recursive step: find the median element of the current array slice, as in quicksort, in linear time. Let the median element be M.
After the recursive step you have an array where all elements smaller than M occur to the left of the first occurrence of M, all copies of M are next to each other, and all elements larger than M are to the right of all occurrences of M.
Look at the index of the leftmost M and determine whether S < M or S >= M. Recurse on either the left slice or the right slice.
So you are doing a quicksort, but descending into only one part of the partition at any time. You will recurse O(log N) times, but each time on 1/2, 1/4, 1/8, ... of the original array size, so the total time is still O(n).
Clarification: let's say n=20 and k=10. Then there are 21 distinct elements in the array, 20 of which occur 10 times each and the last of which occurs, say, 7 times. Find the median element, say 1111. If S < 1111, then the index of the leftmost occurrence of 1111 will be less than 11*10; if S >= 1111, then the index will be equal to 11*10.
Full example: n = 4, k = 3, array = {1,2,3,4,5,1,2,3,4,5,1,2,3,5}.
After the first recursive step the median element is found to be 3, and the array is something like {1,2,1,2,1,2,3,3,3,5,4,5,5,4}. There are 6 elements to the left of 3, and 6 is a multiple of k=3, so each element there must occur 3 times. So S >= 3. Recurse on the right side, and so on.
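A compact sketch of this partition-and-recurse idea (it uses sorted() to pick the median just to keep the illustration short; a linear-time median-of-medians selection would replace that to make it truly O(n)):

```python
def find_b_occurrence(arr, k):
    # S stays inside `seg`; every other value occurs a multiple of k times
    seg = list(arr)
    while True:
        m = sorted(seg)[len(seg) // 2]   # median (placeholder for linear-time select)
        left = [v for v in seg if v < m]
        equal = [v for v in seg if v == m]
        if len(left) % k:                # count not a multiple of k => S < m
            seg = left
        elif len(equal) % k:             # => S == m
            return m
        else:                            # => S > m
            seg = [v for v in seg if v > m]
```

On the sample input, find_b_occurrence([2, 2, 2, 3, 3, 3, 1, 1], 3) returns 1.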
An idea using cyclic groups.
To find the i-th bit of the answer, follow this procedure:
Count how many numbers in the array have the i-th bit set; store this as cnt.
If cnt % k is non-zero, then the i-th bit of the answer is set; otherwise it is clear. (The b copies of the answer contribute b to cnt modulo k, and 0 < b < k, so the remainder is non-zero exactly when the answer has that bit set.)
To recover the whole number, repeat the above for every bit.
This solution is technically O((n*k+b)*log(max N)), where max N is the maximal value in the array, but because the number of bits is usually constant, this solution is linear in the array size.
No hashing; memory usage is O(log k * log max N).
Example implementation:
from functools import reduce
from random import randint, shuffle

def generate_test_data(n, k, b):
    k_rep = [randint(0, 1000) for i in range(n)]
    b_rep = [randint(0, 1000)]
    numbers = k_rep * k + b_rep * b
    shuffle(numbers)
    print("k_rep: ", k_rep)
    print("b_rep: ", b_rep)
    return numbers

def solve(data, k):
    cnts = [0] * 10  # values up to 1000 fit in 10 bits
    for number in data:
        bits = [number >> b & 1 for b in range(10)]
        cnts = [cnts[i] + bits[i] for i in range(10)]
    return reduce(lambda a, b: 2 * a + (b % k > 0), reversed(cnts), 0)

print("Answer: ", solve(generate_test_data(10, 15, 13), 3))
In order to have a constant-height B-tree containing n distinct elements, with height h constant, you need z = n^(1/h) children per node: h = log_z(n), thus h = log(n)/log(z), thus log(z) = log(n)/h, thus z = e^(log(n)/h) = n^(1/h).
For example, with n = 1000000 and h = 10, z = 3.98, i.e. z = 4.
The time to reach a node in that case is O(h*log(z)). Assuming h and z to be "constant" (since N = n*k, then log(z) = log(n^(1/h)) = log(N/k)^(1/h) = constant by properly choosing h based on k), you can then say that O(h*log(z)) = O(1)... This is a bit far-fetched, but maybe that was the kind of thing the interviewer wanted to hear?
UPDATE: this one uses hashing, so it's not a good answer :(
In Python this would be linear time (the set removes the duplicates):
result = (sum(set(arr)) * k - sum(arr)) // (k - b)
If 'k' is even and 'b' is odd, then XOR will do. :)
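Both closed-form tricks can be sanity-checked quickly (assuming, as in the question, exact multiplicities k and b):

```python
from functools import reduce
from operator import xor

# sum-over-set trick: sum(set)*k counts every distinct value k times,
# so the surplus over the real sum is (k - b) * answer
arr = [2, 2, 2, 3, 3, 3, 1, 1]   # k = 3, b = 2, answer 1
k, b = 3, 2
assert (sum(set(arr)) * k - sum(arr)) // (k - b) == 1

# XOR trick: with k even each k-repeated value cancels itself,
# and with b odd the answer survives
arr2 = [5, 5, 7, 7, 9]           # k = 2, b = 1, answer 9
assert reduce(xor, arr2) == 9
```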
