Array containing n-2 integers 1 to n, O(n) algorithm to find missing 2 numbers with O(1) extra space? [duplicate] - arrays

This question already has answers here:
Find two missing numbers
Closed 10 years ago.
I've been thinking for a while and can't seem to get an answer to this... You are given an array with n-2 unique integers in the range from 1 to n, and you may use only O(1) space in addition to the space used by the array. How can you find the two integers from 1 to n that are missing from the array in O(n) time?
So for example, a = [4,3,1,6] and O(1) extra space
How can you find 2, 5 in O(n) time?

Here's an idea: Just keep some statistic that gives you information about the missing numbers. For example, if you calculate the sum of all your numbers as S, then:
(1+2+..+N) - S = a+b
where a and b are your missing numbers. In your example, you get:
(1+2+3+4+5+6) - (4+3+1+6) = 21 - 14 = 7 = a+b
You could then also do the same, for example, for multiplication: if P is the product of all the numbers in the array, then:
(1*2*..*N) / P = a*b
in your case:
(1*2*3*4*5*6) / 72 = 720 / 72 = 10 = a*b
so the answer is 2 and 5.
Basically there are a lot of statistics you can use in this way...
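A quick sketch of how the two statistics pin down the answer (the function name and layout are mine, not from the answer above): with S = a+b and P = a*b, the missing numbers are the roots of x^2 - S*x + P = 0. Python's unbounded integers make the product safe; with fixed-width integers it would overflow, which is why a sum-of-squares variant is often used instead.

```python
from math import isqrt

def find_missing_two(arr, n):
    # a + b: full 1..n sum minus the array's sum
    s = n * (n + 1) // 2 - sum(arr)

    # a * b: n! divided by the product of the array's elements
    full_product = 1
    for x in range(1, n + 1):
        full_product *= x
    arr_product = 1
    for x in arr:
        arr_product *= x
    p = full_product // arr_product

    # a and b are the roots of x^2 - s*x + p = 0
    d = isqrt(s * s - 4 * p)
    return (s - d) // 2, (s + d) // 2
```

For the example in the question, find_missing_two([4, 3, 1, 6], 6) returns (2, 5).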


Efficient removal of duplicates in array

How can duplicates be removed and recorded from an array with the following constraints:
The running time must be at most O(n log n)
The additional memory used must be at most O(n)
The result must fulfil the following:
Duplicates must be moved to the end of the original array
The order of the first occurrence of each unique element must be preserved
For example, from this input:
int A[] = {2,3,7,3,2,11,2,3,1,15};
The result should be similar to this (only the order of duplicates may differ):
2 3 7 11 1 15 3 3 2 2
As I understand it, the goal is to split an array into two parts: unique elements and duplicates in such a way that the order of the first occurrence of the unique elements is preserved.
Using the array of the OP as an example:
A={2,3,7,3,2,11,2,3,1,15}
A solution could do the following:
Initialize the helper array with indices 0, ..., n-1:
B={0,1,2,3,4,5,6,7,8,9}
Sort the pairs (A[i],B[i]) using A[i] as key and with a stable sorting algorithm of complexity O(n log n):
A={1,2,2,2,3,3,3,7,11,15}
B={8,0,4,6,1,3,7,2,5, 9}
With n being the size of the array, go through the pairs (A[i],B[i]) and for all duplicates (A[i]==A[i-1]), add n to B[i]:
A={1,2, 2, 2,3, 3, 3,7,11,15}
B={8,0,14,16,1,13,17,2, 5, 9}
Sort the pairs (A[i],B[i]) again, but now using B[i] as key:
A={2,3,7,11,1,15, 3, 2, 2, 3}
B={0,1,2, 5,8, 9,13,14,16,17}
A then contains the desired result.
Steps 1 and 3 are O(n) and steps 2 and 4 can be done in O(n log n), so overall complexity is O(n log n).
Note that this method also preserves the order of duplicates. If you want them sorted, you can assign indices n, n+1, ... in step 3 instead of adding n.
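The four steps above can be sketched in Python (names are mine; Python's built-in sort is stable and O(n log n), and sorting (value, index) tuples breaks ties by original index, exactly what step 2 requires):

```python
def split_duplicates(a):
    n = len(a)
    # Steps 1-2: pair each value with its index and sort by value;
    # ties fall back to the index, matching a stable sort on A[i].
    pairs = sorted(zip(a, range(n)))

    # Step 3: every duplicate (same value as its predecessor) gets
    # its index pushed past n, so it sorts after all first occurrences.
    keyed = []
    for i, (v, idx) in enumerate(pairs):
        if i > 0 and pairs[i - 1][0] == v:
            keyed.append((idx + n, v))
        else:
            keyed.append((idx, v))

    # Step 4: sort by the adjusted index and read off the values.
    keyed.sort()
    return [v for _, v in keyed]
```

On the example input this reproduces the arrays shown in the steps: split_duplicates([2,3,7,3,2,11,2,3,1,15]) gives [2, 3, 7, 11, 1, 15, 3, 2, 2, 3].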
Here is a very important hint: when an algorithm is permitted O(n) extra space, that is not the same as saying it can only use the same amount of memory as the input array!
For example, given the input array int array[] = {2,3,7,3,2,11,2,3,1,15}; (10 elements), that is a total space of 10 * sizeof(int) bytes. On a typical machine an int is 4 bytes long, making the array 40 bytes of data.
However, I can use more space for my extra array than just those 40 bytes! In fact, I can make a histogram structure that looks like this:
struct histogram
{
    bool is_used;   // Is this element in use in the histogram?
    int value;      // The integer value represented by this element
    size_t index;   // The index in the output array of the FIRST instance of the value
    size_t count;   // The number of times the value appears in the source array
};
typedef struct histogram histogram;
And since that is a fixed, finite amount of space, I can feel totally free to allocate n of them!
histogram * new_histogram( size_t size )
{
    return calloc( size, sizeof(struct histogram) );
}
On my machine that’s 240 bytes.
And yes, this absolutely, totally complies with the O(n) extra space requirement! (Because we are only using space for n extra items. Bigger items, yes, but only n of them.)
Goals
So, why make a histogram with all that extra stuff in it?
We are counting duplicates — suggesting that we should be looking at a Counting Sort, and hence, a histogram.
Accept integers in a range beyond [0,n).
The example array has 10 items, so our histogram should only have 10 slots. But there are integer values larger than 9.
Keep all the non-duplicate values in the same order as input
So we need to track the index of the first instance of each value in the input array.
We are obviously not sorting the data, but the basic idea behind a Counting Sort is to build a histogram and then use that histogram to overwrite the array with the ordered elements.
This is a powerful idea. We are going to tweak it.
The Algorithm
Remember that our input array is also our output array! So we will overwrite the array’s input values with our algorithm.
Let’s look at our example again:
2   3   7   3   2   11   2   3   1   15
0   1   2   3   4   •5   6   7   8    9
❶ Build the histogram:
slot:  0   1   2   3   4   5   6   7   8   9   (index in histogram)
used?: no  yes yes yes yes yes no  yes no  no
value: 0   11  2   3   1   15  0   7   0   0
index: 0   3   0   1   4   5   0   2   0   0
count: 0   1   3   3   1   1   0   1   0   0
I used a simple non-negative modulo function to get a hash index into the histogram: abs(value) % histogram_size, then found the first matching or unused entry, again modulo the histogram size. Our histogram has a single collision: 1 and 11 (mod 10) both hash to 1. Since we encountered 11 first it gets stored at index 1 of the histogram, and for 1 we had to seek to the first unused index: 4.
We can see that the duplicate values all have a count of 2 or more, and all non-duplicate values have a count of 1.
The magic here is the index value. Look at 11. Its index is 3, not 5. If we look at our desired output we can see why:
2   3   7   11   1   15   2   2   3   3
0   1   2   •3   4    5   6   7   8   9
The 11 is in index 3 of the output. This is a very simple counting trick when building the histogram. Keep a running index that we only increment when we first add a value to the histogram. This index is where the value should appear in the output!
❷ Use the histogram to put the non-duplicate values into the array.
Clearly, anything with a non-zero count appears at least once in the input, so it must also be output.
Here’s where our magic histogram index first helps us. We already know exactly where in the array to put the value!
2 3 7 11 1 15
  0    1    2     3     4     5    ⟵   index into the array to put the value
You should take a moment to compare the array output index with the index values stored in the histogram above and convince yourself that it works.
❸ Use the histogram to put the duplicate values into the array.
So, at what index do we start putting duplicates into the array? Do we happen to have some magic index lying around somewhere that could help? From when we built the histogram?
Again stating the obvious, anything with a count greater than 1 is a value with duplicates. For each duplicate, put count-1 copies into the array.
We don’t care in what order the duplicates appear, so we’ll just take them in the order they are stored in the histogram.
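The three passes can be modeled in Python (a sketch of the same idea, not the author's C; the function name, the list-of-lists histogram layout, and the probing helper are mine):

```python
def partition(array):
    """Partition in place: distinct values first (in first-seen order),
    then the duplicates. Returns the pivot index between the two parts."""
    n = len(array)
    # One histogram slot per array element: [used, value, index, count]
    hist = [[False, 0, 0, 0] for _ in range(n)]

    def slot(value):
        # Hash, then linearly probe for a matching or unused slot.
        i = abs(value) % n
        while hist[i][0] and hist[i][1] != value:
            i = (i + 1) % n
        return i

    # Pass 1 (over the input): build the histogram; the running
    # next_index records where each value's FIRST instance belongs.
    next_index = 0
    for v in array:
        i = slot(v)
        if not hist[i][0]:
            hist[i] = [True, v, next_index, 1]
            next_index += 1
        else:
            hist[i][3] += 1

    pivot = next_index  # number of distinct values

    # Pass 2 (over the histogram): place each distinct value.
    for used, v, idx, count in hist:
        if used:
            array[idx] = v

    # Pass 3 (over the histogram): append count-1 copies of duplicates.
    out = pivot
    for used, v, idx, count in hist:
        if used:
            for _ in range(count - 1):
                array[out] = v
                out += 1
    return pivot
```

Running it on the example array yields the non-duplicate prefix [2, 3, 7, 11, 1, 15] followed by the duplicates (in histogram order).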
Complexity
The complexity of a Counting Sort is O(n+k): one pass over the input array (to build the histogram) and one pass over the histogram data (to rebuild the array in sorted order).
Our modification is: one pass over the input array (to build the histogram), then one pass over the histogram to build the non-duplicate partition, then one more pass over the histogram to build the duplicates partition. That’s a complexity of O(n+2k).
In both cases it reduces to an O(n) worst-case complexity. In fact, it is also an Ω(n) best-case complexity, making it a Θ(n) complexity — it takes the same processing per element no matter what the input.
Aaaaaahhhh! I gotta code that!!!?
Yep. It is only a tiny bit more complex than you are used to. Remember, you only need a few things:
An array of integer values (obtained from the user?)
A histogram array
A function to turn an integer value into an index into the histogram
A function that does the three things:
Build the histogram from the array
Use the histogram to write the non-duplicate values back into the array in the correct spots
Use the histogram to write the duplicate values to the end of the array
Ability to print an integer array
Your main() should look something like this:
int main(void)
{
    // Get number of integers to input
    int size = 0;
    scanf( "%d", &size );
    // Allocate and get the integers
    int * array = malloc( size * sizeof *array );
    for (int n = 0; n < size; n++)
        scanf( "%d", &array[n] );
    // Partition the array between non-duplicate and duplicate values
    int pivot = partition( array, size );
    // Print the results
    print_array( "non-duplicates:", array, pivot );
    print_array( "duplicates:    ", array+pivot, size-pivot );
    free( array );
    return 0;
}
Notice the complete lack of input error checking. You can assume that your professor will test your program without inputting hello or anything like that.
You can do this!

Determine the adjacency of two Fibonacci numbers

I have many Fibonacci numbers. If I want to determine whether two Fibonacci numbers are adjacent or not, one basic approach is as follows:
Get the index of the first fibonacci number, say i1
Get the index of the second fibonacci number, say i2
Get the absolute value of i1-i2, that is |i1-i2|
If the value is 1, then return true.
else return false.
In the first step and the second step, it may need many comparisons to get the correct index by searching an array.
In the third step, it need one subtraction and one absolute operation.
I want to know whether there exists another approach to quickly to determine the adjacency of the fibonacci numbers.
I don't care whether this question could be solved mathematically or by any hacking techniques.
If anyone have some idea, please let me know. Thanks a lot!
No need to find the index of both numbers.
Given that the two numbers belong to the Fibonacci series, if their difference is greater than the smaller of the two then they are not adjacent. Otherwise they are.
Because the Fibonacci series follows this rule:
F(n) = F(n-1) + F(n-2), where F(n) > F(n-1) > F(n-2).
So F(n) - F(n-1) = F(n-2), and F(n-2) < F(n-1).
The difference between two adjacent Fibonacci numbers will therefore always be less than the smaller of the two.
NOTE : This will only hold if numbers belong to Fibonacci series.
Simply calculate the difference between them. If it is smaller than the smaller of the 2 numbers they are adjacent, If it is bigger, they are not.
Each triplet in the Fibonacci sequence a, b, c conforms to the rule
c = a + b
So for every pair of adjacent Fibonaccis (x, y), the difference between them (y-x) is equal to the value of the previous Fibonacci, which of course must be less than x.
If 2 Fibonaccis, say (x, z) are not adjacent, then their difference must be greater than the smaller of the two. At minimum, (if they are one Fibonacci apart) the difference would be equal to the Fibonacci between them, (which is of course greater than the smaller of the two numbers).
For (a, b, c, d):
since c = a + b
and d = b + c,
then d - b = (b + c) - b = c
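A minimal sketch of the difference test (my own code, not from either answer). One caveat: the strict "less than" stated above misses the adjacent pair (1, 2), where the difference equals the smaller number, so the sketch uses <=; equality hi - lo == lo can only occur at that pair among Fibonacci numbers.

```python
def fibs_adjacent(x, y):
    # Assumes both x and y are Fibonacci numbers.
    lo, hi = min(x, y), max(x, y)
    # Adjacent pairs satisfy hi - lo = previous Fibonacci < lo,
    # with equality only for the pair (1, 2), hence <=.
    return hi - lo <= lo
```

For example, fibs_adjacent(5, 8) is True while fibs_adjacent(5, 13) is False.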
By Binet's formula, the nth Fibonacci number is approximately phi**n / sqrt(5), where phi is the golden ratio. You can use base-phi logarithms to recover the index easily:
from math import log, sqrt

def fibs(n):
    nums = [1, 1]
    for i in range(n - 2):
        nums.append(sum(nums[-2:]))
    return nums

phi = (1 + sqrt(5)) / 2

def fibIndex(f):
    return round(log(sqrt(5) * f, phi))
To test this:
for f in fibs(20): print(fibIndex(f),f)
Output:
2 1
2 1
3 2
4 3
5 5
6 8
7 13
8 21
9 34
10 55
11 89
12 144
13 233
14 377
15 610
16 987
17 1597
18 2584
19 4181
20 6765
Of course,
def adjacentFibs(f, g):
    return abs(fibIndex(f) - fibIndex(g)) == 1
This fails with 1,1 -- but there is little point in adding special-case logic for such an edge case. Add it in if you want.
At some stage, floating-point round-off error will become an issue. For that, you would need to replace math.log by an integer log algorithm (e.g. one which involves binary search).
On Edit:
I concentrated on the question of how to recover the index (and I will keep that answer, since it is an interesting problem in its own right), but as @LeandroCaniglia points out in their excellent comment, this is overkill if all you want to do is check whether two Fibonacci numbers are adjacent: another consequence of Binet's formula is that sufficiently large adjacent Fibonacci numbers have a ratio which differs from phi by a negligible amount. You could do something like:
def adjFibs(f, g):
    f, g = min(f, g), max(f, g)
    if g <= 34:
        return adjacentFibs(f, g)
    else:
        return abs(g/f - phi) < 0.01
This assumes that they are indeed Fibonacci numbers. The index-based approach can be used to verify that they are (calculate the index and then use the full-fledged Binet's formula with that index).

Efficient algorithm to print sum of elements at all possible subsequences of length 2 to n+1 [duplicate]

This question already has answers here:
Sum of products of elements of all subarrays of length k
(2 answers)
Permutation of array
(13 answers)
Closed 7 years ago.
I will start with an example. Suppose we have an array of size 3 with elements a, b and c like: (where a, b and c are some numerical values)
| 1 | 2 | 3 |
| a | b | c |
(Assume index starts from 1 as shown in the example above)
Now all possible increasing sub-sequences of length 2 are:
12 23 13
so the sum of product of elements at those indexes is required, that is, ab+bc+ac
For length 3 we have only one increasing sub-sequence, that is, 123 so abc should be printed.
For length 4 we have no sequence so 0 is printed and the program terminates.
So output for the given array will be:
ab+bc+ac,abc,0
So for example if the elements a, b and c are 1, 2 and 3 respectively then the output should be 11,6,0
Similarly, for an array of size 4 with elements a,b,c,d the output will be:
ab+ac+ad+bc+bd+cd,abc+abd+acd+bcd,abcd,0
and so on...
Now obviously brute force will be too inefficient for large value of array size. I was wondering if there is an efficient algorithm to compute the output for an array of given size?
Edit 1: I tried finding a pattern. For example for an array of size 4:
The first value we need is: (ab+ac+bc) + d(a+b+c) = ab+ac+ad+bc+bd+cd (take A = ab+ac+bc)
then the second value we need is: (abc) + d(A) = abc+abd+acd+bcd (take B = abc)
then the third value we need is: (0) + d(B) = abcd (take C = 0)
then the fourth value we need is: d(C) = 0
But it still requires a lot of computation and I can't figure out an efficient way to implement this.
Edit 2: My question is different from this since:
I don't need all possible permutations. I need all possible increasing sub-sequences from length 2 to n+1.
I also don't need to print all possible such sequences, I just need the value thus obtained (as explained above) and hence I am looking for some maths concept or/and some dynamic programming approach to solve this problem efficiently.
Note I am finding the set of all possible such increasing sub-sequences based on the index value and then computing based on the values at those index position as explained above.
As a post that seems to have disappeared pointed out one way is to get a recurrence relation. Let S(n,k) be the sum over increasing subsequences (of 1..n) of length k of the product of the array elements indexed by the sequence. Such a subsequence either ends in n or not; in the first case it's the concatenation of a subsequence of length k-1 of 1..n-1 and {n}; in the second case it's a subsequence of 1..n-1 of length k. Thus:
S(n,k) = S(n-1,k) + A[n] * S(n-1,k-1)
For this always to make sense we need to add:
S(n,0) = 1
S(n,m) = 0 for m>n
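The recurrence translates directly into an O(n^2) dynamic program. A sketch in Python (the function name is mine; a single row is reused, updating k in decreasing order so that S[k-1] still holds the previous row's value):

```python
def subsequence_product_sums(a):
    """S[k] = sum over increasing index subsequences of length k
    of the product of the chosen elements, via
    S(n,k) = S(n-1,k) + a[n] * S(n-1,k-1)."""
    n = len(a)
    S = [0] * (n + 1)
    S[0] = 1  # S(., 0): the empty product
    for x in a:
        # Walk k downward so S[k-1] is still the previous row's value.
        for k in range(n, 0, -1):
            S[k] += x * S[k - 1]
    return S
```

For a = [1, 2, 3] this yields S = [1, 6, 11, 6]; the requested output is S[2], S[3], then 0 for length 4, i.e. 11, 6, 0.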

Possible Combination that could be made with the same number

I'm currently working on a C program that will calculate the number of possibilities that could be made with the same number. For example: 444 should produce 6 possibilities (it counts as 4, 4, 4, 44, 44, 444). I currently plan to use a for loop with an if statement in it to solve the problem. Thank you
It's unclear what you are asking, but if I understand correctly there is a very simple formula for that.
Given a number of n characters, you will have:
1 subset of size n
2 subsets of size n-1
...
n subsets of size 1
So you're computing the sum of i for i in 1..n, which is
n*(n+1)/2
In your example 444 is of size 3, and 3*(3+1)/2 = 6
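A quick way to convince yourself of the formula (both helper names are mine): compare the closed form against a brute-force enumeration of all (start, end) substring positions.

```python
def count_pieces(s):
    # Closed formula: a run of n identical characters contains
    # n*(n+1)//2 contiguous substrings.
    n = len(s)
    return n * (n + 1) // 2

def count_pieces_brute(s):
    # Cross-check: enumerate every (start, end) pair directly.
    n = len(s)
    return sum(1 for i in range(n) for j in range(i + 1, n + 1))
```

count_pieces("444") and count_pieces_brute("444") both give 6, matching the example.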

Can we use binary search to find the most frequently occurring integer in a sorted array? [closed]

Closed. This question needs details or clarity. It is not currently accepting answers.
Closed 8 years ago.
Problem:
Given a sorted array of integers find the most frequently occurring integer. If there are multiple integers that satisfy this condition, return any one of them.
My basic solution:
Scan through the array and keep track of how many times you've seen each integer. Since it's sorted, you know that once you see a different integer, you've gotten the frequency of the previous integer. Keep track of which integer had the highest frequency.
This is O(N) time, O(1) space solution.
I am wondering if there's a more efficient algorithm that uses some form of binary search. It will still be O(N) time, but it should be faster for the average case.
Asymptotically (big-oh wise), you cannot use binary search to improve the worst case, for the reasons the answers above mine have presented. However, here are some ideas that may or may not help you in practice.
For each integer, binary search for its last occurrence. Once you find it, you know how many times it appears in the array, and can update your counts accordingly. Then, continue your search from the position you found.
This is advantageous if you have only a few elements that repeat a lot of times, for example:
1 1 1 1 1 2 2 2 2 3 3 3 3 3 3 3 3 3 3
Because you will only do 3 binary searches. If, however, you have many distinct elements:
1 2 3 4 5 6
Then you will do O(n) binary searches, resulting in O(n log n) complexity, so worse.
This gives you a better best case and a worse worst case than your initial algorithm.
Can we do better? We could improve the worst case by finding the last occurrence of the number at position i like this: look at 2i, then at 4i, etc., as long as the values at those positions are the same. If they are not, binary search back between the last two probes: look at (i + 2i) / 2, etc.
For example, consider the array:
i
1 2 3 4 5 6 7 ...
1 1 1 1 1 2 2 2 2 3 3 3 3 3 3 3 3 3 3
We look at 2i = 2, it has the same value. We look at 4i = 4, same value. We look at 8i = 8, different value. We backtrack to (4 + 8) / 2 = 6. Different value. Backtrack to (4 + 6) / 2 = 5. Same value. Try (5 + 6) / 2 = 5, same value. We search no more, because our window has width 1, so we're done. Continue the search from position 6.
This should improve the best case, while keeping the worst case as fast as possible.
Asymptotically, nothing is improved. To see if it actually works better on average in practice, you'll have to test it.
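The doubling-then-backtracking idea can be sketched with Python's bisect module (a model of the approach, names mine; bisect_right performs the backtracking binary search inside the bracket that the galloping phase found):

```python
from bisect import bisect_right

def most_frequent(a):
    """Most frequent value in a sorted list. Galloping (1, 2, 4, ...)
    brackets the end of each run, then binary search pins it down,
    costing O(log k) per run of length k."""
    n = len(a)
    best, best_count = None, 0
    i = 0
    while i < n:
        v = a[i]
        # Gallop: double the step while we stay inside the run of v.
        step = 1
        while i + step < n and a[i + step] == v:
            step *= 2
        # The run's end lies in (i + step//2, i + step]; binary search it.
        end = bisect_right(a, v, i + step // 2, min(i + step, n))
        if end - i > best_count:
            best, best_count = v, end - i
        i = end  # continue from the position just past the run
    return best, best_count
```

On the example above, most_frequent([1]*5 + [2]*4 + [3]*10) returns (3, 10); on all-distinct input it degrades to roughly n probes, as the answer notes.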
Binary search, which eliminates half of the remaining candidates, probably wouldn't work. There are some techniques you could use to avoid reading every element in the array. Unless your array is extremely long or you're solving a problem for curiosity, the naive (linear scan) solution is probably good enough.
Here's why I think binary search wouldn't work: start with an array: given the value of the middle item, you do not have enough information to eliminate the lower or upper half from the search.
However, we can scan the array in multiple passes, each time checking twice as many elements. When we find two elements that are the same, make one final pass. If no other elements were repeated, you've found the longest element run (without even knowing how many of that element is in the sorted list).
Otherwise, investigate the two (or more) longer sequences to determine which is longest.
Consider a sorted list.
Index 0 1 2 3 4 5 6 7 8 9 a b c d e f
List 1 2 3 3 3 3 3 3 3 4 5 5 6 6 6 7
Pass1 1 . . . . . . 3 . . . . . . . 7
Pass2 1 . . 3 . . . 3 . . . 5 . . . 7
Pass3 1 2 . 3 . x . 3 . 4 . 5 . 6 . 7
After pass 3, we know that the run of 3's must be at least 5, while the longest run of any other number is at most 3. Therefore, 3 is the most frequently occurring number in the list.
Using the right data structures and algorithms (use binary-tree-style indexing), you can avoid reading values more than once. You can also avoid reading the 3 (marked as an x in pass 3) since you already know its value.
This solution has running time O(n/k) which degrades to O(n) for k=1 for a list with n elements and a longest run of k elements. For small k, the naive solution will perform better due to simpler logic, data structures, and higher RAM cache hits.
If you need to determine the frequency of the most common number, it would take O((n/k) log k) as indicated by David to find the first and last position of the longest run of numbers using binary search on up to n/k groups of size k.
The worst case cannot be better than O(n) time. Consider the case where each element exists once, except for one element which exists twice. In order to find that element, you'd need to look at every element in the array until you find it. This is because knowing the value of any array element does not give you any information regarding the location of the duplicate element, until it's actually found. This is in contrast to binary search, where the value of an array element allows you to rule out many other elements.
No, in the worst case we have to scan at least n - 2 elements, but see below for an algorithm that exploits inputs with many duplicates.
Consider an adversary that, for the first n - 3 distinct probes into the n-element array, returns m for the value at index m. Now the algorithm knows that the array looks like
1 2 3 ... i-1 ??? i+1 ... j-1 ??? j+1 ... k-1 ??? k+1 ... n-2 n-1 n.
Depending on what the ???s are, the sole correct answer could be j-1 or j+1, so the algorithm isn’t done yet.
This example involved an array where there were very few duplicates. In fact, we can design an algorithm that, if the most frequent element occurs k times out of n, uses O((n/k) log k) probes into the array. For j from ceil(log2(n)) - 1 down to 0, examine the subarray consisting of every (2**j)th element. Stop if we find a duplicate. The cost so far is O(n/k). Now, for each element in the subarray, use binary search to find its extent (O(n/k) searches in subarrays of size O(k), for a total of O((n/k) log k)).
It can be shown that all algorithms have a worst case of Omega((n/k) log k), making this one optimal in the worst case up to constant factors.
